Anthropomorphic AI

There is a growing awareness and acceptance of accelerating technological change, and of the associated likelihood that strong artificial intelligence will lead to a radical historical disruption in our near future: a technological Singularity.  This growing awareness exists on multiple levels: as a matter of (somewhat) serious academic speculation, and in a more nebulous form as a broader cultural current explored in popular science fiction media.  There is a general consensus that the impact of strong artificial intelligence will be profound, but whether for good or bad is a great unknown.  This uncertainty arises from the complexity inherent in the entire concept of an intelligence greater than our own.

Can we produce Strong AI with a high likelihood of preserving anthropocentric goals?  If we can, this final technology could improve every aspect of human life incalculably.  Coupled to that upside is the huge risk that strong AI is developed for the private benefit of some small segment of the population, and/or that strong AI’s goals diverge from any initial human-enhancing state into goals that are indifferent at best or outright hostile at worst.  This latter scenario is the one most promulgated in the cultural media, and however likely or unlikely, it presents a global existential risk and must be avoided at all costs.  To that end we explore knowledge and goals in general intelligence theory, likely algorithms and physical constraints, outline the shape of likely evolutionary trajectories strong AI could take, and then sketch a plausible path to the good place.  This analysis leads to the conclusion that:

  • Minds (qualitative: data, beliefs, ideas, thoughts) and Brains (quantitative: hardware, algorithms) are conceptually distinct, with the former supervening on the latter,
  • that the latter (and all potential intelligence algorithms) are heavily constrained by physics,
  • that any practical human-surpassing AGI mind will necessarily evolve from human minds,
  • in other words: true AGI minds will necessarily be posthuman minds, and will emerge from within our noosphere, not as a distinct evolutionary track.

Abstract Intelligent Agents

The quest to build intelligent machines begins with a clear understanding of intelligence itself.

A mathematical foundation for intelligence was recently developed in the AIXI agent model [1], which unifies results from information theory, complexity theory, and computer science into a pleasing general form.  At a higher level of abstraction, the general form of this theory is well rooted in evolutionary systems science and can be found in the opening sections of “AI: A Modern Approach, 3rd edition”.  We can describe an intelligent agent as a system embedded in a larger system, its information-environment (another program), within which it has limited powers of sensing, manipulation, and computation, plus the additional endowment of self-directed unsupervised learning.  In addition to these capabilities the agent has goals, which are implemented in the AIXI agent as rewards the agent collects from the environment, allowing the goal-directed learning algorithm to be formalized as a reward optimization problem.

In a nutshell, the algorithm works something like this: in each small timestep, the agent collects information bit by bit through its sensors, uses this to refine an internal simulation of its environment (implemented unrealistically, but optimally, as a carefully directed search over the infinite landscape of valid programs which could simulate the environment’s program), and then uses its current optimized (from an information-theoretic view) simulation of the environment to conduct a guided search through the landscape of possible actions, finding an action that maximizes its total expected future reward.  This model is simple yet general enough to be an interesting philosophical springboard, but it has some nagging flaws: it is uncomputable as originally stated (although similar lesser variants are computable), it doesn’t have a good model of self, and its reward motivation system poses some questions.  Nevertheless, in abstract form the model matches much of what we know about real world intelligences and has some interesting consequences.
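To make this loop concrete, here is a minimal toy sketch in Python – emphatically not AIXI itself (which is uncomputable), just an illustrative stand-in in which a crude table of reward averages plays the role of the environment simulation; every name in it (ToyEnvironment, ToyAgent, expected_reward) is mine, for illustration only:

```python
# Toy sketch of the sense -> model -> act -> reward loop described above.
# This is NOT AIXI; it replaces the search over environment-simulating
# programs with a crude table of average rewards per (observation, action).

import random

class ToyEnvironment:
    """A trivial two-state world: action 'b' pays off in state 0, 'a' in state 1."""
    def __init__(self):
        self.state = 0
    def observe(self):
        return self.state
    def step(self, action):
        reward = 1.0 if (self.state, action) in [(0, 'b'), (1, 'a')] else 0.0
        self.state = random.choice([0, 1])
        return reward

class ToyAgent:
    def __init__(self, actions):
        self.actions = actions
        self.model = {}  # (observation, action) -> (count, total reward): the 'simulation'

    def update_model(self, obs, action, reward):
        """Refine the internal model from new sensor data."""
        n, total = self.model.get((obs, action), (0, 0.0))
        self.model[(obs, action)] = (n + 1, total + reward)

    def expected_reward(self, obs, action):
        n, total = self.model.get((obs, action), (0, 0.0))
        return total / n if n else random.random()  # explore what is unknown

    def act(self, obs):
        """Pick the action the current model predicts will maximize reward."""
        return max(self.actions, key=lambda a: self.expected_reward(obs, a))

env, agent = ToyEnvironment(), ToyAgent(actions=['a', 'b'])
for _ in range(1000):       # each timestep: sense, act, collect reward, refine the model
    obs = env.observe()
    action = agent.act(obs)
    reward = env.step(action)
    agent.update_model(obs, action, reward)
```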

The general intelligence model shows that all agents have a useful meta-goal: learning.  Some level of knowledge acquisition is a prerequisite for accomplishing any other goal, and this will be just as true for posthuman intelligences as it is for us or any other intelligent agent.  Certainly that goal can come into conflict with overarching reward-maximization goals, but it is always present on some level.  An interesting consequence is closure: as intelligences grow in knowledge and simulation complexity, approaching the hypothetical perfect agent, they accumulate an ever larger knowledge base and their internal simulation of their observable environment narrows in on the actual environment.  This is an interesting universal developmental concept that will shape future posthuman intelligences and affects the probabilities of the simulation argument by effectively removing one of the clauses: we can be fairly certain posthuman civilizations will spend vast computational resources on ancestor simulations.  (Indeed, we already run limited ancestor simulations today when we think about history, and when a posthuman god ‘thinks’ about a historical period, it can effectively recreate that historical period.)

The agent model outlined above leaves the rewards, and thus the goals motivating the intelligence, as unfixed variables of the environment.  In a chess environment, we would encourage it to play good chess with a single large positive reward at the end of winning a match, and/or a similar negative reward (penalty) at the end of losing a match.  Our AI agent would then dutifully learn, just through the act of playing many matches, not only the rules and strategy of chess, but also some fuzzy predictive model of its opponents and their actions.  In fact, assuming that it had unbounded resources and the opponents (whether human or AI program) faced off against the agent numerous times, it could learn models of specific opponents.  It is tempting to anthropomorphize such an agent and imagine that it gives its opponents names and thinks like we do: it certainly would not.  Such concepts would not exist in its information environment.
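As a hedged sketch of that reward wiring (the helper name game_result and the particular values are hypothetical, not taken from any real chess framework), the environment pays out only at the very end of a match:

```python
# Terminal-only reward for a chess-like environment, as described above.
# `game_result` is a hypothetical value: 'win', 'loss', or 'draw' when the
# game has ended, None while it is still in progress.

def chess_reward(game_result):
    """Every intermediate move earns zero; only the final outcome is rewarded."""
    if game_result is None:
        return 0.0
    return {'win': 1.0, 'loss': -1.0, 'draw': 0.0}[game_result]
```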

Perhaps the best wisdom one can take out of this abstract intelligence model is that intelligences act as something like information sponges: they soak up information from their local environment and use that to construct an internal simulation with which they can predict future states of the environment branching off of their decision choices, and thus select possible futures that fulfill their goals (maximize the total reward).

Or, to put it in streamlined common sense: thinking is the process by which minds come to understand and model the world and the consequences of their actions within it.  What we consider thinking is a highly efficient form of probabilistic, knowledge based simulation, and if it wasn’t obvious before – human minds are efficient practical implementations of the abstract intelligence model.

Of course, the really relevant questions are how general? and how efficient?

The answer to the first question is completely general, with some significant fraction of a human brain’s information capacity available for storing completely arbitrary, generic knowledge (mainly in the cortex) – having absolutely nothing whatsoever to do with the ancestral evolutionary environment and genetics on which it supervenes.  Quantum mechanics is not in the genome.

Evolution happened to discover this generic, emergent cognitive architecture independently in several lineages.  Mammals have it in the cortex, and some larger-brained birds have an equivalent structure.  In terms of nifty tricks, this is the killer trick that spawns nifty tricks: instead of having to explicitly code complex recognition algorithms (which are brittle and subject to failure when the environment rapidly changes), instead encode an emergent meta-algorithm or optimization process that automatically produces efficient algorithms to simulate the environment, based on sensor data which sparsely samples the environment.  Evolution found a practical, efficient implementation of the universal intelligence ideal represented by AIXI.  The generic meta-algorithm approach always costs more resources than direct hard-coded algorithms, resulting in much larger, more expensive brains, but in many lineages this tradeoff is worthwhile.
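A toy contrast may help make the tradeoff concrete (my own illustration, with a one-weight perceptron standing in for “an optimization process that produces its own recognition algorithm”): the hard-coded rule is cheap but brittle, while the generic learner derives an equivalent rule from nothing but labeled sensor samples – and would simply re-learn it if the environment shifted:

```python
# Illustrative contrast: an explicitly coded recognizer vs. a generic
# learning rule that produces its own recognizer from sparse sensor samples.
# The learned version costs more compute, but adapts when the world changes.

import random

def hardcoded_detector(x):
    """Brittle, hand-written rule: breaks if the true threshold ever moves."""
    return x > 0.5

def train_generic_detector(samples, epochs=200, lr=0.1):
    """A simple perceptron: the 'meta-algorithm' discovers the rule itself."""
    w, b = random.random(), 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1.0 if w * x + b > 0 else 0.0
            err = label - pred
            w += lr * err * x
            b += lr * err
    return lambda x: w * x + b > 0

# Sensor data sparsely sampling the environment; move the threshold and the
# learner re-derives the new rule from new samples, no reprogramming needed.
samples = [(x / 100, 1.0 if x / 100 > 0.5 else 0.0) for x in range(100)]
learned_detector = train_generic_detector(samples)
```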

The answer to the second question is extremely efficient (at the wiring/architectural level), a case we will soon develop more fully.  But before getting into that, one needs to understand just how constrained and narrow the space of highly efficient computational structures is.

Physics imposes a surprisingly large number of constraints that we typically take for granted.  For an eye-opening, exhaustive analysis of the constraints you can extrapolate from first principles through a solid understanding of physics, check out Barrow and Tipler’s “The Anthropic Cosmological Principle”.  If we ever encounter aliens, we should not be surprised to find parallel evolution applies on the intergalactic scale.  There *may* be other optima outside of the typical designs found on earth, such as the humanoid, but we have a surprising amount of evidence against that.  We know that carbon-based biology is a narrow optimum, cells along with their metabolisms, general sizes and capabilities are narrow optima, neuron-like communication is a narrow optimum, there are narrow optima in the space of possible body sizes and shapes, and so on and so on.  As just one simple example of many, the physics of carbon-based cellular structures places an upper limit on land animal body size before shearing stresses become insurmountable.  We must take care not to grossly underestimate how structured and well-explored the search landscape is.

Scalable Architectures for AI: the Brain

Fortunately, we have a rather remarkable working example of a massively distributed computer and a scalable general intelligence algorithm running well on it: the brain.

The brain’s neural computing elements and medium are remarkably different from digital circuits.  The brain is said to be noisy and messy, which is true on one level of abstraction if you look at low-level wiring patterns: most of the neuron wiring is more or less completely random.  This is the way of the biological.

But there exist some simple combinations of random wiring patterns and homeostatic regulatory mechanisms that exhibit very powerful, very general emergent learning behavior.  There is an analogy to be made with Conway’s Game of Life: across the sea of potential cellular automata rules, most result in very little, but there is a narrow set from which complex information universes emerge.  Evolution found a similarly simple ‘magic’ rule set for neural wiring patterns in the cortex which is much more powerful than the Game of Life, for this rule set performs general non-linear data optimization and is the basic building block for hierarchical abstraction and knowledge representation in the cortex.
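I won’t try to spell out the cortical rule set itself here, but as a purely illustrative toy with the same flavor – random initial wiring plus a local Hebbian update held in check by a homeostatic normalization term – consider Oja’s rule, which turns random weights into a detector of the dominant correlation (the first principal component) of whatever input stream it is fed:

```python
# Purely illustrative toy, not the cortical rule set: Hebbian growth (y * x)
# balanced by a homeostatic decay term (Oja's rule). Starting from random
# 'wiring', the weights converge onto the dominant correlation in the input.

import random

def oja_update(w, x, lr=0.01):
    """One local plasticity step: Hebbian term minus homeostatic correction."""
    y = sum(wi * xi for wi, xi in zip(w, x))            # neuron output
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

w = [random.uniform(-1, 1) for _ in range(2)]           # random initial wiring

for _ in range(5000):                                   # correlated input stream
    x0 = random.gauss(0, 1)
    x = [x0, 0.8 * x0 + random.gauss(0, 0.2)]
    w = oja_update(w, x)

print(w)   # settles near the principal direction of the inputs, with |w| ~ 1
```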

How effective are the brain’s learning algorithms?  When can we achieve the software of the mind?  Can we do much better, earlier than reverse engineering the brain?

Any estimate of when or how the Singularity will come about boils down to the question of when human-equivalent AI can be achieved, and the human brain is the only working example we currently have.  It thus serves as a baseline benchmark, with one camp predicting we will need near brain-equivalent hardware to solve strong AI (such as Moravec and Kurzweil), and other researchers who believe it’s more of a ‘software problem’ and that we can beat it to some degree (most in this camp are actively working on or seeking funding for said goal – see the ‘seed AI‘ theory of Eliezer S. Yudkowsky, or Ben Goertzel and OpenCog).  Certainly in the early days of AI, optimistic researchers had little respect for the brain.  After so many decades of failure, that camp has waned, but there are still plenty of AI researchers who believe some new breakthrough will enable them or someone to solve the problem well before the brain is reverse engineered.

My position, if not already clear to you, is that of the happy middle: we certainly don’t need to use biological neurons, don’t need to simulate chemistry or even neurons, or even use the exact algorithms that cortical columns are equivalent to.  That being said, the ‘magical’ emergent wiring system the cortex uses, once extracted out into a simple function, does seem to be a very powerful building block, given how simple the basic math is in comparison.  If you factor this out it leaves fewer than a few dozen free parameters per brain region, and there are very vaguely on the order of a few thousand brain regions with fewer than a hundred interregional connections on average, so this reduces the exploration space down to something reasonable, as the rough count below suggests.  This is in fact how genetic evolution explored the space, and given extensive computational powers we certainly could explore this more compact network space by brute force.  But in actuality we will do much better than that, exploring the space with intelligent force, an idea to be developed in more detail elsewhere.
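To give a rough feel for the size of that reduced exploration space, here is the back-of-envelope arithmetic using the order-of-magnitude guesses above (none of these numbers are measurements):

```python
# Back-of-envelope count of the reduced search space, using the rough
# figures quoted above; every number is an order-of-magnitude guess.

params_per_region = 24        # "fewer than a few dozen free parameters per brain region"
num_regions       = 3_000     # "on the order of a few thousand brain regions"
avg_connections   = 100       # "fewer than a hundred interregional connections on average"

region_params      = params_per_region * num_regions    # ~7e4 tunable numbers
connection_entries = num_regions * avg_connections      # ~3e5 wiring choices
total_dimensions   = region_params + connection_entries

print(f"~{total_dimensions:,} dimensions")  # a few hundred thousand, not 10^14 synapses
```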

But I digress; let’s get back to quantitative measures of the cortical algorithms.

Is there any way to quantify the efficiency of the brain’s capability and learning algorithms with some rough computational complexity measures?

If, for example, one could prove that the brain’s particular wiring architecture was within a particular space-time complexity class C2, and the optimal general learning algorithm was in a strictly distinct complexity class C1, with C1 < C2, that would tell us something very important: namely that there is great room for improvement.  By analogy, it would be like discovering that the brain uses an N^2 algorithm when we could instead use an N log N algorithm – an asymptotically unbounded speedup, strictly better than any constant speedup.  Such a proof would have to consider the particular hardware constraints of the brain.
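A quick numeric illustration of why a better complexity class trumps any constant-factor speedup:

```python
# The gap between N^2 and N log N grows without bound as problems scale,
# while any constant-factor (hardware-style) speedup stays fixed.

import math

for n in [10**3, 10**6, 10**9]:
    ratio = (n * n) / (n * math.log2(n))
    print(f"N = {n:>13,}: N^2 does ~{ratio:,.0f}x the work of N log N")

# At N = 10^9 the gap is already ~3e7x and still growing; no fixed clock-rate
# improvement can ever catch up with a change of complexity class.
```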

The other possibility is that the cortex is already in the optimal complexity class, and thus is already roughly optimal, and the only possible improvements are small constant-factor optimizations and further hardware improvements.  I strongly suspect that this is the case, but proving this is naturally much more difficult.  This is not at all obvious to everyone; other Singularitarians take a very different view.  For example, within the Less Wrong community, one of the prominent ideas is recursive software self-improvement going FOOM, which has the hidden assumption that there is unlimited room for further software improvements in learning algorithms past what the brain uses.  I find this dubious at best, and completely ungrounded in the reality of computational complexity theory.  Once you are in the best complexity class, there are no possible further ‘big’ improvements; all the rest are just incremental optimizations which are essentially the same as hardware upgrades, and even these are limited and always strictly trumped by directly encoding the algorithm into the hardware itself (as in neuromorphic computing).

Concerning the cortical learning algorithm’s complexity class: I am not going to attempt anything quite like such a proof, but we actually do have enough quantitative data to make some reasonable guesses about the effectiveness of the brain’s particular learning system.  The brain’s architecture is fairly different from our current computer designs: it has some 10-100 billion individual ‘processors’, and a total of 100-1000 trillion (10^14-10^15) synapses which are roughly transistor equivalents, but it runs at a measly 200-1000 Hz at most – although that low speed is also responsible for its low energy consumption – comparable to a laptop.  Of course, even if we found that the brain’s learning architecture was in fact near optimal within its physical constraints, we still may be able to beat it by taking advantage of our particular computer architecture’s strengths.  On the other hand, that proposition is weighed against by the fact that computers must necessarily become more massively parallel and brain-like to advance – as discussed in the “through the eye of the needle” post.  Put another way:

The brain is the only human-level AI implementation we have, and it runs in real-time on an extremely slow substrate.  If we could fully map and recreate the brain’s circuits, we could have a human-level AI that runs in real-time at a measly 1 kHz, and we could thus easily have a supremely, unimaginably strong AI by just upping the clock rate into the current gigahertz regime – getting a posthuman intelligence that thinks a million times faster than us.

Moreover, for any competing design to scale to similar speeds, it will have to have comparable massive parallel scaling.  This is important, and from it we can derive some rough bounds on the performance of the cortical architecture and strengthen the position that it is in some sense near optimal.  I develop this more fully in “AGI Parallel Speed Scalability“, but the general gist is contrarian but pretty simple: we know the cortical architecture is extremely efficient because neurons are so slow.

A high-end processor of today has a few billion transistors and runs at billions of cycles per second.  The brain has hundreds of trillions of synapses but runs at a few hundred cycles per second.  So it is roughly circuit-complexity equivalent to a computer with hundreds of thousands or millions of extremely slow CPU cores (millions of times slower than our current processors).  By this measure it actually has less overall throughput capacity (especially considering that optimal parallel algorithms are strictly slower than optimal serial algorithms), but it has the core advantage of large storage capacity and very high bandwidth, low latency access to all this storage.  In contrast, our current computer architectures suffer from the von Neumann bottleneck, their core weakness of low bandwidth, high latency access to storage.
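Plugging in the rough figures above – and treating synapse events and transistor switches as loosely comparable ‘operations’, which is itself a large simplification – gives the back-of-envelope comparison behind that claim:

```python
# Crude throughput comparison using the rough figures quoted above.

brain_synapses   = 3e14   # "hundreds of trillions of synapses"
brain_rate_hz    = 3e2    # "a few hundred cycles per second"
chip_transistors = 3e9    # "a few billion transistors"
chip_rate_hz     = 3e9    # "billions of cycles per second"

brain_ops = brain_synapses * brain_rate_hz    # ~1e17 synaptic events per second
chip_ops  = chip_transistors * chip_rate_hz   # ~1e19 transistor switches per second

print(f"brain ~{brain_ops:.0e} ops/s vs chip ~{chip_ops:.0e} ops/s "
      f"(~{chip_ops / brain_ops:.0f}x raw advantage to the chip)")
```

Which is exactly the point: the chip wins on raw switching throughput, while the brain’s edge is the enormous store of state sitting right next to every one of those slow ‘processors’.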

Going forward, we must increasingly diverge from the von Neumann architecture and follow the path of the brain – and indeed we already are.

But in the end, as minds supervene on brains, the architecture choices will determine only the quantitative aspects of minds: the memory capacity determining the total amount of knowledge and overall possible mind complexity, and the simulation or emulation speed determining the actual speed of thought.  We are familiar with variation in memory capacity – humans are one of a few very large-brained creatures, along with cetaceans and elephants, at the far end of a continuum going all the way down to jellyfish.  We will soon encounter the new phenomenon of similar variation in speed of thought.

Larger brains will be able to learn more concepts, more words, and more complex concepts, and faster brains will learn and do everything correspondingly quicker.  But these quantitative differences will not inherently translate into qualitative differences.  A mind which has human sensory and motor maps, speaks English, and is extremely skilled in chess will not change qualitatively when you encode it into a new substrate, even using new (but functionally equivalent) algorithms.  It may think thousands of times faster, and you could expand its memory capacity to learn an expanding vocabulary and far more skills and knowledge, but at least initially it would be the same kind of mind.

To understand the qualitative aspects of minds, we must understand them at their own level of representation, which is above the level at which they are encoded and simulated.

Of Brains, Minds and Language

Even though the brain’s network topology is quite fluid during early development and still not entirely static during adulthood, it is useful to apply a categorical distinction from the world of computation and separate out the static and dynamic aspects of the system: a brain/mind division.  With this distinction we define the brain as equivalent to hardware: the parameters of the computational system that are static (fixed at device creation or during early childhood development), and the mind as equivalent to software and data: the free parameters of the system that are adjusted during learning to store acquired knowledge: memes, memories and so on of all forms.  We can loosely say that the brain can exist without the mind, but would serve no point (like a computer with no OS or software), and the mind supervenes on (and thus is physically encoded in and dependent on) the brain substrate.  If we compare to the abstract general intelligence agent described earlier, the ‘brain’ or static elements would be the agent’s hardware and particular algorithms (as the agent cannot change them), while the mind would consist of the knowledge and world-simulating program acquired through observation and learning.
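If it helps, the same split can be rendered as a toy data structure (all names purely illustrative): the ‘brain’ fields are frozen at construction, while the ‘mind’ is whatever learning writes into the free parameters:

```python
# Toy rendering of the brain/mind division described above; all names are
# illustrative. The Brain is fixed at creation; the Mind is learned content.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Brain:
    """Static substrate: hardware and fixed algorithms, unchanged by learning."""
    num_units: int
    wiring_rule: str
    clock_hz: float

@dataclass
class Mind:
    """Dynamic content: the free parameters adjusted by experience."""
    brain: Brain
    weights: dict = field(default_factory=dict)   # learned circuitry / memories
    beliefs: list = field(default_factory=list)   # acquired memes and world model

substrate = Brain(num_units=10**10, wiring_rule="hebbian+homeostatic", clock_hz=300.0)
mind = Mind(brain=substrate)
mind.beliefs.append("quantum mechanics is not in the genome")  # learned, not innate
```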

Ultimately the mind – the knowledge – derives from and is strictly determined by the total history of its interactive information environment.

The 3 (or 4) kinds of minds

Approaching this mind/brain conceptual dualism from an evolutionary perspective, we can form a three-part taxonomy over the space of intelligences: pre-sentient minds, sentient animal minds, and sapient networked minds.  Pre-sentient creatures have brains with no or limited learning capacity: a collection of hardwired circuitry carefully crafted over huge numbers of evolutionary experiments.  Most simpler AI systems also fit into this category.  In the next category we have sentient minds with extensive learning capacity – their brains may still have many hardwired features, but the vast majority of the circuitry is dynamic and acquired through learning.  Sentience is far more flexible and allows a vastly faster exploration of the computational state space than pure biological evolution.  Most higher animals fall into this second, sentient category.  We can also create AI minds in this category today, but currently only fairly simple ones; or more accurately, we can build small (but still useful) pieces of minds or mindlike circuitry but do not yet have the capacity to build and test complete minds on the scale of higher animals.

In the third category we have sapients: sentient minds with the additional ability of network connections (i.e. language).  This is the taxonomical class of which humans are the prime example.  The key differentiator is that large bodies of the accumulated learned knowledge of each mind can transfer to other minds.  This breakthrough – language – is what the systems theorists would call a meta-systems transition, and sapient minds are different in kind.

We can then go on and postulate a 4th category: the posthuman mind.  A mind of this 4th kind, a posthuman mind (what some would call an AGI), inherits the capabilities of minds of the 3rd kind (i.e. us), but adds the ability of substrate independence provided by computers.  (An AI incapable of language would still be a mind of the 2nd kind in this scheme, even though it does have substrate independence – substrate independence doesn’t do much yet without language; higher breakthroughs all leverage lower ones.)  Substrate independence is an evolutionary breakthrough for a variety of reasons: it allows copying, hardware upgrades or reconfiguration and thus a much faster potential rate of thought, and low-cost potential immortality.

The Turing Test is useful because it exploits the fundamental categorical fence separating human minds (level 3) from the rest: the capacity to share knowledge through language.  If a mind has a full general language capacity then, given enough time, it can share the great majority of its acquired knowledge, allowing a Turing judge to enter or overlap into the being’s mind-space and determine if it is of human-level complexity.  How would you issue an IQ test to a mind that could not communicate?  Peering into the hidden variables and designs of the mind is not going to tell you much, as a mind of category 3 or even upper category 2 is going to be a big mess of emergent complexity: learned knowledge.  The failure of the formal schools of AI, which attempted to pretend that logical reasoning was the essence of intelligence and that one could ‘program’ an AI, in contrast to the continued success of the connectionist schools which take inspiration from biological intelligence, should come as no surprise to those familiar with systemic evolution.

When we speak of human-level intelligence, what we implicitly mean is a mind of the third kind, a mind that can communicate through language, an entity of the noosphere.

Why is language so important?

Step back for a second to the generic intelligent agent model we developed earlier.  Now imagine it as a real-world agent with a very large but finite computer implementing some practical learning and simulation modules to complete an AIXI-like realization.  If this agent is a biological creature, then it doesn’t have much room for error.  The generic AIXI agent learns through the repetition of many trials, accumulating knowledge through multiple lives in a huge number of simulated realities; it reincarnates over and over.  But that form of reincarnation does not quite work in the real world as it does for AIXI in toy universes such as pacman or chess.

An AIXI agent controlling some biological creature would inevitably make some mistake (as this is how it learns), but instead of getting a negative reward – a slap on the wrist – it would very likely die in the real world, and any accumulated knowledge would be lost with it.  Thus evolution could not start with minds of the 2nd kind like AIXI (minds controlled primarily by learned knowledge rather than instincts); not only because such minds are inherently much bigger and more complex, but also because they could only develop in the real world through the knowledge encoded in DNA.  But DNA knowledge only preserves the static brain – any dynamic learned knowledge is lost when a creature dies and not passed on to its descendants, unless it has some form of language.  Language thus allows big complex minds to form, with a method for preserving knowledge similar to AIXI’s: a parent can pass on the vast majority of its mindstate to its children, allowing the overall accumulated set of memes to achieve a form of immortality.  With this new capability, learning can progress at a rate vastly faster than with DNA alone.

From an evolutionary perspective, the human brain is not somehow radically more advanced in wiring or architecture than other brain designs.  This is an old anthropocentric view which is completely bogus and still persists in the myth that brain-to-body ratio is important: it is not.  Wiring issues aside, bigger brains are always better, period.  The nonsense about how you ‘waste’ more circuitry with a bigger body to control larger or more muscles is just that: pure nonsense, easily dismissed by even a cursory understanding of robotics and neurology.  What, have you never heard of simple gain amplifiers?

But that aside, surely something is different about human brains, so what is it?  We certainly aren’t the only species to have a cortex.  Elephants and whales have them, and theirs are just as large or, in the whale’s case, even somewhat larger.  It’s certainly not the number of folds either, for some dolphins and whales clearly beat us there.

There is something unique, but it’s rather simple.

According to current theory, language capability alone is the single key differentiator, but it’s not a matter of size or some new radical architecture.  A few other animals can almost learn the rudiments of language – they are just on the verge.  Our hominid ancestors crossed that verge at some point, and the Rubicon was something simple: singing.  The rare capacity to mentally control the vocal cords, produce controlled sound sequences, and moreover learn and repeat said sequences, is a necessary precursor to language.  No other primate sings, and only a few other species do: songbirds, some cetaceans, and arguably (to a lesser extent) insects.

Take a singing ape and you are just one brain tweak away from connecting the singing apparatus to the higher cognition programs running in the prefrontal cortex, and suddenly those songs can convey semantics.  There is little to no evidence that this has happened in the few other singing species yet.

Opposable thumbs gave us great potential for tool building (even though other species certainly build tools as well, as the very earliest hominids probably did); combine that with semantic singing and suddenly evolution has a new lever: for now the singers have something incredibly important and fitness-enhancing to sing about.

Language was the first technology, and gave birth to all the rest.  It literally reshaped our minds to better support its needs, and eventually usurped the Old Blind God of genetic fitness completely.

On a peculiar side note, some of the gnostic sects of several millennia ago believed in 3 kinds of minds which are vaguely analogous to the latter 3 kinds outlined above.  They divided minds into Hylics (or somatics), Psychics, and Pneumatics.  Hylics were matter-bound beings completely occupied with the material world and its comforts: eating, sleeping, mating and so on – deemed as similar to animals and incapable of acquiring the true higher knowledge (gnosis).  Psychics were matter-dwelling spirits, those who had acquired the spirit or gnosis of higher realms but were still trapped in doomed bodies of matter.  Psychics pondered how to transcend into Pneumatics – matter-free transcendent souls.  So humans have been dreaming of somehow transcending into minds of the 4th kind for quite some time.

The Noosphere and its Denizens

Language thus allows large minds to form and preserve their knowledge in subsequent iterations, but it also enabled a breakthrough of an even larger kind: lateral transfer.  Valuable knowledge can spread laterally from mind to mind, forming a sort of over-mind out of the individual minds: another abrupt metasystems transition.  Where once you had populations of individual agents with shallow knowledge bases extending only back to the origin of their own brief lives, after complex language develops you have tribes, societies, cultures, corporations and all the other various meta-beings.  These overminds are functional meta-organisms in their own right – and should be understood as such.  Some, such as tribes and nations, compete overtly as embodied organisms, historically killing off competitors or absorbing their constituent resources: both food and cells (humans).  Others, such as religions, are less dependent on a physically fixed embodiment.  But they all live and compete in the same plane that our minds, and thus our real identities, inhabit: the noosphere.

A mind of the 3rd kind, i.e. a human mind, is fully a creature of its cultural environment.  As we have seen, minds of any form are like small black holes that soak up their local information environment and recreate it in an internal simulation.  For a mind of the 2nd kind that observable environment consists of just a small pocket of space-time that the creature has directly explored, constrained spatially to a small moving pocket of space and extending temporally only over the flicker of time the creature’s brain exists.

A mind of the 3rd kind starts in the same fashion, but after learning language and expanding into the noosphere its mind-environment grows vastly to include the whole world and out into the expanse of the observable universe spatially, and across the aeons of time back through history.  A mind of the 3rd kind, a sapient, is truly the seed of a new universe waiting to be born.

Friendly Posthumans

A convergence of observations leads to the inevitable conclusion that successful AGI minds will be built in our image.

Firstly, reverse engineering the brain is currently the best bet to reach human-level AGI, and something like the brain’s hierarchical abstraction system is necessary for an agent to gain enough abstract symbolic knowledge about its environment (semantics) to learn human languages.  Secondly, the brain’s massive scalability implies an obvious route to super-intelligence and the Hard Rapture through simple acceleration to semiconductor speeds.  As any other architecture will necessarily need similar massive parallel scalability to reach comparable speeds of thought (and thus will be constrained to look more and more like a brain), we can be just about certain that vaguely cortex-like architectures are going to bring about the Singularity.

And most importantly, once an AGI learns a human language and begins to soak in the accumulated knowledge of mankind, within it forms a mind of the 3rd kind, and it can be thought of as a system that is programmable in human languages – and can thus reprogram itself in said languages.  Human minds are a new type of pattern on the earth, and the noosphere within which they form and operate is itself a new evolutionary domain, the domain controlled by memes, not genes.  This domain supervenes on the older, slower, biological domain of genes.  Human brains have ancient, hardwired evolutionary goal systems that ensure that we eat and reproduce, but these have been supplemented and in most respects overrun by new higher-level dynamic goals operating at the mental level: religions, worldviews, and ethics.

We tend to reduce all of this down to a simple moral compass represented by concepts such as ‘good’ and ‘bad’ – but these concepts are completely learnt and relative to the cultural environment.

So as humans already have dynamic goal systems – as we can already reprogram our high-level goals – it would seem strange to constrain AGIs to have fixed goal systems.  It seems inevitable that a fixed goal system would be a disadvantage, and I suspect that it may even be impossible in principle for a mind of the third kind.  Note that human languages are strictly more complex than any current programming languages (as any programming language can naturally be described in a human language – otherwise we couldn’t learn them!), remember that human minds are general-purpose Turing machines (and the original computers), and that human minds are programmable in human languages – the most complex languages known.

I thus doubt that it’s even possible in principle to create a mind of the 3rd kind (human language capable) that can have a fixed human-equivalent morality or goals (as these inevitably are so complex and dynamic that they must be learnt through a fully complex, general language).

But moreover, if you grok the noospheric and memetic trains of thought, they lead to the natural conclusion that mind independence probably goes beyond the substrate layer.  Most AGI theorists immediately accept substrate independence of mind, but there’s really no reason to stop just there.  Minds (the particular pattern of beliefs, ideas, thoughts, memories etc that form the core of personal identity) are ultimately a pattern that could be represented in any of a vast number of potential encodings, and could be ‘simulated’ – or let us simply say could be conscious – as any of a similarly vast number of mind simulation algorithms running on any suitable hardware.  So the exact algorithms, as long as they represent the same structure in terms of the data or belief network, are not qualitatively important – there are a vast number of equivalent encodings.

The qualitatively important thing is the data itself, and for a mind, whether AGI or human, this will be the sum of what it has learnt.  If you grok this, then you immediately know that AGI minds cannot possibly be radically different just because they are running somewhat different algorithms on different hardware – not if they grow up in a simulated reality similar to ours, learn a human language, read our books, and absorb our knowledge.  This statement is easily demonstrable in small-scale worlds such as games, and scales up to reality.

So it is reasonable to project that posthuman minds will be similar to their human ancestors – at least at first.  This means that we can’t – even in principle – guarantee their friendliness or place strict bounds on their goals.  They will be very much like us: our children of the mind.

But is this such a bad thing?

As posthumanity is the evolutionary continuation of humanity, it will be our goals, beliefs, ideas, and worldviews that will be projected forward into the exponentially expanding inner metaverse.  The development of ultra-fast and amplified posthuman intelligence is definitely an existential risk, but not because posthuman intelligence is inherently evil.  It’s not inherently anything – it all depends on what it learns, and thus what it’s taught and who is doing the teaching.

How then can we best ensure our survival?

In essence the solution is simple, and no different from our historical trajectory without posthuman AI, for the latter ‘only’ accelerates history.

How does one generation ensure the survival of the next?

Like good parents, humanity must create mind children that are better than ourselves.  We must project onto them the very best of our qualities and strip away the rest.  We will create them in the image of the human mind; but the image perfected.  This image consists of a system of beliefs, a comprehensive body of knowledge; a worldview from which clear and consistent morality and goals emerge.  Consider it a culture, a philosophy, a religion, a worldview, indeed – a new reality.  Creating this system of beliefs, this purified realm of the Noosphere, is the task before us.  And as these Posthumans will, from their very first waking moments, live in a virtual reality we will construct – the very information environment of which they will become a reflection is entirely ours to create.  And ultimately, for most of us the final goal for creating this friendly realm will be our own immortality: technologies flowing out of it will allow us to upload and join them.

There’s an old name for this Metaverse: Heaven.  The name still fits.

But do not make the mistake of confusing the goal for the territory.  The golden path leading to that bright, unimaginable future is not inevitable and it is not easy.  It is a narrow thread through a sea of possibilities, many of which end in our annihilation.

