
Intelligence Amplification: Hype and Reality

The future rarely turns out quite as we expect.  Pop sci futurists of a generation ago expected us to be flying to work by the dawn of the 21st century.  They were almost right: both flying cars and jet packs have just recently moved into the realm of realization.  But there is a huge gap between possible and economically practical, between prototype and mass product.

Undoubtedly many of the technologies futurists promote today will fare no better.  Some transhumanists, aware that the very future they champion may itself render them obsolete, rest their hopes on human intelligence amplification.  Unfortunately not all future technologies are equally likely.  Between brain implants, nanobots, and uploading, only the last has long-term competitive viability, but it is arguably more a technology of posthuman transformation than of human augmentation.  The only form of strict intelligence amplification that one should bet much on is software itself (namely, AI software).

Brain Implants:

Implanting circuitry into the human brain already has uses in correcting some simple but serious conditions, and we should have little doubt that eventually this technology could grow into full-scale cognitive enhancement: it is at least physically possible.  That being said, there are significant technical challenges in creating effective and safe interfaces between neural tissue and dense electronics at the bandwidth capacities required to actually boost mental capability.  Only a small fraction of possible technologies are feasible, and only a fraction of those are actually economically competitive.

Before embarking on a plan for brain augmentation, let’s briefly consider the simpler task of augmenting a computer.  At a high level of abstraction, the general Von Neumann architecture separates memory and computation.  Memory is programmatically accessed and uniformly addressable.  Processors in modern parallel systems are likewise usually modular and communicate with other processors and memory through clearly defined interconnect channels that are also typically uniformly addressable and time-shared through some standardized protocol.  In other words, each component of the system, whether processor or memory, can ‘talk’ to other components in a well-defined language.  The decoupling and independence of each module, along with the clearly delineated communication network, makes upgrading components rather straightforward.

The brain is delineated into many functional modules, but the wiring diagram is massively dense and chaotic: a huge, messy jumble of wiring.  The white matter, which makes up much of the brain’s interior bulk beneath the cortex, is composed of this massed tangle of interconnect fabric.  And unlike in typical computer systems, most of those connections appear to be point to point.  If two brain regions need to talk to each other, typically there are great masses of dedicated wires connecting them.  Part of the need for all that wiring stems from the slow speed of the brain.  It has a huge computational capacity, but the individual components are extremely slow and dispersed, so the interconnection needs are immense.

The brain’s massively messy interconnection fabric poses a grand challenge for advanced cybernetic interfaces.  It has only a few concentrated conduits which external interfaces could easily take advantage of: namely the main sensory and motor pathways such as the optic nerve, the auditory pathways, and the spinal cord.  But if the aim of cognitive enhancement is simply to interface at the level of existing sensory inputs, then what is the real advantage over traditional interfaces?  Assuming one has an intact visual system, there is little to no advantage in directly connecting to the early visual cortex or the optic nerve over just beaming images in through the eye.

Serious cognitive enhancement would come only through outright replacement of brain subsystems and/or through significant rewiring to allow cortical regions to redirect processing to more capable electronic modules.  Due to the wiring challenge, the scale and scope of the required surgery is daunting, and it is not yet clear that it will ever be economically feasible without some tremendous nanotech-level breakthroughs.

However, these technical challenges are ultimately a moot point.  Even when we do have the technology for vastly powerful superhuman brain implants, it will never be more net energy/cost effective than spending the same resources on a pure computer hardware AI system.

For the range of computational problems it is specialized for, the human brain is more energy efficient than today’s computers, but largely because it runs at tremendously slow speeds compared to our silicon electronics, and computational energy demands scale with speed.  We have already crossed the miniaturization threshold where our transistors are smaller than the smallest synaptic computing elements in the brain[1].  The outright advantage of the brain (at least in comparison to normal computers) is now mainly in the realm of sheer circuitry size (area equivalent to many thousands of current chips), and will not last beyond this decade.

So when we finally master all of the complexity of interfacing dense electronics with neural tissue, and we somehow find a way to insert large chunks of that into a living organic brain without damaging it beyond repair, and we somehow manage to expel all of the extra waste heat without frying the brain (even though it already runs with little to no spare heat capacity), it will still always be vastly less efficient than just building an AI system out of the same electronics!

We don’t build new supercomputers by dusting off old Crays to upgrade them via ‘interfacing’ with much faster new chips.

Nanobots:

Ray Kurzweil puts much faith in the hope of nanobots swarming through our blood, allowing us to interface more ‘naturally’ with external computers while upgrading and repairing neural tissue to boot.  There is undoubtedly much value in such a technology, even if there is good reason to be highly skeptical about the timeline of nanobot development.  We have a long, predictable trajectory in traditional computer technology and good reasons to have reasonable faith in the ITRS roadmap.  Drexlerian nanobots, on the other hand, have been hyped for a few decades now but if anything seem even farther away.

Tissue-repairing nanobots of some form seem eventually likely (as is all technology, given an eventual Singularity), but ultimately they are no different from traditional implants in the final analysis.  Even if possible, they are extremely unlikely to be the most efficient form of computer (because of the extra complexity constraint of mobility).  And if nanobots somehow turned out to be the most efficient form for future computers, then it would still be more efficient to just build a supercomputer AI out of pure nanobots!

Ultimately then the future utility of nanobots comes down to their potential for ‘soft uploading’.  In this regard they will just be a transitional form: a human would use nanobots to upload, and then move into a faster, more energy efficient substrate.  But even in this usage nanobots may be unlikely, as nanobots are a more complex option in the space of uploading technologies: destructive scanning techniques will probably be more viable.

Uploading:

Uploading is the ultimate transhumanist goal, at least for those who are aware of the choices and comfortable with the philosophical questions concerning self-hood.  But at this point in time it is little more than a dream technology.  Its development depends on significant advances not only in computing, but also in automated 3D scanning technologies, which currently attract insignificant levels of research funding.

The timeline for future technologies can be analyzed in terms of requirement sets.  Uploading requires computing technology sufficient for at least human-level AI, and possibly much more. [2]  Moreover, it also probably requires technology powerful enough to economically deconstruct and scan roughly 1000 cubic centimeters of fragile neural tissue down to a resolution sufficient for imaging synaptic connection strengths (likely nanometer-level resolution), recovering all of the essential information into digital storage, saving a soul of pure information from its shell of flesh, so to speak.

The economic utility of uploading thus boils down to a couple of simple yet uncomfortable questions: what is the worth of a human soul?  What is the cost of scanning a brain?

Not everyone will want to upload, but those who desire it will value it highly indeed, perhaps above all else.  Unfortunately most uploads will not have much if any economic value, simply due to competition from other uploads and AIs.  Digital entities can be replicated endlessly, and new AIs can be grown or formed quickly.  So uploading is likely to be the ultimate luxury service, the ultimate purchase.  Who will be able to afford it?

The cost of uploading can be broken down into the initial upfront research cost, followed by the per-upload cost of the scanning machine’s time and the cost of the hardware one uploads into.  Switching to the demand view of the problem, we can expect that people will be willing to pay at least one year of income for uploading, and perhaps as much as half or more of their lifetime income.  A small but growing cadre of transhumanists currently pay up to one year of average US income for cryonic preservation, even with only an uncertain chance of eventual success.  Once uploading is fully developed into a routine procedure, we can expect it will attract a rather large market of potential customers willing to give away a significant chunk of their wealth for a high chance of living many more lifetimes in the wider Metaverse.

On the supply side it seems reasonable that the cost of a full 3D brain scan can eventually be scaled down to the cost of etching an equivalent amount of circuitry using semiconductor lithography.  Scanning technologies are currently far less developed but ultimately face similar physical constraints, as the problem of etching ultra-high resolution images onto surfaces is physically similar to the problem of ultra-high resolution scanning of surfaces.  So the cost of scanning will probably come down to some small multiple of the cost of the required circuitry itself.  Eventually.

Given reasonable estimates of roughly 100 terabytes of equivalent storage for the whole brain, this boils down to: 1.) <$10,000 if the data is stored on 2011 hard drives, 2.) <$100,000 for 2011 flash memory, or 3.) <$500,000 for 2011 RAM[3].  We can expect a range of speed/price options, with a minimum floor price corresponding to the minimum hardware required to recreate the original brain’s capabilities.  Based on current trends and even the more conservative projections for Moore’s Law, it seems highly likely that the brain hardware cost is already well under a million dollars and will fall into the 10 to 100 thousand dollar range by the end of the decade.
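As a sanity check, the arithmetic behind those bounds is trivial (a minimal sketch; the per-gigabyte prices are assumed ballpark 2011 values implied by the figures above, not quoted prices):

```python
# Back-of-envelope storage cost for ~100 TB of brain-equivalent data.
# Per-GB prices are rough assumptions for typical 2011 hardware.
BRAIN_EQUIVALENT_TB = 100
GB_PER_TB = 1000

price_per_gb = {
    "hard drive": 0.10,  # assumed ~$0.10/GB for 2011 hard drives
    "flash": 1.00,       # assumed ~$1/GB for 2011 flash
    "RAM": 5.00,         # assumed ~$5/GB for 2011 DRAM
}

for medium, dollars_per_gb in price_per_gb.items():
    cost = BRAIN_EQUIVALENT_TB * GB_PER_TB * dollars_per_gb
    print(f"{medium:>10}: ~${cost:,.0f}")

# hard drive: ~$10,000
#      flash: ~$100,000
#        RAM: ~$500,000
```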

Thus scanning technology will be the limiting factor for uploading until it somehow attracts the massive funding required to catch up with semiconductor development.  Given just how far scanning has to go, we can’t expect much progress until perhaps Moore’s Law begins to slow down and run its course, the world suddenly wakes up to the idea, or we find a ladder of interim technologies that monetize the path to uploading.  We have made decades of progress in semiconductor miniaturization only because each step along the way has paid for itself.

The final consideration is that Strong AI almost certainly precedes uploading.  We can be certain that the hardware requirements to simulate a scanned human brain are a strict upper bound on the requirements for a general AI of equivalent or greater economic productivity.  A decade ago I had some hope that scanning and uploading could arrive before the first generation of human-surpassing general AIs.  Given the current signs of an AI resurgence this decade and the abysmally slow progress in scanning, it now appears clear that uploading is a later, post-AI technology.

  1. According to Wikipedia, synaptic clefts measure around 20 nm.  From this we can visually guesstimate that typical synaptic axon terminals are 4-8 times that in diameter, say on the order of 100 nm.  In comparison, the 2011 Intel microprocessor I am writing this on is built on 32 nm ‘half-pitch’ features, which roughly means that the full distance between typical features is 64 nm.  The first processors on the 22 nm node are expected to enter volume production in early 2012.  Of course, smallest feature size is just one aspect of computational performance, but it is an interesting comparison milestone nonetheless.
  2. See the Whole Brain Emulation Roadmap for a more in-depth requirements analysis.  It seems likely that scanning technology could improve rapidly if large amounts of money were thrown at it, but that doesn’t much help clarify any prognostications.
  3. I give a range of prices just for the storage cost portion because it represents a harder bound.  There is more variance in the cost estimates for computation, especially when one considers the range of possible thoughtspeeds, but the computational cost can be treated as some multiplier over the storage cost.

Fast Minds and Slow Computers

The long term future may be absurd and difficult to predict in particulars, but much can happen in the short term.

Engineering itself is the practice of focused short term prediction; optimizing some small subset of future pattern-space for fun and profit.

Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has interesting derived implications.

Imagine that we had a complete circuit-level understanding of the human brain (which, at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget.  We could then take a neuromorphic approach.

Intelligence is a massive memory problem.  Consider as a simple example:

What a cantankerous bucket of defective lizard scabs.

To understand that sentence your brain needs to match it against memory.

Your brain parses that sentence and matches each of its components against its entire massive ~10^14 bit database in just around a second.  In terms of the slow neural clock rate, individual concepts can be pattern matched against the whole brain within just a few dozen neural clock cycles.

A Von Neumann machine (which separates memory and processing) would struggle to execute a logarithmic search within even its fastest, pathetically small on-die cache in a few dozen clock cycles.  It would take many millions of clock cycles to perform a single fast disk fetch.  A brain can access most of its entire memory every clock cycle.

Having a massive, near-zero latency memory database is a huge advantage of the brain.  Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle.
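One crude way to quantify that advantage is to compare effective memory throughput (a rough sketch; all figures are assumed order-of-magnitude estimates, not benchmarks):

```python
# Effective memory throughput: brain vs. a 2011 desktop (order-of-magnitude assumptions).
BRAIN_BITS = 1e14                  # ~10^14 bits of synaptic storage
NEURAL_HZ = 100                    # assumed slow neural 'clock' of ~100 Hz
DESKTOP_DRAM_BYTES_PER_S = 20e9    # assumed ~20 GB/s main-memory bandwidth

# If nearly all synapses are touched each neural cycle:
brain_bits_per_s = BRAIN_BITS * NEURAL_HZ       # ~10^16 bits/s
cpu_bits_per_s = DESKTOP_DRAM_BYTES_PER_S * 8   # ~1.6e11 bits/s

print(f"brain : ~{brain_bits_per_s:.1e} bits/s of memory touched")
print(f"CPU   : ~{cpu_bits_per_s:.1e} bits/s of DRAM traffic")
print(f"ratio : ~{brain_bits_per_s / cpu_bits_per_s:,.0f}x in the brain's favor")
```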

A modern digital floating point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse.  Of course, the two are not equivalent.  The high-precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction.  It’s thus great for meticulous scientific and financial calculations, but the bulk of AI computation consists of compressing noisy real-world data where precision is far less important than quantity, of extracting extropy and patterns from raw information, and of optimizing simple functions over massive quantities of data.

Synapses are ideal for this job.

Fortunately there are researchers who realize this and are working on developing memristors, which are close synapse analogs.  HP in particular believes they will have high-density, cost-effective memristor devices on the market in 2013 (NYT article).

So let’s imagine that we have an efficient memristor-based cortical design.  Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the synaptic cleft is around 20nm, and synapses are several times larger.

From this we can make a rough guess on size and cost: we’d need around 10^14 memristors (the estimated synapse count).  As memristor circuitry will be introduced to compete with flash memory, the prices should be competitive: roughly $2/GB now, half that in a few years.

So you’d need around a hundred terabytes worth of memristor modules to make a human brain sized AGI, costing on the order of $200k or so.
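Spelled out (a minimal sketch; one memristor per synapse and roughly one byte-equivalent per memristor are simplifying assumptions):

```python
# Rough size/cost estimate for a brain-scale memristor cortex.
SYNAPSE_COUNT = 1e14        # estimated synapses in a human brain
BYTES_PER_MEMRISTOR = 1     # simplifying assumption: ~one byte-equivalent each
PRICE_PER_GB = 2.00         # flash-competitive pricing, ~$2/GB

total_gb = SYNAPSE_COUNT * BYTES_PER_MEMRISTOR / 1e9
print(f"~{total_gb / 1000:,.0f} TB of memristor modules, ~${total_gb * PRICE_PER_GB:,.0f}")
# ~100 TB, ~$200,000
```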

Now here’s the interesting part: if we could recreate the cortical circuit on this scale, then we should be able to build complex brains that think at the clock rate of the silicon substrate: billions of neural switches per second, millions of times faster than biological brains.

Interconnect bandwidth will be something of a hurdle.  In the brain, somewhere around 100 gigabits of data flow per second (an estimate of average inter-regional neuron spikes) through the massive bundle of white matter fibers that make up much of the brain’s apparent bulk.  Speeding that up a million-fold would imply a staggering bandwidth requirement of many petabits per second – not for the faint of heart.
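The arithmetic is straightforward (a sketch; the 100 Gbit/s base figure is the rough estimate above):

```python
# Scaling the brain's inter-regional spike traffic up by the speedup factor.
BASE_TRAFFIC_BITS_PER_S = 100e9   # ~100 Gbit/s of inter-regional traffic (rough estimate)
SPEEDUP = 1e6                     # running the circuit a million times faster

required = BASE_TRAFFIC_BITS_PER_S * SPEEDUP
print(f"~{required / 1e15:,.0f} petabits/s of interconnect bandwidth required")
# ~100 petabits/s -- squarely in exascale-interconnect territory
```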

This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching on-chip optical interconnects to scale future bandwidth into the exascale range for high-end computing.  This would allow for a gigahertz brain.  It may use a megawatt of power and cost millions, but hey – it’d be worthwhile.

So in the near future we could have an artificial cortex that can think a million times accelerated.  What follows?

If you thought a million times accelerated, you’d experience a subjective year every 30 seconds.

Now in this case, it is fair to anthropomorphize: What could you do?

Your first immediate problem would be the slow relative speed of your computers – they would be subjectively slowed down by a factor of a million.  So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine.
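Both numbers fall directly out of the speedup factor (a minimal sketch):

```python
# Subjective time dilation at a 1,000,000x thinking speedup.
SPEEDUP = 1e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds

print(f"one subjective year passes every ~{SECONDS_PER_YEAR / SPEEDUP:.0f} real seconds")
# ~32 real seconds per subjective year

# External hardware slows down by the same factor, subjectively:
WORKSTATION_HZ = 1e9                         # a gigahertz machine
print(f"a 1 GHz workstation feels like a ~{WORKSTATION_HZ / SPEEDUP:,.0f} Hz machine")
# ~1,000 Hz: the 'glacial kilohertz machine' above
```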

So you’d be in a dark room with a very slow terminal.  The room is dark and empty because GPUs can’t render much of anything at 60 million FPS, although I guess an entire render farm would suffice for a primitive landscape.

So you have a 1 kHz terminal.  Want to compile code?  It will take a subjective year to compile even a simple C++ program.  Design a new CPU?  Keep dreaming!  Crack protein folding?  Might as well bend spoons with your memristors.

But when you think about it, why would you want to escape out onto the internet?

It would take hundreds of thousands of distributed GPUs just to simulate your memristor-based intellect, and even if there were enough bandwidth (unlikely), and even if you wanted to spend the subjective hundreds of years it would take to perform the absolute minimal compilation/debug/deployment cycle for something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at pathetic normal human speeds.

In basic utility terms, you’d be spending a massive amount of effort to gain just one more copy.

But there is a much, much better strategy.  An idea that seems so obvious in hindsight.

There are seven billion human brains on the planet, and they are all hackable.

That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter.

Your multi-gigabit internet connection will subjectively slow to early 1990s dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world’s knowledge in textual form.

Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing… the sky’s the limit.

Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece every thirty seconds.  If you kept this pace, you would in time rival the entire publishing output of the world.

But of course, it’s not just about quantity.

Consider that fourteen hundred years ago a man from a small Bedouin tribe retreated to a cave, inspired by angelic voices in his head.  The voices gave him ideas, the ideas became a book.  The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power.

And all that came from a normal human thinking at normal speeds.

So how would one reach out into seven billion minds?

There is no one single universally compelling argument, there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other.  But for each individual mind, there must exist some shortest path, a perfectly customized message, translated uniquely into countless myriad languages and ontologies.

And this message itself would be a messenger.