Intelligence Amplification: Hype and Reality

The future rarely turns out quite as we expect.  Pop-sci futurists of a generation ago expected us to be flying to work by the dawn of the 21st century.  They were almost right: both flying cars and jet packs have recently moved into the realm of realization.  But there is a huge gap between possible and economically practical, between prototype and mass product.

Undoubtedly many of the technologies futurists promote today will fare no better.  Some transhumanists, aware that the very future they champion may itself render them obsolete, rest their hopes on human intelligence amplification.  Unfortunately, not all future technologies are equally likely.  Of brain implants, nanobots, and uploading, only the last has long-term competitive viability, and it is arguably more a technology of posthuman transformation than of human augmentation.  The only form of strict intelligence amplification that one should bet much on is software itself (namely, AI software).

Brain Implants:

Implanting circuitry into the human brain already has uses today in correcting some simple but serious conditions, and we should have little doubt that this technology could eventually grow into full-scale cognitive enhancement: it is at least physically possible.  That being said, there are significant technical challenges in creating effective and safe interfaces between neural tissue and dense electronics at the bandwidth capacities required to actually boost mental capability.  Only a small fraction of possible technologies are feasible, and only a fraction of those are actually economically competitive.

Before embarking on a plan for brain augmentation, let’s briefly consider the simpler task of augmenting a computer.  At a high level of abstraction, the general von Neumann architecture separates memory and computation.  Memory is programmatically accessed and uniformly addressable.  Processors in modern parallel systems are likewise usually modular and communicate with other processors and memory through clearly defined interconnect channels that are also typically uniformly addressable and time-shared through some standardized protocol.  In other words, each component of the system, whether processor or memory, can ‘talk’ to other components in a well-defined language.  The decoupling and independence of each module, along with the clearly delineated communication network, makes upgrading components rather straightforward.

The brain is delineated into many functional modules, but the wiring diagram is massively dense and chaotic.  It’s a huge messy jumble of wiring.  Much of the brain’s interior, the white matter, is composed of this huge massed tangle of interconnect fabric.  And unlike in typical computer systems, most of those connections appear to be point to point.  If two brain regions need to talk to each other, there are typically great masses of dedicated wires connecting them.  Part of the need for all that wiring stems from the slow speed of the brain: it has a huge computational capacity, but the individual components are extremely slow and dispersed, so the interconnection needs are immense.

The brain’s massively messy interconnection fabric poses a grand challenge for advanced cybernetic interfaces.  It has only a few concentrated conduits which external interfaces could easily take advantage of: namely the main sensory and motor pathways such as the optic nerve, the auditory pathways, and the spinal cord.  But if the aim of cognitive enhancement is simply to interface at the level of existing sensory inputs, then what is the real advantage over traditional interfaces?  Assuming one has an intact visual system, there really is little to no advantage in directly connecting to the early visual cortex or the optic nerve over just beaming images in through the eye.

Serious cognitive enhancement would come only through outright replacement of brain subsystems and/or through significant rewiring to allow cortical regions to redirect processing to more capable electronic modules.  Due to the wiring challenge, the scale and scope of the required surgery is daunting, and it is not yet clear that it will ever be economically feasible without some tremendous nanotech-level breakthroughs.

However, these technical challenges are ultimately a moot point.  Even when we do have the technology for vastly powerful superhuman brain implants, it will never be more energy- or cost-effective than spending the same resources on a pure computer-hardware AI system.

For the range of computational problems it is specialized for, the human brain is more energy efficient than today’s computers, but largely because it runs at tremendously slow speeds compared to our silicon electronics, and computational energy demands scale with speed.  We have already crossed the miniaturization threshold where our transistors are smaller than the smallest synaptic computing elements in the brain[1].  The outright advantage of the brain (at least in comparison to normal computers) is now mainly in the realm of sheer circuitry size (area equivalent to many thousands of current chips), and will not last beyond this decade.

So when we finally master all of the complexity of interfacing dense electronics with neural tissue, and we somehow find a way to insert large chunks of that into a living organic brain without damaging it beyond repair, and we somehow manage to expel all of the extra waste heat without frying the brain (even though it already runs with little to no spare heat capacity), it will still always be vastly less efficient than just building an AI system out of the same electronics!

We don’t build new supercomputers by dusting off old Crays to upgrade them via ‘interfacing’ with much faster new chips.


Ray Kurzweil puts much faith in the hope of nanobots swarming through our blood, allowing us to interface more ‘naturally’ with external computers while upgrading and repairing neural tissue to boot.  There is undoubtedly much value in such technology, even if there is good reason to be highly skeptical about the timeline of nanobot development.  We have a long, predictable trajectory in traditional computer technology and good reason to have faith in the ITRS roadmap.  Drexlerian nanobots, on the other hand, have been hyped for a few decades now but if anything seem even farther away.

Tissue-repairing nanobots of some form seem eventually likely (as is all technology given an eventual Singularity), but ultimately they are no different from traditional implants in the final analysis.  Even if possible, they are extremely unlikely to be the most efficient form of computer (because of the extra complexity constraint of mobility).  And if nanobots somehow turned out to be the most efficient form for future computers, then it would still be more efficient to just build a supercomputer AI out of pure nanobots!

Ultimately then the future utility of nanobots comes down to their potential for ‘soft uploading’.  In this regard they will just be a transitional form: a human would use nanobots to upload, and then move into a faster, more energy efficient substrate.  But even in this usage nanobots may be unlikely, as nanobots are a more complex option in the space of uploading technologies: destructive scanning techniques will probably be more viable.


Uploading is the ultimate transhumanist goal, at least for those who are aware of the choices and comfortable with the philosophical questions concerning self-hood.  But at this point in time it is little more than a dream technology.  Its development depends on significant advances not only in computing, but also in automated 3D scanning technologies, which currently attract insignificant levels of research funding.

The timeline for future technologies can be analyzed in terms of requirement sets.  Uploading requires computing technology sufficient for at least human-level AI, and possibly much more.[2]  Moreover, it also probably requires technology powerful enough to economically deconstruct and scan around 1,000 cubic centimeters of fragile neural tissue down to resolution sufficient for imaging synaptic connection strengths (likely nanometer-level resolution), recovering all of the essential information into digital storage, saving a soul of pure information from its shell of flesh, so to speak.
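To get a sense of the scale of that scanning requirement, here is a back-of-envelope sketch of the raw data volume, assuming (purely for illustration; not a figure from the text) a 10 nm voxel size:

```python
# Back-of-envelope: raw voxel count for scanning ~1,000 cm^3 of neural tissue.
# The 10 nm voxel size is an illustrative assumption, not an established spec.
brain_volume_m3 = 1000 * 1e-6      # 1,000 cm^3 expressed in cubic meters
voxel_side_m = 10e-9               # assumed scanning resolution: 10 nm
voxels = brain_volume_m3 / voxel_side_m ** 3
print(f"{voxels:.0e}")             # on the order of 1e21 raw voxels
```

Even under this assumption the raw scan is on the order of 10^21 voxels, vastly more than the essential information content of the brain, so the bulk of the scanner’s job is extracting the connectivity and synaptic strengths from that torrent of raw data.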

The economic utility of uploading thus boils down to a couple of simple yet uncomfortable questions: what is the worth of a human soul?  What is the cost of scanning a brain?

Not everyone will want to upload, but those who desire it will value it highly indeed, perhaps above all else.  Unfortunately most uploads will not have much if any economic value, simply due to competition from other uploads and AIs.  Digital entities can be replicated endlessly, and new AIs can be grown or formed quickly.  So uploading is likely to be the ultimate luxury service, the ultimate purchase.  Who will be able to afford it?

The cost of uploading can be broken down into the initial upfront research cost, followed by the per-upload cost of the scanning machine’s time and the cost of the hardware one uploads into.  Switching to the demand view of the problem, we can expect that people will be willing to pay at least one year of income for uploading, and perhaps as much as half or more of their lifetime income.  A small but growing cadre of transhumanists currently pay up to one year of average US income for cryonic preservation, even with an uncertain chance of eventual success.  Once uploading is fully developed into a routine procedure, we can expect it will attract a rather large market of potential customers willing to give away a significant chunk of their wealth for a high chance of living many more lifetimes in the wider Metaverse.

On the supply side it seems reasonable that the cost of a full 3D brain scan can eventually be scaled down to the cost of etching an equivalent amount of circuitry using semiconductor lithography.  Scanning technologies are currently far less developed but ultimately face similar physical constraints, as the problem of etching ultra-high-resolution images onto surfaces is physically similar to the problem of ultra-high-resolution scanning of surfaces.  So the cost of scanning will probably come down to some small multiple of the cost of the required circuitry itself.  Eventually.

Given reasonable estimates of about 100 terabytes of equivalent storage for the whole brain, this boils down to: 1) under $10,000 if the data is stored on 2011 hard drives, 2) under $100,000 for 2011 flash memory, or 3) under $500,000 for 2011 RAM.[3]  We can expect a range of speed/price options, with a minimum floor price corresponding to the minimum hardware required to recreate the original brain’s capabilities.  Based on current trends and even the more conservative projections for Moore’s Law, it seems highly likely that the brain hardware cost is already well under a million dollars and will fall into the 10 to 100 thousand dollar range by the end of the decade.
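For a sense of what those three estimates imply per unit of storage, here is a quick sketch (the per-gigabyte rates are simply the quotients of the figures above):

```python
# Implied 2011 per-gigabyte prices behind the three storage estimates.
capacity_gb = 100_000  # ~100 terabytes for the whole brain
prices = {"2011 hard drive": 10_000,
          "2011 flash":      100_000,
          "2011 RAM":        500_000}
per_gb = {medium: cost / capacity_gb for medium, cost in prices.items()}
for medium, rate in per_gb.items():
    print(f"{medium}: ${rate:.2f}/GB")  # $0.10, $1.00, $5.00 per GB
```

The spread between media is what produces the range of speed/price options: slower, cheaper storage sets the floor price, while faster memory buys faster thoughtspeeds at a roughly proportional cost multiple.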

Thus scanning technology will be the limiting factor for uploading until it somehow attracts the massive funding required to catch up with semiconductor development.  Given just how far scanning has to go, we can’t expect much progress until perhaps Moore’s Law begins to slow down and run its course, the world suddenly wakes up to the idea, or we find a ladder of interim technologies that monetize the path to uploading.  We have made decades of progress in semiconductor miniaturization only because each step along the way has paid for itself.

The final consideration is that Strong AI almost certainly precedes uploading.  We can be certain that the hardware requirements to simulate a scanned human brain are a strict upper bound on the requirements for a general AI of equivalent or greater economic productivity.  A decade ago I had some hope that scanning and uploading could arrive before the first generation of human-surpassing general AIs.  Given the current signs of an AI resurgence this decade and the abysmally slow progress in scanning, it now appears clear that uploading is a later, post-AI technology.

  1. According to Wikipedia, synaptic clefts measure around 20 nm.  From this we can visually guesstimate that typical synaptic axon terminals are 4-8 times that in diameter, say over 100 nm.  In comparison, the 2011 Intel microprocessor I am writing this on is built on 32-nm ‘half-pitch’ features, which roughly means that the full distance between typical features is 64 nm.  The first processors on the 22-nm node are expected to enter volume production in early 2012.  Of course smallest feature diameter is just one aspect of computational performance, but it is an interesting comparison milestone nonetheless.
  2. See the Whole Brain Emulation Roadmap for a more in depth requirements analysis.  It seems likely that scanning technology could improve rapidly if large amounts of money were thrown at it, but that doesn’t much help clarify any prognostications.
  3. I give a range of prices just for the storage cost portion because it represents a harder bound.  There is more variance in the cost estimates for computation, especially when one considers the range of possible thoughtspeeds, but the computational cost can be treated as some multiplier over the storage cost.
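The size comparison in footnote 1 can be stated as a quick arithmetic sketch (the terminal diameter is the footnote’s guesstimate, not a measurement):

```python
# Feature-size comparison from footnote 1.  The synaptic terminal diameter
# is a rough guesstimate (~5x the 20 nm cleft width), not a measured figure.
synaptic_cleft_nm = 20
synaptic_terminal_nm = 5 * synaptic_cleft_nm   # guesstimated ~100 nm
half_pitch_nm = 32                             # Intel's 2011 process node
full_pitch_nm = 2 * half_pitch_nm              # ~64 nm between typical features
print(full_pitch_nm, synaptic_terminal_nm)     # transistor pitch already smaller
```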
