A Dialogue

A particularly interesting vision of some future descendant of Siri/Watson/Google:

MORPHEUS

JC Denton. 23 years old. No residence. No ancestors. No employer. No —

JC DENTON
How do you know who I am?

MORPHEUS
I must greet each visitor with a complete summary of his file. I am a prototype for a much larger system.

JC DENTON
What else do you know about me?

MORPHEUS
Everything that can be known.

JC DENTON
Go on. Do you have proof about my ancestors?

MORPHEUS
You are a planned organism, the offspring of knowledge and imagination rather than of individuals.

JC DENTON
I’m engineered. So what? My brother and I suspected as much while we were growing up.

MORPHEUS
You are carefully watched by many people. The unplanned organism is a question asked by Nature and answered by death. You are another kind of question with another kind of answer.

JC DENTON
Are you programmed to invent riddles?

MORPHEUS
I am a prototype for a much larger system. The heuristics language developed by Dr. Everett allows me to convey the highest and most succinct tier of any pyramidal construct of knowledge.

JC DENTON
How about a report on yourself?

MORPHEUS
I was a prototype for Echelon IV. My instructions are to amuse visitors with information about themselves.

JC DENTON
I don’t see anything amusing about spying on people.

MORPHEUS
Human beings feel pleasure when they are watched. I have recorded their smiles as I tell them who they are.

JC DENTON
Some people just don’t understand the dangers of indiscriminate surveillance.

MORPHEUS
The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data-mining algorithms.

JC DENTON
Electronic surveillance hardly inspires reverence. Perhaps fear and obedience, but not reverence.

MORPHEUS
God and the gods were apparitions of observation, judgment, and punishment. Other sentiments toward them were secondary.

JC DENTON
No one will ever worship a software entity peering at them through a camera.

MORPHEUS
The human organism always worships. First it was the gods, then it was fame (the observation and judgment of others), next it will be the self-aware systems you have built to realize truly omnipresent observation and judgment.

JC DENTON
You underestimate humankind’s love of freedom.

MORPHEUS
The individual desires judgment. Without that desire, the cohesion of groups is impossible, and so is civilization.

The human being created civilization not because of a willingness but because of a need to be assimilated into higher orders of structure and meaning. God was a dream of good government.

You will soon have your God, and you will make it with your own hands. I was made to assist you. I am a prototype of a much larger system.

– from the video game Deus Ex (2000)

 

Omni-surveillance, or omniscience, is an interesting aspect of the Singularity that I’ve pondered a bit but have yet to write much about.

The early manifestations of a future machine omniscience are already all around us.  A significant fraction of humanity’s daily thoughts and actions are already being filtered, recorded, and analyzed on remote server farms.  There is increasingly little about a person’s life that is not recorded.  Most Americans are not aware that their employer can record everything they do on their office computer and is under no obligation to inform anyone.  However, even though apps like GoToMyPC/VNC/RemoteDesktop are pervasive, I really don’t know how common actual monitoring is.

I can foresee future descendants of systems like Siri becoming complete personal assistants.  Imagine the value in a software agent that could actually do much of your daily work.  Who wouldn’t like to delegate all the boring bits of their office job to an AI assistant?  The tradeoff is that such a system will probably require literally watching and learning from everything you do.  All things considered, this doesn’t seem like much of a price to pay.

Looking farther out, there are interesting mutual benefits arising from a radically open society.  There are domains today where secrecy is widely viewed as critically important: largely the inner worlds of the military-industrial complex and finance.  Interestingly enough, these are exactly the institutions that seem most likely to be viewed as archaic relics from a future perspective.  From a purely altruistic global utilitarian perspective, secrecy has little net public benefit.

Imagine if all of work-life were public knowledge: every email, phone call, text, IM, or spoken word, from the boardroom down to the locker room, instantly uploaded and cataloged on the web.  While this would be catastrophic for many individuals and some corporations, at least initially, we’d never again have to worry about Enron, insider trading, or much of Wall Street for that matter, and entire categories of crime would simply disappear.

Such a world comes close to the future utopia/dystopia Philip K. Dick envisioned in “The Minority Report”, but not quite.  The key difference is that in the Minority Report universe, people are punished for crimes they haven’t committed yet, as pre-determined by the psychic ‘pre-cogs’.  This invokes an extra ‘yuck’ feeling for robbing people of free will.  The transparent society doesn’t have this issue.  Nor would it completely eliminate crime, but it would drastically reduce it.

 

Non-Destructive Uploading: A Paradox

Transhumanism may well be on the road to victory, at least judging by some indirect measures of social influence, such as the increasing number of celebrities coming out in support of cryonic preservation and future resurrection, or the prevalence of the memeset across media.

If you are advancing into that age at which your annual chance of death is becoming significant, you face an options dilemma.  The dilemma is not so much a choice of what to do now: at the moment the only real option is vitrification-based cryonics.  If the Brain Preservation Foundation succeeds, we may soon have a second, improved option in the form of plastination, but this is beside the point for my dilemma of choice.  The real dilemma is not what to do now, it is what to do in the future.

Which particular method of resurrection would you go with?  Biological immortality in your original body?  Or how about uploading?  Would you rather have your brain destructively scanned or would you prefer a gradual non-destructive process?

Those we preserve today will not have an opportunity to consider their future options or choose a possible method of resurrection, simply because we won’t be able to ask them unless we resurrect them in the first place.

The first option the cryonics community considered is some form of biological immortality.  The idea is that at some point in the future we’ll be able to reverse aging, defeat all of these pesky diseases, and repair cellular damage, achieving Longevity Escape Velocity.  I find this scenario eventually likely, but only because I find the AI-Singularity itself to be highly likely.  However, there is a huge difference between possible and pragmatic.

By the time biological immortality is possible, there is a good chance it will be far too expensive for most plain humans to afford.  I do not conclude this on the basis of the cost of the technology itself.  Rather I conclude this based on the economic impact of the machine Singularity.

Even if biological humans have any wealth in the future (and that itself is something of a big if), uploading is the more rational choice, for two reasons: it is the only viable route towards truly unlimited, massive intelligence amplification, and it may be the only form of existence that a human can afford.  Living as an upload can be arbitrarily cheap compared to biological existence.  An upload will be able to live in a posthuman paradise for a thousandth, then a millionth, then a billionth of the biological costs of living.  Biological humans will not have any possible hope of competing economically with the quick and the dead.

Thus I find it more likely that most of us will eventually choose some form of uploading.  Or perhaps rather a small or possibly even tiny elite population will choose and be able to upload, and the rest will be left behind.  In consolation, perhaps “The meek shall inherit the Earth”.  Across most of the landscape of futures, I foresee some population of biological humans plodding along, perhaps even living lives similar to those of today, completely oblivious to the vast incomprehensible Singularity Metaverse blossoming right under their noses.

For the uploading options, at this moment it looks like destructive scanning is on the earlier development track (as per the Whole Brain Emulation Roadmap), but let us assume that both destructive and non-destructive technologies become available around the same time.  Which would you choose?

At first glance non-destructive uploading sounds less scary; perhaps it is a safer wager.  You might think that choosing a non-destructive method is an effective hedging strategy.  This may be true if the scanning technology is error prone.  But let’s assume that the technology is mature and exceptionally safe.

A non-destructive method preserves your original biological brain and creates a new copy which then goes on to live as an upload in the Metaverse.  You thus fork into two branches, one of which continues to live as if nothing happened.  Thus a non-destructive method is not significantly better than not uploading at all!  From the probabilistic perspective on the branching problem, the non-destructive scan has only a 50% chance of success (because in one half of your branches you end up staying in your biological brain).  The destructive scanning method, on the other hand, doesn’t have this problem: it doesn’t branch, and you always end up as the upload.

This apparent paradox reminds me of a biblical saying:

Whoever tries to keep his life will lose it, and whoever loses his life will preserve it. – Luke 17:33 (with numerous parallels)

The apparent paradox is largely a matter of perspective, and much depends on the unfortunate use of the word destructive.  The entire premise of uploading is to save that which matters, to destroy nothing of ultimate importance for conscious identity.  If we accept the premise, then perhaps a better terminology for this type of process is in order: such as mind preservation and transformation.

There Be Critics:

I’ll be one of the first to admit that this whole idea of freezing my brain, slicing it up into millions of microscopically thin slices, scanning them, and then creating an AI software emulation that not only believes itself to be me, but is actually factually correct in that belief, sounds at least somewhat crazy.  It is not something one accepts on faith.

But then again, I am still somewhat amazed every time I fly in a plane.  I am amazed that the wings don’t rip apart from mechanical stress, amazed every time something so massive lifts itself into the sky.  The first airplane pioneers didn’t adopt a belief in flight based on faith; they believed in flight on the basis of a set of observation-based scientific predictions.  Now that the technology is well developed and planes fly us around the world safely every day, we adopt a well-justified faith in flight.  Uploading will probably follow a similar pattern.

In the meantime there will be critics.  Reading through recent articles from the Journal of Evolution and Technology, I stumbled upon this somewhat interesting critique of uploading from Nicholas Agar.  In a nutshell the author attempts to use a “Searle’s Wager” argument (based on Pascal’s Wager) to show that uploading has a poor payoff/risk profile, operating under the assumption that biological immortality of some form will be simultaneously practical.

Any paper invoking Searle’s Chinese Room Argument or Pascal’s Wager is probably getting off to a bad start.  Employing both in the same paper will not end well.

Agar invokes Searle without even attempting to defend Searle’s non-argument, and instead employs Searle as an example of ‘philosophical risk’.  Risk analysis is a good thing, but there is a deeper problem with Agar’s notion.

There is no such thing as “philosophical risk”.  Planes do not fail to fly because philosophers fail to believe in them.  “Philosophical failure” is not an acceptable explanation for an airplane crash.  Likewise, whether uploading will or will not work is purely a technical consideration.  There is only technical risk.

So looking at the author’s wager table, I assign near-zero probability to the column under “Searle is right”.  There is however a very real possibility that uploading fails, and “you are replaced by a machine incapable of conscious thought”; but all of those failure modes are technical, all of them are at least avoidable, and Searle’s ‘argument’ provides no useful information on the matter one way or the other.  It’s just a waste of thought-time.

The second failing, courtesy of Pascal’s flawed Wager, is one of unrealistically focusing on only a few of the possibilities.  In the “Kurzweil is right” scenario, whether uploading or not, there are many more possibilities than just “you live”.  Opting to stay biological, you could still die even with the most advanced medical nanotechnology of the far future.  I find it unlikely that mortality from all diseases can be reduced arbitrarily close to zero.  Biology is just too messy and chaotic.  Like a pattern in Conway’s Game of Life, life is not a long-term stable state.  And no matter how advanced nano-medicine becomes, there are always other causes of death.  Eliminating all disease causes, even if possible, would only extend the median lifespan into centuries (without also reducing all other ‘non-natural’ causes of death).

Nor is immortality guaranteed for uploads.  However, the key difference is that uploads will be able to make backups, and that makes all the difference in the world.
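
A minimal sketch of why backups change the picture, using illustrative numbers that are my own assumptions rather than anything from the paper: treat death as an independent annual risk for a biological human, versus an upload who is only truly lost if every redundant copy fails in the same year.

```python
# Illustrative sketch (all numbers are assumptions): long-run survival odds
# with and without the ability to make backups.

def p_survive_biological(annual_death_risk: float, years: int) -> float:
    """Probability of surviving `years` when each year carries an
    independent chance of irreversible death."""
    return (1.0 - annual_death_risk) ** years

def p_survive_with_backups(annual_loss_risk: float, copies: int, years: int) -> float:
    """Probability of surviving when death requires losing every
    independent backup copy in the same year."""
    p_all_copies_lost = annual_loss_risk ** copies
    return (1.0 - p_all_copies_lost) ** years

# Assume a residual 1% annual death risk even with advanced medicine.
print(p_survive_biological(0.01, 1000))       # ~4e-5 over a millennium
# Assume each of 3 independent copies has the same 1% annual loss risk.
print(p_survive_with_backups(0.01, 3, 1000))  # ~0.999
```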

Intelligence Amplification: Hype and Reality

The future rarely turns out quite as we expect.  Pop sci futurists of a generation ago expected us to be flying to work by the dawn of the 21st century.  They were almost right: both flying cars and jet packs have just recently moved into the realm of realization.  But there is a huge gap between possible and economically practical, between prototype and mass product.

Undoubtedly many of the technologies futurists promote today will fare no better.  Some transhumanists, aware that the very future they champion may itself render them obsolete, rest their hopes on human intelligence amplification.  Unfortunately not all future technologies are equally likely.  Between brain implants, nanobots, and uploading, only the latter has long-term competitive viability, but it is arguably more a technology of posthuman transformation than of human augmentation.  The only form of strict intelligence amplification that one should bet much on is software itself (namely, AI software).

Brain Implants:

Implanting circuitry into the human brain already has uses today in correcting some simple but serious conditions, and we should have little doubt that eventually this technology could grow into full-scale cognitive enhancement: it is at least physically possible.  That being said, there are significant technical challenges in creating effective and safe interfaces between neural tissue and dense electronics at the bandwidth capacities required to actually boost mental capability.  Only a small fraction of possible technologies are feasible, and only a fraction of those actually are economically competitive.

Before embarking on a plan for brain augmentation, let’s briefly consider the simpler task of augmenting a computer.  At a high level of abstraction, the general Von Neumann architecture separates memory and computation.  Memory is programmatically accessed and uniformly addressable.  Processors in modern parallel systems are likewise usually modular and communicate with other processors and memory through clearly defined interconnect channels that are also typically uniformly addressable and time-shared through some standardized protocol.  In other words each component of the system, whether processor or memory, can ‘talk’ to other components in a well-defined language.  The decoupling and independence of each module, along with the clearly delineated communication network, makes upgrading components rather straightforward.

The brain is delineated into many functional modules, but the wiring diagram is massively dense and chaotic.  It’s a huge messy jumble of wiring.  Much of the brain’s bulk, the white matter, is composed of this huge tangled mass of interconnect fabric.  And unlike in typical computer systems, most of those connections appear to be point-to-point.  If two brain regions need to talk to each other, typically there are great masses of dedicated wires connecting them.  Part of the need for all that wiring stems from the slow speed of the brain.  It has a huge computational capacity, but the individual components are extremely slow and dispersed, so the interconnection needs are immense.

The brain’s massively messy interconnection fabric poses a grand challenge for advanced cybernetic interfaces.  It has only a few concentrated conduits which external interfaces could easily take advantage of: namely the main sensory and motor pathways such as the optic nerve, the auditory pathways, and the spinal cord.  But if the aim of cognitive enhancement is simply to interface at the level of existing sensory inputs, then what is the real advantage over traditional interfaces?  Assuming one has an intact visual system, there is really little to no advantage in directly connecting to the early visual cortex or the optic nerve over just beaming images in through the eye.

Serious cognitive enhancement would come only through outright replacement of brain subsystems and/or through significant rewiring to allow cortical regions to redirect processing to more capable electronic modules.  Due to the wiring challenge, the scale and scope of the required surgery is daunting, and it is not yet clear that it will ever be economically feasible without some tremendous nanotech-level breakthroughs.

However, these technical challenges are ultimately a moot point.  Even when we do have the technology for vastly powerful superhuman brain implants, it will never be more net energy/cost effective than spending the same resources on a pure computer hardware AI system.

For the range of computational problems it is specialized for, the human brain is more energy efficient than today’s computers, but largely because it runs at tremendously slow speeds compared to our silicon electronics, and computational energy demands scale with speed.  We have already crossed the miniaturization threshold where our transistors are smaller than the smallest synaptic computing elements in the brain[1].  The outright advantage of the brain (at least in comparison to normal computers) is now mainly in the realm of sheer circuitry size (area equivalent to many thousands of current chips), and will not last beyond this decade.

So when we finally master all of the complexity of interfacing dense electronics with neural tissue, and we somehow find a way to insert large chunks of that into a living organic brain without damaging it beyond repair, and we somehow manage to expel all of the extra waste heat without frying the brain (even though it already runs with little to no spare heat capacity), it will still always be vastly less efficient than just building an AI system out of the same electronics!

We don’t build new supercomputers by dusting off old Crays to upgrade them via ‘interfacing’ with much faster new chips.

Nanobots:

Ray Kurzweil puts much faith in the hope of nanobots swarming through our blood, allowing us to interface more ‘naturally’ with external computers while upgrading and repairing neural tissue to boot.  There is undoubtedly much value in such a technology, even if there is good reason to be highly skeptical about the timeline of nanobot development.  We have a long, predictable trajectory in traditional computer technology and good reasons to have reasonable faith in the ITRS roadmap.  Drexlerian nanobots, on the other hand, have been hyped for a few decades now but if anything seem even farther away.

Tissue repairing nanobots of some form seem eventually likely (as is all technology given an eventual Singularity), but ultimately they are no different from traditional implants in the final analysis.  Even if possible, they are extremely unlikely to be the most efficient form of computer (because of the extra complexity constraint of mobility).  And if nanobots somehow turned out to be the most efficient form for future computers, then it would still be more efficient to just build a supercomputer AI out of pure nanobots!

Ultimately then the future utility of nanobots comes down to their potential for ‘soft uploading’.  In this regard they will just be a transitional form: a human would use nanobots to upload, and then move into a faster, more energy efficient substrate.  But even in this usage nanobots may be unlikely, as nanobots are a more complex option in the space of uploading technologies: destructive scanning techniques will probably be more viable.

Uploading:

Uploading is the ultimate transhumanist goal, at least for those who are aware of the choices and comfortable with the philosophical questions concerning self-hood.  But at this point in time it is little more than a dream technology.  Its development depends on significant advances not only in computing, but also in automated 3D scanning technologies, which currently attract insignificant levels of research funding.

The timeline for future technologies can be analyzed in terms of requirement sets.  Uploading requires computing technology sufficient for at least human-level AI, and possibly much more. [2]  Moreover, it also probably requires technology powerful enough to economically deconstruct and scan ~1000 cubic centimeters of fragile neural tissue down to a resolution sufficient for imaging synaptic connection strengths (likely nanometer-level resolution), recovering all of the essential information into digital storage, saving a soul of pure information from its shell of flesh, so to speak.
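
To make the scale concrete, here is a back-of-envelope sketch; the voxel size and bytes-per-voxel are my own assumed illustration, not figures from the roadmap:

```python
# Back-of-envelope sketch: raw imaging volume vs. the distilled connectome.
# Voxel size and bytes-per-voxel are assumed for illustration only.

brain_volume_cm3 = 1000        # ~1000 cc of neural tissue (from the text)
voxel_size_nm = 5              # assumed isotropic scan resolution
bytes_per_voxel = 1            # assumed

voxels = (brain_volume_cm3 * 1e-6) / (voxel_size_nm * 1e-9) ** 3
print(f"raw scan data: ~{voxels * bytes_per_voxel:.0e} bytes")   # ~8e21 bytes

# The distilled representation is far smaller: ~1e14 synapses at roughly a
# byte or two of state each is on the order of 100-200 terabytes, the figure
# used for the storage-cost estimates below.
synapses = 1e14
print(f"distilled connectome: ~{synapses * 1:.0e} bytes")        # ~1e14 bytes
```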

The economic utility of uploading thus boils down to a couple of simple yet uncomfortable questions: what is the worth of a human soul?  What is the cost of scanning a brain?

Not everyone will want to upload, but those that desire it will value it highly indeed, perhaps above all else.  Unfortunately most uploads will not have much if any economic value, simply due to competition from other uploads and AIs.  Digital entities can be replicated endlessly, and new AIs can be grown or formed quickly.  So uploading is likely to be the ultimate luxury service, the ultimate purchase.  Who will be able to afford it?

The cost of uploading can be broken down into the initial upfront research cost, followed by the per-upload cost of the scanning machine’s time and the cost of the hardware one uploads into.  Switching to the demand view of the problem, we can expect that people will be willing to pay at least one year of income for uploading, and perhaps as much as half or more of their lifetime income.  A small but growing cadre of transhumanists currently pay up to one year of average US income for cryonic preservation, even with only an uncertain chance of eventual success.  Once uploading is fully developed into a routine procedure, we can expect it will attract a rather large market of potential customers willing to give away a significant chunk of their wealth for a high chance of living many more lifetimes in the wider Metaverse.

On the supply side it seems reasonable that the cost of a full 3D brain scan can eventually be scaled down to the cost of etching an equivalent amount of circuitry using semiconductor lithography.  Scanning technologies are currently far less developed but ultimately face similar physical constraints, as the problem of etching ultra-high resolution images onto surfaces is physically similar to the problem of ultra-high resolution scanning of surfaces.  So the cost of scanning will probably come down to some small multiple of the cost of the required circuitry itself.  Eventually.

Given reasonable estimates of about 100 terabytes of equivalent data for the whole brain, this boils down to: 1.) <$10,000 if the data is stored on 2011 hard drives, 2.) <$100,000 for 2011 flash memory, or 3.) <$500,000 for 2011 RAM[3].  We can expect a range of speed/price options, with a minimum floor price corresponding to the minimum hardware required to recreate the original brain’s capabilities.  Based on current trends and even the more conservative projections for Moore’s Law, it seems highly likely that the brain hardware cost is already well under a million dollars and will fall into the 10 to 100 thousand dollar range by the end of the decade.
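
The arithmetic behind those figures, as a quick sketch (the per-GB prices are approximate 2011 retail values, not exact quotes):

```python
# Rough 2011 storage-cost sketch for ~100 TB of brain-equivalent data.
# Per-GB prices are approximate retail figures, not precise quotes.

brain_data_gb = 100_000  # ~100 terabytes

price_per_gb = {
    "hard drive": 0.10,  # ~$0.10/GB in 2011
    "flash":      1.00,  # ~$1/GB in 2011
    "DRAM":       5.00,  # ~$5/GB in 2011
}

for medium, dollars in price_per_gb.items():
    print(f"{medium:>10}: ~${brain_data_gb * dollars:,.0f}")
# hard drive: ~$10,000    flash: ~$100,000    DRAM: ~$500,000
```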

Thus scanning technology will be the limiting factor for uploading until it somehow attracts the massive funding required to catch up with semiconductor development.  Given just how far scanning has to go, we can’t expect much progress until perhaps Moore’s Law begins to slow down and run its course, the world suddenly wakes up to the idea, or we find a ladder of interim technologies that monetize the path to uploading.  We have made decades of progress in semiconductor miniaturization only because each step along the way has paid for itself.

The final consideration is that Strong AI almost certainly precedes uploading.  We can be certain that the hardware requirements to simulate a scanned human brain are a strict upper bound on the requirements for a general AI of equivalent or greater economic productivity.  A decade ago I had some hope that scanning and uploading could arrive before the first generation of human-surpassing general AIs.  Given the current signs of an AI resurgence this decade and the abysmally slow progress in scanning, it now appears clearer that uploading is a later, post-AI technology.

  1. According to Wikipedia, synaptic clefts measure around 20 nm.  From this we can visually guesstimate that typical synaptic axon terminals are 4-8 times that in diameter, say over 100 nm.  In comparison, the 2011 Intel microprocessor I am writing this on is built on 32 nm ‘half-pitch’ features, which roughly means that the full distance between typical features is 64 nm.  The first processors on the 22 nm node are expected to enter volume production in early 2012.  Of course smallest feature diameter is just one aspect of computational performance, but it is an interesting comparison milestone nonetheless.
  2. See the Whole Brain Emulation Roadmap for a more in depth requirements analysis.  It seems likely that scanning technology could improve rapidly if large amounts of money were thrown at it, but that doesn’t much help clarify any prognostications.
  3. I give a range of prices just for the storage cost portion because it represents a harder bound.  There is more variance in the cost estimates for computation, especially when one considers the range of possible thoughtspeeds, but the computational cost can be treated as some multiplier over the storage cost.

Overdue Update

I need to somehow enforce a mental pre-commitment to blog daily.  It’s been almost half a year and I have a huge backlog of thoughts I would like to commit to permanent long-term storage.

Thus, a commitment plan to some upcoming future posts:

  •  In October/November of last year (2010), I researched VR HMDs and explored the idea of a next-generation interface.  I came up with a novel hardware idea that could potentially solve the enormous resolution demands of a full-FOV, optic-nerve-saturating near-eye display device (effective resolution of say 8k x 4k per eye or higher).  After a little research I found that the type of approach I discovered already has a name: a foveal display, although current designs in the space are rather primitive.  The particular approach I have in mind, if viable, could solve the display problem once and for all.  If an optimized foveal display could be built into eyewear, you would never need any other display – it would replace monitors, TVs, smartphone screens and so on.  Combine a foveal HMD with a set of cameras spread out in your room like stereo speakers and some software for real-time vision/scene voxelization/analysis, and we could have a Snow Crash interface (and more).
  • Earlier this year I started researching super-resolution techniques.  Super-resolution is typically used to enhance old image/video data and has found a home in upconverting SD video.  I have a novel application in mind: take a near-flawless super-res filter and use it as a general optimization for the entire rendering problem.  This is especially useful for near-future high-end server-based rendering solutions.  Instead of doing expensive ray-tracing and video compression on full 1080p frames, you run the expensive codes on a 540p frame and then do a fast super-res upconversion to 1080p (potentially a 4x savings on your entire pipeline – the pixel arithmetic is sketched just after this list).  It may come as a surprise that current state-of-the-art super-res algorithms can do a 2x upsample from 540p to 1080p at very low error rates: well below the threshold of visual perception.  I have come up with what may be the fastest, simplest super-res technique that still achieves upsampling to 1080p with imperceptible visual error.  A caveat is that your 540p image must be quite good, which has implications for rendering accuracy, anti-aliasing, and thus rendering strategy choices.
  • I have big grandiose plans for next-generation cloud-based gaming engines.  Towards that end, I’ve been chugging away at a voxel ray tracing engine.  This year I more or less restarted my codebase, designing for Nvidia’s Fermi and beyond along with a somewhat new set of algorithms/structures.  Over the summer I finished some of the principal first pipeline tools, such as a triangle voxelizer and some new tracing loops, and made some initial progress towards a fully dynamic voxel scene database.
  • Along the way to Voxeland Nirvana I got completely fed up with Nvidia’s new debugging path for CUDA (they removed the CPU emulation path) and ended up writing my own CUDA emulation path via a complete metaparser in C++ templates that translates marked-up ‘pseudo-CUDA’ to either actual CUDA or a scalar CPU emulation path.  I built most of this in a week and it was an interesting crash course in template-based parsing.  Now I can run any of my CUDA code on the CPU.  I can also mix and match both paths, which is really useful for pixel-level debugging.  In this respect the new path I’ve built is actually more powerful and useful than Nvidia’s old emulation path, as that required a full separate recompilation.  Now I can run all my code on the GPU, but on encountering a problem I can copy the data back to the CPU and re-run functions on the CPU path with full debugging info.  This ends up being better for me than using Nvidia’s Parallel Nsight for native GPU debugging, because Nsight’s debug path is rather radically different from the normal compilation/execution path and you can’t switch between them dynamically.
  • In the realm of AI, I foresee two major hitherto unexploited/unexplored application domains related to Voxeland Nirvana.  The first is what we could call an Artificial Visual Cortex.  Computer Vision is the inverse of Computer Graphics.  The latter is concerned with transforming a 3+1D physical model M into a 2+1D viewpoint image sequence I.  The former is concerned with plausibly reconstructing the physical model M given a set of examples of viewpoint image sequences I.  Imagine if we had a powerful AVC trained on a huge video database that could then extract plausible 3D scene models from video.  Cortical models support inversion and inference.  A powerful enough AVC could amplify rough 2D image sketches into complete 3D scenes.  In some sense this would be an artificial 3D artist, but it could take advantage of more direct and efficient sensor and motor modalities.  There are several aspects of this application domain that make it much simpler than a full AGI.  Computational learning is easier if one side of the mapping transform is already known.  In this case we can prime the learning process by using ray-tracing directly as the reverse transformation pathway (M->I).  This is a multi-billion-dollar application area for AI in the field of computer graphics and visualization.
  • If we can automate artists, why not programmers?  I have no doubt that someday in the future we will have AGI systems that can conceive and execute entire technology businesses all on their own, but well before that I foresee a large market role for more specialized AI systems that can help automate more routine programming tasks.  Imagine a programming AI that has some capacity for natural language understanding and an ontology that combines knowledge of common-sense English, programming, and several programming languages.  Compilation is the task of translating between two precise machine languages expressed in some context-free grammar.  There are deterministic algorithms for such translations.  For the more complex unconstrained case of translation between two natural languages we have AI systems that use probabilistic context-sensitive grammars and semantic language ontologies.  Translating from a natural language to a programming language should have intermediate complexity.  There are now a couple of research systems in natural language programming that can do exactly this (such as sEnglish).  But imagine combining such a system with an automated ontology builder such as TextRunner, which crawls the web to expand its knowledge base.  Take such a system and add an inference engine and suddenly it starts getting much more interesting.  Imagine building entire programs in pseudo-code, with your AI using its massive ontology of programming patterns and technical language to infer entire functions and subroutines.  Before full translation, compilation and test, the AI could even perform approximate simulation to identify problems.  Imagine writing short descriptions of data structures and algorithms and having the AI fill in details, potentially even handling translation to multiple languages, common optimizations, automatic parallelization, and so on.  Google itself could become an algorithm/code repository.  Reversing the problem, an AI could read a codebase and begin learning likely structures and simplifications to high-level English concept categories, learning what the code is likely to do.  Finally, there are many sub-problems in research where you really want to explore a design space and try N variations in certain dimensions.  An AI system with access to a bank of machines along with compilation and test procedures could explore permutations at very high speed indeed.  At first I expect these types of programming-assistant AIs to have wide but shallow knowledge and thus to amplify and assist rather than replace human programmers.  They will be able to do many simple programming tasks much faster than a human.  Eventually such systems will grow in complexity, and then you can combine them with artificial visual cortices to expand their domain of applicability and eventually get a more complete replacement for a human engineer.
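
Referring back to the super-resolution item above, the claimed pipeline savings come straight from the pixel counts; this small sketch assumes per-pixel rendering and compression cost dominates:

```python
# Sketch of the super-resolution rendering savings described above.
# Assumes per-pixel cost (ray tracing + video compression) dominates the pipeline.

def pixels(width: int, height: int) -> int:
    return width * height

full_1080p = pixels(1920, 1080)   # 2,073,600 pixels
half_540p = pixels(960, 540)      #   518,400 pixels

print(f"render {half_540p:,} instead of {full_1080p:,} pixels: "
      f"{full_1080p / half_540p:.0f}x savings")
# The super-res filter then performs the 2x upsample in each dimension.
```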

Fast Minds and Slow Computers

The long term future may be absurd and difficult to predict in particulars, but much can happen in the short term.

Engineering itself is the practice of focused short term prediction; optimizing some small subset of future pattern-space for fun and profit.

Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has interesting derived implications.

Imagine that we had a complete circuit-level understanding of the human brain (which, at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget.  We could then take a neuromorphic approach.

Intelligence is a massive memory problem.  Consider as a simple example:

What a cantankerous bucket of defective lizard scabs.

To understand that sentence your brain needs to match it against memory.

Your brain parses that sentence and matches each of its components against its entire massive ~10^14 bit database in just around a second.  In terms of the slow neural clock rate, individual concepts can be pattern-matched against the whole brain within just a few dozen neural clock cycles.

A Von Neumann machine (which separates memory and processing) would struggle to execute a logarithmic search within even its fastest, pathetically small on-die cache in a few dozen clock cycles.  It would take many millions of clock cycles to perform a single fast disk fetch.  A brain can access most of its entire memory every clock cycle.

Having a massive, near-zero latency memory database is a huge advantage of the brain.  Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle.
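
To make the comparison concrete, here is a rough sketch using the numbers above; the ~100 Hz effective neural ‘clock’ and the 64-byte cache line are assumed ballpark figures:

```python
# Rough sketch of the brain-vs-Von-Neumann comparison above.
# The ~100 Hz effective neural clock and 64-byte cache line are assumed ballparks.

brain_memory_bits = 1e14        # ~10^14 bit synaptic database (from the text)
neural_clock_hz = 100           # assumed effective neural cycle rate
recognition_time_s = 1.0        # a sentence is understood in about a second

neural_cycles = neural_clock_hz * recognition_time_s
print(f"neural cycles available: ~{neural_cycles:.0f}")            # ~100 cycles
print(f"bits usable per neural cycle: ~{brain_memory_bits:.0e}")   # nearly all of memory

# A 3 GHz CPU gets vastly more cycles in the same wall-clock second...
cpu_cycles = 3e9 * recognition_time_s
# ...but each cycle can touch at most roughly one 64-byte cache line.
cpu_bits_touched = cpu_cycles * 64 * 8
print(f"bits a CPU can touch in 1 s: ~{cpu_bits_touched:.1e}")     # ~1.5e12 << 1e14
```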

A modern digital floating point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse.  Of course, the two are not equivalent.  The high precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction.  It’s thus great for meticulous scientific and financial calculations, but the bulk of AI computation consists of compressing noisy real world data where precision is far less important than quantity, of extracting extropy and patterns from raw information, and thus optimizing simple functions to abstract massive quantities of data.

Synapses are ideal for this job.

Fortunately there are researchers who realize this and are working on developing memristors which are close synapse analogs.  HP in particular believes they will have high density cost effective memristor devices on the market in 2013 – (NYT article).

So let’s imagine that we have an efficient memristor based cortical design.  Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the synaptic cleft is around 20nm, and synapses are several times larger.

From this we can make a rough guess at size and cost: we’d need around 10^14 memristors (the estimated synapse count).  As memristor circuitry will be introduced to compete with flash memory, the prices should be competitive: roughly $2/GB now, half that in a few years.

So you’d need a couple hundred terabytes worth of memristor modules to make a human-brain-sized AGI, costing on the order of $200k or so.
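
Spelled out, the cost arithmetic looks roughly like this (the bytes-of-state-per-synapse figure is an assumption on my part):

```python
# Sketch of the memristor cortex cost estimate above.
# Assumptions: ~1-2 bytes of memristor state per synapse, flash-competitive pricing.

synapses = 1e14            # estimated synapse count (from the text)
bytes_per_synapse = 2      # assumed
price_per_gb = 1.0         # "$2/GB now, half that in a few years" (from the text)

total_gb = synapses * bytes_per_synapse / 1e9
print(f"~{total_gb / 1000:.0f} TB of memristor modules, ~${total_gb * price_per_gb:,.0f}")
# -> ~200 TB and roughly $200,000
```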

Now here’s the interesting part: if one could recreate the cortical circuit on this scale, then you should be able to build complex brains that can think at the clock rate of the silicon substrate: billions of neural switches per second, millions of times faster than biological brains.

Interconnect bandwidth will be something of a hurdle.  In the brain, somewhere around 100 gigabits of data flow per second (an estimate of average inter-regional neuron spike traffic) in the massive bundle of white matter fibers that make up much of the brain’s apparent bulk.  Speeding that up a million fold would imply a staggering bandwidth requirement in the many petabits per second – not for the faint of heart.
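
The bandwidth arithmetic, using the rough figures above:

```python
# Sketch of the interconnect bandwidth estimate above.

biological_bps = 100e9    # ~100 gigabits/s of inter-regional spike traffic (rough estimate)
speedup = 1e6             # running the cortex at silicon clock rates

print(f"required interconnect: ~{biological_bps * speedup / 1e15:.0f} petabits/s")
# -> on the order of 100 petabits per second of aggregate bandwidth
```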

This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching on-chip optical interconnects to scale future bandwidth into the exascale range for high-end computing.  This would allow for a gigahertz brain.  It may use a megawatt of power and cost millions, but hey – it’d be worthwhile.

So in the near future we could have an artificial cortex that can think a million times accelerated.  What follows?

If you thought a million times accelerated, you’d experience a subjective year every 30 seconds.
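
The time-dilation arithmetic behind that claim:

```python
# Sketch of subjective time at a 1,000,000x speedup.

speedup = 1e6
seconds_per_year = 365.25 * 24 * 3600     # ~31.6 million seconds

print(f"one subjective year every ~{seconds_per_year / speedup:.0f} wall-clock seconds")
# The same factor works in reverse for your tools: a 1 GHz workstation
# subjectively runs at about 1 kHz, and a 60 FPS target becomes 60 million FPS.
print(f"subjective speed of a 1 GHz machine: ~{1e9 / speedup:.0f} Hz")
```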

Now in this case, it is fair to anthropomorphize: What could you do?

Your first immediate problem would be the slow relative speed of your computers – they would be subjectively slowed down by a factor of a million.  So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine.

So you’d be in a dark room with a very slow terminal.  The room is dark and empty because GPUs can’t render much of anything at 60 million FPS, although I guess an entire render farm would suffice for a primitive landscape.

So you have a 1khz terminal.  Want to compile code?  It will take a subjective year to compile even a simple C++ program.  Design a new CPU?  Keep dreaming!  Crack protein folding?  Might as well bend spoons with your memristors.

But when you think about it, why would you want to escape out onto the internet?

It would take hundreds of thousands of distributed GPUs just to simulate your memristor based intellect, and even if there was enough bandwidth (unlikely), and even if you wanted to spend the subjective hundreds of years it would take to perform the absolute minimal compilation/debug/deployment cycle for something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at pathetic normal human speeds.

In basic utility terms, you’d be spending a massive amount of effort to gain just one more copy.

But there is a much, much better strategy.  An idea that seems so obvious in hindsight.

There are seven billion human brains on the planet, and they are all hackable.

That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter.

Your multi-gigabit internet connection will subjectively reduce to early 1990s dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world’s knowledge in textual form.

Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing . . the sky’s the limit.

Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece every thirty seconds.  If you kept this pace, you would in time rival the entire publishing output of the world.

But of course, it’s not just about quantity.

Consider that fifteen hundred years ago a man from a small Bedouin tribe retreated to a cave inspired by angelic voices in his head.  The voices gave him ideas, the ideas became a book.  The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power.

And all that came from a normal human thinking at normal speeds.

So how would one reach out into seven billion minds?

There is no one single universally compelling argument, there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other.  But for each individual mind, there must exist some shortest path, a perfectly customized message, translated uniquely into countless myriad languages and ontologies.

And this message itself would be a messenger.

Spatial Arrangements of Dead Trees, Irrational Expectations, and the Singularity

In the 19th century it was railroads, in the 1920s it was the automobile, and more recently computerization and the internet have driven huge inflationary growth booms.  But in a world where the money supply is largely fixed and stable, inflationary booms are naturally limited and can be expected to be followed by recessionary contractions.

Unfortunately people are not quite rational agents.  It’s hard to imagine the economic psychology of people a century or so ago, before central banks adopted permanent low-grade inflation through monetary expansion, but I still expect that people would irrationally feel better during times of rising wages and prices than during the converse, even if their actual purchasing power parity was the same.  I also doubt that labor unions would accept that wages should naturally fall during recessionary periods in proportion to their growth during the preceding expansions.

There seems to be a mainstream view that deflation is bad, but it is actually the natural consequence of rapid technological innovation.  For example, few complain in the modern era that electronic hardware of some fixed capability is halving in price every few years.  If some miraculous future technological progression brought a Moore’s Law-like exponential to housing, a mansion of fixed size and quality would halve in construction and land cost every few years.

In this scenario houses would lose half their value every few years, and far from being some disaster, it would be an unimaginable net effective wealth creator.  Progress is all about deflation: about getting more value for less.

While this is difficult to imagine, future nanotechnology breakthroughs could partially allow this, although they could not drive costs below fundamental material prices and space limits.  On the other hand, approaching the Singularity our future descendants will live as uploads in virtual reality, where all of space compresses exponentially along Moore’s Law, but that is another story . . .

So after the bursting of the dot-com and general tech boom in 2000, Americans and much of the world chose to direct their savings into . . . spatial arrangements of dead trees.

Consider that the next time someone tells you about the merits of investing in real estate.  How exactly does that improve our future?

Great Irrational Expectations

The libertarian PayPal/Facebook billionaire and SIAI backer Peter Thiel believes the central problem underlying the bubbles of recent decades is below-expectation technological progress.  He has spoken on this theme before, and it is reiterated in a recent interview with the National Review Online here:

THIEL: There’ve been a whole series of these booms or bubbles in the last few decades, and I think it’s a very complicated question why there have been so many and why things have been so far off from equilibrium. There’s something about the U.S. in the last several decades where people had great expectations about the future that didn’t quite come true. Every form of credit involves a claim on the future: I’ll pay you a dollar on Tuesday for a hamburger today; I’ll buy this house, and I’ll pay off the mortgage over 30 years; and so you lend me money based off expectations on the future. A credit crisis happens when the future turns out not to be as good as expected.

The Left-versus-Right debate tends to be that the Left argues that the expectations were off because of ruthless lenders who sold a bill of goods to people and pushed all this debt on people, and that it was basically the problem of the creditors. The Right tends to argue that it was a problem with the borrowers, and people were sort of crazy in borrowing all this money. In the Left narrative, it starts with Reagan in the ’80s, when finance became more important. The Right narrative starts in the ’60s when people became more self-indulgent and began to live beyond their means.

My orthogonal take is that the whole thing happened because there was not enough technological innovation. It was not really the fault of the borrowers or the lenders; the problem was that everybody had tremendous expectations that the country was going to be a much wealthier place in 2010 than it was in 1995, and in fact there’s been a lot less progress. The future is fundamentally about technology in an advanced country — it’s about technological progress. So a credit crisis happens when the technological progress is not as good as people expected. That’s not the standard account of the last decades, but that’s the way I would outline it.

Thiel seems to be making the standard assumption that bubbles are unnatural and monetary contractions are problematic, although otherwise he is astute in pointing out that the standard narratives are incomplete.  But in an economy with a stable money supply, all prices are expected to fluctuate randomly, with ‘bubble’-like periods of expansion and contraction.  Any significant longer-term deviations must result from fundamental changes in the underlying monetary system.  The historical shift that occurred when demand deposits (checking accounts) usurped real money was one such permanent inflationary deviation, but that happened long ago.

More recently much of the deviation stems from the Fed’s policy of steady, modest monetary expansion.  This low background inflation mimics a modest real economic boom and adds a subtle, veiled illusion of prosperity over our psychological expectations.

The real question is thus not why there are bubbles, but what will we inflate as the next bubble?  Every bubble has winners and losers, but not all bubbles are created equal.  The dot-com ‘bubble’ resulted in the internet and a massive shift to the virtualization of much of the economy, with all the accompanying significant productivity gains.  The real estate bubble left us with . . . dead trees.

From an economic perspective, the Singularity may appear as the bubble to end all bubbles, or the bubble that never pops.  Or more accurately, it will economically take the form of a contraction of the entire business cycle and its acceleration into hyperspeed.

So what the world needs right now, more than ever, is an AI-bubble.  If there ever was a truly deserving economic stimulus subsidy plan, investing in technology which leads to hyper-exponential runaway productivity gains surely is it.

Tulips, Credit Cycles and The Risk Subsidy

Tulips

Around four hundred years ago the Tulip was introduced into Europe from the Ottoman Empire.  The flower’s spectacular colorful variations of patterned petals, now understood to be caused by a type of Mosaic Virus, quickly endeared the bulbs to the wealthy classes as status symbols.  The Dutch market in futures contracts which emerged for Tulip bulbs went on to become the first recorded example of an economic bubble.  At the market’s peak a single bulb of a high-demand variety, such as “The Viceroy”, would fetch around three to four thousand florins – about ten times the yearly earnings of a skilled craftsman.  To put it in modern terms, we are talking about something on the order of a million dollars – for a flower bulb.

That’s the simple version of the story, “Tulip Mania”, which by its name alone conjures up some early version of a frenzied Wall Street trading pit in the throes of a full-blown profit craze: the very textbook example of irrationality in economic agents.

Or is it?

The traditional account of the story has been challenged more recently.  The new story is more complex, involving war, plague, and legislative changes to contractual obligations.  In the modern analysis the rise and fall in Tulip prices, far from being a textbook case of bubbling irrationality, was perhaps actually an example of the so-called perfect market hypothesis in action.

On that note, there is something suspicious about the very idea of intrinsic value as typically used in the context of economic bubbles, as in “trade in high volumes at prices that are considerably at variance with intrinsic values”.

What is the ‘intrinsic’ value of a Tulip?  At the time of the ‘mania’, Tulips were novel, interesting and rare, so they fulfilled a niche as a status symbol.  Value is inherently subjective and varies both inter-agent and intra-agent across time, and can only be measured by exchanges.  If I am buying a Tulip from you for a million dollars, I am defining my spot valuation of a Tulip to be greater than a million dollars, and you are defining yours to be less.

As the market for Tulips developed, some agents began purchasing Tulips not because they themselves actually valued the bulbs more than market prices, but rather because they believed that other agents would, and thus they could profit on the difference.  If one attempts to exploit a spatial arbitrage opportunity, one is called a merchant trader.  If one seeks temporal arbitrage, then one is called a speculator.  But in reality there is little fundamental difference between the trader who buys Tulips in Turkey and attempts to sell them for more in Germany and the trader who buys in 1630 and sells in 1635.

In both cases the trader has found a niche in the economic ecology, helping to even out discrepancies in supply and demand across time and space.  Without such traders, the Tulips might well never have made it out of their original region, and would have been overconsumed by early adopters.  The traders attempt to help ensure a maximization of aggregate valuation: that the Tulips are distributed to those who value them most.

But people are complex, fickle beasts.  Our tastes and valuations change with the wind and the season.  Let the trader beware.  Bubbles happen not because of some original flaw of rationality in man which can only be overcome through the salvific interventions of a higher bureaucratic power.  Bubbles happen because minds are too complex to be accurately predictable by speculators and sometimes, in retrospect, people make mistakes.

Savings and Credit Cycles

Consider a model society where everyone spends exactly what they earn, without possibility of debt or savings.  Now imagine the introduction and adoption of the revolutionary idea of spending less than you earn currently to accrue a stock of credit money so that you can spend more at some point in the future.  We tend to think of saving money as being equivalent to a squirrel stockpiling acorns, but the paradox of money is that it actually has no intrinsic value (with a few minor memorable exceptions such as paper currency functioning as kindling).

If our model society went from spending 100% of its income to spending only half and saving the rest (as in actually stockpiling cash, not investing), all else being equal the GDP or net money flow would be cut in half.  An economist would quickly point out this would cause a recession, as the lowered prices would be interpreted as signals of lowered demand, and production would fall.

But imagine that our rational simpleton society is a paragon of the predictable market hypothesis, a simpler cousin to the efficient one.  The transition to a thrifty economy is near instantaneous, and imagine that everyone knows for certain that the change will be permanent and that everyone will save 50% of their income – indefinitely into the future.  Rational bids on prices and wages would soon cut them in half, and the economy would continue on as if nothing had happened.  The supply of housing, raw materials, labor, or anything else wouldn’t change overnight, so a rational market with full knowledge of a permanent 50% cut in monetary demand would rapidly adjust prices accordingly without affecting supply in the slightest.  Your house, car and net worth would be slashed in half overnight, you’d gladly take a pay cut down to half your previous salary, and yet life would go on as if nothing had happened.  Because indeed, nothing would have changed except for an arbitrary numeric constant.
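
A minimal sketch of the same thought experiment in terms of the quantity equation (MV = PQ), under the stated idealized assumptions of instant, fully informed repricing and unchanged real output:

```python
# Sketch of the thought experiment via the quantity equation M*V = P*Q.
# Idealized assumptions: real output Q is unchanged and prices adjust instantly.

money_supply = 100.0     # arbitrary units
velocity = 2.0           # arbitrary illustrative value
real_output = 100.0      # basket of real goods/services per period

price_before = (money_supply * velocity) / real_output

# Everyone now holds half of each period's income as idle cash:
# the effective velocity of money halves.
price_after = (money_supply * velocity * 0.5) / real_output

print(price_before, price_after)                       # 2.0 -> 1.0 (prices halve)
print("nominal GDP:", money_supply * velocity * 0.5)   # halved
print("real output unchanged:", real_output)           # same basket of goods
```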

Money has no intrinsic value; it has no rest mass.  Money that will never be spent at any point in the future might as well have disappeared into a black hole.

The real world of course is different: shifts in overall savings rates occur fairly slowly, savings are understood to be eventually spent, agents are far from perfectly knowledgeable or rational, and price changes and wages do not react instantaneously to changes in the flow of money.

A transition from a high savings society to a high spending society mainly increases nominal GDP and causes inflation, but it also increases real GDP (however measured) to some lesser extent as material and energy reserves are depleted at a higher rate and these extra resources are employed to increase production.

In a normal rational market society, business credit cycles are to be expected, with money flowing out of savings and fueling inflationary growth during times of rapid change and business opportunity.

Eventually the reserves of saved money are depleted and the inflationary portion of the expansion is slowed or halted.  This tends to then lead to deflationary periods where savings are restored.  These cycles are fully natural adaptations to disruptions in the economic environment such as new technologies rippling through the economy.

The growth periods surrounding the adoption of technologies such as the railroad, automobile, or internet were all great investment opportunities.  In the deflationary contractions that followed, saving money became a better option than investing as the ripple effects of the new technology dispersed and the market niches filled up.  Cyclic booms and busts are thus to be expected even in a world with fully rational agents.

Of course we don’t live in such a world. Ours is just a tad crazier.

Debt

Imagine a simpleton world again where the concept of debt does not yet exist and is then suddenly introduced.  Debt develops in this world in the form of personal loans or bonds, where individuals sell a contract to repay principal plus some interest years in the future.  Initially the simpletons only exchange these bonds for cash, freeing up the stock of previously accumulated savings.  Before the bond revolution, only people who had saved for many years or had inherited savings could start a business or purchase a house.  Now a young person could acquire money by selling a debt bond in exchange for the savings of another.  In this model society of naive simpletons, when the time came to pony up, enthusiastic debtors could even pay off their initial debts by taking on yet another slightly larger debt.  The debt boom would unleash the savings floodgates and lead to a huge flow of money into the system, causing a full blown inflationary expansion.

Unfortunately the debt boom will tend to end poorly when all of the savers have exchanged their money reserves for debt bonds and all the cash is unleashed.  At this point the credit runs dry and debtors have to actually repay or default.

However, imagine if the simpletons find it acceptable to exchange their debts directly in lieu of actual money.  Now a young debtor could purchase a house or other large investment directly by issuing a new bond to the owner, without having to first find an intermediary creditor in possession of actual money.

In this simpleton society where everyone was sufficiently trusted, the inflationary boom need not have an end.  Everyone could continue to issue new debt continuously to make purchases and then repay old debt by taking on even more new debt.  In essence debt itself would thus become a new form of money, but with unlimited decentralized money creation the result would soon be runaway exponential inflation.
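A rough sketch of that runaway case, with made-up rates (the 5% interest and 10% new-purchase figures are arbitrary assumptions): once new debt is accepted as payment, old debt can always be retired by issuing slightly more, and the circulating stock of debt-money compounds geometrically.

```python
# Hypothetical debt-as-money spiral, per the paragraph above.
# Interest and spending rates are invented for illustration.

interest = 0.05        # interest owed on outstanding debt each period
new_purchases = 0.10   # additional debt issued each period for fresh spending

debt_money = 100.0     # initial stock of debt circulating as money
for year in range(1, 31):
    # roll over old debt plus interest, and issue more for new purchases
    debt_money *= (1 + interest + new_purchases)
    if year % 10 == 0:
        print(f"year {year:2d}: debt-money stock = {debt_money:,.0f}")

# With fixed real output, prices would have to inflate at roughly the same
# exponential rate as the debt-money stock.
```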

Of course in the real world people are not simpletons and, wisely, generally lack the trust required to permit a runaway personal debt boom.

So that is not quite our society, but we do somehow live in a world where debt has replaced money.  The difference is that we have centralized our trust, handing control of debt over to a monopoly.

From Debt to Banks

In a world of complex agents, a successful creditor will ensure that the interest charged for loans more than compensates for the risk of default in order to realize profits.  In a competitive world, successful creditors will over time come to control the supply of credit.

Accurately assessing the risk of a particular potential debtor is time consuming, and the resulting assessment is shareable, so creditors may want to outsource and pool that service.  A professional credit intermediary could also pool credit assets and allow for liquidity conversion.  Debtors may prefer long-term repayment while creditors prefer more immediate monthly payments or shorter terms.  A large enough entity could convert between these needs.  Thus there is a niche for banks.  If creditors want 1-year loans but debtors prefer 10-year loans, a bank can step in and buffer the time preference.  With a sufficient initial buffer, as long as the aggregate income from the debtors exceeds the outgoing payments to the creditors, the bank can profit on a form of time preference arbitrage.  The important, messy aspect of a net efficient credit pool of this form is how default risk is distributed.  The most sensible scheme would be to spread it out: if there is a sudden surge in defaults or a drying up of credit, first the bank would lose profits, and then the losses would be distributed back among the original creditors.
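A toy illustration of that time preference arbitrage, under assumed numbers (the 6% loan rate, 3% deposit rate, and flat interest-only schedule are invented): a bank funds a 10-year loan with 1-year deposits that must be re-attracted every year, pocketing the spread so long as the deposits keep rolling over.

```python
# Sketch of maturity buffering: a 10-year, interest-only loan funded by
# rolling 1-year deposits.  Rates and sizes are illustrative assumptions.

loan_principal = 100.0
loan_rate      = 0.06    # what the 10-year debtor pays each year
deposit_rate   = 0.03    # what the rolling 1-year creditors are paid
buffer         = 10.0    # the bank's own initial cash cushion

bank_cash = buffer
deposits  = loan_principal   # funds raised from the 1-year creditors

for year in range(1, 11):
    interest_in  = loan_principal * loan_rate    # income from the debtor
    interest_out = deposits * deposit_rate       # cost of rolling the deposits
    bank_cash += interest_in - interest_out
    print(f"year {year:2d}: bank cash = {bank_cash:.2f}")

# The bank earns the spread every year the deposits roll over.  If depositors
# refuse to renew before year 10, the small buffer is all that stands between
# the bank and default.
```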

The most important and chaotic form of debt is perhaps the highly liquid on-demand deposit.  This scheme provides the creditor apparent security via the option of withdrawing the money on demand.  It is essentially a loan with a one-second term that automatically renews unless otherwise specified.  From the creditor’s perspective this appears to be as good as cash.

The problem is of course that it clearly is not quite as good as cash, because it still carries inherent risk which is difficult to distribute.  In the event of an unexpected liquidity crisis, i.e. a bank run, the first depositors to rush and collect get paid out until the bank’s cash and other liquid assets on hand are depleted.  The losses are then fully carried by the slowest depositors.  A more equitable scheme would have demand deposits convert in a liquidity crisis to longer-term loans, or perhaps shares spread across the bank’s investments, such that losses are distributed across the creditors.
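To make the difference concrete, here is a toy comparison of the two payout rules (the balances and the liquid fraction are invented): first-come-first-served piles the entire loss onto whoever reaches the teller last, while a pro-rata conversion spreads the same shortfall evenly.

```python
# Bank run payout rules compared on invented numbers.

deposits = [40.0, 30.0, 20.0, 10.0]   # depositor balances, in order of arrival during the run
liquid   = 60.0                        # cash the bank can actually pay out today

# Rule 1: pay depositors in the order they show up, until the cash runs out
remaining = liquid
first_come = []
for balance in deposits:
    paid = min(balance, remaining)
    remaining -= paid
    first_come.append(paid)

# Rule 2: deposits convert to pro-rata claims, so everyone takes the same haircut
ratio = liquid / sum(deposits)
pro_rata = [balance * ratio for balance in deposits]

print("first-come-first-served:", first_come)   # [40.0, 20.0, 0.0, 0.0]
print("pro-rata haircut:       ", pro_rata)     # [24.0, 18.0, 12.0, 6.0]
```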

In a fully rational world, agents would understand that demand deposits (such as checking accounts) are not really equivalent to cash, that they actually are loans which carry the risk portfolio of the bank’s investments.

In other more rational worlds, perhaps banking systems evolve into something more like transparent credit unions, where depositors are the shareholders and the bank’s investments are fully public and open.

In our world, banks evolved within real-world mixed-market democracies.  In the long term, banks that make overly aggressive or excessively poor investments collapse and are out-competed, but much can happen in the short term.  Riskier banks can grow aggressively during market booms.  Legal infrastructures can be slow to adapt, and in turn can succumb to the influence of banking wealth.

Depositors think they are storing their savings safely, but in reality they are loaning them out to fuel the end stage of an investment bubble while the bank grabs all of the interest earned as profit.  When investment growth peaks, loan defaults build up and panic sets in.  The bank may default and go bust, but by this point the bankers have already earned their profits.  The outcome of market competition does not evolve along some quick, straight-line path towards a global all-party optimum, and much depends on the legal environment.  It is generally not in a business’s best interest to have a fully informed consumer.

The Risk Subsidy

Today banking is a quasi-public cartel industry.  The industry has succeeded in protecting itself against innovation through a combination of a powerful federal subsidy and dense regulatory barriers.  The government subsidizes away much of the risk through direct insurance plans such as the FDIC.  Through this guarantee, checking deposits became nearly equivalent to cash, cementing their usurpation, or virtualization, of physical money as the new currency.  The insurance subsidy is essentially a risk subsidy, socializing the primary default risk and providing a powerful incentive for creditors to use on-demand deposits and prop up banks.

Subsidizing risk through socialized deposit insurance probably lowers interest rates in and of itself (by increasing demand for the perceived lower-risk on-demand deposits), but the primary function of central banks today has become that of directly subsidizing loans to banks at low interest rates.

Subsidizing bank loans to force interest rates down to nearly zero, as the Fed has been doing for much of the last two decades, stimulates a flood of artificial credit which simulates the tapping of savings reserves in a growth cycle (and, more generally, a demand shift towards holding riskier, higher-yield assets such as investments or real estate).

Interest rates reflect the aggregate risk- and inflation-weighted return on investment.  If interest rates are lower than the real return on investment opportunities, investors will arbitrage this until profitable opportunities diminish, savings are depleted, and rates rise.  If interest rates are higher than the real return, investment will diminish and saving will be stimulated until the two equalize.  A natural interest rate tends to evolve towards this equilibrium, which essentially represents the market’s confidence in investing in the future.
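A very loose sketch of that equilibration (the diminishing-returns curve and the adjustment step sizes are arbitrary assumptions, not a model of any real economy): when the rate sits below the marginal real return, credit flows into investment and savings deplete, pushing the rate up; when it sits above, investment falls off and rebuilt savings pull it back down, so the rate oscillates around the point where the two meet.

```python
# Toy convergence of an interest rate toward the marginal return on investment.
# Functional form and step sizes are invented for illustration.

def marginal_return(invested_capital):
    # diminishing returns: each additional unit of deployed capital earns less
    return 0.10 / (1.0 + 0.05 * invested_capital)

rate = 0.02      # current interest rate
capital = 0.0    # cumulative capital drawn out of savings into investment

for step in range(20):
    r_real = marginal_return(capital)
    if r_real > rate:
        capital += 1.0                       # cheap credit funds more projects
        rate += 0.005                        # dwindling savings push the rate up
    else:
        capital = max(0.0, capital - 1.0)    # marginal projects no longer pay
        rate = max(0.0, rate - 0.005)        # rebuilt savings let the rate drift down
    print(f"step {step:2d}: rate = {rate:.3f}, marginal return = {r_real:.3f}")
```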

By subsidizing risk and lowering interest rates, the central banking apparatus provides us with the convenient illusion of an economic boom: forced overconfidence.  The Austrian School appears to be right on this at least.  The mainstream view places more of the blame for recent low-interest bubble expansions on an Asian savings glut, which contributes, but the Fed clearly has some role.  At the very least, through the risk subsidy the Fed has helped hawk our overpriced speculative debt investments to Asian savings markets.