Category Archives: Mind Uploading

Non-Destructive Uploading: A Paradox

Transhumanism may well be on the road to victory, at least judging by some indirect measures of social influence, such as the increasing number of celebrities coming out in support of cryonic preservation and future resurrection, or the prevalence of the memeset across media.

If you are advancing into that age at which your annual chance of death is becoming significant, you face an options dilemma.  The dilemma is not so much a choice of what to do now: at this current moment the only real option is vitrification-based cryonics.  If the Brain Preservation Foundation succeeds, we may soon have a second, improved option in the form of plastination, but this is beside the point for my dilemma of choice.  The real dilemma is not what to do now; it is what to do in the future.

Which particular method of resurrection would you go with?  Biological immortality in your original body?  Or how about uploading?  Would you rather have your brain destructively scanned or would you prefer a gradual non-destructive process?

Those we preserve today will not have an opportunity to consider their future options or choose a possible method of resurrection, simply because we won’t be able to ask them unless we resurrect them in the first place.

The first option the cryonics community considered is some form of biological immortality.  The idea is that at some point in the future we’ll be able to reverse aging, defeat all of these pesky diseases, and repair cellular damage, achieving Longevity Escape Velocity.  I find this scenario eventually likely, but only because I find the AI-Singularity itself to be highly likely.  However, there is a huge difference between possible and pragmatic.

By the time biological immortality is possible, there is a good chance it will be far too expensive for most plain humans to afford.  I do not conclude this on the basis of the cost of the technology itself.  Rather I conclude this based on the economic impact of the machine Singularity.

Even if biological humans have any wealth in the future (and that itself is something of a big if), uploading is the more rational choice, for two reasons: it is the only viable route towards truly unlimited, massive intelligence amplification, and it may be the only form of existence that a human can afford.  Living as an upload can be arbitrarily cheap compared to biological existence.  An upload will be able to live in a posthuman paradise for a thousandth, then a millionth, then a billionth of the biological costs of living.  Biological humans will not have any possible hope of competing economically with the quick and the dead.

Thus I find it more likely that most of us will eventually choose some form of uploading.  Or perhaps rather a small or possibly even tiny elite population will choose and be able to upload, and the rest will be left behind.  In consolation, perhaps “The meek shall inherit the Earth”.  Across most of the landscape of futures, I foresee some population of biological humans plodding along, perhaps even living lives similar to those of today, completely oblivious to the vast incomprehensible Singularity Metaverse blossoming right under their noses.

For the uploading options, at this current moment it looks like destructive scanning is on the earlier development track (as per the Whole Brain Emulation Roadmap), but let us assume that both destructive and non-destructive technologies become available around the same time.  Which would you choose?

At first glance non-destructive uploading sounds less scary; perhaps it is a safer wager.  You might think that choosing a non-destructive method is an effective hedging strategy.  This may be true if the scanning technology is error prone.  But let’s assume that the technology is mature and exceptionally safe.

A non-destructive method preserves your original biological brain and creates a new copy, which then goes on to live as an upload in the Metaverse.  You thus fork into two branches, one of which continues to live as if nothing happened.  Thus a non-destructive method is not significantly better than not uploading at all!  From the probabilistic perspective on the branching problem, a non-destructive scan has only a 50% chance of success (because in one half of your branches you end up staying in your biological brain).  The destructive scanning method, on the other hand, doesn’t have this problem: it doesn’t branch, and you always end up as the upload.
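The branch-counting argument above can be sketched as a toy model.  The equal weighting of the two branches is my assumption, made explicit here for illustration:

```python
# Toy model of the branching argument: what fraction of your
# "successor branches" end up as the upload under each method?
# Assumption (mine, for illustration): a scan succeeds with
# probability p_scan, and each resulting branch is weighted equally.

def upload_success(p_scan: float, destructive: bool) -> float:
    """Probability-weight of successor branches that continue as the upload."""
    if destructive:
        # No fork: if the scan works, the sole successor is the upload.
        return p_scan
    # Non-destructive: a successful scan forks you into two equal branches,
    # biological and uploaded; only one of them is the upload.
    return p_scan * 0.5

print(upload_success(1.0, destructive=True))   # 1.0
print(upload_success(1.0, destructive=False))  # 0.5
```

Even with a perfectly safe scanner, the non-destructive route leaves half of your branch-weight in the biological brain, which is exactly the point of the argument.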

This apparent paradox reminds me of a biblical saying:

Whoever tries to keep his life will lose it, and whoever loses his life will preserve it. – Luke 17:33 (with numerous parallels)

The apparent paradox is largely a matter of perspective, and much depends on the unfortunate use of the word destructive.  The entire premise of uploading is to save that which matters, to destroy nothing of ultimate importance for conscious identity.  If we accept the premise, then perhaps a better terminology for this type of process is in order: such as mind preservation and transformation.

There Be Critics:

I’ll be one of the first to admit that this whole idea of freezing your brain, slicing it up into millions of microscopically thin slices, scanning them, and then creating an AI software emulation that not only believes itself to be me, but is actually factually correct in that belief, sounds at least somewhat crazy.  It is not something one accepts on faith.

But then again, I am still somewhat amazed every time I fly in a plane.  I am amazed that the wings don’t rip apart from mechanical stress, amazed every time something so massive lifts itself into the sky.  The first airplane pioneers didn’t adopt a belief in flight based on faith; they believed in flight on the basis of a set of observation-based scientific predictions.  Now that the technology is well developed and planes fly us around the world safely every day, we adopt a well justified faith in flight.  Uploading will probably follow a similar pattern.

In the meantime there will be critics.  Reading through recent articles from the Journal of Evolution and Technology, I stumbled upon this somewhat interesting critique of uploading from Nicholas Agar.  In a nutshell the author attempts to use a “Searle’s Wager” argument (modeled on Pascal’s Wager) to show that uploading has a poor payoff/risk profile, operating under the assumption that biological immortality of some form will be simultaneously practical.

Any paper invoking Searle’s Chinese Room Argument or Pascal’s Wager is probably getting off to a bad start.  Employing both in the same paper will not end well.

Agar invokes Searle without even attempting to defend Searle’s non-argument, instead employing Searle as an example of ‘philosophical risk’.  Risk analysis is a good thing, but there is a deeper problem with Agar’s notion.

There is no such thing as ‘philosophical risk’.  Planes do not fail to fly because philosophers fail to believe in them.  ‘Philosophical failure’ is not an acceptable explanation for an airplane crash.  Likewise, whether uploading will or will not work is purely a technical consideration.  There is only technical risk.

So looking at the author’s wager table, I assign near-zero probability to the column under “Searle is right”.  There is however a very real possibility that uploading fails, and “you are replaced by a machine incapable of conscious thought”; but all of those failure modes are technical, all of them are at least avoidable, and Searle’s ‘argument’ provides no useful information on the matter one way or the other.  It’s just a waste of thought-time.

The second failing, courtesy of Pascal’s flawed Wager, is that of unrealistically focusing on only a few of the possibilities.  In the “Kurzweil is right” scenario, whether you upload or not, there are many possibilities other than “you live”.  Opting to stay biological, you could still die even with the most advanced medical nanotechnology of the far future.  I find it unlikely that mortality from all diseases can be reduced arbitrarily close to zero.  Biology is just too messy and chaotic.  Like Conway’s Game of Life, life is not a long-term stable state.  And no matter how advanced nano-medicine becomes, there are always other causes of death.  Eliminating all disease causes, even if possible, would only extend the median lifespan into centuries (without also reducing all other ‘non-natural’ causes of death).
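The “centuries” figure can be illustrated with a quick calculation: if disease were eliminated but a constant residual annual death rate from accidents and other ‘non-natural’ causes remained, lifespans would follow an exponential distribution.  The rate below is my illustrative assumption, not a figure from Agar or Kurzweil:

```python
import math

# Assume disease is eliminated but a constant residual annual death rate
# remains (accidents, violence, etc.).  Lifespans are then exponentially
# distributed, with median = ln(2) / rate.
annual_death_rate = 0.001  # 0.1%/year: my illustrative assumption

median_lifespan = math.log(2) / annual_death_rate
print(round(median_lifespan))  # 693 -> centuries, not immortality
```

Even a residual death rate an order of magnitude lower still yields median lifespans measured in millennia rather than anything like true immortality, which is the contrast with backed-up uploads drawn below.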

Nor is immortality guaranteed for uploads.  The key difference, however, is that uploads will be able to make backups, and that makes all the difference in the world.

Intelligence Amplification: Hype and Reality

The future rarely turns out quite as we expect.  Pop sci futurists of a generation ago expected us to be flying to work by the dawn of the 21st century.  They were almost right: both flying cars and jet packs have just recently moved into the realm of realization.  But there is a huge gap between possible and economically practical, between prototype and mass product.

Undoubtedly many of the technologies futurists promote today will fare no better.  Some transhumanists, aware that the very future they champion may itself render them obsolete, rest their hopes on human intelligence amplification.  Unfortunately not all future technologies are equally likely.  Between brain implants, nanobots, and uploading, only the latter has long-term competitive viability, but it is arguably more a technology of posthuman transformation than of human augmentation.  The only form of strict intelligence amplification that one should bet much on is software itself (namely, AI software).

Brain Implants:

Implanting circuitry into the human brain already has uses today in correcting some simple but serious conditions, and we should have little doubt that eventually this technology could grow into full-scale cognitive enhancement: it is at least physically possible.  That being said, there are significant technical challenges in creating effective and safe interfaces between neural tissue and dense electronics at the bandwidth capacities required to actually boost mental capability.  Only a small fraction of possible technologies are feasible, and only a fraction of those actually are economically competitive.

Before embarking on a plan for brain augmentation, let’s briefly consider the simpler task of augmenting a computer.  At a high level of abstraction, the general von Neumann architecture separates memory and computation.  Memory is programmatically accessed and uniformly addressable.  Processors in modern parallel systems are likewise usually modular and communicate with other processors and memory through clearly defined interconnect channels that are also typically uniformly addressable and time-shared through some standardized protocol.  In other words, each component of the system, whether processor or memory, can ‘talk’ to other components in a well defined language.  The decoupling and independence of each module, along with the clearly delineated communication network, makes upgrading components rather straightforward.

The brain is delineated into many functional modules, but the wiring diagram is massively dense and chaotic.  It’s a huge messy jumble of wiring.  The white matter, the bulk of the brain’s interior beneath the cortex, is composed of this huge massed tangle of interconnect fabric.  And unlike in typical computer systems, most of those connections appear to be point-to-point.  If two brain regions need to talk to each other, typically there are great masses of dedicated wires connecting them.  Part of the need for all that wiring stems from the slow speed of the brain.  It has a huge computational capacity, but the individual components are extremely slow and dispersed, so the interconnection needs are immense.

The brain’s massively messy interconnection fabric poses a grand challenge for advanced cybernetic interfaces.  It has only a few concentrated conduits which external interfaces could easily take advantage of: namely the main sensory and motor pathways, such as the optic nerve, the auditory pathways, and the spinal cord.  But if the aim of cognitive enhancement is simply to interface at the level of existing sensory inputs, then what is the real advantage over traditional interfaces?  Assuming one has an intact visual system, there really is little to no advantage in directly connecting to the early visual cortex or the optic nerve over just beaming images in through the eye.

Serious cognitive enhancement would come only through outright replacement of brain subsystems and/or through significant rewiring to allow cortical regions to redirect processing to more capable electronic modules.  Due to the wiring challenge, the scale and scope of the required surgery is daunting, and it is not yet clear that it will ever be economically feasible without some tremendous nanotech-level breakthroughs.

However, these technical challenges are ultimately a moot point.  Even when we do have the technology for vastly powerful superhuman brain implants, it will never be more net energy/cost effective than spending the same resources on a pure computer hardware AI system.

For the range of computational problems it is specialized for, the human brain is more energy efficient than today’s computers, but largely because it runs at tremendously slow speeds compared to our silicon electronics, and computational energy demands scale with speed.  We have already crossed the miniaturization threshold where our transistors are smaller than the smallest synaptic computing elements in the brain[1].  The outright advantage of the brain (at least in comparison to normal computers) is now mainly in the realm of sheer circuitry size (area equivalent to many thousands of current chips), and will not last beyond this decade.

So when we finally master all of the complexity of interfacing dense electronics with neural tissue, and we somehow find a way to insert large chunks of that into a living organic brain without damaging it beyond repair, and we somehow manage to expel all of the extra waste heat without frying the brain (even though it already runs with little to no spare heat capacity), it will still always be vastly less efficient than just building an AI system out of the same electronics!

We don’t build new supercomputers by dusting off old Crays to upgrade them via ‘interfacing’ with much faster new chips.


Ray Kurzweil puts much faith in the hope of nanobots swarming through our blood, allowing us to interface more ‘naturally’ with external computers while upgrading and repairing neural tissue to boot.  There is undoubtedly much value in such a technology, even if there is good reason to be highly skeptical about the timeline of nanobot development.  We have a long predictable trajectory in traditional computer technology and good reasons to have reasonable faith in the ITRS roadmap.  Drexlerian-style nanobots, on the other hand, have been hyped for a few decades now but if anything seem even farther away.

Tissue repairing nanobots of some form seem eventually likely (as is all technology given an eventual Singularity), but ultimately they are no different from traditional implants in the final analysis.  Even if possible, they are extremely unlikely to be the most efficient form of computer (because of the extra complexity constraint of mobility).  And if nanobots somehow turned out to be the most efficient form for future computers, then it would still be more efficient to just build a supercomputer AI out of pure nanobots!

Ultimately then the future utility of nanobots comes down to their potential for ‘soft uploading’.  In this regard they will just be a transitional form: a human would use nanobots to upload, and then move into a faster, more energy efficient substrate.  But even in this usage nanobots may be unlikely, as nanobots are a more complex option in the space of uploading technologies: destructive scanning techniques will probably be more viable.


Uploading is the ultimate transhumanist goal, at least for those who are aware of the choices and comfortable with the philosophical questions concerning self-hood.  But at this point in time it is little more than a dream technology.  Its development depends on significant advances not only in computing, but also in automated 3D scanning technologies, which currently attract insignificant levels of research funding.

The timeline for future technologies can be analyzed in terms of requirement sets.  Uploading requires computing technology sufficient for at least human-level AI, and possibly much more.[2]  Moreover, it also probably requires technology powerful enough to economically deconstruct and scan around ~1000 cubic centimeters of fragile neural tissue down to a resolution sufficient for imaging synaptic connection strengths (likely nanometer-level resolution), recovering all of the essential information into digital storage: saving a soul of pure information from its shell of flesh, so to speak.

The economic utility of uploading thus boils down to a couple of simple yet uncomfortable questions: what is the worth of a human soul?  What is the cost of scanning a brain?

Not everyone will want to upload, but those that desire it will value it highly indeed, perhaps above all else.  Unfortunately most uploads will not have much if any economic value, simply due to competition from other uploads and AIs.  Digital entities can be replicated endlessly, and new AIs can be grown or formed quickly.  So uploading is likely to be the ultimate luxury service, the ultimate purchase.  Who will be able to afford it?

The cost of uploading can be broken down into the initial upfront research cost followed by the per-upload cost of the scanning machine’s time and the cost of the hardware one uploads into.  Switching to the demand view of the problem, we can expect that people will be willing to pay at least one year of income for uploading, and perhaps as much as half or more of their lifetime income.  A small but growing cadre of transhumanists currently pay up to one year of average US income for cryonic preservation, even with only an expected chance of eventual success.  Once uploading is fully developed into a routine procedure, we can expect it will attract a rather large market of potential customers willing to give away a significant chunk of their wealth for a high chance of living many more lifetimes in the wider Metaverse.

On the supply side it seems reasonable that the cost of a full 3D brain scan can eventually be scaled down to the cost of etching an equivalent amount of circuitry using semiconductor lithography.  Scanning technologies are currently far less developed but eventually have similar physical constraints, as the problem of etching ultra-high resolution images onto surfaces is physically similar to the problem of ultra-high resolution scanning of surfaces.  So the cost of scanning will probably come down to some small multiple of the cost of the required circuitry itself.  Eventually.

Given a reasonable estimate of about 100 terabytes of equivalent storage for the whole brain, this boils down to: 1.) <$10,000 if the data is stored on 2011 hard drives, 2.) <$100,000 for 2011 flash memory, or 3.) <$500,000 for 2011 RAM[3].  We can expect a range of speed/price options, with a minimum floor price corresponding to the minimum hardware required to recreate the original brain’s capabilities.  Based on current trends and even the more conservative projections for Moore’s Law, it seems highly likely that the brain hardware cost is already well under a million dollars and will fall into the 10 to 100 thousand dollar range by the end of the decade.
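As a sanity check on those figures, here is the arithmetic with rough 2011 per-terabyte prices.  The per-terabyte numbers are my approximations chosen to match the bounds in the text, not exact market quotes:

```python
# Back-of-envelope check of the storage-cost bounds for a ~100 TB brain.
# Per-terabyte prices are rough 2011 approximations (my assumption).
BRAIN_STORAGE_TB = 100

price_per_tb_2011 = {
    "hard drive": 100,    # ~$100/TB
    "flash": 1_000,       # ~$1,000/TB
    "RAM": 5_000,         # ~$5,000/TB
}

for medium, price in price_per_tb_2011.items():
    print(f"{medium}: ${BRAIN_STORAGE_TB * price:,}")
# hard drive: $10,000
# flash: $100,000
# RAM: $500,000
```

The interesting point is how ordinary these numbers already are: the storage floor is within the range of a house or a luxury car, not a moonshot budget.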

Thus scanning technology will be the limiting factor for uploading until it somehow attracts the massive funding required to catch up with semiconductor development.  Given just how far scanning has to go, we can’t expect much progress until perhaps Moore’s Law begins to slow down and run its course, the world suddenly wakes up to the idea, or we find a ladder of interim technologies that monetize the path to uploading.  We have made decades of progress in semiconductor miniaturization only because each step along the way has paid for itself.

The final consideration is that Strong AI almost certainly precedes uploading.  We can be certain that the hardware requirements to simulate a scanned human brain are a strict upper bound on the requirements for a general AI of equivalent or greater economic productivity.  A decade ago I had some hope that scanning and uploading could arrive before the first generation of human-surpassing general AIs.  Given the current signs of an AI resurgence this decade and the abysmally slow progress in scanning, it now appears clear that uploading is a later, post-AI technology.

  1. According to Wikipedia, synaptic clefts measure around 20 nm.  From this we can visually guesstimate that typical synaptic axon terminals are 4-8 times that in diameter, say over 100 nm.  In comparison, the 2011 Intel microprocessor I am writing this on is built on 32-nm ‘half-pitch’ features, which roughly means that the full distance between typical features is 64 nm.  The first processors on the 22-nm node are expected to enter volume production in early 2012.  Of course smallest feature diameter is just one aspect of computational performance, but it is an interesting comparison milestone nonetheless.
  2. See the Whole Brain Emulation Roadmap for a more in depth requirements analysis.  It seems likely that scanning technology could improve rapidly if large amounts of money were thrown at it, but that doesn’t much help clarify any prognostications.
  3. I give a range of prices just for the storage cost portion because it represents a harder bound.  There is more variance in the cost estimates for computation, especially when one considers the range of possible thoughtspeeds, but the computational cost can be treated as some multiplier over the storage cost.
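The guesstimate in footnote 1 can be spelled out as arithmetic.  The 5x multiplier below is my choice from the footnote’s stated 4-8x range:

```python
# Footnote 1's size comparison, made explicit.
cleft_nm = 20                  # synaptic cleft width, per the footnote
terminal_nm = cleft_nm * 5     # "4-8 times that" -> take ~5x: ~100 nm
half_pitch_nm = 32             # 2011 process node ('half-pitch')
full_pitch_nm = 2 * half_pitch_nm  # ~64 nm between typical features

# Synaptic terminals are already larger than the spacing of
# transistor-scale features on a 2011 processor.
print(terminal_nm, full_pitch_nm)  # 100 64
```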

Know thyself: personal identity, uploading, and duplicity

The so called consciousness conundrum

The future according to the Singularity posits a new age of wonders, and the promise of effective immortality through radical new technologies such as mind uploading and medical nanobots.  The broad scope of augmentation and change these developments will enable is the basis for the concepts of Transhumanity and Posthumanity.  We are but a stage in the unfolding evolution of the universe, and history shows that life must always change and adapt in order to survive and progress.

For many, the changes envisioned by Singularity-futurists are too radical, and some even argue that the broader Transhumanist agenda itself is set on the extinction of humanity[1].  A related persistent set of critics maintain that while some forms of immortality – such as indefinite biological repair – are feasible, other technologies such as mind uploading, which permit duplicity, could not possibly preserve what we most care about: personal subjective identity[2].  This second viewpoint may be common; it was espoused indirectly by Bill Gates in dialog in “The Singularity is Near”, for example.  Much depends on just what exactly we choose to identify with, both individually and collectively.

There is overwhelming evidence and consensus within the scientific community that the mind, and thus personal identity, has a physical basis in the brain.  A complete analysis of the accumulated evidence from psychology, neurology, cognitive science, and yes, even artificial intelligence leads to the conclusion that consciousness is a physical information-processing phenomenon.  Thus most people today can accept that other systems, such as sufficiently advanced computer systems, could exhibit not only human-level intelligence, but consciousness similar or equivalent to the human experience.  But to accept that uploading is possible requires more than just encoding consciousness in a machine substrate: it requires encoding a particular mind such that one’s particular personal identity and consciousness is preserved and then realized in a new substrate.

Personal Identity: What am I?

Sometimes our habits of everyday experience and language obscure the deeper issues of personal identity.  If someone showed you a picture of a child, and you recognized it as your own childhood picture, you might say “That’s me.  I was seven years old then.”  You thus self-identify with the child in the picture.  Of course, assuming you are not a seven year old looking at a live video feed, the child in the picture no longer exists – you are thus self-identifying with a historical person.

Imagine then a time portal which brings your previous child-self into the present, into your presence.  Would it still be correct then to say, “That’s me”?  Clearly this child would not be you, as you and the child would both exist separately in the present: adult-you and child-you would be separate intelligent brains with their own threads of consciousness and sense of personal identity.  But curiously, the time travel portal would not change you or the child.  So unless you accept that you can be two people simultaneously, the child can’t be you.

To avoid this dilemma, let us recognize that even without the time travel, it’s not quite correct to say “that’s me”; it would be more correct to say “that was me”.  At some point in the past, you were a child.  Out of the space of all possible people, that child became you.  So you can correctly self-identify with it, but only partially – you have probably changed considerably since then.  That partial self-identification has a psychological and physical basis in the memories you may have, and an arrow of evolutionary development, a continuity extending back from your current state of mind to that of the historical child.

Likewise, going forward in time, you will change.  You will become someone else, and if a time portal transported that future version of you back into your presence right now, it would clearly not be you – and could potentially be less similar to your current self than other people are.  And yet, you self-identify with a future version of yourself, you project your identity forward in time, and unless you are suicidal, you even make sacrifices in the present for the benefit of that future person.

The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value.  Yet the exact consciousness you experience right now exists for only a brief moment in time, and you are never exactly the same person as any past or future version of yourself.  Your cells, and the neurons of your brain, are made of completely different molecules, and the configurations of those molecules and the all-important synaptic junctions, which are the locus of the brain’s information processing and storage, change as we form new memories, beliefs, ideas, thoughts, and feelings.

We are constantly changing, yet we maintain a strong sense of personal identity stretching back into our history and projecting forward into our future.  As John Locke ingeniously argued more than 300 years ago, we are the same person to the extent that we are conscious of our past and future thoughts and actions in the same way as we are conscious of our present thoughts and actions.  Or put another way, I am that who I remember myself to be, that who I am conscious of being.

Those who accept this line of reasoning so far usually accept technologies which change non-essential physical elements of the brain but permit only a single forward thread of consciousness, i.e. no branching or duplicities.  Even if these technologies significantly change the brain, as long as they preserve the essential physical system underlying conscious identity, they are no more problematic than the significant physical changes the brain undergoes during regular life, including the frequent molecular replacement of cells (neurons included) and the continuously shifting web of patterns encoded in their synapses.

We can denote the essential physical information system underlying conscious identity as the mind: the essential subset of the brain that must be maintained.  Current physical evidence shows that the mind is physically encoded in the microscopic synaptic junctions: the fundamental circuit building blocks of the brain.  The rest, such as the skull, circulatory system, glial cells, and even the neurons themselves, are largely secondary structures supporting the computation of the synaptic network.  It’s also important to remember that continuity of identity is fluid and variable: some change is inevitable, but we must draw the line somewhere.

Consider some thought experiments:

Complete Amnesia:  Jane’s mind is wiped by some highly selective destructive process which just randomizes the delicate synaptic connections, but otherwise leaves the brain and all of its neurons structurally intact.  If the brain could survive this, current science predicts that Jane would essentially restart life as an infant – all memories, learned behaviors, personality traits, etc – everything to mentally identify Jane as Jane – would be erased.  Suppose Jane is then abducted and transported to a foreign country, and grows up speaking a new language, culture, and social identity as Katie.  To what extent can we say that Jane and Katie are the same person?  Is conscious identity preserved?

Mind Transfer:  Suppose Jane’s mind is wiped as in Complete Amnesia, but instead of randomizing the synaptic connections, the memories and patterns of another person – say Bill – are encoded into the synapses.  (Yes, I’m aware that this may be near-impossible without also altering neuron wirings or adding or removing neurons, but suppose far-future technology minimizes that.)  Bill has all of his memories intact, never has any mental connection to Jane, but now inhabits Jane’s body.  Is Bill still Jane somehow?  Is Jane’s conscious identity preserved?

Brain Transfer: Same as above, except Bill’s entire brain is transferred into Jane’s skull, and Jane’s brain is thrown out.

According to the current understanding of the brain’s circuitry, all of these cases result in Jane’s death – the irreversible cessation of her personal conscious identity (unless her brain’s synaptic structures are recorded and preserved).

Teleportation and Duplicity:

Slice and Dice: A far-future technology near-instantly slices your entire body into very small pieces and then just as quickly perfectly reassembles you.  If this had no physically detectable effects, would it affect your conscious identity in any way?  Would it still be you?  Does it matter how fine the slices are?  Macroscopic, microscopic, cellular, molecular, atomic, sub-atomic: does it matter?

Slice, Dice, and Store: You are sliced and diced, but instead of being immediately reassembled, your pieces are stored in perfect stasis, and you are then reassembled as before, but sometime later.  Do you die?  Or are you just in a form of stasis?  Does it matter how long you are in stasis?  Would your conscious identity continue?

Slice, Dice, and Teleport: You are sliced, diced, and stored as above, but instead of being reassembled immediately, your pieces are transported and reassembled elsewhere.  Now imagine that a couple of the pieces are replaced in transit with their complete information descriptions, which are then used to construct perfect replicas of those pieces somewhere else from new building blocks.  Is it still you?  Does it matter how many pieces are replaced?  Remember that as far as the universe is concerned, there is no detectable difference for external observers no matter how many pieces are replaced.  And of course, our constituent pieces are being replaced at the molecular level continuously as part of natural organic metabolism.

Slice, Dice, and Duplicate:  Now imagine that your atomic pieces are each carefully scanned, with their physical structure recorded as information just as before, but using a non-destructive process.  The pieces are then randomly divided into two groups, A and B, and each group is sent to a different location – location A and location B – along with the full information description.  At each location the missing pieces are reconstructed from that description to reform you wholly: you are reassembled at location A from the original subset A and a reconstruction of subset B, and at location B from the original subset B and a reconstruction of subset A.

You are thus reconstructed at both locations A and B, and the two reconstructions are identical except for their location.  Crucially, neither version is wholly a copy nor wholly the original – both are built from a mix of original and copied components, and both are 100% physically indistinguishable from the versions constructed in the prior experiments.  Which version are you?  Does your conscious identity continue in one, both, or neither?
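The duplication protocol itself is simple enough to write down as a toy model (again, purely illustrative): pieces are split randomly into two groups, each group travels with the full description, and each destination fills in its missing half by reconstruction.

```python
import random

def duplicate(body):
    """Split a body's pieces randomly into groups A and B, ship each
    group along with the full informational description, and rebuild
    a complete body at each destination."""
    description = [dict(piece) for piece in body]   # non-destructive scan
    in_a = set(random.sample(range(len(body)), len(body) // 2))
    # At location A: original A-pieces plus reconstructed B-pieces.
    at_a = [body[i] if i in in_a else dict(description[i])
            for i in range(len(body))]
    # At location B: original B-pieces plus reconstructed A-pieces.
    at_b = [dict(description[i]) if i in in_a else body[i]
            for i in range(len(body))]
    return at_a, at_b

body = [{"piece": i} for i in range(10)]
a, b = duplicate(body)

# Both results are identical to the original and to each other;
# nothing in the pattern marks either one as "the copy".
assert a == b == body
```

Note that the asymmetry we intuitively want – one "original", one "copy" – appears nowhere in the procedure: swap the labels A and B and the protocol is unchanged.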

This last thought experiment is initially unsettling to most people.  It is difficult to accept that both versions are still you in the same sense as in the prior thought experiments – that you could essentially duplicate your conscious identity, becoming two (or more) future selves.  It is easier to think that one is the ‘original’ and one is the ‘copy’, and that your consciousness is preserved only in the ‘original’, but any such designation is completely arbitrary: neither A nor B has any more or less of a claim to being the ‘original’.  It is also difficult to accept that the process results in two new beings who do not continue your conscious identity – that it somehow kills you – when it is clearly no worse than the prior thought experiments.

The consistent solution of course is rather simple: there is no such thing as an intrinsically unique pattern at the scales that matter for minds.  (Quantum mechanics does impose a no-cloning theorem for unknown quantum states, but by every indication the mind’s essential pattern is classical information encoded in synaptic structure, and classical information can be copied freely.)  Anything at that level can be copied, anything can exist more than once, including you or me.

The Duplicity Problem:

Our evolved capacity for introspective conscious self-awareness, and for projecting that self-awareness onto future versions of ourselves, never had to contend with anything more than a single forward path of conscious identity.  Duplicity thought experiments are thus difficult to accept intuitively.  But however confusing they are to us, the universe is never confused – only we are.  However difficult to intuit, the laws of physics have nothing against our streams of consciousness forking and branching into two or more paths.

In the slice, dice, and duplicate thought experiment, you become both A and B.  From that point forward, those two people are two instances of yourself, and they then slowly begin to diverge.  Both are you; both will self-identify with you just as easily and as much as you self-identify with the person you were a minute ago.  Both have an equally valid claim to being you.  You become both.  It is intuitively easier, and almost equivalent, to imagine that your stream of conscious identity will continue randomly into one path or the other.  That is not quite correct, but it is nearly equivalent in terms of consequences.
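The fork-then-diverge picture can be made concrete with a toy model (all names here are illustrative assumptions, not a real representation of a mind): two successors start as exactly the same pattern, and only subsequent experience makes them distinct people who share a common past.

```python
import copy

def fork(mind):
    # Duplication: both successors are exact copies of the same pattern.
    return copy.deepcopy(mind), copy.deepcopy(mind)

def experience(mind, event):
    # Each new experience is appended to that instance's own history.
    mind["memories"].append(event)
    return mind

you = {"memories": ["childhood", "reading this thought experiment"]}
a, b = fork(you)

# At the moment of forking, both instances are the same pattern...
assert a == b

# ...and from then on they diverge into two distinct individuals
# who nonetheless share an identical past.
experience(a, "woke up at location A")
experience(b, "woke up at location B")
assert a != b
assert a["memories"][:2] == b["memories"][:2]
```

The design point is that identity here is a branching history, not a single token: there is no field in the data marking one branch as the continuation and the other as the impostor.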

The consequences of the rational ‘patternist’ approach to personal identity and duplicity are:

  • Various forms of uploading are possible, and any form that fully preserves the essential physical information of the mind – i.e. the synaptic connectivity information – is sufficient to preserve personal conscious identity, including uploading and transfer to a non-biological substrate
  • Conscious identity changes over time, and there is a slippery-slope spectrum of possible preservations: some arbitrary legal delineation must be made
  • Duplication does not present any problem for personal conscious identity: assuming all duplicates are equally valid copies, they all preserve conscious identity and should all have equivalent rights and legal inheritance.  Essentially this means that all valid variants of a duplicating mind should equally inherit that mind’s legal and economic identity, wealth, and so on, while also being recognized as new individuals going forward
  • Duplicating oneself does help ensure survival, but that is no consolation to any future version that dies