Category Archives: Conscious Identity

Non-Destructive Uploading: A Paradox

Transhumanism may well be on the road to victory, at least judging by some indirect measures of social influence, such as the increasing number of celebrities coming out in support of cryonic preservation and future resurrection, or the prevalence of the memeset across media.

If you are advancing into that age at which your annual chance of death is becoming significant, you face an options dilemma.  The dilemma is not so much a choice of what to do now: at this moment the only real option is vitrification-based cryonics.  If the Brain Preservation Foundation succeeds, we may soon have a second, improved option in the form of plastination, but this is beside the point for my dilemma of choice.  The real dilemma is not what to do now; it is what to do in the future.

Which particular method of resurrection would you go with?  Biological immortality in your original body?  Or how about uploading?  Would you rather have your brain destructively scanned or would you prefer a gradual non-destructive process?

Those we preserve today will not have an opportunity to consider their future options or choose a possible method of resurrection, simply because we won’t be able to ask them unless we resurrect them in the first place.

The first option the cryonics community considered is some form of biological immortality.  The idea is that at some point in the future we’ll be able to reverse aging, defeat all of these pesky diseases, and repair cellular damage, achieving Longevity Escape Velocity.  I find this scenario likely to come about eventually, but only because I find the AI-Singularity itself to be highly likely.  However, there is a huge difference between possible and pragmatic.

By the time biological immortality is possible, there is a good chance it will be far too expensive for most plain humans to afford.  I do not conclude this on the basis of the cost of the technology itself.  Rather I conclude this based on the economic impact of the machine Singularity.

Even if biological humans have any wealth in the future (and that itself is something of a big if), uploading is the more rational choice, for two reasons: it is the only viable route towards truly unlimited, massive intelligence amplification, and it may be the only form of existence that a human can afford.  Living as an upload can be arbitrarily cheap compared to biological existence.  An upload will be able to live in a posthuman paradise for a thousandth, then a millionth, then a billionth of the biological costs of living.  Biological humans will not have any possible hope of competing economically with the quick and the dead.

Thus I find it more likely that most of us will eventually choose some form of uploading.  Or perhaps rather a small or possibly even tiny elite population will choose and be able to upload, and the rest will be left behind.  In consolation, perhaps “The meek shall inherit the Earth”.  Across most of the landscape of futures, I foresee some population of biological humans plodding along, perhaps even living lives similar to those of today, completely oblivious to the vast incomprehensible Singularity Metaverse blossoming right under their noses.

For the uploading options, at this current moment it looks like destructive scanning is on the earlier development track (as per the Whole Brain Emulation Roadmap), but let us assume that both destructive and non-destructive technologies become available around the same time.  Which would you choose?

At first glance non-destructive uploading sounds less scary; perhaps it is the safer wager.  You might think that choosing a non-destructive method is an effective hedging strategy.  That may be true if the scanning technology is error prone.  But let’s assume that the technology is mature and exceptionally safe.

A non-destructive method preserves your original biological brain and creates a new copy which then goes on to live as an upload in the Metaverse.  You thus fork into two branches, one of which continues to live as if nothing happened.  Thus a non-destructive method is not significantly better than not uploading at all!  From the probabilistic perspective on the branching problem, a non-destructive scan has only a 50% chance of success (because in one half of your branches you end up staying in your biological brain).  The destructive scanning method, on the other hand, doesn’t have this problem: it doesn’t branch, and you always end up as the upload.
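The branch-counting arithmetic above can be made explicit with a toy model.  This is purely an illustration of the probabilistic perspective described here (treating every branch as equally weighted), not a claim about any real scanning technology:

```python
from fractions import Fraction

def upload_success_probability(branches):
    """Toy model: given the list of branches a scan produces, return the
    fraction of branches in which 'you' end up as the upload, treating
    each branch as equally likely under the branching view."""
    uploads = sum(1 for b in branches if b == "upload")
    return Fraction(uploads, len(branches))

# Destructive scan: no fork, the single continuing branch is the upload.
destructive = ["upload"]

# Non-destructive scan: you fork into a biological branch and an upload branch.
non_destructive = ["biological", "upload"]

print(upload_success_probability(destructive))      # 1
print(upload_success_probability(non_destructive))  # 1/2
```

The same arithmetic shows why repeated non-destructive scans don’t help: each scan adds another biological branch, so the upload fraction never reaches one.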

This apparent paradox reminds me of a biblical saying:

Whoever tries to keep his life will lose it, and whoever loses his life will preserve it. – Luke 17:33 (with numerous parallels)

The apparent paradox is largely a matter of perspective, and much depends on the unfortunate use of the word destructive.  The entire premise of uploading is to save that which matters, to destroy nothing of ultimate importance for conscious identity.  If we accept the premise, then perhaps a better terminology for this type of process is in order: such as mind preservation and transformation.

There Be Critics:

I’ll be one of the first to admit that this whole idea of freezing your brain, slicing it up into millions of microscopically thin slices, scanning them, and then creating an AI software emulation that not only believes itself to be me, but is actually factually correct in that belief, sounds at least somewhat crazy.  It is not something one accepts on faith.

But then again, I am still somewhat amazed every time I fly in a plane.  I am amazed that the wings don’t rip apart due to mechanical stress, amazed every time something so massive lifts itself into the sky.  The first airplane pioneers didn’t adopt a belief in flight based on faith; they believed in flight on the basis of a set of observation-based scientific predictions.  Now that the technology is well developed and planes fly us around the world safely every day, we adopt a well justified faith in flight.  Uploading will probably follow a similar pattern.

In the meantime there will be critics.  Reading through recent articles from the Journal of Evolution and Technology, I stumbled upon a somewhat interesting critique of uploading from Nicholas Agar.  In a nutshell, the author attempts to use a “Searle’s Wager” (based on Pascal’s Wager) type argument to show that uploading has a poor payoff/risk profile, operating under the assumption that biological immortality of some form will be simultaneously practical.

Any paper invoking Searle’s Chinese Room Argument or Pascal’s Wager is probably getting off to a bad start.  Employing both in the same paper will not end well.

Agar invokes Searle without even attempting to defend Searle’s non-argument, and instead employs Searle as an example of “philosophical risk”.  Risk analysis is a good thing, but there is a deeper problem with Agar’s notion.

There is no such thing as “philosophical risk”.  Planes do not fail to fly because philosophers fail to believe in them.  “Philosophical failure” is not an acceptable explanation for an airplane crash.  Likewise, whether uploading will or will not work is purely a technical consideration.  There is only technical risk.

So looking at the author’s wager table, I assign near-zero probability to the column under “Searle is right”.  There is however a very real possibility that uploading fails, and “you are replaced by a machine incapable of conscious thought”; but all of those failure modes are technical, all of them are at least avoidable, and Searle’s ‘argument’ provides no useful information on the matter one way or the other.  It’s just a waste of thought-time.

The second failing, courtesy of Pascal’s flawed Wager, is one of unrealistically focusing on only a few of the possibilities.  In the “Kurzweil is right” scenario, whether you upload or not, there are many more possibilities than “you live”.  Opting to stay biological, you could still die even with the most advanced medical nanotechnology of the far future.  I find it unlikely that mortality from all diseases can be reduced arbitrarily close to zero.  Biology is just too messy and chaotic.  Like the patterns in Conway’s Game of Life, life is not a long-term stable state.  And no matter how advanced nano-medicine becomes, there are always other causes of death.  Eliminating all disease causes, even if possible, would only extend the median lifespan into centuries (without also reducing all other ‘non-natural’ causes of death).

Nor is immortality guaranteed for uploads.  However, the key difference is that uploads will be able to make backups, and that makes all the difference in the world.
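The difference backups make can be sketched with a toy survival model.  The hazard rates below are entirely hypothetical and chosen only for illustration; the point is the structure of the argument (death requires losing every copy at once), not the numbers:

```python
def survival_probability(per_century_death_risk, centuries, backups=0):
    """Toy model: probability of surviving the given number of centuries.
    With backups, dying requires losing the running copy AND every backup
    in the same century, with failures assumed independent."""
    effective_risk = per_century_death_risk ** (backups + 1)
    return (1 - effective_risk) ** centuries

# Hypothetical 10% per-century risk of death, over a thousand years.
bio = survival_probability(0.10, centuries=10)             # no backups
upload = survival_probability(0.10, centuries=10, backups=2)

print(f"biological: {bio:.3f}")    # 0.349
print(f"upload:     {upload:.3f}")  # 0.990
```

Even under this crude independence assumption, a couple of backups turn a coin-flip millennium into near-certain survival, which is why backups, not raw hazard reduction, dominate the long-run picture.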

Know thyself: personal identity, uploading, and duplicity

The so-called consciousness conundrum

The future according to the Singularity posits a new age of wonders, and the promise of effective immortality through radical new technologies such as mind uploading and medical nanobots.  The broad scope of augmentation and change these developments will enable is the basis for the concepts of Transhumanity and Posthumanity.  We are but a stage in the unfolding evolution of the universe, and history shows that life must always change and adapt in order to survive and progress.

For many, the changes envisioned by Singularity-futurists are too radical, and some even argue that the broader Transhumanist agenda itself is set on the extinction of humanity[1].  A related, persistent set of critics maintain that while some forms of immortality – such as indefinite biological repair – are feasible, other technologies such as mind uploading, which permit duplicity, could not possibly preserve what we most care about: personal subjective identity[2].  The second viewpoint may be common; it was espoused indirectly by Bill Gates in a dialogue in The Singularity is Near, for example.  Much depends on just what exactly we choose to identify with, both individually and collectively.

There is overwhelming evidence and consensus within the scientific community that the mind, and thus personal identity, has a physical basis in the brain.  A complete analysis of the accumulated evidence from psychology, neurology, cognitive science, and yes, even artificial intelligence leads to the conclusion that consciousness is a physical information-processing phenomenon.  Thus most people today can accept that other systems, such as sufficiently advanced computer systems, could exhibit not only human-level intelligence but consciousness similar or equivalent to the human experience.  But to accept that uploading is possible requires more than just encoding consciousness in a machine substrate: it requires encoding a particular mind such that one’s particular personal identity and consciousness is preserved and then realized in a new substrate.

Personal Identity: What am I?

Sometimes our habits of everyday experience and language obscure the deeper issues of personal identity.  If someone showed you a picture of a child, and you recognized it as your own childhood picture, you might say “That’s me.  I was seven years old then.”  You thus self-identify with the child in the picture.  Of course, assuming you are not a seven year old looking at a live video feed, the child in the picture no longer exists – you are thus self-identifying with a historical person.

Imagine then a time portal which brings your previous child-self into the present, into your presence.  Would it still be correct then to say, “That’s me”?  Clearly this child would not be you, as you and the child would both exist separately in the present: adult-you and child-you would be separate intelligent brains with their own threads of consciousness and sense of personal identity.  But curiously, the time travel portal would not change you nor the child.  So unless you accept that you can be two people simultaneously, the child can’t be you.

To avoid this dilemma, let us recognize that even without the time travel, it’s not quite correct to say “that’s me”; it would be more correct to say “that was me”.  At some point in the past, you were a child.  Out of the space of all possible people, that child became you.  So you can correctly self-identify with it, but only partially – you have probably changed considerably since then.  That partial self-identification has a psychological and physical basis in the memories you may have, and in an arrow of evolutionary development, a continuity extending back from your current state of mind to that of the historical child.

Likewise, going forward in time, you will change.  You will become someone else, and if a time portal transported that future version of you back into your presence right now, it would clearly not be you – and could potentially be less similar to your current self than other people.  And yet, you self-identify with a future version of yourself, you project your identity forward in time, and unless you are suicidal, even make sacrifices in the present for the benefit of that future person.

The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value.  Yet the exact consciousness you experience right now exists as only a brief moment in time, and you are never exactly the same person as any past or future version of yourself.  Your cells, and the neurons of your brain, are made of completely different molecules, and the configurations of those molecules – and of the all-important synaptic junctions which are the locus of the brain’s information processing and storage – change as we form new memories, beliefs, ideas, thoughts, and feelings.

We are constantly changing, yet we maintain a strong sense of personal identity stretching back into our history and projecting forward into our future.  As John Locke ingeniously argued more than 300 years ago, we are the same person to the extent that we are conscious of our past and future thoughts and actions in the same way as we are conscious of our present thoughts and actions.  Or put another way, I am that who I remember myself to be, that who I am conscious of being.

Those who accept this line of reasoning so far usually accept technologies which change non-essential physical elements of the brain but permit only a single forward thread of consciousness, ie no branching or duplicities.  Even if these technologies significantly change the brain, as long as they preserve the essential physical system underlying conscious identity, they are no more problematic than the significant physical changes the brain undergoes during regular life, including the frequent molecular replacement of cells including neurons, and the continuously shifting web of patterns encoded in their synapses.

We can denote the essential physical information system underlying conscious identity as the mind: the essential subset of the brain that must be maintained.  Current physical evidence shows that the mind is physically encoded in the microscopic synaptic junctions: the fundamental circuit building blocks of the brain.  The rest – the skull, circulatory system, glial cells, and even the neurons themselves – are largely secondary structures, supporting the computation of the synaptic network.  It’s also important to remember that continuity of identity is fluid and variable: some change is inevitable, but we must draw the line somewhere.

Consider some thought experiments:

Complete Amnesia:  Jane’s mind is wiped by some highly selective destructive process which randomizes the delicate synaptic connections, but otherwise leaves the brain and all of its neurons structurally intact.  If the brain could survive this, current science predicts that Jane would essentially restart life as an infant – all memories, learned behaviors, personality traits, etc. – everything that mentally identifies Jane as Jane – would be erased.  Suppose Jane is then abducted and transported to a foreign country, and grows up with a new language, culture, and social identity as Katie.  To what extent can we say that Jane and Katie are the same person?  Is conscious identity preserved?

Mind Transfer:  Suppose Jane’s mind is wiped as in Complete Amnesia, but instead of randomizing the synaptic connections, the memories and patterns of another person are encoded into the synapses – say those of Bill.  (yes, I’m aware that this may be near-impossible without also altering neuron wirings or adding or removing neurons, but suppose far future technology minimizes that)  Bill has all of his memories intact, never has any mental connection to Jane, but now inhabits Jane’s body.  Is Bill still Jane somehow?  Is Jane’s conscious identity preserved?

Brain Transfer: Same as above, except Bill’s entire brain is transferred into Jane’s skull, and Jane’s brain is thrown out.

According to our current understanding of the brain’s circuitry, all of these cases result in Jane’s death – the irreversible cessation of her personal conscious identity (unless her brain’s synaptic structures are recorded and preserved).

Teleportation and Duplicity:

Slice and Dice: A far future technology near-instantly slices your entire body into very small pieces and then just as quickly perfectly reassembles you.  If this had no physically detectable effects, would it affect your conscious identity in any way?  Would it still be you?  Does it matter how fine the slices are?  Macroscopic, microscopic, cellular, molecular, atomic, sub-atomic – does it matter?

Slice, Dice, and Store: You are sliced and diced, but instead of being immediately reassembled, your pieces are stored in a perfect stasis, and you are then reassembled as before, but sometime later.  Do you die?  Or are you just in a form of stasis?  Does it matter how long you are in stasis?  Would your conscious identity continue?

Slice, Dice and Teleport: You are sliced, diced and stored as above, but instead of being reassembled immediately, your pieces are transported and reassembled elsewhere.  Now imagine that a couple of the pieces are replaced in transit with their complete information descriptions, which are then used to construct perfect replicas of those pieces somewhere else from new building blocks.  Is it still you?  Does it matter how many pieces are replaced?  Remember that as far as the universe is concerned, there is no detectable difference for external observers no matter how many pieces are replaced.  And of course, our constituent pieces are being replaced at the molecular level continuously as part of natural organic metabolism.

Slice, Dice, and Duplicate:  Now imagine that each of your atomic pieces is carefully scanned and its physical structure recorded as information just as before, but using a non-destructive process.  Then the pieces are randomly divided into two groups: A and B.  Each group is then sent to a different location, A and B respectively, along with the full information description.  From each subset we then reconstruct the missing pieces to reform you wholly: you are reassembled at location A from the original subset A and a reconstruction of subset B, and you are also reassembled at location B from the original subset B and a reconstruction of subset A.

You are thus reconstructed at both locations A and B, and both reconstructions are identical except for their location.  Crucially, neither version is wholly a copy nor wholly original – both are built out of a mix of original and copied components, and both are 100% physically indistinguishable from the versions constructed in the prior experiments.  Which version are you?  Does your conscious identity continue in one, both, or neither?
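The procedure in this thought experiment can be sketched as a toy program.  The “pieces” and the split are of course purely illustrative; the point is that both rebuilt versions come out identical to the original, and neither has a privileged claim to being it:

```python
import random

def slice_dice_duplicate(original, seed=0):
    """Toy model of the thought experiment: randomly split the original's
    pieces into groups A and B, then rebuild two complete versions, each
    from one original group plus copies of its missing pieces."""
    rng = random.Random(seed)
    pieces = list(enumerate(original))     # (position, piece) pairs
    rng.shuffle(pieces)
    half = len(pieces) // 2
    group_a, group_b = pieces[:half], pieces[half:]

    description = dict(pieces)             # the full information record

    def rebuild(original_group):
        kept = dict(original_group)
        # For each position, use the original piece if this group has it,
        # otherwise reconstruct a copy from the information description.
        return [kept.get(i, description[i]) for i in range(len(original))]

    return rebuild(group_a), rebuild(group_b)

you = ["piece%d" % i for i in range(8)]
version_a, version_b = slice_dice_duplicate(you)
print(version_a == you and version_b == you)  # True
```

Note that nothing in the program marks either output as “the original”: the labels A and B are as arbitrary as the random split itself.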

This last thought experiment is initially unsettling to most people: it’s difficult to accept that both versions are still you in the same sense as the prior thought experiments, difficult to accept that you could essentially duplicate your conscious identity, becoming two (or more) future selves.  It’s easier to think that one is the ‘original’ and one is the ‘copy’, and that your consciousness is only preserved in the ‘original’ and not the ‘copy’, but it’s clear that any such designation is completely arbitrary: neither A nor B has any more or less of a claim to being the ‘original’.  It’s also difficult to accept that this process results in two new beings who do not continue your conscious identity, i.e. that this process somehow kills you, when clearly it is no worse than the prior thought experiments.

The consistent solution of course is rather simple: there is no such thing as an intrinsically unique pattern in the universe.  Physics imposes no barrier to copying here: the quantum no-cloning theorem applies only to unknown quantum states, not to the classical-scale information encoded in synaptic structure.  Anything at that level can be copied, anything can exist more than once, including you or me.

The Duplicity Problem:

Our evolved capacity for introspective conscious self-awareness, and for the forward projection of that self-awareness onto future versions of ourselves, never had to contend with anything more than a single forward path of conscious identity.  Thus, duplicity thought experiments are difficult to intuitively accept.  However confusing this is to us, the universe is never confused; only we are.  However difficult to intuit, the laws of physics have nothing against our streams of consciousness forking and branching into two or more paths.

In the slice, dice, and duplicate thought experiment, you become both A and B.  From that point forward, those two people will be two instances of yourself, and will then slowly begin to diverge.  Both will be you; they will both self-identify with you just as easily and as much as you self-identify with the person you were a minute ago.  Both will have an equally valid claim to being you.  You will become both.  It’s intuitively easier, and almost equivalent, to consider that your conscious identity stream will continue randomly into one path: i.e. you will randomly become one or the other.  That’s not quite as correct, but it is nearly equivalent in terms of consequences.

The consequences of the rational ‘patternist’ approach to personal identity and duplicity are:

  • Various forms of uploading are possible, and any form that fully preserves the essential physical information of the mind – i.e. the synaptic connectivity information – is sufficient to preserve personal conscious identity, including uploading and transfer to a non-biological substrate
  • Conscious identity changes over time and there is a slippery-slope spectrum of possible preservations: some arbitrary legal delineation must be made
  • Duplication does not present any problem for personal conscious identity: assuming all duplicates are equally valid copies, they all preserve conscious identity and should all have equivalent rights and legal inheritance.  Essentially this means that all valid variants of a duplicating mind should equally inherit that mind’s legal and economic identity, wealth, and so on, while also being recognized as new individuals going forward
  • Duplicating oneself does help ensure survival, but that is no consolation to any future version which dies