A Case Study in Personal Identity: Altered Carbon

Altered Carbon is a 2002 science fiction novel by Richard K. Morgan, which has recently been made into a gritty, futuristic, and definitely R-18 TV series. The central plot device revolves around pieces of technology called ‘cortical stacks’. These are small, palm-sized devices that are implanted at the top of the spinal column and function as digital receptacles for the human consciousness. When you die, as long as your ‘stack’ isn’t damaged, you can be brought back to life by having your consciousness downloaded from your cortical stack into the cortical stack of another body (called a ‘sleeve’), which can be either a real human body or a synthetic, grown one. This transferral process is called ‘needlecasting’ and usually involves deleting the consciousness in the first stack before making the transfer. In this way, the super-rich (called ‘meths’, a reference to the long-lived Methuselah of Biblical fame) have allegedly achieved immortality. In this article, I want to investigate this assumption by asking: are you the same person after needlecasting as you were before?

In my last article, I wrote about John Locke’s position that personal identity is conserved through the continuation of the same consciousness, where consciousness is the self-reflexive awareness that accompanies all thinking, so that we are always not just perceiving but aware that we are perceiving, and aware in such a way that it (consciousness) considers “itself as itself, the same thinking thing, [albeit] in different times and places”. To adapt Locke’s own example (his was Nestor, not Napoleon): if I am conscious of the actions of Napoleon, then that is sufficient for me to be the same person as Napoleon. It seems to me that any account of personal identity must, at the very least, include something like continuity of consciousness of this sort, so this will be my starting point. For personal identity to be preserved, continuity of consciousness is at least a necessary condition.

Note: Locke’s above definition of consciousness is essentially equivalent to relatively loose definitions of what we might call ‘mind’ or even ‘self’. If you object to the way ‘consciousness’ is used here, either of those terms can be substituted without losing much for the purposes of this article, I think.

 

A Sceptical Preamble re: Cortical Stacks

Cortical stacks allegedly ‘store’ a person’s consciousness, but it’s worth briefly questioning this notion because it strikes me as highly problematic for one reason: consciousness isn’t a ‘thing’. Consciousness is a sense, or awareness, that accompanies human mental phenomena, like thoughts, feelings, preferences, desires, personality traits, etc. It is precisely the sense that these mental phenomena happen, or belong, to ‘me’, i.e. to a single, coherent consciousness. So consciousness, while not being a separate ‘thing’ added on to thoughts, nevertheless accompanies them without actually being reducible to them. The problem, then, is this: how can a ‘sense’, or ‘awareness’, be recorded?

But surely it’s reasonable to assume that consciousness naturally arises from the type of mental phenomena humans experience, so that wherever these mental phenomena exist, consciousness also necessarily arises? Actually, I am sympathetic to this argument (so I’m sceptical about the existence of so-called philosophical zombies), but it won’t get us out of trouble, because the task of recording mental phenomena (thoughts, feelings, desires, etc.) itself seems highly problematic. How does one capture and record a thought? Make a database? OK, so even if it were possible to somehow capture and record (i.e. list in a database) every single thought one has over a lifetime, how could such a list ever be used to produce a thinking, conscious being? Even more problematic, how could one capture and record aspects of human consciousness that aren’t amenable to storage in a database: things like personality traits (how patient, insecure, or friendly to strangers one is), attitudes towards various things, emotions, and so on? I can’t see a way out of this except to blindly appeal to the superior knowledge of future generations in the hope that maybe they will find a way. This is what I call the faith option.

Let’s not stop there though. What would this faith option look like? There are actually two versions I can think of: a simplistic one and a sophisticated one. The former might involve the postulation of an AI in the cortical stacks which somehow records every thought, action, inclination, emotion, etc., you have or do. I call this the ‘simplistic’ version because it reduces very complex processes to almost laughably simplistic ones. How would it even work? Would the AI notice you giving money to a beggar and then give you +1 for generosity? Maybe +0.5 for kindness? Subtract 0.5 from selfishness? But would it then also notice you only gave the money because you felt guilty for not donating to that charity yesterday when those people asked you in front of the bank, and so maybe cut generosity to only +0.7 but increase your sense of civic duty by 0.4? Clearly, this is arbitrary and simplistic to the point of idiocy. Furthermore, all you’re getting here is a third-person interpretation of your consciousness, your mental phenomena as seen from the outside, when what we want is a direct first-person account of who you are.
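
If it helps to see just how arbitrary this would be, here is a deliberately crude sketch in Python. Every trait name, weight, and observed action is invented purely for illustration; the point is precisely that no principled version of this scoring exists.

```python
# A toy sketch of the 'simplistic' cortical-stack AI: it watches behaviour
# from the outside and nudges arbitrary trait scores. All names and weights
# here are invented for illustration.

traits = {"generosity": 0.0, "kindness": 0.0, "selfishness": 0.0,
          "civic_duty": 0.0}

def observe(action, context=None):
    """Score an observed action in the third person."""
    if action == "gave money to beggar":
        traits["generosity"] += 1.0
        traits["kindness"] += 0.5
        traits["selfishness"] -= 0.5
        if context == "guilt over yesterday's refused donation":
            # Was it really generosity, or guilt? Re-weight... somehow.
            traits["generosity"] -= 0.3   # net +0.7
            traits["civic_duty"] += 0.4

observe("gave money to beggar",
        context="guilt over yesterday's refused donation")
print(traits)  # a third-person interpretation, not a first-person consciousness
```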

The more sophisticated version would forego the attempt to parse mental phenomena at the human level (thoughts, emotions, etc.) and go straight for the raw data, maybe neuronal firing patterns, thereby bypassing all of those messy higher-level concepts. Of course, this is precisely the “faith option” I mentioned earlier. Neuronal firing patterns? What would such data mean? Even knowing exactly how every single neuron behaves every single second of your life wouldn’t tell you anything about what kind of person you are or what thoughts you had, unless you could somehow translate that data into those messy higher-level concepts. And voilà, we’re back to the superior knowledge of future generations.
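
To see why the raw data on its own doesn’t help, imagine we somehow had it. A sketch with invented numbers (real recordings of this kind don’t exist):

```python
# Suppose the stack held perfect raw data: every neuron's spike times over
# a lifetime (numbers invented). Nothing in this structure says anything
# about patience, guilt, or who you are; all the semantics would live in a
# spikes-to-mind translation that nobody has.
raw_recording = {
    "neuron_000001": [0.0013, 0.0021, 0.0034],  # spike times in seconds
    "neuron_000002": [0.0008, 0.0042],
    # ...and roughly 86 billion more entries
}

def what_kind_of_person(recording):
    raise NotImplementedError("the missing translation: spikes -> mind")
```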

 

Consciousness Transference and Personal Identity

So, having pointed out what is a potentially fatal problem with the whole cortical stack idea, let’s set that aside and assume that somehow the stacks are able to faithfully record every event in your life as you experience it ‘from the inside’, as it were. In addition, the information in these stacks is able to integrate with a brain in such a way that all of the encoded mental phenomena (your memories, thoughts, feelings, preferences, attitudes, etc.) are engaged and capable of producing a living consciousness, identical to you in every way. Even assuming all of this, have we succeeded in conserving consciousness, i.e. preserving personal identity, i.e. achieving immortality?

Intuitively, it seems we have. Imagine what the first person account would look like. You die, your body and brain stop functioning, consciousness terminates, and then in the very next moment you wake up in a new body, remembering everything from your past life and, in terms of consciousness, being absolutely identical to the person who died. From your perspective, it would be no more disruptive than entering a phase of very deep, dreamless sleep. Interestingly, I think this satisfies Locke’s criterion of continuity of consciousness.

 

What about this scenario though? Imagine you don’t die. While you are in very deep, dreamless sleep, someone copies the information in your stack and uploads it into another stack in a sleeve which has been cloned from your DNA. The following morning, you don’t wake up. It turns out you were really tired and are going to sleep through the day. Meanwhile, however, the other ‘you’ (you*) gets up and goes out to do all the things you had originally planned on doing. You wake up later that evening and are faced with the question: who went out that day? You? It couldn’t have been. You were asleep and have absolutely no memory, or awareness, of what you* did.

Even more confusing is the fact that you* insists that he is you. From his perspective, he went to bed, woke up, and went out just like any other day. He noticed that he woke up in a different room than the one he went to sleep in but just assumed a friend had played a trick on him. Other than that, absolutely nothing was amiss. His life and consciousness have perfect continuity. Of course, you also went to bed and woke up (albeit after a prolonged sleep) feeling your life and consciousness have perfect continuity.

 

The curious reality in this situation is that from the first-person perspectives of both you and you*, personal identity has been preserved; that is, your consciousness of being you is the same consciousness of being you that you* has (again satisfying Locke’s criterion). This, however, is not because you are now living two lives; rather, it is because your consciousness has been copied and instantiated in another body. Although you*’s mental phenomena were identical to yours at the moment he received your copied consciousness, from that point on his experiences, memories, thoughts, emotions, etc., all diverged, so that by the end of the day you and you* no longer shared an identical consciousness.[1] Not only that, you and you* obviously don’t have the same first-person perspective on the world. You see it from your perspective over here, while you* sees it from his perspective over there. No matter how you (no pun intended) look at this, you and you* are not the same person, despite the fact that both claim to be the same person, i.e. ‘you’, and both actually experience a continuous, diachronic sense of personal identity.
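
For what it’s worth, the situation is exactly analogous to copying a data structure. A minimal Python sketch, assuming (contrary to my preamble) that a consciousness could be captured as information at all:

```python
# A deep copy starts out equal to the original but is a distinct object,
# and the two diverge as soon as either has a new experience.
import copy

you = {"memories": ["went to bed"], "perspective": "over here"}
you_star = copy.deepcopy(you)      # the moment the stack is copied

print(you == you_star)             # True  -- identical in content...
print(you is you_star)             # False -- ...but two distinct instances

you_star["memories"].append("went out for the day")
you["memories"].append("slept right through")

print(you == you_star)             # False -- already diverged
```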

The only difference between the first and second thought experiments is that in the first one, you died (your body and brain, hence your consciousness, stopped functioning) before the ‘stacked’ information was transferred, while in the second, you lived on after the transferral. The unavoidable conclusion, then, is that if you and you* aren’t the same person in the second scenario after the cortical stack ‘transferral’, then you and the rejuvenated you from the first scenario can’t be the same person either. Personal identity and consciousness will continue in the new ‘you’ (as assessed from his first-person perspective), but this is no consolation to the original ‘you’, whose personal identity and consciousness (as assessed from your first-person perspective) really do terminate with your death.

 

Analysis

There are two things I want to focus on in this section: what went wrong with our original intuition in the first thought experiment, and why cortical stacks don’t work as advertised.

To the first point: here is the first-person account as I wrote it earlier for the first thought experiment:

“You die, your body and brain stop functioning, consciousness terminates, and then in the very next moment you wake up in a new body, remembering everything from your past life and, in terms of consciousness, being absolutely identical to the person who died.”

The problem is that this whole passage was written from the first-person point of view of the rejuvenated ‘you’. It’s true that this person’s identity does encompass both the new life, which he is just starting now, and the previous one, which you lived. Unfortunately, your identity ended when you died. Likewise, when this new person dies and a third person is animated, the third person’s identity will encompass his own life, the second person’s, and yours. In this way, the sense that one is immortal is only ever felt in the latest incarnation of a person. All of the intervening people are thoroughly dead and gone, and, more to the point, your life is also going to end when you die; hardly a sterling endorsement of immortality.

So, the problem was that there are actually two points of view here, not one. The second point of view aligns with the quote. The first point of view, however, is much shorter: you die, your body and brain stop functioning, and consciousness terminates. The end.
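
To put the two-points-of-view problem another way, here is a purely illustrative sketch (nobody’s actual proposal): each ‘resleeving’ spawns a new instance that inherits the whole past, while the previous instance simply ends.

```python
# The feeling of immortality only ever exists inside the latest instance;
# every earlier point of view terminates for good.
class Person:
    def __init__(self, memories):
        self.memories = list(memories)  # inherits the whole past
        self.alive = True

    def die_and_resleeve(self):
        self.alive = False              # this point of view ends here
        return Person(self.memories + ["one more life"])

you = Person(["your original life"])
second = you.die_and_resleeve()
third = second.die_and_resleeve()

print(third.memories)  # encompasses all three lives
print(you.alive)       # False -- no consolation for the original you
```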

 

And to the second point: why doesn’t this work? We’ve seen that cortical stacks don’t make one immortal, but why not? The answer lies in what the cortical stack actually stores, or rather, what it doesn’t store. Thinking that one can live forever through a process like the one in Altered Carbon only works if consciousness is a ‘thing’, and that ‘thing’ can be captured in one cortical stack and transferred whole to another. Clearly, this makes consciousness into something like a soul, which just as clearly (to me, at least) makes it a fiction. (If you believe in souls, all of these problems disappear, by the way – of course, then you are left with the biggest problem of all: explaining souls.)

If, on the other hand, consciousness (again: ‘mind’, ‘self’, whatever term you are most comfortable with in this context) isn’t a ‘thing’ but a perspective that accompanies mental phenomena, then when you die, there is literally no-thing to transfer anywhere. At best you can have information about those mental phenomena (although I’ve argued against even this), which can be re-created in another stack. Clearly, this will produce a copy, or a clone, not ‘you’: a clone that has all of your memories and is mentally identical to you, but a clone nonetheless. Your clone may carry on and live a full life, but you, alas, will not be doing so in any way that could have meaning to you as you exist now.
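
In programming terms, and again assuming a mind could be serialized at all: the stack would hold a description, and ‘resleeving’ would deserialize it into a brand-new instance.

```python
# If the stack stores information *about* the mental phenomena rather than
# the phenomena themselves, resleeving is deserialization: it builds a new
# object from a description. Equal in content, but not the thing described.
import json

original = {"memories": ["your whole life"], "name": "you"}
stack = json.dumps(original)     # at best, what the stack could hold
resleeved = json.loads(stack)    # the clone, instantiated in a new body

print(resleeved == original)     # True  -- mentally identical to you
print(resleeved is original)     # False -- a new instance; not you
```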

 


 

[1] Technically though, you stopped sharing the same consciousness the second after your consciousness was downloaded into you*.
