This is a follow-on article from an earlier one I wrote entitled A Case Study in Personal Identity: Altered Carbon, where I argued that a person’s mind/consciousness could not be ‘stored’ in a digital medium, and even if it could, transferring that stored information into another body wouldn’t grant the original mind/consciousness immortality. After writing this, I came across an episode of the Australian ABC Radio National program The Philosopher’s Zone called Mind Upload. In it, the host, David Rutledge, discusses with Max Cappuccio, a philosophy professor from the United Arab Emirates University, whether it will ever be possible to upload the mind into some type of digital environment. Cappuccio turns out to be as sceptical as I am about this, and while his argument was similar to the one I originally put forward, it was different enough that I felt compelled to spell it out in this separate article.
Mind upload enthusiasts believe that it is possible to upload a person’s mind into a computer or some other digital environment, where it could, in theory, live on indefinitely (or at least until someone accidentally drops the hard drive). Now, there is one absolutely indispensable condition associated with this; namely, that after the upload is complete, the mind in the computer be me; i.e. not just a copy of me. In other words, I don’t want a merely qualitatively and functionally identical duplicate of me (i.e. an entity with the same qualities and capabilities as me) to live on forever in my hard drive. I want to live forever. This means that the mind which will exist in the computer (post-upload) and the mind which exists in my body now (pre-upload) must, in addition to being qualitatively identical, also be numerically identical.
To say that two things are numerically identical is to say that they are one and the same; i.e. one thing rather than two. The catch with numerical identity is that only things with material properties can have it. Imagine a block of gold melted down into a coin. The coin is obviously numerically identical with the block of gold it was before. The gold’s form may have altered but it has always been the same thing, before and after being melted down. Now, if I get another block of gold, of exactly the same size as the first, and melt it down into an exact replica, although the two are qualitatively and functionally identical, they obviously aren’t numerically identical because they are physically distinct.
Now think about abstract, immaterial entities that have no physical instantiation, like the number 3, or the example Cappuccio uses, the Pythagorean Theorem. What possible state of affairs could lead to the Pythagorean Theorem sharing numerical identity with something else? Cappuccio asks us to imagine two people thinking about the Pythagorean Theorem. The two thoughts are qualitatively identical (assuming both people have the same understanding of the theorem), but are they numerically identical? Are they one and the same thing, so that we can say there is only one thought between the two people? Surely not. In a very real sense, the question lacks meaning. Immaterial entities just aren’t the kind of things which can be numerically identical.
Cappuccio also gives us another way to think about this. Consider the meaning contained in an email. If that email is sent from computer A to computer B, could we then say that the meaning of that email has moved from A to B? The answer is no and the reason is that the ‘meaning’ of something isn’t physical. Like the Pythagorean Theorem, meaning doesn’t exist in a physical location; i.e. ‘here’ or ‘there’. It follows therefore that it can’t be sent from one physical location to another. The general rule here is that one can’t apply physical qualities or physical transformations to something non-physical.
In order for it to be possible to upload a person’s mind to a computer, two conditions must hold. First, as outlined above, the uploaded mind must preserve numerical identity with the original, embodied mind; otherwise, rather than the digitised version being me, it will just be a copy of me. Secondly, the human mind must not be material. If it is material (if, for example, the mind is the brain or a part of the brain), then uploading the mind would obviously be impossible, because one cannot upload, that is, digitise, physical matter.[1] This second condition is typically satisfied by appeal to the computational theory of mind, which assumes that the mind, rather than being some collection of matter, is actually an information-processing system and cognition a form of computation. The mind would then require matter, while at the same time not being reducible to it.
These two conditions clearly leave us with a contradiction. The mind must be material (in order for numerical identity to be preserved) but it must also be immaterial (in order to be transferable from one substrate to another). Cappuccio notes that there is only one way to resolve this dilemma, and that is through appeal to something like a Platonist or Cartesian metaphysics; i.e. a view of the world that reifies immaterial things, postulating, for example, that the mind is an actual substance which is, at the same time, immaterial. Few people are prepared to bite the Cartesian bullet these days, which leaves the mind upload theory out in the cold.
Radical Enactive Cognition
So, mind upload is impossible. Perhaps we can build an intelligent computer program from the ground up then. Not so fast. In the last part of his segment of the podcast, Cappuccio talks about REC (Radical Enactive Cognition), a theory that holds that cognition/intelligence can only arise through the dynamic and intentional interactions between an organism and its environment. It denies that conscious organisms passively receive stimuli from the environment (perception) and then convert these into meaningful internal representations. This seems to be at least partially supported by psychological findings like change blindness, which have made it clear that reality isn’t simply dictated to a passive brain by sensory data; rather, it is an active blend of expectations and assumptions based on experience, combined with that incoming sensory data.
There does seem to be something to REC. Certainly human general intelligence (i.e. that sense of understanding by which human beings are able to operate in the world, and which includes know-how as opposed to know-that) emerges only gradually over years of practical engagement with a world full of objects we can manipulate and which is responsive to our pokings and proddings. Without this active, practical engagement, I’m not sure (and it certainly isn’t obvious) that we could ever attain general intelligence. Consider learning to hit a ball in tennis. No matter how many books you read about it, how many games you watch, or how many hours of lectures you sit through, it isn’t until you actually pick up a racket and get onto the court that you will really understand how to hit the ball. Now, imagine if your whole life was like this. All you could ever do was read about things and listen to lectures. You might amass a formidable storehouse of theoretical information (this is essentially what narrow intelligence amounts to), but, lacking any real-world experience, you would never be able to knit all of this together to attain that depth of understanding which would enable you to actually live in the world; i.e. form plans, experience frustration when they go awry, make informed decisions, etc. (in other words, general intelligence).
The long and the short of this is that REC imposes further limitations on the somewhat over-zealous proponents of digital intelligence. It means that a computer program, no matter how sophisticated or vast its information-processing capacities, could never develop true general intelligence unless, or until, it was embodied and actually able to experience the world about which we want it to develop general intelligence.[2] Unfortunately, modern AI research has very much moved away from the only type of research that could possibly result in an embodied digital intelligence: robotics. Instead, AI developers seem to believe that with the right type of programming and sufficiently powerful hardware, “the lights could come on” in a computer program and it could somehow achieve, or surpass, human-level general intelligence. This seems quite unlikely to me.
[1] Of course, one may be able to digitise the information making up that part of the brain (although this is far from obvious), but the point is that this would not be numerically identical with the original.
[2] This embodiment might not necessarily have to take place in a body of flesh and blood; however, a deeper discussion of the validity of substrate independence is beyond the scope of this article.