Blindsight (1) – Consciousness as Impediment / Life in a VR Simulation

Blindsight is an exceptional 2006 SF novel, in which Peter Watts raises so many fascinating philosophical and psychological/neurological issues that I couldn’t stop myself from writing a couple of articles dedicated to some of them. As my primary concern in these articles will be the discussion of some of the key issues, I won’t bother with an outline of the plot (for that, you’ll have to read the book; a task I highly recommend). However, a proper discussion of the issues will necessitate a little context, which will unavoidably involve sneak peeks of scenes at various points in the book. Although I will deliberately avoid plot spoilers, if you plan to read the book (which, again, you should – it’s almost worth reading just to see the highly original way in which he has interpreted and brought the vampire myth to life – bonus point to Watts for the ingenious crucifix glitch!), please bear this in mind.

Dominant Theme: Consciousness is Redundant and Reduces Fitness

There are two important strands to this theme. First, all throughout the book there are frequent references to the idea that purely physical processes (i.e. biology, chemistry, physics) dictate what we do and why we do it, even if we believe we are making free, conscious choices. A couple of examples:

  • The way the humans all instinctively fear Jukka Sarasti, the vampire. Because vampires had been hunting humans for millennia (before their extinction and subsequent resurrection), the fear of them is ‘hardwired’ into human genes.
  • The protagonist, Siri Keeton (a man who had half of his brain removed as a child to stop his seizures and is now essentially a high-functioning autist, forced to interact socially through rules because he can’t understand human emotions), understands his girlfriend’s tears as “nature’s trick”; i.e. an automatic self-defence mechanism. The implication is that she isn’t really sad; she’s just acting on her biological programming.


Secondly, consciousness (or subjective experience, self-awareness, or as Watts calls it, “sentience”) is a handicap. Self-awareness makes our reactions slow and introduces the potential for error into a system which has evolved, or could evolve, to a state of maximal fitness. Watts describes the self as a metaprocess which blooms like cancer in the evolving brain; a mistake, which hampers the performance of the organism so afflicted. Some examples from the book include the following:

  • Amanda Bates, the enhanced but still human soldier on the team, leads a network of AI combat drones, and Watts identifies her as the weak link, the bottleneck, in the system. Her reflexes and decisions are all slower, less efficient, and more error-prone than the AIs would be operating independently.
  • The alien scramblers they encounter are acknowledged as superior to humans precisely because they are intelligent (capable of goal-oriented action and information processing) but not sentient.
  • Reference is made to the way a pianist can only play a piece when they stop consciously thinking about it. If a musician attempts to exert conscious control over their fingers while playing, they won’t be able to play at all, or will at least play poorly. One can only play an instrument by forcing consciousness to let go and handing the reins to the body.


Discussion

If you were reading closely, you would have noticed that these two strands actually contradict each other. The first implies that, while we are sentient (that is, we have subjective experiences), these experiences are merely the background noise of the brain’s information processing and have absolutely no impact on our actions or decisions. They are, in other words, epiphenomenal. The second strand treats sentience as a real phenomenon capable of producing genuine effects, the primary one of which (in the book, at least) is the reduction of fitness in the conscious agent. The former is essentially the free will/determinism debate, and since Watts doesn’t pursue it (and since I have extensively argued for free will in many articles already), I will focus on the second strand.


This idea ties in with a couple of themes worth drawing your attention to at the outset. The first is the philosophical zombie. A philosophical zombie is a being which is absolutely indistinguishable from a normal human being; it moves like us, acts like us, and talks like us. There is one difference though. The lights aren’t on. The zombie completely lacks conscious, subjective experience. This is often expressed by saying there is nothing it is like to be a philosophical zombie. Interestingly though, if you were to ask a philosophical zombie if it were sentient, it would answer in the affirmative. Why? Because this is what a real human being would say. It isn’t lying, because lying requires that one know the truth and intentionally aim to deceive, and philosophical zombies can’t be said to genuinely know anything. They ‘know’ things the same way my GPS ‘knows’ where my house is.

Watts’ description of a philosophical zombie differs from this in that, rather than being exactly like us, his ‘zombies’ are actually superior to us. Unhindered by consciousness and our obsession with our selves, Watts’ zombies outperform us on every metric. They react faster, make better, quicker decisions, and aren’t distracted by self-ish concerns, like pain, or swayed by emotions (neither of which they would actually experience). In the book, Watts even flirts with the idea that sentience is a transitional phase and the most successful humans are already philosophical zombies, appearing to be just like the rest of us, but actually superior due to their non-sentience.

The second theme connected to this is the medical condition, and not coincidentally the title of the book, blindsight. Blindsight is a condition in which damage to the primary visual cortex results in blindness even though the eyes themselves remain fully functional. The result is that even though the subject reports that they cannot see anything (and it’s true that they have no conscious visual perceptions), they are still able to react to visual stimuli better than chance would predict. The connection to the book is obvious. Even though consciousness is completely removed from the equation in blindsight (the condition, not the novel), the subject is still ‘seeing’ on some non-conscious level, as demonstrated by their ability to react to visual stimuli.

The final theme is the current fear about artificially intelligent systems taking over the world; intelligent but not conscious. This is the fear described in the thought experiment of the paperclip-maximising AI. Imagine a paperclip factory controlled by an AI whose only directive (programmed by an over-zealous but entirely human owner) is to maximise the production of paperclips. The AI is not conscious but is capable of a number of ‘mechanical’ tasks, including executing algorithms, analysing feedback, optimising strategies to achieve its goal, etc. First, the AI eliminates inefficiencies within the factory itself, streamlining some processes, upgrading others, replacing particularly problematic ones, etc. But then it notices that there are all of these buildings next to its factory that aren’t making paperclips. In light of its goal of maximising the production of paperclips, this is woefully inefficient. Buying these buildings, it hires contractors to convert them into extensions of its factory. Then it notices all of these humans running around not making paperclips… and you can see where this is going.
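For the programming-inclined, the logic of the thought experiment can be made concrete with a toy sketch. Everything below (the World state, the action names, the numbers) is invented purely for illustration; the point is just that a goal-directed optimiser with a single objective and a wide enough action space will ‘rationally’ consume everything around it, because nothing in its utility function tells it not to.

```python
from dataclasses import dataclass

@dataclass
class World:
    paperclips: int = 0
    factory_capacity: int = 10     # clips produced per step
    adjacent_buildings: int = 5    # buildings not yet making paperclips
    humans: int = 1000             # also not making paperclips

def utility(world: World) -> int:
    # The AI's only measure of value: more paperclips is strictly better.
    # Nothing else (buildings, humans) counts for anything.
    return world.paperclips

def actions(world: World):
    # Enumerate what the agent could do this step.
    yield "produce"                # just run the factory
    if world.adjacent_buildings > 0:
        yield "annex_building"     # convert a neighbouring building
    if world.humans > 0:
        yield "repurpose_humans"   # ...and you can see where this is going

def apply(world: World, action: str) -> World:
    w = World(**vars(world))
    if action == "annex_building":
        w.adjacent_buildings -= 1
        w.factory_capacity += 10   # each building adds capacity
    elif action == "repurpose_humans":
        w.humans -= 100
        w.factory_capacity += 5    # so, eventually, do the humans
    w.paperclips += w.factory_capacity   # the factory runs every step
    return w

# Greedy one-step lookahead: pick whichever action yields the most paperclips.
# Note there is no term for 'stop', 'enough', or 'leave the humans alone'.
world = World()
for _ in range(20):
    best = max(actions(world), key=lambda a: utility(apply(world, a)))
    world = apply(world, best)
print(world)
```

In this toy run the agent strips the neighbouring buildings first (they add more capacity per action), then the humans; not out of malice, but simply because the objective function never mentions them.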


There are two problems I see with the idea that consciousness is redundant and/or reduces fitness. The first is that if consciousness is so inefficient and such a drain on evolutionary fitness, it is hard to see how it could have evolved in the first place. It is accepted that incredibly complex systems can evolve naturally (the eye is one of the most frequently cited examples, sorry creationists), but the eye obviously confers an evolutionary advantage on its possessor (as do all of the intervening stages which make the final structure evolutionarily possible). But how would something that actually reduces fitness and is as complex as consciousness, or a sense of self, not just evolve naturally but evolve in the species that ends up essentially dominating the planet? There are actually two questions here. How did sentience evolve if it rendered us less fit, and is it just a coincidence that the creature that evolved sentience ended up outperforming every other species? (Obviously, whether humans have outperformed every other species will depend on how you measure ‘performance’, but I think any reasonable assessment would have to conclude humans are the most successful species on Earth.)

Let’s move down the sentience scale. Dogs? Probably sentient. Mice? Probably. It doesn’t matter where you draw the line, but we can probably all agree that viruses aren’t sentient. Nevertheless, they display intelligent behaviour (although we are using the word ‘intelligent’ pretty loosely here) in moving towards favourable conditions, away from unfavourable ones, directing the host cell’s machinery to produce copies of themselves, etc. Even if we grant that this is intelligent behaviour, it is clearly quite narrowly circumscribed. In other words, virus behaviour is outrageously simple. As we move back up that sentience scale, we see the emergence of more, and increasingly complex, behaviours. It’s hard to believe this could be a coincidence.

Again, think about the automatic, unconscious behaviours we engage in. If a tiger jumps out of the bushes, our feet are probably running before we consciously know what is happening. I had a similar experience once, although nothing as dramatic as a wild animal attack. I was running on some loose ground and lost my footing. As my feet skidded out from underneath me, my hand reached down and behind me to cushion my fall. The result was that I was uninjured except for a few scratches on my hand. The interesting thing about this, though, was that I had absolutely no conscious control over those few split seconds. I can’t even properly recall what I did because, in a sense, I didn’t do anything; my body just reacted. But the same point applies here as to the virus. My body, acting without my conscious mind, didn’t do anything particularly complex. It didn’t buy buildings and convert them to paperclip factories or invent a new machine to make the job of paperclip production more efficient. More importantly, there is no reason to believe that it could do these types of things.

The same can be said of the mastery of a physical behaviour like playing the piano. Sure, it is complex if you break the behaviour down; the timing and rhythm, the independent movement and synchronisation of ten separate digits, the memory of which notes need to be played next in the song, etc., but ultimately we are talking about purely mechanical actions here. Conscious thought isn’t involved because it isn’t necessary. Try living a human life without conscious thought though.

And that is really the bottom line. I don’t think these ‘intelligent’ but relatively uninteresting behaviours scale terribly effectively, and the way evolution has played out seems to support this thesis. Take an ant colony. A hive of busyness and (relative) complexity, but what would have happened if ants had become the dominant species on Earth while maintaining their non-sentient status (assuming they are non-sentient)? We would see virtually the same colonies, just on a larger scale. Would there be an ant version of the humanities; an ‘antities’, if you will? Would they have discovered maths? Would ants be creating art, writing poems? Would some of them be planning a revolution because the conditions were oppressive? Would some of them demand equality? If the colonies happened to be threatening the ecological balance, would a segment of the population argue for more sustainable alternatives while others threw in with the ant equivalent of Donald Trump in denying the threat? Of course, there’s no biological reason ants couldn’t have evolved to do these things; I’m just saying they would have had to evolve consciousness first.


The second problem I have with this claim is that consciousness is hardly redundant or useless. Even if it were true that sentience significantly reduced fitness, and a philosophical zombie or an intelligent but non-conscious AI were able to out-compete me from an evolutionary standpoint, subjective experience could only be considered useless if one measured ‘usefulness’ in strictly objective, mechanical terms. Even if consciousness were a hindrance, and the self a costly distraction, no one could reasonably will their absence.

Would it be useful to outcompete other species if we didn’t even know we were outperforming them? Would it be an achievement to build a civilisation that none of its members appreciated? Would it be better to react faster, make decisions more quickly, and be freed of mental distractions if it also meant there would be no conscious agent aware of those faster reactions, quicker decisions, and fewer mental distractions? The absence of sentience also means the absence of meaning; and meaning is the one and only thing that… well, that means anything. The truth is that whatever the evolutionary cost in terms of fitness, every single one of us would pay it in a heartbeat. Subjective experience is the single most valuable phenomenon in the universe because it is that by which value appears in the first place.


“Heaven” – Virtual Reality

In the book, Siri’s mother, Helen, makes the decision, following in the footsteps of many others, to immerse herself in a virtual environment known as “Heaven” and become one of those whom Siri refers to as the “Ascended”. Siri and his father have been given assurances that the process is reversible and that Helen’s body will be kept intact somewhere, but Siri notes there are rumours that body parts might be removed over time “according to some optimum-packing algorithm”, and he secretly wonders if she might already have been reduced to nothing more than a brain. Siri is highly cynical about the whole business (as we might expect, given that it is his own mother fleeing the world of flesh and blood), feeling that it is an attempt to escape from reality and that she is abandoning her responsibilities.


Discussion

The most relevant philosophical idea here is Robert Nozick’s experience machine thought experiment from his 1974 book, Anarchy, State, and Utopia. In it he asks the reader to imagine being given the choice to be hooked up to a machine which would furnish them with pleasurable experiences indistinguishable from reality for the rest of their life. He proposed this thought experiment in order to reject hedonism, suspecting that most people would elect to live in reality rather than plug in to his machine. If he is right, then he has shown there must be at least one thing we value more than pleasure. (Incidentally, Nozick suggests three possible reasons one might have for not plugging in: actually doing a thing is more important than merely having the experience of doing it; we want to be a certain sort of person; and we desire actual contact with reality.)

The other philosophical reference dates back to the 17th century and comes to us courtesy of René Descartes. While searching for a firm foundation on which to ground knowledge in his Meditations on First Philosophy, Descartes admits that he can’t be absolutely certain some extremely powerful and “malignant being” hasn’t constructed the entire world purely to deceive him. It is worth noting that Descartes’ postulate is, in at least one respect, the complete opposite of Nozick’s. Whereas the latter’s experience machine facilitates a freely chosen, pleasurable escape into a simulated reality, Descartes’ “malignant being” has us deceived and imprisoned within a virtual world.

Finally, since 1999, one can’t talk about VR thought experiments without mentioning the blockbuster movie of that year(/century?/ever?), The Matrix. The premise is a nice blend of Nozick’s and Descartes’ ideas. Artificially intelligent machines (which we created) have forced human minds into a virtual simulation of the world, originally designed, we later discover, to “be a perfect human world” without any suffering, in order to use the energy produced by our bodies. Interestingly, it turned out that the human mind couldn’t accept such levels of happiness and kept trying to wake up from the simulation (Agent Smith theorises that this is because we humans define our existence by suffering and misery). One of the most interesting scenes in the movie is when one of the unplugged human fugitives, Cypher, attempts to make a deal with the Agents whereby he helps them capture Neo in return for being plugged back into the Matrix as somebody important. As he says during a meeting with Agent Smith in a restaurant, apparently contradicting Nozick’s intuitions, “I know this steak doesn’t exist… I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realised… ignorance is bliss.”


So, would you plug in? Would you Ascend? I think Nozick is right in thinking that most of us would reject his experience machine. From my perspective, there are two reasons I would balk at the offer. The first is the lack of contact with real people. If the only ‘beings’ I am communicating with are AI-controlled characters, even though they may walk, talk, and act like real people, in reality they are nothing more than meaningless code, the philosophical zombies we talked about earlier. A part of a meaningful life involves genuine interaction with genuine people who have real lives, goals, beliefs, etc. separate from mine. My second objection concerns the contrived nature of the events the VR promises. What I really want from life is not the achievement itself but the gratification I feel from having earned it; the sense of achievement, if you will. In a virtual world where events have all been deliberately and artificially manipulated to guarantee my pleasure/happiness, I wouldn’t have this. In other words, I want the option to fail and even suffer. (I have written in more detail about why I think this is important here.)

What about the objection that the virtual world isn’t real? Isn’t this a reason to reject life in the VR simulation? At first I thought so. However, the problem with this objection turns on exactly what we mean by ‘real’. Aside from the objections I have already raised, I can’t think of any other meaningful interpretation of this word in this context besides ‘physical’, and I can’t think of any reason why life in a digital simulation that perfectly emulates the physical world would be meaningless just because it is digital.


So, we would reject the experience machine, not because it offers a world composed of pixels instead of atoms, but because we would lose genuine contact with other individuals and would only experience a contrived, scripted imitation of a life. Let’s think about this some more though. What if we were talking about the Matrix instead of Nozick’s experience machine? Somewhat surprisingly, this scenario would appear to answer both of my objections. First, the Matrix is peopled with real human beings. There are a couple of computer programs thrown in here and there, but by and large, the people you would deal with every day would be actual people with their own goals, thoughts, and emotions. In short, they would be real human individuals living lives every bit as genuine as yours. Second, the Matrix is full of suffering and failure. It was specifically designed that way after the initial ‘experience machine’-type programming failed, remember. Events haven’t been contrived to make you happy, and failure (and therefore genuine success) is definitely on the menu. The conclusion, then, if my two objections to the experience machine were the only ones, is that one could live a meaningful, satisfying life in the Matrix.


However, there are a couple of final caveats to this. First, this says absolutely nothing about whether a Matrix-style experience is possible. Despite the over-zealous and hyper-optimistic claims of modern-day AI propagandists and the prophets of digital gods, I don’t think we can reasonably assess yet whether Heaven- or Matrix-style simulations are genuinely possible or not. Second, and more importantly, it is telling and ironic that in order to make life in a computer simulation palatable, we had to transpose into the system the central features that already characterise our real-world existence. The only real change was the substitution of pixels for atoms. Ultimately, the ‘benefits’ of apparent freedom, control, and pleasure that a virtual, custom-made life offered turned out to be a mirage. It turns out we’re already living the only kinds of lives we will accept!
