Sean Carroll spoke to philosopher Patricia Churchland last year on his Mindscape podcast, where the two discussed the relevance of Churchland’s work in neuroscience to morality. Churchland argues that if we want to understand morality (and, I think, pretty much everything relating to mind), we need to understand the brain. This approach has seen her shunned by her philosophy contemporaries, even as she has been welcomed by her neuroscience ones. In this article, I will investigate neurophilosophy, a term Churchland herself coined, as she discusses it on this podcast (to be fair, I haven’t read any of her books on the subject, so my comments in this article are restricted to the podcast), and ask whether she has been justly or unjustly excluded by her philosophical peers.
This is the second in my article series discussing philosophical issues raised in the excellent SF novel Blindsight by Peter Watts. In this article we will be looking at the brain. I will focus on the relatively recent idea that the brain is modular and also look at a number of fascinating neurological disorders Watts describes in the story.
The Brain and Neurology
There are three brain- and neurology-related issues Watts raises, which I will tackle in turn. The first concerns the protagonist, Siri Keeton. To prevent the seizures he was prone to as a child, Keeton had to have an operation which effectively involved the removal of half his brain. The effect of this operation was to leave him completely lacking in emotions and emotional understanding, so much so that in the book, he appears to be autistic, albeit high-functioning.
Blindsight is an exceptional 2006 SF novel, in which Peter Watts raises so many fascinating philosophical and psychological/neurological issues that I couldn’t stop myself from writing a couple of articles dedicated to some of them. As my primary concern in these articles will be the discussion of some of the key issues, I won’t bother with an outline of the plot (for that, you’ll have to read the book – a task I highly recommend). However, a proper discussion of the issues will necessitate a little context, which will unavoidably involve sneak peeks of scenes at varying points in the book. Although I will deliberately avoid plot spoilers, if you plan to read the book (which, again, you should – it’s almost worth reading just to see the highly original way in which Watts has interpreted and brought the vampire myth to life – bonus points for the ingenious crucifix glitch!), please bear this in mind.
Altered Carbon is a 2003 science fiction novel written by Richard K. Morgan, which has recently been made into a gritty, futuristic, and definitely R-18 TV series. The central plot device revolves around pieces of technology called ‘cortical stacks’. They are small, palm-sized devices that are implanted at the top of the spinal column and function as digital receptacles for the human consciousness. When you die, as long as your ‘stack’ isn’t damaged, you can be brought back to life by having your consciousness downloaded from your cortical stack into the cortical stack of another body (called a ‘sleeve’), which can be either a real human body or a synthetic, grown one. This transferral process is called ‘needlecasting’ and usually involves deleting the consciousness in the first stack before making the transfer. In this way, the super-rich (called ‘meths’, a reference to the long-lived Methuselah of Biblical fame) have allegedly achieved immortality. In this article, I want to investigate this assumption by asking, are you the same person after needlecasting as you were before?
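The needlecasting procedure just described can be sketched as a tiny data model. Everything below (the dictionary layout, the function name) is my own invention, purely for illustration; the point it makes is that, computationally, a ‘transfer’ that deletes the source is really a copy followed by a delete – which is exactly where the identity question bites.

```python
# Toy model of 'needlecasting': copy the consciousness data into the new
# sleeve's stack, then delete it from the original stack. All names and
# structures here are hypothetical, invented purely for illustration.

def needlecast(source_stack: dict, target_stack: dict) -> None:
    """'Transfer' a consciousness: in fact a copy followed by a delete."""
    target_stack["consciousness"] = dict(source_stack["consciousness"])  # copy
    source_stack["consciousness"] = None  # the 'usual' deletion step

old_sleeve = {"consciousness": {"memories": ["childhood", "war"], "traits": "cynical"}}
new_sleeve = {"consciousness": None}

needlecast(old_sleeve, new_sleeve)
print(new_sleeve["consciousness"]["traits"])  # the data survives; does the person?
```

Note that nothing in the model forces the deletion step: leave it out and you have two stacks holding the ‘same’ consciousness, which is one way of pressing the question of whether the post-needlecast person is a continuation or a duplicate.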
Think back to when you were ten years old. Was that child you? Bear in mind that not only is virtually nothing physical about you the same now as it was then, you almost certainly have completely different preferences, beliefs, attitudes, goals, thoughts… even your personality will have changed. If that child was you, the same person you are now, what is it that accounts for this continuity in identity?
17th-century English philosopher John Locke addresses this problem in his An Essay Concerning Human Understanding. He starts with an atom and declares that it is the “same with itself” at any instant of its existence. Although we now know that atoms aren’t fundamental particles, Locke was clearly referring to an irreducible, unchanging, fundamental particle of nature – a title we would nowadays reserve for quarks and electrons. Perhaps it would be better to paraphrase Locke and say that a fundamental particle is the “same with itself” at any instant of its existence. Identity, at the level of fundamental particles, coincides with physical existence. The fact of existence ensures continuity of identity.
This article is about Sam Harris’ 127th Waking Up podcast, in which he talks with Michael Pollan about his latest book, How to Change Your Mind, a New York Times bestseller that investigates the revolution now taking place regarding psychedelic drugs. On the podcast, Harris and Pollan discuss the psychological benefits of psychedelic drug use for those suffering from conditions like depression and addiction, and the general benefits of their use for otherwise healthy people.
Note: I haven’t read the book, so my comments are restricted to what is discussed on the podcast. I also won’t be discussing potential societal/health problems regarding making psychedelics legally available to the public.
Claim 1: The main benefit Pollan and Harris focused on regarding the use of psychedelics among otherwise healthy people was their ability to distance one from the (illusory) self. Pollan talks about the drugs dissolving his sense of self, which was freeing in the sense that it gave him an alternative “way to be”, another way to react to what happens in his life. He realised he doesn’t have to listen to his ego all the time. Of course, being an experience, it fades with time and, as he recounts, shortly afterwards, his ego was back in full force. Nevertheless, the alleged benefit was that it had given him a glimpse of another way to live, a way that can be developed more robustly through meditation.
The central issue underlying all this talk of the potential dangers of artificial intelligence is safety. We don’t want to ‘accidentally’ create something that will have disastrous consequences; consequences which could perhaps have been foreseen and avoided had we been a little more conscientious.
This is one issue about which I largely agree with the prophets of the singularity. No one can know for certain that the probability of creating an artificial intelligence that will make us look as intellectually insignificant as ants look to us is zero. We might disagree on what that probability is, but we should all be able to agree that it lies somewhere above zero and below one hundred per cent. Given this, we have an obligation to advance with caution.
The control problem refers to the difficulties inherent in maintaining control over a super-intelligent AI. The argument is that if we create an artificial super-intelligence and it becomes completely independent, it might enslave, or even destroy, us. Purveyors of AI doomsday scenarios seem to take a perverse delight in imagining how an AI might escape our control and cause havoc.
One obvious scenario is for a computer program to just go completely rogue and turn on us, its intellectually inferior creators. This fear gets its justification from nothing less than human history. What has happened whenever a more advanced civilisation has encountered a less advanced one? Best-case scenarios tend toward slavery; worst-case scenarios look more like genocide.
For the purposes of this article, I will define consciousness as the subjective experience that somehow accompanies human cognition – that sense of self-awareness, or the feeling that there is something it is like to be me.
Is Artificial Consciousness Necessary for AI to Pose a Threat?
To his credit, Sam Harris, in his podcast discussion with Max Tegmark, acknowledges that consciousness is the only thing that provides meaning in the universe (although he also denies there is any such thing as a self and believes we are all fully determined; square all of those ideas if you can), but he is of the opinion that it is irrelevant concerning AI, because an artificial super-intelligence can still destroy the human race even if it completely lacks consciousness; that is, even if it is incapable of subjective experiences. As Tegmark puts it, if you’re being chased by a heat-seeking missile, you aren’t going to care whether it is experiencing anything or not. The end result will be the same either way.
Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.
The Brain / Computer Analogy
Is the brain a computer? Well, at a crude level, brains can be thought of as information processing systems; that is to say, systems which accept inputs, perform some kind of operation on them, and then produce outputs. Since computers can also be described as information processors, in some sense the brain is a computer. However, relying too heavily on this analogy conceals at least as much as it reveals, because a brain is simply not like any computer we know of, nor is it like any futuristic variant that anyone has any realistic idea how to build.
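The crude input–operation–output framing can be captured in a few lines. The ‘operation’ below, a weighted sum followed by a threshold, is loosely borrowed from artificial neurons and is purely illustrative: a sketch of the analogy, not a claim about how real brains work.

```python
# A minimal sketch of an 'information processing system': something that
# accepts inputs, performs an operation on them, and produces an output.
# The weights and threshold are arbitrary, chosen only for illustration.

def process(inputs: list) -> float:
    weights = [0.5, -0.2, 0.8]
    # the 'operation': a weighted sum of the inputs...
    activation = sum(w * x for w, x in zip(weights, inputs))
    # ...followed by a threshold, producing the output
    return 1.0 if activation > 0 else 0.0

print(process([1.0, 2.0, 0.5]))  # inputs -> operation -> output: 1.0
```

The point of the sketch is how little it tells us: both a brain and a laptop can be described this way, which is precisely why the description alone cannot settle whether the brain is a computer in any interesting sense.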