This article is about Sam Harris’ 127th Waking Up podcast, in which he talks with Michael Pollan about his latest book, How to Change Your Mind, a New York Times bestseller that investigates the revolution now taking place regarding psychedelic drugs. On the podcast, Harris and Pollan discuss the psychological benefits of psychedelic drugs for those suffering from conditions like depression and addiction, and the general benefits of their use for otherwise healthy people.
Note: I haven’t read the book, so my comments are restricted to what is discussed on the podcast. I also won’t be discussing potential societal/health problems regarding making psychedelics legally available to the public.
Claim 1: The main benefit Pollan and Harris focused on regarding the use of psychedelics among otherwise healthy people was their ability to distance one from the (illusory) self. Pollan talks about the drugs dissolving his sense of self, which was freeing in the sense that it gave him an alternative “way to be”, another way to react to what happens in his life. He realised he doesn’t have to listen to his ego all the time. Of course, being an experience, it faded with time and, as he recounts, his ego was shortly back in full force. Nevertheless, the alleged benefit was that it had given him a glimpse of another way to live, a way that can be developed more robustly through meditation.
Sartre, in the opening chapter of his very challenging read, Being and Nothingness, cleaves existence neatly in two: what he calls being-in-itself and being-for-itself.
The in-itself is being. I don’t recall Sartre ever explicitly describing it as physical matter, but that is basically what it amounts to. The in-itself is characterised by three features: 1) it is in-itself, 2) it is what it is, and 3) it is. Respectively, these mean: 1) the in-itself is independent; i.e. it doesn’t depend on anything else to exist, 2) it doesn’t refer to itself; i.e. it isn’t self-reflexive, and 3) it is neither possible nor necessary. It isn’t necessary because it didn’t have to be, but neither is it possible because inert, non-conscious matter has no possibilities.
The for-itself, on the other hand, is consciousness. What does this mean? Consciousness is precisely not being. It is an empty, ‘massless’ perspective on, or relation to, being. The for-itself cannot be grasped because it is not a being, it’s not a thing, it is precisely no-thing… which is not the same as saying it is an illusion or that it doesn’t exist at all. If you find this scientifically implausible, I challenge you to describe consciousness in a way that preserves what consciousness clearly is, all while staying within the confines of naturalistic materialism.
[This is a revised version of an article I wrote a few months ago and posted here. The changes were made after I read a rebuttal of it written by Francois Tremblay here. Despite the considerable revisions (particularly in the ‘Refutation’ section), my amendments don’t alter my original argument, which I think remains unchanged, but were necessary to clear up a few ambiguities and clarify certain points. I have noted my changes in blue]
I recently listened to a podcast on Sam Harris’ website in which he discusses anti-natalism (the view that coming into existence is always a harm, and that it is therefore morally wrong to have children) with David Benatar. You can find the podcast here. The core of Benatar’s argument rests on what he calls axiological asymmetry, a concept much easier to explain than the name might at first suggest. In this article, I will outline axiological asymmetry but argue that it doesn’t lead to anti-natalism.
Axiology is nothing more than the study of value, so axiological asymmetry refers to an asymmetry in our values. Specifically, Benatar argues the following:
It is uncontroversial to say that
1) The presence of pain is bad
2) The presence of pleasure is good
However, such symmetrical evaluation does not seem to apply to the absence of pain and pleasure, for it strikes me [that is, Benatar] as true that
3) The absence of pain is good even if that good is not enjoyed by anyone,
4) The absence of pleasure is not bad unless there is somebody for whom that absence is a deprivation.
Given (3), the absence of pain for any currently unconceived child must be counted as good. Given (4), the absence of pleasure for that child is not bad, since there is nobody deprived of it. Non-existence thus secures a good (the absent pain) at no corresponding cost, whereas existence carries both good (pleasure) and bad (pain). The conclusion is that it is better not to conceive any child.
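Benatar’s asymmetry is often laid out as a comparison between two scenarios: one in which the child exists, and one in which it never does. As a toy illustration only (the numeric scores are my own assumption, not Benatar’s), the four claims can be sketched like this:

```python
# Toy scoring of Benatar's asymmetry: good = +1, bad = -1, "not bad" = 0.
# These numbers are an illustrative assumption, not part of Benatar's argument.

# Scenario A: the child is conceived and exists.
scenario_a = {
    "pain present": -1,      # claim (1): the presence of pain is bad
    "pleasure present": +1,  # claim (2): the presence of pleasure is good
}

# Scenario B: the child is never conceived.
scenario_b = {
    "pain absent": +1,       # claim (3): the absence of pain is good
    "pleasure absent": 0,    # claim (4): the absence of pleasure is not bad
}

total_a = sum(scenario_a.values())  # 0: the good and bad offset each other
total_b = sum(scenario_b.values())  # 1: a good with no offsetting bad

# On this scoring, never existing (B) comes out ahead of existing (A),
# which is the shape of Benatar's anti-natalist conclusion.
print(total_a, total_b)
```

The point of the sketch is just to make the asymmetry visible: scenario B gets the “good” of absent pain for free, while its absent pleasure is scored neutral rather than bad, so non-existence can never lose the comparison under these four claims.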
The central issue underlying all this talk of the potential dangers of artificial intelligence is safety. We don’t want to ‘accidentally’ create something that will have disastrous consequences; consequences which could perhaps have been foreseen and avoided had we been a little more conscientious.
This is one issue about which I largely agree with the prophets of the singularity. No one can know for certain that the probability of creating an artificial intelligence that will make us look as intellectually insignificant as ants look to us is zero. We might disagree on what that number is, but we should all be able to agree that it is at least above zero and less than one hundred per cent. Given this, we have an obligation to advance with caution.
The control problem refers to the difficulties inherent in maintaining control over a super-intelligent AI. The argument is that if we create an artificial super-intelligence and it becomes completely independent, it might enslave, or even destroy, us. Purveyors of AI doomsday scenarios seem to take a perverse delight in imagining how an AI might escape our control and cause havoc.
One obvious scenario is for a computer program to just go completely rogue and turn on us, its intellectually inferior creators. This fear gets its justification from nothing less than human history. What has happened whenever a more advanced race has encountered a less advanced one? Best-case scenarios tend toward slavery; worst-case scenarios look more like genocide.
For the purposes of this article, I will define consciousness as the subjective experience that somehow accompanies human cognition – that sense of self-awareness, or the feeling that there is something it is like to be me.
Is Artificial Consciousness Necessary for AI to Pose a Threat?
To his credit, Sam Harris, in his podcast discussion with Max Tegmark, acknowledges that consciousness is the only thing that provides meaning in the universe (although he also denies there is any such thing as a self and believes we are all fully determined; square all of those ideas if you can). Nevertheless, he considers consciousness irrelevant where AI is concerned, because an artificial super-intelligence could still destroy the human race even if it completely lacks consciousness; that is, even if it is incapable of subjective experience. As Tegmark puts it, if you’re being chased by a heat-seeking missile, you aren’t going to care whether it is experiencing anything or not. The end result will be the same either way.
Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.
What is Intelligence?
Max Tegmark defines intelligence as the ability to accomplish complex goals. Now, this is interesting because on this definition, no computer is intelligent, not even AlphaGo Zero. Why? Computers don’t have any goals, and they certainly can’t be said to accomplish anything. Goals and accomplishments are things only conscious agents can have. Although we do sometimes use these terms to refer to non-conscious objects, when we do, we are speaking metaphorically. When I say the tree is growing or my computer is saving a document, I don’t literally mean that either of them is actually trying to accomplish a goal. On the contrary, it is the human being who planted the tree or wrote the document who has the goal.
If we strip the conscious-agent implications from the words ‘accomplish’ and ‘goal’, we can certainly get to the desired conclusion that computers are intelligent, but in widening the semantic net to let computers slip through, we also unwittingly let a whole host of undesirables through. If computers are intelligent, all life must be intelligent, including plant life. Grass accomplishes complex goals every time it converts light energy into chemical energy in order to grow. Do you think your lawn is intelligent? But we need not stop at that level of absurdity. Calculators must also be intelligent, so must thermometers, and even ecosystems. Gaia, anyone? Ironically, at this stage we are no longer making our computers intelligent; rather, we are making ourselves less intelligent.