Reality Checking AI (4/5) – The Control Problem and Self-Replication

The Control Problem

The control problem refers to the difficulties inherent in maintaining control over a super-intelligent AI. The argument is that if we create an artificial super-intelligence and it becomes completely independent, it might enslave, or even destroy, us. Purveyors of AI doomsday scenarios seem to take a perverse delight in imagining how an AI might escape our control and cause havoc.

One obvious scenario is for a computer program to simply go rogue and turn on us, its intellectually inferior creators. This fear gets its justification from nothing less than human history. What has happened whenever a more advanced race has encountered a less advanced one? Best-case scenarios tend toward slavery; worst-case scenarios look more like genocide.

Reality Checking AI (3/5) – Consciousness

For the purposes of this article, I will define consciousness as the subjective experience that somehow accompanies human cognition – that sense of self-awareness, or the feeling that there is something it is like to be me.

Is Artificial Consciousness Necessary for AI to Pose a Threat?

To his credit, Sam Harris, in his podcast discussion with Max Tegmark, acknowledges that consciousness is the only thing that provides meaning in the universe (although he also denies that there is any such thing as a self and believes we are fully determined; square all of those ideas if you can). Nevertheless, he is of the opinion that consciousness is irrelevant where AI is concerned[1], because an artificial super-intelligence can still destroy the human race even if it completely lacks consciousness; that is, even if it is incapable of subjective experience. As Tegmark puts it, if you’re being chased by a heat-seeking missile, you aren’t going to care whether it is experiencing anything or not. The end result will be the same either way.

Reality Checking AI (2/5) – Intelligence

Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.

What is Intelligence?

Max Tegmark defines intelligence as the ability to accomplish complex goals. Now, this is interesting because, on this definition, no computer is intelligent, not even AlphaGo Zero. Why? Computers don’t have any goals, and they certainly can’t be said to accomplish anything. Goals and accomplishments are things only conscious agents can have. Although we do sometimes use these terms to refer to non-conscious objects, when we do, we are speaking metaphorically. When I say the tree is growing or my computer is saving a document, I don’t literally mean either of them is actually trying to accomplish a goal. On the contrary, it is the human being who planted the tree or wrote the document who has the goal.

If we strip the conscious-agent implications from the words ‘accomplish’ and ‘goal’, we can certainly get to the desired conclusion that computers are intelligent, but in widening the semantic net to let computers slip through, we also unwittingly let a whole host of undesirables through. If computers are intelligent, all life must be intelligent, including plant life. Grass accomplishes complex goals every time it converts light energy into chemical energy in order to grow. Do you think your lawn is intelligent? But we need not stop at that level of absurdity. Calculators must also be intelligent, and so must thermometers, and even ecosystems. Gaia, anyone? Ironically, at this stage we are no longer making our computers intelligent; rather, we are making ourselves less intelligent.

Reality Checking AI (1/5) – Brains and Computers

Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.

The Brain / Computer Analogy

Is the brain a computer? Well, at a crude level brains can be thought of as information processing systems; that is to say, systems which accept inputs, perform some kind of operation on them, and then produce outputs. Since computers can also be described as information processors, the brain is, in some sense, a computer. However, relying too heavily on this analogy conceals at least as much as it reveals, because a brain is simply not like any computer we know of, nor even like any futuristic variant that anyone has a realistic idea of how to build.
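The ‘information processor’ description above can be put in code form. The following toy sketch is my own illustration (not from any AI literature, and certainly not a claim about how brains work); it just makes the input → operation → output pattern concrete:

```python
# A toy "information processor": accepts an input, performs an
# operation on it, and produces an output. Purely illustrative --
# nothing here is claimed to resemble how a brain actually works.

def information_processor(stimulus: str) -> str:
    # Operation: a trivial transformation of the input.
    return stimulus.upper()

print(information_processor("hello"))  # HELLO
```

In this thin sense, anything from a thermostat to a brain fits the description, which is precisely why the analogy reveals so little on its own.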

Axiological Asymmetry and Anti-Natalism

I recently listened to a podcast on Sam Harris’ website in which he discusses anti-natalism (the view that it is morally wrong to have children) with David Benatar. You can find the podcast here. The core of Benatar’s argument rests on what he calls axiological asymmetry, a concept much easier to explain than the name might at first suggest. In this article, I will outline axiological asymmetry but argue that it doesn’t lead to anti-natalism.

The Argument

Axiology is nothing more than the study of value, so axiological asymmetry refers to an asymmetry in our values. Specifically, Benatar argues the following:

It is uncontroversial to say that
1) The presence of pain is bad
and that
2) The presence of pleasure is good

 However, such symmetrical evaluation does not seem to apply to the absence of pain and pleasure, for it strikes me [that is, Benatar] as true that

3) The absence of pain is good even if that good is not enjoyed by anyone,
whereas
4) The absence of pleasure is not bad unless there is somebody for whom that absence is a deprivation.[1]

From (3), the absence of pain associated with any currently unconceived child must be counted as good. From (4), the absence of pleasure associated with that same child is not bad. The conclusion is that it is better not to conceive any child.
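The structure of the argument can be made explicit with a toy scoring model. The numeric values here (+1 good, −1 bad, 0 neutral) are my own illustrative assumption, not Benatar’s; the point is only to show how the asymmetry in (3) and (4) tilts the comparison:

```python
# Toy model of Benatar's axiological asymmetry.
# Scores are illustrative assumptions: +1 = good, -1 = bad, 0 = not bad.

def value_of_existing() -> int:
    pain = -1      # (1) the presence of pain is bad
    pleasure = +1  # (2) the presence of pleasure is good
    return pain + pleasure

def value_of_never_existing() -> int:
    absent_pain = +1      # (3) absence of pain is good, even unenjoyed
    absent_pleasure = 0   # (4) absence of pleasure is not bad (no one deprived)
    return absent_pain + absent_pleasure

print(value_of_existing())        # 0
print(value_of_never_existing())  # 1
```

On these (admittedly crude) numbers, never existing always comes out at least as well as existing, which is the dominance-style conclusion Benatar draws; my objection in what follows targets the premises, not this arithmetic.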

Free Will by Sam Harris – An Absurd Being Commentary

So, for this article, I’m assuming that you have read Sam Harris’ book, Free Will. If you haven’t, it’s very short, more essay than book, and well worth a read because it raises some interesting points that any proponent of freewill needs to address sooner or later. Alternatively, you could read my previous article, which briefly outlines what I took to be his main ideas.

Somewhat surprisingly, I agree with much of what Harris says… if we assume determinism to be true; specifically, what he has to say about fatalism, quantum indeterminacy, compatibilism and moral responsibility. All of the above are often given as reasons for resisting determinism and Harris, quite correctly in my opinion, rejects them in this capacity.

Free Will by Sam Harris – An Absurd Being Book Review

In his short book, Free Will, Sam Harris mounts a concerted attack on the notion that we are free. He argues that our universe is predicated on some mix of determinism and randomness that doesn’t stop somewhere just outside our craniums, but rather penetrates all the way into our thoughts and intentions, carrying an inert ‘conscious witness’ along for the ride.

Past Behaviour and Thoughts

He starts out by identifying two assumptions that serve to define freewill: (1) we could have behaved differently than we did in the past, and (2) we are the conscious source of our thoughts and actions. He asserts that both of these are false.