Reality Checking AI (2/5) – Intelligence

Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.

What is Intelligence?


Max Tegmark defines intelligence as the ability to accomplish complex goals. Now, this is interesting because, on this definition, no computer is intelligent, not even AlphaGo Zero. Why? Computers don’t have any goals, and they certainly can’t be said to accomplish anything. Goals and accomplishments are things only conscious agents can have. Although we do sometimes use these terms to refer to non-conscious objects, when we do, we are speaking metaphorically. When I say the tree is growing or my computer is saving a document, I don’t literally mean that either of them is trying to accomplish a goal. Rather, it is the human being who planted the tree or wrote the document who has the goal.

If we strip the conscious-agent implications from the words ‘accomplish’ and ‘goal’, we can certainly get to the desired conclusion that computers are intelligent, but in widening the semantic net to let computers slip through, we also unwittingly let a whole host of undesirables through. If computers are intelligent, all life must be intelligent, including plant life. Grass accomplishes complex goals every time it converts light energy into chemical energy in order to grow. Do you think your lawn is intelligent? But we need not stop at that level of absurdity. Calculators must also be intelligent, as must thermometers, and even ecosystems. Gaia, anyone? Ironically, at this stage we are no longer making our computers intelligent; rather, we are making ourselves less intelligent.



Reality Checking AI (1/5) – Brains and Computers

Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.


The Brain / Computer Analogy


Is the brain a computer? Well, at a crude level brains can be thought of as information processing systems; that is to say, systems which accept inputs, perform some kind of operation on them, and then produce outputs. Since computers can also be described as information processors, the brain is, in some sense, a computer. However, relying too heavily on this analogy conceals at least as much as it reveals, because a brain is simply not like any computer we know of, nor even like any futuristic variant that anyone has a realistic idea of how to build.
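At that crude level, the analogy commits us only to a bare input, operation, output template. Here is a minimal, purely illustrative sketch in Python (the names are invented for this example and are not meant to model anything about actual brains or real computer architectures):

```python
# A toy "information processing system": accept inputs, perform some
# operation on them, and produce outputs. Purely illustrative; the names
# are invented and nothing here models how a brain actually works.

def process(inputs, operation):
    """Apply an operation to each input and return the resulting outputs."""
    return [operation(x) for x in inputs]

# Example: here the 'operation' is simply doubling each number.
print(process([1, 2, 3], lambda x: 2 * x))  # -> [2, 4, 6]
```

Almost anything can be squeezed into this very general template, which is part of why calling the brain an information processor reveals so little on its own.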


Reasons, Genes, and Misanthropes

When is a reason not a reason?

There are two ways of using the word ‘reason’ that are of interest to us here (I will be ignoring ‘reason’ used to mean ‘rational’). The first (A-type) is used to explain something with respect to factual events or the past; e.g. the reason the sky is blue is that molecules in the air scatter blue light more than they do red, or the reason I broke my leg was that I fell off my bike. The second type of reason (B-type) also explains something but is future-oriented; e.g. the reason she bought a bigger car is that she wants a large family. Importantly, while only conscious agents can have B-type reasons, anything can have an A-type reason.

The central problem I want to address in this article is whether all B-type reasons ultimately cash out as A-type reasons.
