Reality Checking AI (2/5) – Intelligence

Before I start, I should point out that I am an expert in neither AI nor computer engineering. My thoughts and opinions are based on my limited understanding as a (somewhat) informed layperson.

What is Intelligence?


Max Tegmark defines intelligence as the ability to accomplish complex goals. Now, this is interesting because on this definition, no computer is intelligent, not even AlphaGo Zero. Why? Computers don’t have any goals and they certainly can’t be said to accomplish anything. Goals and accomplishments are things only conscious agents can have. Although we do sometimes use these terms to refer to non-conscious objects, when we do, we are speaking metaphorically. When I say the tree is growing or my computer is saving a document, I don’t literally mean that either of them is actually trying to accomplish a goal. On the contrary, it is the human being who planted the tree or wrote the document who has the goal.

If we strip the conscious agent implications from the words ‘accomplish’ and ‘goal’, we can certainly get to the desired conclusion that computers are intelligent, but in widening the semantic net to let computers slip through, we also unwittingly let a whole host of undesirables through. If computers are intelligent, all life must be intelligent, including plant life. Grass achieves complex goals every time it converts light energy into chemical energy in order to grow. Do you think your lawn is intelligent? But we need not stop at that level of absurdity. Calculators must also be intelligent, as must thermometers and even ecosystems. Gaia, anyone? Ironically, at this stage we are no longer making our computers intelligent; rather, we are making ourselves less intelligent.

In the fifties and sixties (when the field of AI was full of even more confidence and bluster than it is now), computer scientists didn’t see the need to get ‘tricksy’ with their definitions. They weren’t hedging their bets through semantic games. They promised us that computers would be more intelligent than humans within the decade(!), and they didn’t have to explicitly define intelligence because everyone knew what they were talking about. Intelligence was intelligence. It was what allowed humans to make and execute plans, come up with solutions to problems, and, in general, operate in the world. Needless to say, this promise went unfulfilled. So, what did the AI community do? Revise their expectations? Admit to an overestimation? If only. Once they ran up against the difficulties involved in making computers intelligent, they realised it was easier to make intelligence less human.

Narrow and General Intelligence


Bearing in mind the caveat surrounding the word ‘intelligence’ mentioned above, the distinction between ‘narrow’ and ‘general’ intelligence is nevertheless a worthwhile and revealing one. Narrow intelligence refers to capacities within a restricted domain. Every computer ever made demonstrates narrow intelligence, and the best of them surpass human abilities in their respective domains relatively quickly; think AlphaGo. General intelligence is the kind of intelligence humans possess and is characterised by things like common sense and know-how. It allows its possessor to function independently and successfully in the world.

The obvious thing to note here is that no matter how narrowly ‘intelligent’ a computer is, this won’t ever amount to even a modicum of general intelligence. The two kinds of intelligence are completely different. Listening to people talk about modern AI, it often seems as if they believe that the more narrowly ‘intelligent’ a system gets (the better a computer can play chess, for example), the closer we get to AGI (Artificial General Intelligence). There is absolutely no justification for this assumption. I’m reminded of an analogy Hubert Dreyfus made in What Computers Still Can’t Do, a book he originally wrote in the 70s but which is still remarkably relevant today. It’s like someone trying to reach the moon and claiming that by climbing a tree they are getting nearer their goal.

General intelligence requires attributes that seem to have nothing to do with narrow intelligence. Dreyfus suggests four capabilities of human cognition that make us different from computers (the processes in brackets are how computers operate; the first of them is sketched in code after the list):

  • Fringe consciousness (vs. heuristically guided search) – a vague awareness of the background, or context, which focuses perception
  • Ambiguity tolerance (vs. context-free precision) – the ability to narrow down the range of possible meanings by ignoring what, out of context, would be ambiguities
  • Essential/Inessential discrimination (vs. brute force trial and error search) – the ability to immediately identify what is essential and inessential in a situation with respect to a goal
  • Perspicuous grouping (vs. the creation of character lists) – recognising similarities between different groups (a combination of fringe consciousness, insight, and context determination)
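
Since Dreyfus’s bracketed machine processes may be unfamiliar, here is a minimal, illustrative sketch of the first of them, heuristically guided search (in Python; the toy problem and all the names in it are mine, not Dreyfus’s). The point to notice is that the ‘guidance’ is nothing but arithmetic over a scoring function; no awareness of context is doing any work.

import heapq

def mismatches(candidate, target):
    # The heuristic: count how many letters differ from the target.
    # Lower scores look 'closer' to a solution.
    return sum(a != b for a, b in zip(candidate, target))

def heuristic_search(start, target, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Greedy best-first search: always expand whichever candidate
    # the heuristic currently scores best, one letter change at a time.
    frontier = [(mismatches(start, target), start)]
    seen = {start}
    while frontier:
        score, word = heapq.heappop(frontier)
        if score == 0:
            return word  # every letter matches the target
        for i in range(len(word)):
            for letter in alphabet:
                neighbour = word[:i] + letter + word[i + 1:]
                if neighbour not in seen:
                    seen.add(neighbour)
                    heapq.heappush(frontier, (mismatches(neighbour, target), neighbour))
    return None

print(heuristic_search("zzzzz", "hello"))  # prints 'hello'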

There is absolutely no evidence that humans accomplish their goals through any of the computer processes (in brackets) outlined above. One example is that of brute force trial and error search. When we are determining what is essential to our needs in a particular situation, we don’t run a massive, lightning-quick search through every variable; instead, we are somehow able to zero in on the relevant aspects immediately. No one knows exactly how the human brain does this, but it is extremely unlikely we do this the way a computer does; i.e. with a brute force trial and error search.
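
To make that contrast concrete, here is a toy sketch of brute force trial and error search (again in Python; the puzzle and the names are invented purely for illustration). The machine blindly enumerates every combination of variables and tests each one; relevance never enters the picture.

from itertools import product

def brute_force_search(variables, is_solution):
    # Exhaustively generate every combination of variable values
    # and test each candidate until one happens to pass.
    for combination in product(*variables.values()):
        candidate = dict(zip(variables.keys(), combination))
        if is_solution(candidate):
            return candidate
    return None

# Toy problem: find x and y such that x + y == 7 and x * y == 12.
solution = brute_force_search(
    {"x": range(100), "y": range(100)},
    lambda c: c["x"] + c["y"] == 7 and c["x"] * c["y"] == 12,
)
print(solution)  # {'x': 3, 'y': 4}, found after blindly testing hundreds of candidates

A human solving the same puzzle doesn’t churn through candidates like this; they immediately see that only small numbers are relevant, which is exactly the essential/inessential discrimination Dreyfus describes.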

The features in the brackets (the way computers operate) are ideal for narrow intelligence and result in remarkable technological achievements (for humans, not computers). The problem for AI researchers is that there is no way to go from them to their human cognitive parallels. Climbing trees, even really tall ones, will never get you to the moon.

Super Intelligence


Intelligence obviously falls on a spectrum and it is almost certainly true that humans aren’t anywhere near the upper limits of that spectrum. In line with this, Nick Bostrom defines superintelligence as “intellects that greatly outperform the best current human minds across many very general cognitive domains.” So far, so good.

The question we are specifically interested in here, though, concerns the possibility of artificial superintelligence; that is, computers that perform better than humans across many very general cognitive domains. As we’ve seen, despite the astonishing increases in computing power witnessed in our lifetimes, we don’t yet have any computers that possess even a tiny bit of general intelligence. Of course, the bigger problem is that there is absolutely no evidence that binary processors like computers, no matter how much processing power we pack them with, can ever achieve even human-like general intelligence, let alone general superintelligence.
