Reality Checking AI (4/5) – The Control Problem and Self-Replication

The Control Problem


The control problem refers to the difficulties inherent in maintaining control over a super-intelligent AI. The argument is that if we create an artificial super-intelligence and it becomes completely independent, it might enslave, or even destroy, us. Purveyors of AI doomsday scenarios seem to take a perverse delight in imagining how an AI might escape our control and cause havoc.

One obvious scenario is for a computer program simply to go rogue and turn on us, its intellectually inferior creators. This fear gets its justification from nothing less than human history. What has happened whenever a more advanced race has encountered a less advanced one? Best-case scenarios tend towards slavery; worst-case scenarios look more like genocide.

If you doubt a super-intelligent AI would be as barbaric and amoral as human beings, there is a scenario for you too. Maybe our AI has high morals (perhaps we program these in from the beginning) and is a peace-loving do-gooder. After witnessing humanity’s chequered past, our wanton destruction of the environment, the thousands of species we have hunted or driven to extinction, and so on, perhaps it would come to the same conclusion Agent Smith did in The Matrix: that humanity is a virus. From this it might follow that the only moral course of action is to facilitate the extermination of the human race.

But an AI doesn’t even need to have high morals for a nightmare scenario to play out. My favourite of these fictional flights of fancy is the paperclip-manufacturing AI, programmed, apparently innocently, to maximise the production of paperclips. What could be less sinister? The AI is happily working away in its factory, ensuring every machine is running at maximum capacity and all employees are doing their jobs, when it suddenly realises that if it bought the building next door, it could expand its operation and produce more paperclips. Then it realises there are 7-billion-odd people running around the planet not making paperclips. What a waste! And so on and so forth, until the planet is one massive paperclip factory churning out paperclips by the zillions.

In a similar vein, Harris speculates about a benign artificial super-intelligence that we keep locked up out of fear of what might happen if we let it loose. He asks how you would feel if you had nothing but the interests of humanity at heart but were imprisoned by five-year-olds. Tegmark jumps on the bandwagon, adding how much you would want to teach these children to plant food and make tools, yet they refuse to let you outside. The point here is that even if you (the intellectually superior entity) were completely benign, it would still make sense for you to engineer your escape from your inferior captors.

Many of these doomsday scenarios don’t make sense because they suppose a general super-intelligence and then impose narrow-intelligence restrictions on it. Are we really expected to believe that a (supposedly) super-intelligent entity could be so dumb as to take over the planet to make paperclips, or to conclude that the most moral action could possibly be to wipe out the human race, which, by Harris’ own admission, is the only thing in the universe that (via consciousness) gives anything meaning? This is an example of AI soothsayers wanting to have their cake and eat it too. On the one hand, the AI is so much more intelligent than humans (this must surely mean general intelligence, or it could never understand, much less navigate, our world well enough to threaten us in the first place) that it can easily outwit us to pursue its goals. On the other hand, the AI is just a dumb (narrowly intelligent) computer, so all it can do is mindlessly execute the programming we gave it; i.e. maximise paperclips. This is like fearing that a human child will become so smart that she will unlock the deepest mysteries of science and human behaviour and then use her super-intelligence to build a gigantic sandpit so she can eat sand whenever she wants.

The other problem that affects all of these scenarios is that AGI has simply been assumed. As we have seen (in my second article), it is far from certain that this is even possible.

 

Self-Replication


Self-replication is the idea that the moment a computer achieves human-like intelligence, it will immediately proceed to make new, better versions of itself (or simply upgrade itself), resulting in an explosive, exponential growth in machine intelligence that will quickly render natural human intelligence hopelessly outdated.

The hidden, and unjustified, assumptions that have gone into this scenario include the following:

  • As with the control problem, AGI has just been assumed despite the uncertainty (and, as I’ve argued, unlikeliness) over whether it is even possible.
  • The intelligence of all AI machines will be equally applicable to any and every field. This assumption maintains that every AI that reaches human-level intelligence will (somehow) be able to master any discipline it desires. This is absolutely not how we have seen intelligence progress and develop in the only other instance we know of: ourselves. Despite the fact that all humans have human-level intelligence, we can’t all just read a bunch of quantum physics articles or books on medicine and start making revolutionary breakthroughs. And yet, despite the paucity of evidence in its favour, this is precisely how AI enthusiasts seem to think artificial intelligence will progress. Intelligence (as far as we know) just doesn’t work this way.
  • An AI with human-level intelligence will know and understand everything about its own construction better than we do. Again, we already have human-level intelligence. If it would be so easy for a machine to improve on the computing designs we ourselves have made, why can’t we do it? This would be like a modern human being going back before Sapiens evolved and saying, “Once these apes reach Sapiens-level intelligence, there will be an intelligence explosion because they will start making new and better Sapiens.” Just because we are Sapiens doesn’t mean we automatically understand everything about ourselves, and even if we did, it wouldn’t follow from this that we could go into a laboratory and build better Sapiens. There is no reason to suppose artificial intelligence will be any different.

 

Both the control problem and the self-replication fear are completely dependent on AGI being possible. If this should turn out not to be as easy as the experts think (and have thought for the last six decades, remember), these issues will never rise above purely academic discussions. The question is: will the broken promises and failed expectations eventually cause the AI community to re-evaluate their position, or is the dreaded singularity destined to remain a spectre, always 20-30 years away?
