Reality Checking AI (5/5) – Safety and Digital Gods

Safety


The central issue underlying all this talk of the potential dangers of artificial intelligence is safety. We don’t want to ‘accidentally’ create something that will have disastrous consequences; consequences which could perhaps have been foreseen and avoided had we been a little more conscientious.

This is one issue about which I largely agree with the prophets of the singularity. No one can know for certain that the probability of creating an artificial intelligence that will make us look as intellectually insignificant as ants look to us is zero. We might disagree on what that number is, but we should all be able to agree that it is somewhere above zero and below one hundred per cent. Given this, we have an obligation to advance with caution.

Having said this, however, if we actually look at the situation objectively – that is to say, without a naïve, dogmatic belief that if we can imagine it, science can do it – we can exercise a reasonable amount of caution without the overblown, dire warnings about AI being a “fundamental existential risk for human civilisation” (Elon Musk) or exaggerated analogies like, “we humans are like small children playing with a bomb” (Nick Bostrom). In a similar (although, on the face of it, more reasonable) vein, Max Tegmark suggests that we need to abandon the ‘learn from our mistakes’ model of development when it comes to AI and get into an engineering frame of mind in which every single variable is predicted and analysed in detail so that mistakes don’t happen in the first place (the reasonable part). The reason is that if we make a mistake with AI, it will be humanity’s last (the less reasonable part).

Other things that qualify as existential risks – climate change, biotechnology, genetic research, etc. – carry threats that we can be certain are real because there is absolutely no doubt over whether they are possible. Whether humans can affect the climate in a detrimental way isn’t in question for most people (Trump and his base excluded), although they may argue over whether and to what extent it is currently happening; nor do we doubt that human actions can cause the extinction of entire species (whether these extinctions upset the delicate ecological balance of the planet in a catastrophic way is another matter); nor do we wonder whether we can alter the genetic code of living things. On the other hand, I’ve argued here that there is (or at least should be) considerable doubt concerning whether genuine artificial intelligence is even possible in the first place. Does this mean we can go off half-cocked without any thought for safety? Of course not. All I’m suggesting is that we keep our feet on the ground even as we acknowledge and plan for potential problems.

One might argue that all of these existential risks were, in fact, in doubt at earlier times. As our technology improved and our knowledge progressed, those doubts were resolved and the old limits overcome. Why should we not assume the same thing will happen regarding AI?

This is an exceptionally poor argument, and one which seems to be a direct consequence of the unbridled optimism currently running rampant in our scientific age. If this argument is valid, what can’t be done? Cold fusion must therefore also be on the horizon; faster-than-light travel too. They violate the laws of physics? No problem. Television would have violated the laws of physics as we knew them prior to the discovery of radio waves.

If our discourse is to be meaningful, it’s not enough just to have blind faith that science will find a way to achieve everything… somehow. Indeed, this suggestion is about as meaningless as it gets. Not only that, what could be less scientific? To me, yes, science is supposed to approach problems as if they are solvable, but one of the things that distinguishes it from endeavours like religion and new-age mumbo-jumbo is that it is balanced by a measure of scepticism; that is to say, it is grounded in reality, the experimental method, and facts.

I once had an argument with a friend about whether happiness was measurable, and in the course of the conversation he claimed we would eventually be able to build a machine which could measure the exact happiness of every single individual on the planet; past, present and future. To me, this is as meaningless as answering every question by saying God did it. And yet, this kind of untethered optimism is the logical conclusion for anyone who accepts the argument I outlined earlier; namely, that because past limits were overcome as our technology improved and our knowledge progressed, we ought to assume that any present limits can and will likewise be overcome given enough time. The more science drifts unchecked down this blindly optimistic road, paved with faith and lined with future promises, the less useful it will become.

Digital Gods


If we believe what the Ray Kurzweils and Elon Musks of the AI industry tell us, some artificial system will eventually undergo an intelligence explosion which will see it progress far beyond human-level intelligence in a very short space of time. How far? How fast? Well, given that we have already left the realm of science fact and crossed well into science fiction, the sky’s the limit. Indeed, the more extreme among the AI prophets claim that artificial intelligence will outstrip our own by such a degree that these machines will essentially be gods by comparison.

This reflects the curious tendency of our age to put computer processing up on a pedestal in relation to human thought. It’s easy to see why this is the case, of course; computers are truly amazing tools that have completely revolutionised human life. But they are still just tools. And even if they do somehow, against the odds, become genuinely intelligent one day, it is still wild speculation that they will become all-knowing, all-understanding, perfectly rational, electronic super-geniuses.

Ask yourself these questions. Could an intelligent machine have intellectual biases like us? Might it develop irrational beliefs, perhaps coming to believe in a supernatural power that safeguards order in the universe? Could it read a scientific paper and fail to understand it? Will it formulate opinions rather than conveying pure facts? Could it make mistakes?

If the machine is genuinely intelligent, the answer to all of these questions has to be a resounding ‘yes’, and yet to the believers in the coming era of AI supremacy they must surely seem ridiculous. Indeed, they are ridiculous as far as modern-day computers go, but this is only because modern-day computers are inanimate, unthinking, logical data-crunching objects, no more capable of bias or mistakes than a hammer. A computer can’t hold an irrational belief because it is incapable of beliefs, nor can it make a mistake because it only does what we have programmed it to do. These features (‘flaws’?) are reserved for thinking beings. If a computer one day does become a genuine thinking being, it must surely follow that it will suffer the same fall from grace humans did when we made the leap from dumb animal to intelligent being. To think anything else is to have failed to understand what it means to be intelligent and capable of intelligent thought.

Not only are there no good reasons to think that artificially intelligent machines will be all-knowing, perfect thinkers, this thesis doesn’t even accord with any reasonable account of what genuine intelligent thought is. And yet, the prophets’ faith in the coming of their digital gods remains unshaken. It is at this point that the field of AI has completed the transition from science to religion, ironically reversing the trend that gave birth to science in the first place.

The myth of the perfect thinking being is a fantasy human beings have nurtured ever since we first found ourselves capable of conscious thought. Whether we vest that belief in deities in mythical paradises in the sky or digital superintelligences here on Earth, either way, it’s still a myth.
