The central issue underlying all this talk of the potential dangers of artificial intelligence is safety. We don’t want to ‘accidentally’ create something that will have disastrous consequences – consequences that could perhaps have been foreseen and avoided had we been a little more conscientious.
This is one issue about which I largely agree with the prophets of the singularity. No one can know for certain that the probability of creating an artificial intelligence that would make us look as intellectually insignificant as ants look to us is zero. We might disagree on what that probability is, but we should all be able to agree that it lies somewhere above zero and below one hundred per cent. Given this, we have an obligation to advance with caution.
The Control Problem
The control problem refers to the difficulties inherent in maintaining control over a super-intelligent AI. The argument is that if we create an artificial super-intelligence and it becomes completely independent, it might enslave, or even destroy, us. Purveyors of AI doomsday scenarios seem to take a perverse delight in imagining how an AI might escape our control and cause havoc.
One obvious scenario is for a computer program simply to go rogue and turn on us, its intellectually inferior creators. This fear gets its justification from nothing less than human history. What has happened whenever a more advanced race has encountered a less advanced one? Best-case scenarios tend toward slavery; worst-case scenarios look more like genocide.
For the purposes of this article, I will define consciousness as the subjective experience that somehow accompanies human cognition – that sense of self-awareness or the feeling that it is like something to be me.
Is Artificial Consciousness Necessary for AI to Pose a Threat?
To his credit, Sam Harris, in his podcast discussion with Max Tegmark, acknowledges that consciousness is the only thing that provides meaning in the universe (although he also denies there is any such thing as a self and believes we are all fully determined, so square those ideas if you can). He nevertheless considers consciousness irrelevant where AI is concerned, because an artificial super-intelligence could still destroy the human race even if it completely lacked consciousness; that is, even if it were incapable of subjective experience. As Tegmark puts it, if you’re being chased by a heat-seeking missile, you aren’t going to care whether it is experiencing anything or not. The end result will be the same either way.