Reality Checking AI (3/5) – Consciousness

For the purposes of this article, I will define consciousness as the subjective experience that somehow accompanies human cognition – that sense of self-awareness, or the feeling that there is something it is like to be me.

Is Artificial Consciousness Necessary for AI to Pose a Threat?


To his credit, Sam Harris, in his podcast discussion with Max Tegmark, acknowledges that consciousness is the only thing that provides meaning in the universe (although he also denies that there is any such thing as a self and believes we are all fully determined; square all of those ideas if you can). Nevertheless, he is of the opinion that consciousness is irrelevant where AI is concerned[1], because an artificial super-intelligence could still destroy the human race even if it completely lacked consciousness; that is, even if it were incapable of subjective experience. As Tegmark puts it, if you’re being chased by a heat-seeking missile, you aren’t going to care whether it is experiencing anything or not. The end result will be the same either way.

Let’s start with Tegmark’s heat-seeking missile. Clearly it doesn’t matter whether the missile is conscious, but the observation is both irrelevant and misleading. The question we ought to be asking is whether an unconscious computer could launch a heat-seeking missile in the first place.

An AI enthusiast might think this a ridiculous question. Can a human do it? Yes. Well then, obviously a computer could do it; we just have to build one smart enough. But this is precisely the (often blind and unjustified) assumption I have been questioning throughout this series of articles.

What would an agent have to be like in order to be capable of launching a heat-seeking missile in the pursuit of a goal? I can think of at least three things:

  • First, it would need to be able to independently set its own goals. There is no evidence I am aware of to suggest that non-conscious collections of microchips could do such a thing. Couldn’t humans program the goal into the computer? Absolutely, but no real-world goal can be achieved in one fell swoop; intermediate goals will always be required. Imagine a human gives a computer the goal of maximising the production of paperclips. In order for the computer to realise this goal, it will have to create a whole host of intermediate goals for itself, one of which might include sending a heat-seeking missile your way (a toy sketch of this kind of goal decomposition follows the list).
  • Second, it would need to be capable of highly abstract thought. Without this it wouldn’t be able to see how the different actions necessary to realise its goals are connected, nor would it understand how these actions build on each other to achieve the goal.
  • Third, it would need to somehow develop an understanding of an incredibly complex, and non-obvious, human world. Without a grasp of the myriad mundane things that we understand intuitively, like the practice of exchanging money for goods, the vulnerability of the human body to heat-seeking missiles, the concept of death (dead people can’t interfere with the plans of the living), etc., a computer wouldn’t have a chance of realising any goal.
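
To make the first point concrete, here is a minimal toy sketch of what ‘intermediate goals’ means computationally. It is purely illustrative: the goal names and the hand-written expansion table are invented for this example, and the hard part the argument points at is precisely where such a table could come from in an open-ended world.

```python
# A toy sketch (not any real planner) of a final goal forcing the
# creation of intermediate goals. All goal names and the expansion
# table below are invented for illustration.

# Maps a goal to the intermediate goals it depends on. A real agent
# would have to discover these expansions for itself -- which is
# exactly the ability being questioned in the text.
SUBGOALS = {
    "maximise paperclips": ["acquire metal", "acquire factories"],
    "acquire metal": ["obtain money"],
    "acquire factories": ["obtain money"],
    "obtain money": [],  # treated as primitive in this toy example
}

def plan(goal: str, depth: int = 0) -> None:
    """Recursively expand a goal into its intermediate goals."""
    print("  " * depth + goal)
    for sub in SUBGOALS.get(goal, []):
        plan(sub, depth + 1)

plan("maximise paperclips")
# maximise paperclips
#   acquire metal
#     obtain money
#   acquire factories
#     obtain money
```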

Basically, this is just making explicit some of the assumptions that tend to get lost when the typically ill-defined term ‘artificial intelligence’ is used. When we were discussing whether artificial intelligence required consciousness for its nefarious schemes, it seemed plausible to argue that it didn’t. But once we crack open the term AI and detail some of the specific cognitive abilities that underwrite any intelligence capable of carrying out the feats certain members of the AI community claim are real threats, this conclusion no longer seems obvious.

I would argue that consciousness, that is, the subjective first-person experience, is required for goal-setting, abstract thought, and practical competence. Without a conscious awareness of oneself as an individual, separate agent operating within a world of objects, artefacts, and other agents, it is difficult to see how any collection of microchips could ever do anything we might call ‘intelligent’.

But look at animals, one might object. Even the simplest of them are capable of performing quite complex actions in the pursuit of their goals (including intermediate ones), all without consciousness.


Animals are actually a good example. Different species appear to fall on a continuum of consciousness, with some (the usual suspects: primates, dolphins, etc.) seeming to be on the cusp of self-reflective, self-aware behaviour, while others act almost entirely on instinct. The interesting thing here is that the animals nearer to fully-fledged, human-level consciousness are capable of more complex behaviour, while those at the lower end of the spectrum are almost completely unable to adapt their behaviour to a specific situation and intelligently solve problems.

Tellingly, the actions of human beings (the only fully conscious creatures we know of) are so far above those of our animal cousins that many of our number are convinced our faculties were divinely bestowed. Is it a coincidence that the capacity for complex behaviour, intelligence, and degree of consciousness all seem to be correlated?

Is Artificial Consciousness Possible?

This raises the next question: is it possible that an AI system could become conscious? As with so much of the speculation that surrounds AI research, the answer must be: of course it’s possible. However, it might just as easily be impossible. There is simply so much that we don’t know. How much of the complex neurology of living brains is actually reproducible in an inanimate substrate composed of NAND gates? Does consciousness emerge from nothing more than the complexity of connections, or is it complexity plus certain other specific parameters? Are those parameters reproducible in non-living systems? And so on.
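
As an aside, the reason NAND gates can stand in for computers in general is that NAND is functionally complete: every Boolean function, and hence every conventional digital computer, can in principle be built from NAND gates alone. A minimal sketch of that standard textbook fact, in Python purely for illustration:

```python
# NAND is functionally complete: NOT, AND, and OR (and hence any
# Boolean circuit) can all be built from NAND alone.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

# Sanity check against Python's built-in operators:
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```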

It could just be that once we cram enough NAND gates into a small enough space and allow sufficiently complex connections to develop between them, the lights will somehow come on. But maybe they won’t. Once more, the point of these articles isn’t to rubbish AI. It’s about keeping our feet on terra firma and arresting the blind optimism that seems to have infected much of the industry. Human-like artificial intelligence is already an uncertain goal (and one still unproven after decades of promises); artificial consciousness is orders of magnitude more uncertain still. Is it possible? We have to answer yes. Is it likely? Given what we currently know about consciousness and computers, there is plenty of scope for healthy scepticism.

[1] Ethical considerations aside.

One thought on “Reality Checking AI (3/5) – Consciousness”

  1. The conversation around A.I. consciousness will only ever be a thought exercise. It will never happen, and my good friend Jacques Ellul pointed this out almost 70 years ago in The Technological Society.
    Technique is defined by Ellul as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of human development) in every field of human activity.” In short, all machines are designed to accomplish specific tasks efficiently; every machine is a manifestation of specific values, designed with particular tasks in mind. Therefore, programming A.I. to have an emotional, sentient consciousness indistinguishable from ours would be grossly inefficient. What would engineers have in mind when designing such a machine? Better yet, why would our technological society ever need such a machine, i.e. what would its purpose be in the context of the rational totality? Food for thought….

