Artificial intelligence: Can AI crack the conundrum of consciousness?

What is intelligence? Does it need a body to exist? And could we ever really prove a machine is conscious or not?
Written by Natasha Lomas, Contributor

The discipline of artificial intelligence (AI) has existed for decades, but questions about the nature of intelligence itself dog the field to this day.

Is intelligence really just another artefact that can be designed, engineered and manufactured as we have fashioned aeroplanes or calculators or mobile phones? Where does it come from? And how do you measure it?

Right back at the start of AI, Alan Turing, the English mathematician who was interested in the notion of whether machines can think or be intelligent, realised that trying to determine whether a machine is actually thinking is not as easy as it first sounds: how can such a thing be proved? Isn't the whole venture too subjective to have a solid scientific foundation? And how do you define 'thinking' in any case?

Turing chose to focus on how artificial intelligence is perceived by humans, with only our biological intelligence to guide us. He posed the question of whether a machine can communicate in such a way as to pass for being a human and therefore trick us into believing it has a mind.

His work gave rise to the Turing Test, where a machine such as a chatbot engages in a text dialogue with a human judge, and attempts to fool them into believing they are talking with another person.
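The mechanics of the test are simple enough to sketch in code. What follows is a minimal, purely illustrative Python sketch - the judge and respondent objects and their methods are hypothetical stand-ins, not any real chatbot API:

```python
import random

def turing_test(judge, human, machine, rounds=5):
    """Minimal sketch of Turing's imitation game. `judge`, `human` and
    `machine` are hypothetical objects: respondents answer questions via
    .reply(question); the judge poses questions via .ask(label, transcript)
    and finally names the label it believes is human via .identify_human().
    """
    # Randomly assign anonymous labels so the judge cannot know
    # in advance which respondent is the machine.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: human, labels[1]: machine}

    transcripts = {label: [] for label in respondents}
    for _ in range(rounds):
        for label, respondent in respondents.items():
            question = judge.ask(label, transcripts[label])
            answer = respondent.reply(question)
            transcripts[label].append((question, answer))

    # The machine passes if the judge mistakes its transcript
    # for the human's.
    guess = judge.identify_human(transcripts)
    return respondents[guess] is machine
```

Nothing in that loop examines the machine itself; it scores only how the machine's answers read to the judge.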

As interesting as the Turing Test is, especially to philosophers, it's not a test of machine consciousness, as inventor and futurist Ray Kurzweil has pointed out: after all, a sophisticated chatbot's dialogue may seem funny and smart, but it's a puppet show of intelligence with a hollowness at its heart.

And while machines may be able to fake humanity in a written dialogue, they still lack one of the key characteristics of being human: consciousness.

Is consciousness the key to artificial intelligence?
(Photo credit: Shutterstock)

According to Susan Greenfield, professor of pharmacology at Oxford University's Lincoln College, this lack of consciousness is the key difference between people and machines.

"I would define [consciousness] as the first person subjective world as it seems to you," Greenfield told silicon.com. "It's an inner state - so what distinguishes us from a machine is that we have a subjective inner experience that no one else can share or hack into first hand. That's what distinguishes us from computers as far as we know.

"There's this lovely quote from the now late [psychologist] Stuart Sutherland who said he'd believe a computer was conscious when it ran off with his wife."

But does something have to be conscious to be intelligent? When it comes to biological intelligence at least, consciousness certainly seems to be a prerequisite for developing a sense of self - one indicator of intelligence.

But for scientists seeking to create a true AI, equipping a machine with a mind might not be enough - a body may be far more important...

Many of the subfields of AI are attempts to create some kind of artificial version of, or alternative to, the perceptive apparatus associated with biological intelligence: robotics, for instance, is the business of giving AI a body in which to 'feel' its world. But there are also researchers working on bodiless AIs - in fields such as whole brain emulation, which seeks to create an exact model of a brain living only inside a computer, or software-only efforts to create an AI that exists purely inside a network.

Professor Alan Winfield, the Hewlett Packard professor of electronic engineering at the University of the West of England, is unconvinced that intelligence can exist without a body.

"Robotics researchers are deeply concerned with AI because that's what they need to make their robots work, whereas the same is not true for AI researchers - many AI researchers are not interested at all in robots because they think that you can just build AI in a computer. Personally I think they're mistaken... many roboticists believe that embodiment is a fundamental requirement of intelligence in general, that a disembodied intelligence doesn't make sense."

Oxford's Greenfield also believes you cannot have a brain without a body.

"My own view is that you can't disembody the brain," she tells silicon.com. "That it has to be seen in the context of the whole body because the brain doesn't work in isolation, it works with the immune system and the endocrine system and the autonomic nervous system - as well as itself, the central nervous system. Otherwise you have biological anarchy and I think we have to think of the brain as part of a body, working in concert with the rest of the body."

Does intelligence need a body - and if not, could a network one day think for itself?
(Photo credit: Shutterstock)

While both body and brain may be necessary for true AI, is it necessary for both to be mechanical? One far-distant AI scenario could see a hybrid of human intelligence with some form of mechanical robot body, with scientists already looking at the potential of human brain cells to power robots. But is the inclusion of human cells enough to create consciousness in itself?

"No one would say putting neurons in a dish they're going to be conscious, ever," Greenfield tells silicon.com.

However, by combining human and machine, scientists may be able to edge closer to the goal of creating consciousness.

"The interface of carbon and silicon I think is hugely likely and hugely possible and I would predict in the future that will outstrip even quantum computers if we had bio-computers with cultured nerve cells as components. But that's one thing compared to having computers that are conscious," she adds.

"My own view is that just by building ever more complex machines that have emergent properties it's an article of faith that they will magically become conscious and it begs the question they will do so. It also begs the question...

...do you know what consciousness is in the first place."

It's a fundamental question for AI and one that often sees debate on the subject run into solipsism - the idea in philosophy that the existence of consciousness in another being cannot be proved.

It's what's known as 'the other minds problem', according to Ron Chrisley, a reader in philosophy at Sussex University and the director of the Centre for Research in Cognitive Science: "How do I know that other people are conscious? All I have is the third person functional behaviour to go on and then you start having this existential crisis and you think: 'Oh my god, maybe I'm the only person in the universe and all these other beings around me are just machines and they don't have any inner life'.

"Well, that's absurd."

But if consciousness cannot be proved, is the field of artificial intelligence hamstrung from the start? If its existence can't be established, can engineers and scientists ever really show an AI has been created?

In Chrisley's view, being unable to disprove solipsism is all the more reason to carry on building AIs, as such work might in fact help us learn more about the phenomenon of minds and intelligence.

"We should always be looking to revise our notion of what intelligence is in the light of the results of our work," he said. "It's by doing this kind of work that we'll come up with a more refined notion of what consciousness or intelligence is."

"We might, by doing work and trying to build robots that behave in real-time like, say, the MIT researchers did under [former director of the MIT Computer Science and Artificial Intelligence Laboratory] Rodney Brooks, by doing that they found out that they emphasised different aspects of what intelligence is," he added.

According to Chrisley, there might be more than one type of intelligence capable of existing or coming into existence - perhaps one not so tied up with an emotional life and a sense of self as human intelligence appears to be.

"I acknowledge the great work that's being done about human consciousness but then wonder, well, might it not also be possible for other kinds of consciousness to be present in the universe, or other kinds of intelligence that don't have emotion tied in to them, or don't have consciousness as a key way that they achieve their intelligence?" he said.

Ultimately, Chrisley believes, it may be that we have to change the way we evaluate consciousness itself...

One scenario that might start to test the boundaries of our notion of consciousness is a robot with the ability to store snapshots of its environment and reference them later to solve a problem - navigating a well-trodden route in the dark, say, or generating a mental collage to work out how to avoid an obstacle newly introduced into that route.

"Once you've got this ability to have perceptual-like representations of the world that aren't necessarily from the world directly but are from some combination of your experience of the world plus other things you're trying to imagine...then it becomes more and more tempting to talk about that robot as something that has some kind of experience because the experience...explains why it's able to behave appropriately even though the lights are out," he explains.

"I'm not saying that's sufficient - I'm not saying any robot that can do that is now a conscious robot - but that seems to be one of the components of consciousness, that capacity for an inner world, an inner mental life."

Chrisley's point is there's never necessarily going to be an absolute answer to the conundrum of consciousness. However, if an AI is built that behaves in such a way that a critical observer can't help but "attribute mentality to it" - based on a set of principles such as it having some form of inner life and an ability to plan and display motivation, emotion and so on - then it may be difficult to say it's not conscious even though it may be technically impossible to prove there's anyone home.

"Maybe it becomes harder and harder not to think of that robot as something that's experiencing the world or having a conscious experience," he says, adding: "It's hard to solve these things in advance purely theoretically - I think we'll find it even as scientists not just as consumers of furry robots but as scientists we'll find it harder and harder not to use experiential talk as a way of communicating with each other about these robots."

Taking a less rigid approach to concepts that are so slippery, and at best only loosely tacked down, may in the end be the only way to get close to recreating them artificially - taking small, iterative steps and learning all the while seems the sensible approach for AI to take.

Asked how much we know about the human brain, Oxford's Greenfield answers immediately: "Everything and nothing really." The more we learn, she says, the more we realise how little is truly known.

An MRI scan of the brain
(Photo credit: Shutterstock)

"I think there's a lot of smugness and self-satisfaction [in neuroscience], very misplaced, that just because we know how synapses impart function, just because we can identify and document the actions of a whole variety of signalling molecules - just because we know with brain scans that certain bits of the brain light up in co-relation with certain activities that really tells you nothing," says Greenfield.

If we are still so in the dark about ourselves and the apparatus of our own intelligence, it's difficult not to conclude we have an awful lot more groping around in the pitch black to do before we can hope even to begin to create a truly intelligent AI.

"My own view is that if we mistakenly equate building clever things that learn - that is to say, have modified outputs in return to inputs - we're deluding ourselves if we think that that is really solving the big question," adds Greenfield.

"I mean robotics are very, very powerful. But of course that's not conscious, it's not running off with someone's wife, it's not a tree, it's not a rose - it's not having inner states, it's doing things, but computers have been doing things since the time of Babbage."

How long until robots are running off with someone's wife?
(Photo credit: Shutterstock)

Still, just because it's hard to see something doesn't mean we shouldn't look - or that it's fruitless to do so. And there's always another angle from which to perceive and approach a problem - even one as complex as constructing an artificial intelligence.

One approach that seems to hold promise is an area of research called evolutionary AI - the key idea being to set off a process of intelligence evolution and thereby obviate the need for humans to truly understand intelligence in order to, theoretically at least, create it, just as natural selection created us.

Chrisley explains: "You don't build, you don't program down to the last detail what the robot's going to do, instead you evolve societies, populations of robots - you basically set up a situation in which intelligent behaviour will be rewarded in the sense [that] the robots will reproduce - it's an environment where only by being intelligent can you survive... The people that do this, they don't understand the solutions that the robots come up with or that the natural selection process comes up with - solutions appear through random mutation and crossover and are then selected."
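The skeleton of such an evolutionary run is short enough to sketch in Python. In the illustrative code below, the fitness function is a hypothetical placeholder for the 'survive by being intelligent' environment Chrisley describes, with a bitstring genome standing in for a robot controller:

```python
import random

POPULATION = 50
GENOME_LEN = 16    # stand-in for, say, the weights of a tiny robot controller
GENERATIONS = 100

def fitness(genome):
    # Hypothetical placeholder: a real experiment would run the robot
    # in its environment and score how well it survives.
    return sum(genome)

def crossover(a, b):
    # Splice two parent genomes at a random point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Randomly flip genes - the source of novel solutions.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POPULATION)]

for _ in range(GENERATIONS):
    # Selection: only the fitter half of the population reproduces.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]
    offspring = [mutate(crossover(*random.sample(survivors, 2)))
                 for _ in range(POPULATION - len(survivors))]
    population = survivors + offspring

print("best fitness:", fitness(max(population, key=fitness)))
```

Nothing in that loop encodes what a good solution looks like: the experimenters supply only the selection pressure, while random mutation and crossover supply the novelty.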

Which sounds almost as if it could be AI's eureka moment. Even if our brains are too limited to truly understand what intelligence is, we might yet be clever enough to create machines that can set off on that path on their own.

It's a thought.
