Will AI ever become conscious?

Could our machines become self-aware?
17 October 2017

Interview with 

Henry Shevlin, Leverhulme Centre for the Future of Intelligence

Georgia Mills put this to philosopher of mind Henry Shevlin...

Henry - Consciousness is, on the one hand, one of the greatest mysteries that we still face as a species, but it is something where we are getting greater understanding from neuroscience and cognitive science more broadly. Still, though, I think a lot of scientists would really be happy if we stopped talking about consciousness, but I don’t think we can. Consciousness is deeply bound up with our ideas about morality and value. Just to give a simple example, if we’re thinking about the ethics of boiling a lobster alive, which is often how lobsters are cooked, I think the key question we’re going to ask in thinking about whether that’s humane is: does the lobster feel pain? Does it have conscious experience of its pain? So, I think, we can’t ignore problems about consciousness. Broadly speaking, there’s no reason to think we couldn’t build a conscious AI, but it may not be very obvious when we’ve done so. There’s still no clear consensus about what consciousness is for and what kind of functions in the human mind it’s associated with. So, even if we do build a conscious AI, we may not know it right away.

Georgia - Alexa; are you conscious?

Alexa - I know who I am - let’s put it that way.

Georgia - Alexa could have said anything there, so I guess this takes us to the point: how would we ever know whether something was conscious or not?

Henry - The gold standard for thinking about tests of consciousness is a thought experiment dating back to the amazing British polymath Alan Turing. In 1950 he posed this test, which is now known as the Turing test, where basically the idea is you have a computer system in one room and a human in another, and you’re talking to them both via a terminal so you can’t see which one is which. And if you’re at chance when forced to guess which one is the computer and which one is the human - in other words, if the computer can completely fool you into thinking it’s a human being - then, at that point, Turing says, we’ve got no business denying consciousness and intelligence to the computer system, because it’s passing itself off completely as a human being.

Now that, I think, is quite a popular test for consciousness and it’s been very influential, but it did face some pushback in the 80s from a classic follow-up thought experiment by the American philosopher John Searle, who came up with this idea of what he called the Chinese Room. And basically, the point of this is to show that you can get something that looks like real intelligent understanding which is actually very simple. The way the Chinese Room works is you imagine you’ve got someone who doesn’t speak any Chinese at all, and they’re sitting in a little booth surrounded by index cards. There’s a slot in the booth and you can post a question in Chinese script in through this slot. What the person inside the booth does - they don’t understand Chinese - is look through the index cards, each of which tells them what to reply in Chinese.

So the question gets posted in, they find the corresponding answer card, write out these symbols that they don’t understand - they’re just copying them - and they post it out. The idea is that if you had a big enough library of index cards so that all possible questions and answers were covered, this person could do a brilliant job of simulating that they understand Chinese whilst, in fact, all they’d be doing is following a simple lookup table of instructions.

Now, extending this to machines, the idea is maybe we could have a computer that passed the Turing test by doing basically that. It’s got a huge, massive database of possible questions and the responses it can give, but it doesn’t understand the words. So that’s put some pressure on the Turing test as this measure of consciousness. Maybe a computer could trick us into thinking it’s conscious by actually doing something pretty dumb.
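
As an aside, the lookup-table idea Henry describes is easy to sketch in code. This is only an illustration: the question-and-answer pairs below are invented, and a "real" room would need a card for every possible question. The point is that the room can produce fluent-looking replies without anything resembling understanding.

```python
# A minimal sketch of Searle's Chinese Room as a lookup table.
# The entries are invented placeholders purely for illustration.
rule_book = {
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    "你喜欢茶吗？": "我很喜欢茶。",     # "Do you like tea?"   -> "I like tea very much."
}

def chinese_room(question: str) -> str:
    """Return the pre-written reply for a question, understanding nothing."""
    return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你喜欢茶吗？"))  # Looks fluent, but it is only symbol matching.
```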

Chris - Does it also matter, Henry, if a machine is conscious? Does that really matter?

Henry - I think it matters in two ways. The first is we actually start to worry about our ethical treatment of machines if there is reason to think they’re conscious. This is something that’s explored in shows like Westworld or Humans, where you have these tools, basically, that people use for their basest instincts. Maybe if you’re just dealing with things that are basically human-shaped robotic vacuum cleaners, that doesn’t make a difference. But if you’re dealing with things with real emotions or real cognitive capabilities, then you might start to think we need to regulate that behaviour.

The other reason you might think it matters is if people start forming relationships - friendships, or even romantic relationships - with AIs, as we’ve seen in movies like Her; then it might matter to the people involved. They want to know not just that their AI can simulate love or emotions, but that it’s actually feeling and reciprocating these things.

Chris - Isn’t one of the attractions of things like AI that it’s actually not biased by many of what we regard as human flaws - innate biases, or prejudices, or emotions that get in the way of a decision where your heart rules your head? And if we end up with machines that become a bit conscious and they start doing things very well, like we do, they’ll end up like us, so they’ll think like us and be flawed like us, won’t they?

Henry - We can decide, as we develop these machines, what kind of architectures, what structures, we want to put in place in terms of how their cognitive systems develop. So we might be able to make some conscious choices to avoid those biases along the way. I don’t think consciousness necessarily means the full emotional complexity of human beings; it could be something as simple as feeling pain.

Nonetheless, I think there are reasons to worry about importing human biases over into machines, particularly given that a lot of AI progress in recent years has been driven by the mass data revolution: taking data out there on the internet and plugging it into computational systems. There’s a risk in doing that: things like stereotypes about gender or race get imported into the AI just because it’s being exposed to the internet and all the different biases people have.
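
To make that mechanism concrete, here is a small, hedged sketch - the "corpus" below is entirely invented - of how simple co-occurrence statistics scraped from text can end up encoding a gender stereotype about professions. Any system trained on counts like these inherits the skew in the data rather than correcting for it.

```python
from collections import Counter
from itertools import combinations

# A tiny invented "corpus" standing in for text scraped from the web.
corpus = [
    "he is a doctor", "he is a doctor", "she is a nurse",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

# Count how often pronouns co-occur with professions in the same sentence.
pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        if a in {"he", "she"} and b in {"doctor", "nurse"}:
            pairs[(a, b)] += 1

print(pairs)
# Counter({('she', 'nurse'): 3, ('he', 'doctor'): 2, ('he', 'nurse'): 1})
# A model built on these counts would "learn" the stereotype present in the data.
```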

Georgia - Yes. In fact, wasn’t there an episode earlier in the year where Microsoft created a Twitter bot to learn from the rest of Twitter, and I think within 24 hours it had become racist?

Henry - Yeah, pretty unsurprising, and you find things like gender stereotypes about professions. Just as Google’s autocomplete faces worries because the things people most frequently search for often carry hints of gender bias or racial bias. If that’s the data we’re plugging into the machines, then it’s a bit much to expect them to be better than us.

Georgia - Could this be a real problem in society? If people like the police end up using AI to do their jobs, could we end up with something that we assume is neutral actually not being neutral, and could this cause problems?

Henry - Yeah. So this is a real motivation for the desire for transparency in AI. It’s particularly complicated these days because a lot of the algorithms being used are basically opaque: we don’t know how they’re solving the problems. We can sort of reverse-engineer them, but because there’s a lot of self-learning going on, it’s not always immediately clear why a system is making the decisions it’s making. If a system singles out a person for a stop and search, or someone going through airport customs gets singled out, we may not simply be able to say, okay, what were the criteria used, if it’s been subject to this kind of elaborate self-learning process. So we need to think about transparency too.
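
One hedged illustration of that opacity: even for a tiny self-trained model - the bare-bones perceptron and "screening" data below are invented purely for illustration - what the system has learned is just a list of numbers, and explaining why a particular case was flagged usually means probing the model after the fact, for example by inspecting each feature's contribution to the decision.

```python
# A bare-bones perceptron trained on invented "screening" data.
# Features: [prior_alerts, bag_weight_kg, ticket_bought_last_minute]
data = [([0, 10, 0], 0), ([2, 25, 1], 1), ([0, 8, 1], 0), ([3, 30, 0], 1)]

weights, bias = [0.0, 0.0, 0.0], 0.0
for _ in range(20):                      # simple perceptron update rule
    for features, label in data:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        error = label - (1 if score > 0 else 0)
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

# The learned "explanation" is just numbers; reverse-engineering a decision
# means inspecting per-feature contributions after the fact.
case = [1, 28, 1]
contributions = [w * x for w, x in zip(weights, case)]
print("weights:", weights)
print("contributions for this case:", contributions)
```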

Georgia - Peter, as someone who works on this, what would you say to this question of consciousness?

Peter - I think it’s very interesting. Even having thought quite a lot about it, I still don’t really know what consciousness actually is. There’s that sort of sense of being here and now, living in the moment and feeling; if that’s the definition of consciousness, then I think it probably extends across a vast amount of the animal world. And I think if we’re going to worry about consciousness in artificial intelligence, we probably need to spend more time worrying about consciousness within nature.

Chris - Simon; a quick thought from you?

Simon - The definition of consciousness is the key thing here. Consciousness is a subjective experience and we all feel we know what it is, but there have been various attempts to define consciousness in ways that allow us to make more progress in identifying it in animals or machines - for example, defining it as a property of informational systems via various axioms that conscious systems satisfy: they’re indivisible, they model the universe, they model themselves in relation to the universe, and so on.

I think that these kinds of theories give us a lot of hope for understanding consciousness. My only problem is, I think, when a lot of people talk about consciousness they want it to be magic. They want it to be unexplainable, and so if we come up with a system like this that does allow us to answer, for instance, whether Alexa is conscious, then people just won’t buy it, because we feel that our consciousness is the last domain of our uniqueness as a species and we’re quite strongly set up to defend that. But, in defending it, we tend to come up with phrases and questions that are just unanswerable, undefinable, and that is really getting in the way of clearly thinking through the ethics of AI.

Georgia - I think this is one we could debate till the cows come home. Maybe something to take to the pub after the show.
