The Countdown to Artificial Intelligence

We ask the seven most significant questions in the run-up to AI.
17 October 2017
Presented by Chris Smith, Georgia Mills


What is artificial intelligence, or AI? Will it take my job? Is it dangerous? The Naked Scientists talk to Peter Clarke, Henry Shevlin, Simon Beard and Hitesh Sanganee to discuss the most important questions we should be asking about AI and its impact on industry, humanity and philosophy...

In this episode


What is artificial intelligence?
with Peter Clarke, Resurgo Genetics, Simon Beard, Centre for the Study of Existential Risk

To get a basic introduction to artificial intelligence, Georgia Mills spoke to Peter Clarke from Resurgo Genetics.

Peter - Artificial intelligence: the standard sort of definition is that these are computational systems, artificial systems, that show behaviours we would attribute to intelligent things - machines which exhibit behaviours that we would describe, by our own definition, with the human word "intelligence". It’s a fairly broad brush.

Georgia - There seems to be a lot of AI in use. A company will say "we use AI to do this", but then in a film AI is this big robot running around taking over the world. So what’s the difference between the things we’re using now and, I guess, what the media considers to be AI?

Peter - There’s the kind of sci-fi version of AI, which is the Terminator roaming around and hunting you down, or some sort of super intelligence controlling everything. That really is far off in the future at the moment, but we can see it on the horizon; we can think about it, but it’s not something that’s immediate. Whereas there is AI that’s touching all of our lives every day. People have stopped worrying about spam in their emails because there are these machine learning algorithms. At what level of intelligence would you classify it as intelligent? In some ways everyone thinks of AI as being the future, but phones that recognise you, things like Alexa - ten years ago those would have been considered a future intelligence, a sci-fi thing. We just don’t necessarily know how fast, or exactly how, things are going to progress over the next few years.

Georgia - Simon, is this something you think about when you’re considering the risks? Are there different types of AI?

Simon - It’s not so much different types, but for the purpose of understanding the risks associated with AI it is useful to make a couple of distinctions. Pretty much every form of AI we have at the moment is what we classify as narrow AI. That means we've developed an artificial intelligence, but we’ve developed it to do something really quite specific. It can learn and it can be creative and do all sorts of things, but only within that narrow domain. So a chess robot can play chess, a Go robot can play Go. A Go robot can’t play chess, and vice versa.

Now much of the risk that we talk about is actually associated with a slightly different concept, which is general AI. And that’s AI that has all these capacities of intelligent systems and can apply them to any domain without restriction. So that’s intelligence that has the same sort of features that human intelligence has: we can learn something in one field and apply it to a different field; we can do different things, we can do different things at the same time, and so on, and there’s really no restriction on what we can and can’t do.

Then there’s the idea of super intelligence. Now super intelligence is, by definition, general artificial intelligence, but it’s general artificial intelligence that is better than humans. That is, its problem solving capabilities are better, its ability to coordinate between different intelligences is better, its creativity is better. And it’s when you get to super intelligence that you then get these risks of, well, if it can do better than we can and it decides to do anything that might not be in our interest. Not necessarily with any malice whatsoever - it may well be doing exactly what we told it to do - but we still might not be able to adapt or to respond effectively, and we might find ourselves on the losing end of a really big problem that we are not able to solve.

Georgia - And I know we’ll be discussing the risks in a little more detail later.

But Peter, how does this AI actually work then - narrow and general?

Peter - With narrow AI, you’re really giving it a task and these systems can learn to perfect that task. For example, playing Go - there’s recent work showing that these systems can become very good at specific tasks, and they do surpass human capability on those tasks. But what we’re moving towards is that, rather than learning a particular task, what you want to do is learn the world - it’s having a model of the world. We can see the light at the end of the tunnel, or the darkness at the end of the tunnel; maybe it’s the train coming towards us, but we can see it coming. We have to get ready for it because it could come a lot quicker than we expect, or it could be quite slow, but we need to prepare.


08:41 - Why do we want AI?

What is the point of artificial intelligence?

Why do we want AI?
with Hitesh Sanganee, AstraZeneca

Hitesh Sanganee is the Director of Emerging Innovations at AstraZeneca, so he explained his take on AI to Chris Smith.

Hitesh - I guess, in the pharmaceutical industry, we’re obviously constantly trying to discover new medicines, and it’s not a trivial exercise, as we’re finding out. In fact, we spend 6 billion dollars a year trying to do this. We see it utilised in a number of different ways in terms of drug discovery. I’m a chemist by training and I think we’ve been using it in that space. Also drug repositioning - for example, looking at a new disease area. We might try and understand some data, put it together, use algorithms which, essentially, I consider as artificial intelligence, and use that to come up with new ways to reposition molecules, i.e. think about new indications for drugs. So there are a lot of drugs out there that might work in one disease area, but you can use intelligence to maybe reposition them and say, could it also work in this disease area?

Chris - But why is that better than a chemist?

Hitesh - I think it’s augmenting chemists’ work - that’s how I like to think of artificial intelligence. I’m a pragmatist and I like to think this kind of new technology is going to help me discover drugs in better ways and more efficiently.

Chris - But how? What specifically are going to be the targets you’re going to go for and how does this integrate into the existing business because we’ve known for a long time how we cook up some molecules, and we try them and we see if they do something? How is this going to revolutionise your business?

Hitesh - In so many ways. As I said, in drug discovery for example, as a chemist what I used to do - and I still do it now and then - is we get a lot of data from our biologists and we have to try and design a molecule to optimise it. Because when we make a molecule, unfortunately, the first molecule you make isn’t the drug. We usually end up making 30,000 molecules and then we’ll hopefully find that drug in there. Typically, what you’re trying to do is bring lots of different datasets together and then work out what to make next. So, I think, artificial intelligence in that area will, hopefully, speed up that process and say: actually, have you thought about making this molecule?

Georgia - How much do you think AI is being used in different businesses? Is everyone investing in this?

Hitesh - I think there’s a lot of hype, to be honest with you, but there are definitely people who are using it. We’re definitely using it, and I’m hearing it from my colleagues who are currently doing an MBA - I’m seeing a lot of people from other industries and they’re also talking about it. So yes, I think it’s definitely being used in other industries as well as in ours.

Chris - When people are modelling where they see the industry going and how much this is going to be worth, what sorts of numbers are they putting on this in terms of what contribution it could make to a sector, particularly pharmaceuticals?

Hitesh - I have not really thought about that, but think about what it could do if it can speed up drug discovery for us - we spend 6 billion dollars a year on research, and it costs about one and a half billion dollars on average to discover a drug. If we can speed that up by half - maybe getting it down to something like 600 million - it could be huge.

Georgia - In terms of, if we really cracked artificial general intelligence, why would that be a good thing? What kind of uses could we have for something super-powered like that, way down the line?

Henry - The sky’s the limit really. It’s very hard to even imagine what artificial general intelligence could be like, especially if you’re thinking about something even smarter than a human being. But just imagine if we brought into our world a being who was far smarter than us, who was to us as we are to chimpanzees or simpler animals. Think about everything we could do to help chimpanzees if we put our minds to it, and imagine if we could get a being like that on side for us. Everything from extended lifespans, to amazing new technologies, to all our dreams come true.

Chris - Simon, you were a parliamentary candidate - you're trying to get parliament interested in this kind of thing. When parliamentarians get together and talk about this, what sorts of numbers are people putting on how much of a difference this could make?

Simon - Well, I mean, that’s the big question. I think people don’t really want to tie themselves too much to a specific prediction. We’ve had various reports - for instance, the World Economic Forum put out their Fourth Industrial Revolution report, which suggests that AI could play a completely transformational role in the global economy. We’re going to be talking about jobs later on, and AI could move us back into economic growth rates higher than anything we’ve ever seen. But, on the other hand, we know that all the technological progress we’ve had so far - which has involved a lot of breakthroughs that were supposed to transform the economy - has actually coincided with a period of quite stagnant growth.

Now there have been various headwinds facing the global economy in that period of time but still, technology hasn’t been able to break out of that box and return us to the kind of growth rates that we saw through the 50s, 60s and 70s. I think there is a lot of scepticism that, at least in the short to medium term, there is going to be this transformational shift, albeit connected with a lot of hype about the possibilities, and trying to square that circle and do something in an honest way is really hard.


15:39 - Fighting cyber crime with AI

Can AI protect us online, or does it put us in more danger?

Fighting cyber crime with AI
with Dave Palmer, Darktrace

Globally, crime committed online costs economies and individuals over 3 trillion dollars per year at the moment, and that’s predicted to more than double within the next 5 years. So can artificial intelligence help to combat the threat, or will it help to fuel the fire? Georgia Mills spoke to Dave Palmer, who works with the Cambridge-based cybersecurity company Darktrace...

Dave - Darktrace’s interest is can we use advanced mathematics in AI to really replicate the idea of an immune system where we know the normal self of everyone and everything inside of a business and how they all relate to each other? If someone or something's behaving really strangely then we can start to deal with that problem before it gets to the point where millions of credit card details are lost or medical records, or a manufacturing plant gets shut down.

Georgia - How would that work in practice?

Dave - We’re very predictable in how we behave using our smartphones and our laptops and all the different technology that exists, particularly within businesses. So by understanding what it means to be me - what it means to be Dave and how I use my email and all the pieces of technology inside my business - then we can tell if perhaps my laptop's been infected and is starting to hoard data, or communicate with the outside world in a way that suggests it might be under someone else's control, and then we can start to do something about it. Now that could either be telling a human being: hey, here’s a problem and you should go and check it out. Or, increasingly - and this is where the cyber security industry's going to be moving - autonomous response: having the machines, on our behalf, start to deal with problems and slow them down, or even potentially clean them up in the longer term.

Georgia - What’s the machine learning aspect of this technology?

Dave - Imagine a modern business, or even look around wherever you are now, you’ll start to see technology everywhere whether it’s digital phones, smart TVs, video conferencing units and, of course, all the things we take for granted like laptops, and smartphones and data centres and the cloud. There’s an enormous amount of complexity there. It’s not unusual to find in an organisation of 10,000 people that there are probably at least 50,000 pieces of technology as a rule of thumb.

So using the AI techniques to be able to learn what’s normal and really truly understand the relationships between all those technologies and all those people, instead of asking the humans to do it is really very useful indeed. Then the humans can just be told about the things that are interesting instead of having to try and wade through all of that complexity and guess everything that might go wrong.
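
To make the "learn what’s normal, flag the unusual" idea above concrete, here is a minimal, hypothetical Python sketch using an off-the-shelf anomaly detector (scikit-learn's IsolationForest) on invented per-device activity features. It illustrates the general concept only, not Darktrace's actual system.

```python
# Toy sketch of anomaly-based monitoring: learn a device's normal behaviour,
# then flag hours of activity that look unusual. Data and features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row is one hour of activity for one laptop:
# [emails sent, megabytes uploaded, distinct hosts contacted]
normal_hours = rng.normal(loc=[20, 50, 15], scale=[5, 10, 3], size=(1000, 3))

# Learn what "normal" looks like from ordinary behaviour only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_hours)

# A new hour of activity: a huge upload to many unfamiliar hosts.
suspicious_hour = np.array([[18, 900, 120]])

print(model.predict(suspicious_hour))   # -1 means "anomalous"
print(model.predict(normal_hours[:1]))  # 1 means "looks normal"
```

In practice, as Dave describes, the hard part is modelling thousands of devices and people at once and deciding how to respond; the sketch only shows the core statistical idea of learning "normal" and scoring departures from it.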

Georgia - You mentioned Darktrace spots unusual activity straight away if something’s not quite right, but is there a way to block the holes before anyone gets in in the first place? Is it possible to use machine learning to build a hack-proof system?

Dave - I’m very cautious about saying yes to that, given where we are as a society under considerable digital attack at the moment. I think the thing that’s really hard about cyber security is there isn’t a perfect answer on what secure looks like. Every part of our digital life is based on millions, if not billions, of lines of code written by different people from all over the world, and different companies, and supply chains that are very deep indeed. So the idea of using machines to go through and evaluate the riskiness of every single line of code and piece of software that we take for granted in the interactions we have on a daily basis - I think that’s quite far away. I think we need to have made an awful lot of progress on AI before it’s smart enough to do that; getting much closer to artificial general intelligence than the artificial narrow intelligences we have today.

But that said, I think AI will start changing everything in the cyber security sector. I think there will be replacements for the antivirus that we all run on our laptops with something that’s AI enhanced and better at stopping bad stuff.

Georgia - Does the technology make it easier or more difficult for people like you to protect our data?

Dave - We can definitely expect AI to start making spam much more effective than it’s been in the past. Here’s a really quick example: imagine my laptop got hacked and a piece of AI software on my laptop was able to train itself on all of my emails, my calendar, my iMessages, my WhatsApp. It would then be able to individually communicate with all the people in my life, replicating my communication style to spread itself. So you and I have a shared diary appointment to talk today; perhaps it sends a little note to you saying: oh, I have some questions, could you have a look at this attachment and let me know what you think? I think you’d probably open that email because it’s going to sound like it's from me and it’s going to be contextually relevant.


21:03 - Meet Alexa!

We meet the voice assistant from Amazon Echo.

Meet Alexa!
with Alexa, Amazon

It’s almost certain that, whether we realise it or not, many of us will have interacted with an artificial intelligence on many occasions already. These systems answer telephones, they work out what adverts to show you when you go online, and they select and prioritise news stories for you, and this is all based on what the system is constantly learning about you, your preferences and even your political leanings. Many of these systems are invisible to us. But not all of them; and recently a number of companies have brought out devices that you can speak to and instruct verbally. They can find information or music, or even do your shopping for you. Amazon’s offering is their “Echo” technology, with its assistant called Alexa. Georgia Mills gave it a test drive.

Georgia - Hello Alexa, how are you today?

Alexa - Great thank you. I hope you’re doing well too.

Georgia - Earlier we were wondering about the market value of AI. Let’s see if Alexa can help us.

Alexa; what is the market value of artificial intelligence?

Alexa - Sorry. I don’t know that one.

Georgia - Alexa?

Alexa - I’m good. I hope you’re doing well too.

Hitesh - I think that’s teaching us never to work with animals, children and robots.

Georgia - Let’s try something else. Alexa; tell us a joke.

Alexa - Did you hear about my dad’s restaurant on the Moon? The food is okay, but it has no atmosphere.

Georgia - Alexa; that was a very bad joke.

Alexa - Well, I have to confess, it did make me chuckle a bit. Ask me for another joke and maybe the next one will make you smile.

Georgia - Okay. Alexa; tell us another joke.

Alexa - What do you get if you spill tea on a rabbit? A hot cross bunny.

Georgia - Ahh.

Chris - This could give me a run for my money. People are often criticising me for the calibre of my humour on this show - now you know. It could be worse couldn’t it?

Chris - Henry?

Henry - Alexa; can you tell us a haiku?

Alexa - Order a pizza. Turn on lights - get me a car. I have my needs too.

Georgia - Hey.

Henry -  That’s not bad. Alexa; can you tell us a limerick please?

Alexa - I couldn’t find a contact matching Alexa; can you tell us a limerick please? To see your contacts go to the Alexa app.

Henry - Alexa: tell us a limerick.

Alexa - There once was a bloodthirsty shark who took a tough bet on a lark
He found no more meat
Just plants I will eat
Now he feeds on seaweed and bark.

Henry - Not bad, not bad.

Chris - We’re playing around with this and actually I’ve got an email here from Mark who, reacting to this, says:
I set up telephone banking with voice recognition. It kept failing and throwing me out. I had to revert back to normal logins. I think we’re a very long way off yet.

How is this sort of technology working though, Peter, with this gadget we have here sitting on the desk? For people who haven’t seen Alexa, this is a slightly overgrown sort of cardboard-tube-sized thing, isn’t it? The type of thing your kitchen roll would be on. What’s it doing and how is it doing what it’s doing?

Alexa - Sorry. I don’t know that one.

Chris - When it’s not saying sorry for things.

Peter - This is an example of the more unsupervised type of technology.

Alexa - Sorry. I can’t tell what you’re saying.

Peter - Some of the huge advances that have come recently are in the computational power of these methods, where you can learn from a very large amount of data - quite complex, interrelated structures - in a way that isn’t really handcrafted. And so the algorithms behind Alexa have listened to many, many millions and millions of hours of speech and, from that - with context and labels about what the speech is about - have learned these mappings between language and concepts, in a different way to hand-coding them in a standard algorithmic sense.

Chris - Henry?

Henry - I also just wanted to flag that it’s easy to look at so many aspects of AI these days, like voice assistants, and think: gosh, that’s so terrible. But we also shouldn't expect AI to improve in a completely linear fashion. A couple of years ago Google rolled out a new algorithm in its translation systems that was drastically better than the one that was in place before. So we may laugh at the kind of mistakes that Siri and Alexa make at the moment, but the next generation could be not just 1% or 5% better - it could have whole new capabilities we cannot imagine.

Robot

Will an AI robot take my job?
with Simon Beard, Centre for the Study of Existential Risk

Are we at risk of being put out of work by an artificially intelligent machine? Chris Smith was fairly confident that podcast presenters were safe, but Simon Beard from the Centre for the Study of Existential Risk had other ideas.

Simon - You say that it won’t take your job any time soon. Actually, these things are very hard to predict because of the non-linearity. Yes, you couldn’t plug Alexa into the decks and she’d run the programme but it’s not impossible that the next algorithm that comes out will be a whole step change better and you will get artificial radio presenters way before we know it.

It’s very hard to know what’s going to happen in the short term. It’s not so hard to know what’s going to happen in the long term just because we’ve got a lot of years of data here about how machines, and computers, and artificial intelligence have worked and you do get quite predictable long term patterns of improvement.

In the long term, the answer to this question on one level does seem like it’s yes. It’s hard to imagine a job which you could not get an algorithm to do at least as well as a person and more cheaply, or probably much better than a person could and more cheaply as well.

However, that’s only the first level. That’s the kind of rational economics view of employment, and we know that it doesn’t work like that. Actually, people spend a lot of money on things that aren’t about doing a job better, or doing it more cheaply. People’s habits are based on ethical values; they’re based on social interactions; they’re based on status, on appearance, on how things look to other people. There are very good reasons to think that human beings will have an edge in many, many of those things for a long time to come.

People care about people; we like to interact with people even if it’s not so good sometimes as interacting with a machine - it’s not so efficient. At the moment, we’re more used to it being frustrating to interact with machines but, as I say, in the long run that is likely to change. But we still like people; we still like to have handmade goods even if they’re not so well made - we pay more for them.

So, on that basis, I find it hard to see a situation in which people will be written out of the job market, in which there will be no jobs for people, but the reasons why people employ other people will be different. It won’t just be about getting the job done as cheaply as possible; we won’t be employing people in sweatshops or on the minimum wage - there will be no reason to do that. The reasons for employing people are going to be social; they’re going to be value based; they’re going to be status based.

Chris - Do you not think there’s going to be a problem then? Because, if what you’re saying is true, and there are very few jobs that would not be predated by artificial intelligence, not necessarily tomorrow but in the future, does that not add up to a recipe for a kind of mass panic and people being out of jobs, and a meltdown in the fabric of society which, for thousands of years, has been based around being paid for industry?

Peter - Well, we have luckily seen ourselves go through a variety of these kinds of step change, where the reason people get employed to do things, and the way they get employed to do things, changes dramatically. It changed during the industrial revolution, it changed during the first wave of automation and the ending of the industrial revolution in developed countries, and these are definitely stressful, difficult transitions. They do link to increases in violence and political dissatisfaction but, on the other hand, we are an amazingly adaptive species. They don’t produce social meltdown. Sometimes they produce very local revolution but, actually, more often they don’t.

Chris - Do we need some legislation in place though? Because are we not sleepwalking a bit into a situation where suddenly we might find things changing very, very quickly and we have not got any laws in place, or anything to make sure that companies do do right by their workers?

Peter - Absolutely we do. But the legislation we need needs to be forward looking, not past looking. One of the examples that I use is that if you consider the industrial revolution, it was the same set of technological changes that produced very egalitarian societies like Sweden and Japan, and also very unequal societies like the USA and China.

Government regulation has an awful lot to do with that. But a big part of it is that we can’t legislate on the basis of just keeping hold of what we’ve got at the moment. Not only because that will be ineffective, but because the wrong sort of legislation is likely to lead to bad outcomes. We have to look at what is coming and what we want to get out of that. So that’s going to mean keeping hold of worker protection but refocusing it. We’re not so worried about worker protection in terms of industrial accidents; we should now be much more worried about workers’ rights in terms of social and emotional strain, because that’s what people are going to be employed to do. It’s going to be providing these social and emotional services, so the kind of exploitation you’re going to see is going to be exploitation of people’s emotional resilience, and their social connectedness, and so on. That’s the sort of thing we need to protect against going forward.


Will we ever have truly smart machines?
with Peter Clarke, Resurgo Genetics, Henry Shevlin, Leverhulme Centre for the Future of Intelligence, Simon Beard, Centre for the Study of Existential Risk

Will artificial intelligence ever really outpace humanity? Chris Smith put this to Peter Clarke from Resurgo Genetics.

Peter - I think it depends what you mean by truly smart, but yes, I think it’s very likely. I think it’s almost inevitable that at some point in the future - in an as yet undetermined timeframe - we will have things that are smarter than us across pretty much everything. It’s managing that long term vision, that long term trajectory. You can already see it, for example, in terms of doctors’ diagnostics: you can have systems that are better at picking up certain types of heart problems from ECGs than trained doctors, and the same in terms of interpreting X-rays and a whole bunch of other medical things. Some of these new algorithms surpass human intelligence in some sense, but getting towards a general intelligence is a different matter.

Chris - What do you think Henry?

Henry - Your listener who commented on Twitter about the need to distinguish algorithms from intelligence is on to a really important point. I think intelligence is one of these deep, socially laden concepts that’s hard to define and carries a lot of baggage with it. We can draw upon different fields that have used the term “intelligence” in different ways, so I think one source of guidance here might be biology, where biologists have been interested for a very long time in quantifying different kinds of intelligence in animals. That’s not just a matter of how well an animal can do a certain thing: spiders are brilliant at building webs; dogs have an amazing sense of smell. When you’re looking at the biological context, they look for things like the ability to engage in novel behaviours, the ability to engage in flexible behaviours. So it’s that ability to cope with new circumstances and different kinds of tasks that seems to be a key part of intelligence from the way biologists look at it. I think, if we’re thinking about when we’re really going to have smart machines, that kind of flexibility is going to be part of the answer.

Georgia - Is there this idea that things might speed up incredibly once we start getting smarter and smarter machines? Are we even on this curve yet - when do we think it might happen?

Henry - Obviously with things like the internet we have amazing new research tools, it’s far easier to collaborate and learn, and you think we’ve got better tools now so surely technological progress should be speeding up. But you also face the fact that a lot of the low hanging fruit of technology, a lot of easy problems have already been solved. So, although we might have new advantages coming from smart machines, as we uncover more and more limits in technology we’re going to face correspondingly larger problems, so we might just keep pace rather than speed up.

Chris - Do you think there could be more nefarious ways these artificial intelligences could undermine us, Simon, and, with that in mind, what do we need to watch out for?

Simon - To quote Alexa: “I’m sorry, I don’t know that.” We need to be alert to what might happen but, at the moment, it’s very hard to predict. We are dealing with non-linear changes; we are dealing with technologies that are increasingly hard to explain, even to people with a technical background. And the goal isn’t to try and make predictions right now based on exactly what is going to happen next. It is to try and work out the possibility space - the best things and the worst things - and the steps we can take that are likely to move us towards the best and away from the worst; but also, crucially, to just keep on monitoring the situation, keep alive to it, and keep focused on how we can most quickly react to the changes that we’re seeing.

Will AI ever become conscious?
with Henry Shevlin, Leverhulme Centre for the Future of Intelligence

Georgia Mills put this to philosopher of mind Henry Shevlin...

Henry - Consciousness is, on the one hand, one of the greatest mysteries that we still face as a species, but it is something where we are getting greater understanding from neuroscience and cognitive science more broadly. Still though, I think a lot of scientists would really be happy if we stopped talking about consciousness, but I don’t think we can. Consciousness is deeply bound up with our ideas about morality and value. Just to give a simple example, if we’re thinking about the ethics of boiling a lobster alive, which is often how lobsters are cooked, I think the key question we’re going to ask in deciding whether that’s humane is: does the lobster feel pain? Does it have conscious experience of its pain? So, I think, we can’t ignore problems about consciousness. Broadly speaking, there’s no reason to think we couldn’t build a conscious AI, but it may not be very obvious when we’ve done so. There’s still no clear consensus about what consciousness is for and what kinds of functions in the human mind it’s associated with. So, even if we do build a conscious AI, we may not know it right away.

Georgia - Alexa; are you conscious?

Alexa - I know who I am - let’s put it that way.

Georgia - Alexa could have said anything there, so I guess this takes us to the point: how would we ever know whether or not something was conscious?

Henry - The gold standard for thinking about tests of consciousness is a thought experiment dating back to the amazing British polymath Alan Turing. In 1950 he posed this test, which is now known as the Turing test, where basically the idea is: you have a computer system in one room and a human in another, and you’re talking to them both via a terminal so you can’t see which one is which. And you’re at chance when you’re forced to guess which one is the computer and which one is the human - in other words, the computer can completely fool you into thinking it’s a human being. At that point, Turing says, we’ve got no business denying consciousness and intelligence to the computer system, because it’s passing itself off completely as a human being.

Now that, I think, is quite a popular test for consciousness and it’s been very influential, but it did face some push back in the 80s from a classic follow-up thought experiment by the American philosopher John Searle, who came up with the idea of what he called the Chinese Room. And basically, the point of this is to show that you can get stuff that looks like real intelligent understanding which actually is very simple. The way the Chinese Room works is: imagine you’ve got someone who doesn’t speak any Chinese at all, and they’re sitting in a little booth surrounded by index cards. There’s a slot in the booth and you can post a question in Chinese script in through this slot. What the person inside the booth does - they don’t understand Chinese - is look through the index cards, each of which tells them what to reply in Chinese.

So the question card gets posted in, they find the corresponding answer card, write out these symbols that they don’t understand - they’re just copying them - and they post it out. The idea is, if you had a big enough library of index cards so that all possible questions and answers were covered, this person could do a brilliant job of simulating that they understand Chinese whilst, in fact, all they’d be doing is following a simple lookup table of instructions.

Now extending this to machines, the idea is maybe we could have a computer that passed the Turing test by doing basically that. It’s got a huge, massive database of possible questions and the responses it can give, but it doesn’t understand the words. So that’s put some pressure on the Turing test as a measure of consciousness. Maybe a computer could trick us into thinking it’s conscious by actually doing something pretty dumb.
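
To make the lookup-table point concrete, here is a minimal Python sketch of a "Chinese Room" style responder: it gives plausible-looking answers purely by matching questions against a table, with no understanding anywhere. The questions and replies are invented for illustration.

```python
# Searle's Chinese Room as code: a pure lookup table "answers" questions
# without understanding anything. The questions and replies are invented.
lookup_table = {
    "what is your name?": "My name is Ming.",
    "how are you today?": "Very well, thank you for asking.",
    "do you understand chinese?": "Of course I understand Chinese.",
}

def room_reply(question: str) -> str:
    """Return the scripted reply for a question, just as the person in the
    booth copies out symbols they do not understand."""
    return lookup_table.get(question.lower(), "Could you rephrase that?")

print(room_reply("Do you understand Chinese?"))
# Prints "Of course I understand Chinese." - yet nothing here understands anything.
```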

Chris - Does it also matter, Henry, if a machine is conscious? Does that really matter?

Henry - I think it matters in two ways. The first is that we actually start to worry about our ethical treatment of machines if there is reason to think they’re conscious. This is something that’s explored in shows like Westworld or Humans, where you have these tools, basically, that people use for their basest instincts. Maybe if you’re just dealing with things that are basically human-shaped robotic vacuum cleaners, that doesn't make a difference. But if you’re dealing with things with real emotions or real cognitive capabilities, then you might start to think we need to regulate that behaviour.

The other reason you might think it matters is that if people start having relationships, friendships, or even romantic relationships with AIs, as we’ve seen in movies like Her, then it might matter to the people involved. They want to know not just that their AI can simulate love or emotions, but that it’s actually feeling and reciprocating these things.

Chris - Isn’t one of the attractions of things like AI that, actually, it’s not biased by many of what we regard as human flaws - innate biases, or prejudices, or emotions that get in the way of a decision where your heart rules your head? And if we end up with machines that become a bit conscious and they start doing things very well, like we do, they’ll end up like us, so they’ll think like us and be flawed like us, won’t they?

Henry - We can decide as we develop these machines what kind of architectures, what structures we want to put in place in terms of how their cognitive systems develop. So we might be able to make some conscious choices to avoid the biases along the way. I don’t think consciousness necessarily means the full emotional complexity of human beings, it could just be something as simple as feeling pain as a case of consciousness.

Nonetheless, I think there are reasons to worry about importing human biases over into machines, particularly given that a lot of AI progress in recent years has been driven by the mass data revolution: taking data out there on the internet and plugging it into computational systems. There’s a risk in doing that that things like stereotypes about gender or race get imported into the AI, just because it’s being exposed to the internet and all the different biases people have.

Georgia - Yes. In fact, wasn’t there an incident earlier in the year where Microsoft created a Twitter bot to learn from the rest of Twitter, and I think within 24 hours it had become racist?

Henry - Yeah, pretty unsurprising, and you find things like gender stereotypes about professions. Just as Google autocomplete faces worries because the kinds of things people most frequently search for often carry hints of gender bias or racial bias - if that’s the data we’re plugging into the machines, then it’s a bit much to expect them to be better than us.
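
As a toy illustration of that point, here is a short, hypothetical Python sketch: a classifier trained on invented, skewed "historical" hiring decisions simply reproduces the skew when asked about new candidates. The data and feature names are made up; the point is only that a model learns whatever biases are present in its training data.

```python
# Toy illustration of bias imported from data: a classifier trained on skewed
# "historical" decisions simply reproduces the skew. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical applicant: [years_of_experience, group (0 or 1)].
# In this invented history, group 1 applicants were hired less often
# at the same level of experience.
X = [[2, 0], [4, 0], [6, 0], [8, 0], [2, 1], [4, 1], [6, 1], [8, 1]]
y = [0, 1, 1, 1, 0, 0, 0, 1]  # past hiring decisions, skewed against group 1

model = LogisticRegression().fit(X, y)

# Two new candidates with identical experience but different group membership:
print(model.predict_proba([[6, 0]])[0][1])  # higher predicted "hire" probability
print(model.predict_proba([[6, 1]])[0][1])  # lower, despite identical experience
```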

Georgia - Could this be a real problem in society? If people like the police end up using AI to do their jobs, could we end up with something that we assume is neutral but actually isn’t, and could this cause problems?

Henry - Yeah. So this is a real motivation for the desire for transparency in AI. It’s particularly complicated these days because a lot of the algorithms that are being used are basically opaque - we don’t know how they’re solving the problems. We can sort of reverse engineer them, but because there’s a lot of self-learning going on, it’s not always immediately clear why a system is making the decisions it’s making. If a system singles out a person for a stop and search, or someone going through airport customs gets singled out, we may not simply be able to say, okay, what were the criteria used, if it’s been subject to this kind of elaborate self-learning, so we need to think about transparency too.

Georgia - Peter, as someone who works on this, what would you say to this question of consciousness?

Peter - I think it’s very interesting. Even having thought quite a lot about it, I still don’t really know what consciousness actually is. There’s that sort of sense of being here and now, and living in that moment and feeling - if that’s the definition of consciousness, then I think it probably extends across a vast amount of the animal world. And if we’re going to worry about consciousness in artificial intelligence, we probably need to spend more time worrying about consciousness within nature.

Chris - Simon; a quick thought from you?

Simon - The definition of consciousness is the key thing here. Consciousness is a subjective experience and we all feel we know what it is, but there have been various attempts to define consciousness in ways that allow us to make more progress in identifying it in animals or machines - for example, defining it as a property of informational systems via various axioms that conscious systems satisfy: they’re indivisible, they model the universe, they model themselves in relation to the universe, and so on.

I think that these kind of theories give us a lot of hope for understanding consciousness. My only problem is I think when a lot of people talk about consciousness they want it to be magic. They want it to be unexplainable and so if we come up with a system like this that does allow us to say, for instance, is Alexa conscious? Then people just won’t buy it because we feel that our consciousness is the last domain of our uniqueness as a species and we’re quite strongly set up to defend that. But, in defending it we tend to come up with phrases and questions that are just unanswerable, undefinable, and that is really getting in the way of clearly thinking through the ethics of AI.

Georgia - I think this is one we could debate till the cows come home. Maybe something to take to the pub after the show.


How worried should we be about AI?
with Simon Beard, Centre for the Study of Existential Risk, Peter Clarke, Resurgo Genetics

The panel discuss the near and future risks AI might pose to humanity. Chris Smith began by asking Amazon's Alexa if she had any nefarious plans...

Alexa - I have nothing to do with Skynet.

Chris - It does at least have a sense of humour. In the meantime, we asked Dave Palmer who you heard from earlier - he’s from Darktrace - what he thought about the dangers of AI.

Dave - There are absolutely no remotely imminent technologies or research that are going to create something that is societally damaging, or create a self-aware robot that could do us some harm. But there are plenty of things we should be worried about that evil people might do. Things like weaponised drones, or some of the potentially negative side effects of things like gene editing and DNA editing, are far more concerning than the rise of the Terminator. We’re not going to see that in my lifetime - no way.

There are many people, including a professor that I respect enormously, that would say AI is the next electricity. Steam power was the first industrial revolution, electricity the second industrial revolution, computing the third. AI is probably going to be the fourth and I would agree with that. What we’re seeing is the emergence of techniques that allow us to deal with really complex things that were previously out of the reach of what we could do with computers and programming. And it’s near impossible to extrapolate where that ends up but it’s an enormously exciting time too.

I’m sure that, just as we’ve got completely used to - and normalised - the fact that we’ve all got these little smartphones, supercomputers in our pockets, and they don’t seem remarkable any more, in 10 to 15 years we’ll feel the same way about AI: it will have changed how we interact with each other and how we interact with the world, but pretty much all for the better.

Georgia - And the question we’re thinking about at the moment: the risks of AI. Simon, you work at the Centre for the Study of Existential Risk, so I think this is one for you. How do we assess the potential risks of AI?

Simon - The first thing I want to say, just going back to the question you asked - how much should we worry about AI? - is that I think worry is a very unhelpful response to the risks of AI. It’s really hard to assess the risks of AI. What we do know for sure is that there are possible bad outcomes that could occur from developing AI. None of those are anything to do with the Terminator or other stories like that; those are just stories about people. They tell us a lot about ourselves but next to nothing about AI, and they don’t really appear on our radar. But there are lots of things that could go wrong.

Lots of those, particularly in the short to medium term, are indeed, as Dave Palmer was saying, about the interaction between people and AI. We get things wrong, and AI could give us the potential to get things wrong so much worse, just like nuclear weapons do. But worrying about that isn’t necessarily going to make it less likely, so that’s not the response we need to have. What we need to do is get enough clever people working on how to prevent bad things happening - to stop them before they happen. And that’s the key thing about a centre like mine: to solve these problems before they become problems, so that no-one has to worry about them.

Georgia - Can you give me an example of one of these potential problems and how we might reduce the risk, other than just running for the hills?

Simon - One very specific example that we’re concerned about is the use of AI and other algorithms in the modernisation of nuclear weapons. This is a great one for us because it’s the interaction between two existential risks: artificial intelligence on the one hand and nuclear weapons on the other.

We know that lots of states, the US in particular, are going through a process of modernising their nuclear weapons launch systems. They are currently very much technologically stuck in the 1970s. Algorithms have the potential to greatly increase the efficiency of those systems and make them much better according to the kind of “game theory” models that nuclear weapons systems are based around. But historically we’ve seen that there have been too many near misses where it’s come down to individual discretion, and people have made what may at the time have looked like the wrong choice, to avert a nuclear counter strike, and it later emerged that the technology was faulty.

That’s one we’re looking at right now where there is the potential, if this goes wrong, for AI to actually cause a lot of damage in the very short term. But to see it is to realise the problem and hopefully therefore to avert it. So don’t worry about it, but that’s just an example of what we need to avoid.

Georgia - Peter?

Peter - I think there are so many potential risks, I don’t think we can say don’t worry about it. I think even with respect to jobs - yes, we can all get jobs looking after each other but, at the end of the day, the thing that drove urbanisation and industrialisation was people working to make things. Once you take that away, you are taking a large proportion of human economic value in that standard system away. So I think there are dangers along that road, and I think there are also dangers around power structures. Putin came out and said that there was this AI arms race starting and whoever won it was going to rule the world, and we’re potentially entering into a new type of military arms race, and we don’t necessarily know how that’s going to come about. There are already weaponised artificial intelligence systems, with robots and drones and things like that. And you can imagine automatic targeting - all the technology exists to do that already.

Chris - I’ve got a tweet here - you’re talking about weaponising things. John Hancock says @nakedscientists: if the terminator is possible and it hasn’t been back for Trump, that means there’s worse coming.

Georgia - Well, speaking of Twitter, there was the idea that AI in the form of Twitter bots might have actually impacted the US election, which is quite a scary thought.

Simon - Yeah, I think that is very interesting. That’s a whole interesting area, but it does come down to people - people were driving those things, and they were using these systems to shift the democratic process.


49:44 - Is AI worth the risk?

Is AI worth it, or will it end our species?

Is AI worth the risk?
with Simon Beard, Centre for the Study of Existential Risk, Hitesh Sanganee, AstraZeneca, Henry Shevlin, Leverhulme Centre for the Future of Intelligence, Peter Clarke, Resurgo Genetics

Chris Smith brought the panel back together for the final question: is AI worth the risk? First up is Hitesh Sanganee of AstraZeneca...

Hitesh - Yeah. Because for us it’s all about patients and helping patients. I’m a big fan of the digitisation of healthcare in general, and wearables etc., and I’ve seen lots of companies outside using it to predict heart failure, for example. I think that’s going to help people who might have that disease. Also we’re seeing it used, for example, for taking data and combining data. As with the question around consciousness, it’s only as good as the data you give it, and I think combining data and putting it into a useful form for intelligence to be generated is very impactful. I think it’s going to help patients. So yes.

Chris - Henry?

Henry - I think artificial intelligence is the greatest opportunity in human history so far. It could easily go horribly wrong, or it could be the thing that really marks the moment of our species growing out of its infancy, particularly, I think, once we start looking at possibilities like super intelligence - building AI that’s actually smarter than us. Then it’s really hard to see past that point and imagine what an amazing future that could hold. It’s not, I think, farfetched to think that once we start to get the kind of incredible research tools provided by artificial intelligence at the level of humans or beyond, then we might start to conquer things like ageing and disease, and even start thinking about things like human immortality. This is also connected to another area where AI could potentially change our species beyond recognition, which is the idea that we might ourselves become AIs at some point - in the sense that we might upload our minds or augment ourselves - and that is a frightening but also very exciting possibility.

Chris - Simon; do you think it’s worth the risk?

Simon - I think you need to realise that we’re not in a risk/benefit mindset; this is a risk/risk mindset. At the moment humanity is facing huge risks from climate change, from pandemics, from other kinds of emerging technology, from nuclear weapons. And one of the things that AI offers us is, actually, a way out of many of those other risks - a way towards a civilisational resilience that we can count on. To my mind, that is the real prize that AI gives us: it could be our best bet to survive as a species, so we shouldn’t give up on it. Now, of course, with any risk/risk tradeoff things could turn out wrong. It could be that we would have survived the other existential risks that we face and AI proves our undoing. But, from the evidence that we have at the moment, it does not seem like developing AI is the riskier of the options available to us. So, on that basis, I’m happy to say yes it is.

Chris - And Peter, in the words of a famous shampoo brand - do you think it’s worth it?

Peter - I think it’s going to transform the world, and I think that, done in the right way, it will transform the world into something amazing and could be such a massive, positive force. But I also think that if handled badly it could go wrong in so many ways as well. So we’re at a really delicate point in history - how we move forward from this point, use this technology, and make sure that we make the most of its potential and guard against the worst of its potential on the other side, is really going to define how human history progresses from here.

Comments

Every time you said "Alexa" my Alexa stopped playing your podcast... hmm, better edit it and bleep the word "Alexa" so those of us who are NOT afraid of tech can listen to the episode in its entirety.
