AI Safety Summit raises hopes and fears of new tech

But little in the way of concrete commitments for its regulation.
03 November 2023

Interview with Michael Wooldridge, University of Oxford


World leaders and top tech experts have been attending a landmark summit on artificial intelligence at Bletchley Park. The site is synonymous with code-breaking because Alan Turing and his colleagues were based there during the Second World War.

I’ve been speaking to Mike Wooldridge, professor of computer science at the University of Oxford, and a leading authority on artificial intelligence…

Michael - AI has obviously been a thing now for the last decade. We've seen lots of announcements in the press, and everybody's got very excited about it. But over the last year in particular we've seen the mass market adoption of general purpose AI tools for the first time, and of course I'm referring here to ChatGPT. This has brought the AI safety debate onto the international front burner. In particular, there is concern that all of a sudden the rate of progress is such that we need to take AI concerns very, very seriously.

Chris - Who's there?

Michael - We are seeing some world leaders: we're seeing the head of the EU there, Kamala Harris is there. We don't of course have access to the exact list of who was invited, but the big question on everybody's lips was whether China was going to be there. And indeed China has been there; I've just seen pictures this morning of the Chinese delegation speaking. We've got representatives of big tech, most famously Elon Musk, and leaders of big AI companies. So I believe Sam Altman from OpenAI is going to be there, and then there are a number of academic thinkers around AI safety, along with a number of government representatives and so on. So that's the mix. It's around 120 or so people.

Chris - Rishi Sunak has pushed quite hard on the Terminator style scenarios and the big threat, the existential risk. But there's quite a strong piece which has just come out in the Daily Telegraph by the lady who leads Big Brother Watch, and she says the elephant in the room here is actually a progressive erosion of freedoms. She points to the fact that we've got police body cams using this sort of technology, but in fact it has misidentified twice as many people as it has caught, and it's corrosive for that reason. That's what she's worried about, and it doesn't seem to be being mentioned.

Michael - This is raising a point that many AI researchers have raised over the last year. Whenever you talk about AI and the future of AI, the conversation always tends to the dystopian very quickly, and the most dystopian AI scenario is that somehow AI might eliminate the human race. But at the same time, there is a long list of concerns around the harms that AI can do, for example around privacy and human rights, and around misinformation on a potentially massive scale as we head into elections. About these there is really no debate: the community very largely agrees that they are real and they are immediate. Whereas there is a much wider range of views about the existential scenarios. And so I think what this article is doing is pointing to exactly this point. Whenever we start talking about the Terminator scenario, it just sucks all the oxygen out of the room. That's the only thing that gets heard, and there's never space to discuss these much more immediate and very real concerns.

Chris - They are certainly figuring in the considerations of politicians though, aren't they? Because the EU has got some policies in place on this already, haven't they?

Michael - The EU has had some draft AI regulation now for a couple of years, and that's been very widely discussed. There are roughly three models of AI regulation in circulation in the world at the moment. There's the US model, which is very innovation led: it aims to protect innovation and the tech industry, and it's fairly lightweight. Then there's the EU regulation, which is very human rights driven, very much out of the model of GDPR, the data protection regulation that's been with us for a few years; it's focused on protecting human rights and protecting individuals by identifying high-risk scenarios for applications of AI and trying to regulate those. Then there is the Chinese model, which is basically a state-led model, very much concerned with protecting the institutions of the Chinese state and the current operation of Chinese society. So those are the three models that are in the air. The UK is in quite an interesting position in that we actually have some freedom, not complete freedom, but some freedom to negotiate our own space there. It's worth bearing in mind, though, that if we want to do business around AI with the EU, then we're not going to have an option about whether we buy into that regulation. We absolutely will have to buy into it.

Chris - Will this amount to anything? Because it's all very well to say we're going to have some regulations and the UK wants to lead the way in doing this, but there are so many jurisdictions around the world that don't buy into any of this sort of thing, and in fact trade in fake news: Russia, North Korea. Would they not just potentially continue to be equally or more malignant regardless of these regulations, even if you put them in place?

Michael - Well, this is one of the many dilemmas. One of the difficulties with AI regulation is that there are just so many voices in the room flatly contradicting one another, and you somehow have to navigate your way through those contradictory voices and contradictory arguments. What do you do if we all play nicely but other countries around the world don't? And let's be completely blunt: obviously nations who are not our allies will be considering how to weaponise AI and use it against us. Misinformation and disinformation is just one of many different ways; cybersecurity attacks are another, one which is clearly very prominent on the government's agenda. So I do think we have to be realistic about what's achievable.

Chris - We've dwelled very heavily on the negatives and the risks and so on, and it's important that people are cognisant of those. But what are the potential benefits? Where do you see the big wins coming in the next five years from all this?

Michael - I've always been tremendously excited particularly around the applications of AI in healthcare. I do not see AI as replacing physicians anytime soon; the point is that AI is just going to be another incredibly powerful tool that doctors can use to help them in their jobs. It has enormous scope, for example through wearable technology, to give us advance warning of the onset of heart disease and to give us feedback on how stressed we are in our lives. And the long-term benefits of that I think are going to be absolutely enormous. Many people say, well, I don't want an AI doctor, I'd much rather have a human doctor. And that's a rather first world concern: in some parts of the world, it may well be that it's AI healthcare or nothing at all. That's the kind of thing that gets AI researchers out of bed in the morning. That's actually what we're excited about. That's why we're doing what we do, because we can see those benefits coming and they are going to be tremendous.
