Big tech goes all in on AI

From Microsoft to Meta to Google, everyone wants a piece of the pie.
25 July 2023

Interview with Josh Cowls, University of Oxford

A silhouetted businessman running to a meeting...

Big tech companies all want in on the action, whether it's the GPT-4 technology from Microsoft-backed OpenAI, Google's LaMDA, or Meta's new Llama 2, which was announced this week.

Josh Cowls, from the Oxford Internet Institute, spoke with James Tytko about how the industry is embracing AI…

Josh - I think looking at the business model of these companies is really interesting in terms of how they actually try to market and sell AI. Microsoft has been pretty upfront in some of its early iterations that, to use their AI products, you'll have to pay. They've recently announced that it will cost almost double what it currently costs to use their Office 365 suite if you want the AI-powered tools on top as well, taking it from about $35 a month to $65 in the US. That's one way, and perhaps the most obvious way, of actually making money off these systems. Meta has come along with an alternative approach, at least in the first instance, and decided to essentially give this away for free. That reveals some interesting differences in how I think these companies see AI as a whole. What companies like Meta have done, particularly in social media, is really go for market share: get as many individuals and businesses as possible using these technologies and then figure out how to monetise that after the fact. Whichever of those approaches wins out will, I think, help to determine the future of these technologies.

James - The risk for these companies gobbling up market share and working out the profit later is the limitations of these large language models: because of the vast amounts of data they're trained on, things can slip into their output which are unhelpful. And as we start to view these tools not just as impressive toys that make us think, 'wow, what lifelike output,' but try to apply them to professional settings, there's not going to be much take-up in those environments unless we can trust a higher proportion of the output than we currently can.

Josh - I think we need to get really clearly focused on the domain in which we're deploying these AI systems. As a bit of a parallel, when social media first came along, we could connect with long-lost friends and family and shoot the breeze, essentially. Few people could have foreseen what those networks would be used for just 5 or 10 years later, having a role in things like the Arab Spring, say, and of course later playing a darker role in elections in the US. That range of possibilities, that scope, is incredibly broad, perhaps even broader for AI than it is for social media. So one of the interesting challenges for policymakers and for people developing this technology is how much do you try to show what AI can do in a really specific, narrow domain, cracking really difficult problems in health or astrophysics or whatever else, and how much do you say, let's just give or sell this product package to people and see what they want to do with it. Both of those approaches come with clear benefits, but also clear risks.

James - Are we certain that tools like ChatGPT are going to influence the way the average person works in the future? I ask because I read a study this week from MIT which found that, for workers using ChatGPT for professional writing tasks, productivity gains were most marked among the least skilled. It got me thinking that there's a limit to how helpful these very large language models are going to be, and to how willing businesses will be to spend money on them moving forward.

Josh - I think in all of our day-to-day lives we have a mix of tasks: some which require a lot of thinking or working on a particular problem, and then a lot of things where we feel, well, maybe a sophisticated chatbot could take care of that email or that conversation for us. So I don't think this is going to render everyone except a small set of really highly skilled people unemployed or unemployable, but clearly, in a really wide range of capacities, it's going to have an influence. If Microsoft is licensing this technology to put in its Office products, those have billions of users, so clearly many working lives are going to be touched in some way. How much that has an impact on productivity is a really difficult and interesting question to consider, particularly when we think about what we are really optimising for. Are we optimising for productivity for productivity's sake? Are we trying to free ourselves up for either a life of leisure or a life of interesting cognitive tasks which machines can't do particularly well yet? On the most optimistic end, it might allow us to think about what really makes us distinctively human, and ideally to usher these technologies towards solving the things which are less interesting and less distinctively human, whilst we work out what it is we really want to do.

James - The reason I bring that up is not to be a party pooper: everyone working in AI seems to agree that this technology really is amazing and the advances have been so impressive. But is there perhaps a sense that we need to rethink how we deploy the technology if we want to make the best use of it, for example by training language models on more specific sets of data so they can become more bespoke tools, as it were?

Josh - I do agree. Once we have an idea of exactly which problems we're trying to solve, or help to solve, with large language models, that will allow us, right at the top of the AI development stream, to figure out how we gather the data and train models on datasets, all the way down to how they get applied in different contexts. If we start at the top of that stream with a clearer idea of what exactly we want to do, this set of technologies can really help optimise for particular outcomes. The nice thing about that is that it also allows a larger set of people in society to have a say, because when we're agreed on the objective, helping to tackle cancer, for example, or looking at things like protein folding, you can include ordinary people in that conversation and then design systems that reflect the intentions of those people. Now, it may of course not work, it may be counterproductive, it could even be dangerous, even with those clearer guardrails, but at least you have a clearer register of the purposes and interests being served by the development of a particular AI model before it gets deployed. And then you can assess it against that benchmark, in stark contrast to what we've seen with ChatGPT, which is very general purpose, but, once you plug away at it as a domain expert in the law or whatever else, you can start to see its flaws relatively quickly.
