Making social media sources more trustworthy

Adding an "I trust this" button alongside "likes" might make fake social media content easier to spot...
06 June 2023

Interview with Laura Globig, UCL

According to various sources, about 5 billion of the world's 8-plus billion population regularly use some form of social media. Facebook alone has about 3 billion active accounts. Consequently, the societal impact of these media - and specifically the messages and information that people convey through them - is huge. During the Covid-19 pandemic, claims that vaccines contained microchips so that Bill Gates could track us were everywhere. Someone even sent me a circuit diagram for the chips in the vaccine I had - although, when I looked closely, it was actually the circuitry for a guitar effects pedal! But did someone share - and "like" - that information because they too were amused by it, or because they genuinely believed what the post purported to say? That, UCL's Laura Globig argues, is the problem with many social media platforms: engineered to engage and promote information exchange, they don't reward users for the veracity of what they share. So, as she explains to Chris Smith, she's come up with a way to recognise and reward users for the trustworthiness of what they share...

Laura - The spread of misinformation online has skyrocketed, and this has had quite drastic consequences, such as increasing polarisation and resistance to climate action and vaccines. And so far, existing measures to halt the spread of misinformation online, such as flagging or reporting posts, have had only limited impact. So we wanted to know if we could help address this issue of misinformation online.

Chris - Before we come to that, you said that there's been this association between public behaviour and social media. Do we know that's causal? That, because of people putting inflammatory things on Twitter and Facebook and all the other places, this is translating into people not having vaccines and so on?

Laura - The causal relationship is really difficult to establish. But what we know about the spread of misinformation is that, because of the advent of the internet and social media platforms, it's become incredibly easy to reach people and to share information. And that information doesn't have to be reliable, so false information can be shared just as easily as true information. And in times of crisis, people turn to the internet as a source of information and are then more likely to trust it. And that might then lead to vaccine hesitancy, for example.

Chris - And is it true what people hear about these various platforms that they are craftily coded so that they're almost addictive and they appeal to people in a very specific way and encourage them to - a bit like a mainline drug - take more of them?

Laura - It is true that these platforms tend to rely on metrics of engagement - so reward and punishment dynamics - to motivate people to post information online and also to react to posts. And that is very similar to any sort of reward mechanism you would see in the real world. Your brain behaves much as it does when you're given money: if you receive likes on social media, you process them in a similar manner.

Chris - And are there any particular groups who are more susceptible to this, or is everyone potentially a sucker for it?

Laura - Everyone is susceptible to a certain extent. It is true that people who tend not to question information as much - those with less critical thinking ability - tend to fall victim more.

Chris - So what have you done here and would it work for those people?

Laura - People are actually quite good at distinguishing true from false information. So it's not a lack of ability. In fact, existing research shows that lay people are just as good as professional fact checkers at telling true from false information. Instead, one reason for the spread of misinformation online is the lack of incentives on social media platforms to share true information and avoid sharing false information. People tend to choose actions that lead to rewards or positive feedback and avoid those that lead to punishment. And on social media platforms, these rewards and punishments come in the form of likes and dislikes. But the issue with these likes and dislikes is that they aren't representative of the accuracy of the information you're sharing. For example, you could like an obviously false post because you think it's amusing. So we now propose that the key to reducing the spread of misinformation online is not to tell people what's true and what's false, but instead to directly incentivise them to share more true information relative to false information. And so we need an incentive structure where these social rewards and punishments are directly contingent on the accuracy of the information.

Chris - So what you are saying is, instead of there being thumbs up, thumbs down, like, dislike, I could have "I trust this", "I don't trust this"?

Laura - Exactly. So in this study we do this by slightly altering the engagement options offered to users. We're not taking away the like and dislike buttons; instead we've added an option to react to posts using, just as you said, trust and distrust buttons.

Chris - You can envisage why people would be incentivised to use that, because it's an additional badge of honour for them, saying, oh, I'm sharing this, but that's a bit iffy. And then if it turns out that it is a bit iffy, they can say, well, I told you so! So it does kind of play into the same reward system, but it's for the benefit of clearer communication.

Laura - Exactly. Here there's no ambiguity in the use of the trust and distrust buttons. Trust, by definition, is related to reliability: it's a firm belief in the truth and reliability of something. And so what we found in this study is that people would use these buttons to actually differentiate between true and false posts.

Chris - So what data have you got that suggests this will actually work?

Laura - What we did was we created simulated social media platforms, and on these platforms users saw both true and false information. We then added an option to react to posts using a trust and a distrust button, in addition to the usual like and dislike buttons. And what we found was that people used these buttons to differentiate true from false information more than they used the like and dislike buttons. As a result, in order to receive more trust rewards and fewer distrust punishments, other participants were then also more likely to share true information relative to false information. So what we saw was a large reduction in the amount of misinformation being spread.

Chris - Does the person effectively score points for trusting something that turns out to be true? Is that how it feeds back and endorses that so that person's building reputation? Is that one of the incentives?

Laura - So the incentive is receiving the trust itself. We ran three experiments, and in the first experiment we gave participants the option to react to posts using trust, distrust, like and dislike buttons. The incentive here is just to engage with the post itself, and what we found was that people used the trust and distrust buttons. Then in the second and third experiments we looked at how receiving trust and distrust feedback from other participants would impact sharing. So there, people are motivated to share true posts so that they receive a large number of trusts and very few distrusts.

Chris - And of course your timing is perfect because, in the UK at least, the Online Safety Bill is making its way through the government process. This is the idea of trying to make the internet a safer place, where misinformation propagates more slowly. So really the whole world, the business world, should be receptive to ways that they can improve not just the engagement, but the quality of the engagement.

Laura - Exactly. That's also our hope. And what we're doing here is we are not reliant on any fact checkers, or anyone definitively determining whether something is true or not; instead we're putting the onus on the user, which actually increases user autonomy, which again would be very appealing to the platforms and hopefully to social media users themselves.
