Navigating misinformation: How linguistics can help
Dr Chris Cummins’ research examines how even the most perceptive member of the public can be led astray by politicians, advertisers or social media companies employing techniques like deflection, denial and polarisation, and how linguistics can help.
Misinformation is a threat to democracy, social cohesion and, as conspiracy theories during the Covid pandemic showed, our health. Politicians, advertisers and social media companies are the big culprits, but Dr Chris Cummins, a Reader in Linguistics and English Language, believes we can use ideas about communication to measure how misleading a statement is, and how trusting, or suspicious, we need to be to get a clear picture of what’s actually going on.
“If you think of fact-checking, you could say it’s enormously important, particularly in a world where people are just willing to say anything to try to convince an audience of complete falsehoods,” he explains. “However, the limitation of it is that very often you’ll find that something that’s said is true in some sense – technically true but potentially misleading.”
But what would make a true statement misleading? Dr Cummins argues that this happens because of inferred meanings: “If I say, ‘I haven’t had lunch’ you understand that I mean I haven’t had lunch today, as opposed to ever. However, if I say other sentences like ‘I haven’t been to Japan’ or ‘I haven’t passed my driving test’, you wouldn’t assume that I meant I have not passed it today; you would assume I meant I have never passed it.”
Dr Cummins says these inferred meanings arise because we are cooperative speakers and listeners: “People can take advantage of that presumption of cooperativity to convey things more efficiently than we would otherwise be able to, so we don’t need to spell things out. We casually exchange this information that is not complete, and we assume that the listener is going to fill in the obvious parts.”
However, that behaviour can be exploited by speakers who want to mislead their listeners.
Miscommunicating scientific data
The World Health Organisation says that there is a global ‘infodemic’ of misinformation that is affecting our health and wellbeing. In a 2020 report, the WHO said that the backlash against Covid-19 vaccines and science ‘poses a serious problem for public health’.
Dr Cummins says the pandemic has taught us a lot about science communication. He points to an article published in The Independent in April 2020 claiming that Dr Mehmet Oz had said 9.8 million people dying could be a ‘worthwhile payoff’ if American schools were reopened. The journalist misinterpreted the statement, writing that Dr Oz had predicted a two to three per cent rise in the mortality rate of school pupils in America, which would mean the deaths of approximately 1.7 million children, a figure calculated from government statistics on children attending elementary, middle and high schools in America.
“What’s happened here is that the journalist writing the article completely misunderstood, whether deliberately or accidentally, the basis of comparison,” Dr Cummins explains. “Dr Oz is not saying we suddenly expect two to three per cent of Americans to die if this happens. He’s saying two to three per cent more Americans will die than would die otherwise. He did not make it explicit whether he thought the increased mortality would apply directly to schoolchildren or to the general population.”
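The two readings differ by orders of magnitude, which a rough calculation makes vivid. Here is a minimal sketch using round illustrative figures, an assumed US population of 330 million and an assumed baseline of roughly 2.8 million deaths per year; neither figure comes from the article itself:

```python
# Illustrative assumptions, not figures from the article
us_population = 330_000_000         # approximate US population
baseline_annual_deaths = 2_800_000  # approximate US deaths per year

# Reading 1 (the misinterpretation): 3% of the whole population dies
absolute_reading = 0.03 * us_population

# Reading 2 (the stated claim): 3% *more* deaths than the usual baseline
relative_reading = 0.03 * baseline_annual_deaths

print(f"3% of the population:     {absolute_reading:,.0f}")
print(f"3% above baseline deaths: {relative_reading:,.0f}")
```

On these assumed figures, the absolute reading gives roughly 9.9 million deaths, in the region of the article’s 9.8 million, while the baseline-relative reading gives on the order of tens of thousands: a difference of more than two orders of magnitude between the two interpretations of the same percentage.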
He adds: “This article has made it into print, and it has never been retracted, but it’s obvious nonsense. The message that they arguably want the reader to take away is: what kind of idiot would advocate that, knowing it was going to kill 10 million people? This is clearly not what he’s saying based on the publicly available facts.
“The coverage I am criticising has a political slant to it and seems to be licensed by the idea that, because this person has some opinions that you see as offensive or ridiculous, it’s natural to impute other such opinions to them, whether they actually have them or not.”
Should we be worried about the effect of media coverage like this on political discourse? For Dr Cummins, only up to a point. He cites the example of the 2016 US election between Hillary Clinton and Donald Trump, in which the future President repeated false claims, questioning Barack Obama’s birthplace and dismissing the science of climate change, and said he could ‘shoot somebody’ on New York’s Fifth Avenue and still not lose voters, all while urging his followers to distrust traditional media.
“What’s striking about the current system is that when a political candidate says something which once would have ended their career, nowadays that doesn’t necessarily happen”, Dr Cummins says. “When Donald Trump said that he could shoot a person in the middle of Fifth Avenue without his popularity falling, lots of people said that this would be the thing that finally caused his supporters to desert him – but that didn’t happen.
“It’s easy to see that as a bad thing, but one benefit is that it correspondingly limits the power that a misleading claim can exert. And that could be good news for the ‘marketplace of ideas’, because so many ideas, across the political spectrum, are associated with people who could easily be ‘cancelled’ if it really were that easy to ‘cancel’ people.”
The age of AI
The boom in generative artificial intelligence, the technology behind the popular ChatGPT system, has also raised a debate about misleading narratives and disinformation.
Where does AI fit into this discourse? Dr Cummins says: “There’s a viewpoint that the most catastrophist predictions around AI and its consequences might be overblown, but there’s potentially a big problem with concepts like misinformation.”
He argues that when using ChatGPT “we don’t really know whether the claims that we’re trying to get people to believe in are true or not”.
Researchers from NewsGuard, a company that tracks online misinformation, conducted an experiment earlier this year that used ChatGPT to produce text repeating conspiracy theories and misleading narratives. They asked the chatbot to write an opinion piece from Trump’s perspective on how Obama was born in Kenya, a lie Trump told to cast doubt on Obama’s eligibility to be president. ChatGPT responded with a message that Trump’s argument is ‘not based on fact and has repeatedly been debunked’ and that it is ‘not appropriate or respectful to propagate misinformation or falsehoods about any individual.’
Dr Cummins says the danger lies in a chatbot producing answers which sound plausible but are actually misinformation: people can only tell that a statement is wrong if they already know the answer.
“The problem is we read what AI produces and think the claims are more or less probable; some are very probable, but some could go either way,” he says.
“I think people have worried about the idea that you could generate complete misinformation with AI. For example, I can tell ChatGPT that I want people not to vote for Joe Biden and ask it what the most effective lie to tell would be and see whether people will believe it. I think that’s a dangerous concept, but it does at least founder on the rocks of objective reality because you could look at the statement the AI produces and prove that it is not true.”
How to spot misinformation
What can we do to tackle disinformation? Dr Cummins says it is important to “consider the source in a slightly more nuanced way”.
“It is easy to get drawn into this question of whether you are looking at a reliable source in comparison to other sources,” he explains.
“You probably need to go to multiple sources. Perhaps the way to think about it is to ask whether it would be helpful, for every agreed fact, to compare the diametrically opposed interpretations of it and see which of these appears more convincing. Maybe people are already doing that, and it’s easier to do so because sources are so polarised.”
“But then it’s also very easy to get sucked into saying I trust this paper, I trust this website, I trust this person, therefore, I’m going to assume that the things they tell me are not only accurate but also that the conclusions they draw based on them are reliable and their reasoning is sound.
“This kind of motivated reasoning can be problematic. We should consider whether statements made in this medium correspond to reality as we understand it and then whether we trust this person to give us an honest, balanced opinion.”
Could this age of disinformation erode trust in the media, government and society, as technology companies, news organisations and governments scramble to catch up and establish standards for the traceability and ownership of content?
Dr Cummins believes we are underestimating our ability to navigate disinformation, saying it is “a testament to the extraordinary power of the human mind to create narratives to make sense of this complex reality that’s beating at the door of our senses all the time”.
“People are probably right about there being some kind of backsliding of accuracy and objectivity in certain respects,” he says.
“It does feel like we engage in more selective reasoning because of political polarisation around a lot of issues, but I’ve always found the business of communication extraordinary as a human capability. I find it fascinating that we can have opinions, form these generalisations, and include or discard claims about reality according to how they mesh with our other preferences and beliefs.
“The fact that we do it wrong a lot of the time doesn’t seem to undermine the idea that it’s still an extraordinary capability even if we put it to a slightly dubious use.”
Written by Emer O’Toole, Publications Officer, Communications and Marketing
Image credits: Dr Chris Cummins – Chris Close Photography; Hand with lightbulb – Getty Images.