Sander van der Linden
The Role of ‘Prebunking’ for Media Literacy
Robert Berkman’s Q&A With Sander van der Linden
In digging into the status and trends in media literacy, I took a somewhat different tack by interviewing Sander van der Linden, professor of social psychology in society at the University of Cambridge (U.K.) and director of its Social Decision-Making Lab. The Q&A took place in late March 2023 via Zoom. His research on the use of “prebunking” to inoculate people against disinformation has received wide popular attention, as has his just-published book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity (HarperCollins/W.W. Norton).
Below is an edited summary of the conversation.
Where did the term “prebunking” come from?
It wasn’t until 2016 that I, along with John Cook, a researcher in Australia, began using the word. We both thought it was a good term because it contrasts nicely with debunking. We had been doing a lot of work in parallel, and in 2017, our studies came out and replicated each other’s findings. Then, around 2018–2019, the term started taking off. We had previously been using the word “inoculation,” but journalists and fact-checkers used “prebunking” often, and it began getting more traction.
Is prebunking actually being utilized anywhere for teaching media literacy?
We’ve been doing it in classrooms since 2018, and it is being used in college education too. The News Literacy Project has used our Bad News game to teach media literacy in a variety of environments, including prisons and at-risk communities.
Prebunking is predicated on knowing ahead of time what disinformation to expose people to, so they are less likely to be fooled when they encounter it for real in the future. But since the methods, technologies, and strategies for creating disinformation change so quickly, how do you know what to prebunk for?
Well, we don’t focus just on prebunking against specific issues; we try to demonstrate general themes and strategies, whether it is what a conspiracy theory might “look like” or how people are often impersonated. These can serve as a kind of umbrella covering a range of potential future deceptions. You do need a lot of examples, though. We also found that exposing people to one form of disinformation (e.g., polarization techniques) actually provides some protection when they encounter others (e.g., conspiratorial messaging).
The other point is that many of the disinformation techniques we see today have been used for a very long time. In fact, some of the anti-vax scare techniques of today are the same ones that have been used as far back as the 1800s! There are entirely predictable tropes being repeated.
Is it tricky, though, to figure out what is currently disinformation vs. just an outlier opinion that eventually becomes accepted? In one of your scholarly articles from a few years ago examining conspiracy theories, you cited as an example people who believed that COVID-19 emerged from the virology lab in Wuhan, China. While that theory was initially widely dismissed, some U.S. government agencies have more recently assessed it as the most probable origin, though with low confidence.
The problem here was that the real possibility that the virus came from a lab became wrapped up with the conspiratorial belief that it was bioengineered by China as a weapon. [Note: Taking something potentially true, or with a grain of truth, and then adding false information on top of it is a common disinformation technique that can be quite effective. —RB]
There is a larger issue here too. The wider communication from the scientific community and the WHO [World Health Organization] has been problematic: they did not inform people about the range of possibilities and the natural accidents that can happen. A side effect is that people can become overly skeptical of official accounts. It raises the interesting question of what the right amount of skepticism is. In our instruction and prebunking activities, with feedback, we can actually turn these levels of skepticism up or down. So, if people are overly skeptical (not everything is a conspiracy), we can help them turn that down.
As for figuring out the right level of skepticism, which is a good question at the moment, I’d say the optimum attitude and orientation is what we’d call “actively open-minded thinking”: being able to hold multiple hypotheses, stay open and flexible, and leave room for uncertainty.
So much of your work, and that of others in this field, naturally focuses on applying rational thinking, logic, probability, critical thinking, and other cerebral approaches. But so often the way people react to information is driven by emotion, not rational thinking. How, if at all, are you able to deal with this?
One way is to look at the impact of social contagion, where people react to how others respond to a piece of information. Ideally, we would want to take inoculation theory to its logical conclusion and try to reach the equivalent of herd immunity. Like biological viruses, psychological beliefs can be passed from person to person as a form of social contagion, and a group can develop immunity so that the beliefs do not get passed along as much. We also use humor and other, less cerebral modes to help bridge divides.
Could malevolent actors also create and spread prebunks designed to predispose people to doubt true information that is circulating online?
Yes, bad actors can also use this for nefarious purposes. The problem is particularly acute when the population is in a closed system, such as in China or Russia, where people cannot easily access other information. But this is not new either. Cults, for example, have been doing this for a long time, telling people to distrust others.
And with deepfakes getting more sophisticated, is it getting harder to spot fake images and videos?
They are getting so much more sophisticated and realistic. We can no longer rely on some of the cues we once used to spot fakes, such as eyes or hair looking odd.
So instead, we want to teach people to be more aware of the context in which they are viewing the content. What are the surrounding cues? If the images and videos are fake, those cues are typically misleading as well, so we can inoculate people by showing what kinds of contextual cues are tip-offs to disinformation.
The latest concern over misinformation centers on generative AI systems like ChatGPT. It seems that prebunking against this powerful form of human-like communication would require a different set of instructions and approaches. What might you advise here?
We have been working on papers about AI and misinformation for years. One finding is that these systems are good at creating misinformation headlines that sound very real. But we have also found that AI can work well for our own training purposes too, for example, to create psychological scales for testing people and even to mass-produce prebunks that can be used in our games and instruction.
U.S. Rep. Jim Jordan of Ohio and certain other Republican members of Congress have sent letters to the University of Washington and other schools and research centers that study disinformation, asking for documents. They allege that this work on disinformation is an effort to support a “pro-censorship Biden regime” and to oppose conservative principles. What is your reaction?
This is super-disturbing. What these people are doing is trying to use the principles of free speech to legitimize the spread of misinformation. The issue is that although misinformation can and does come from all parts of the political spectrum, there is an asymmetry: More disinformation has been coming from superspreaders on the extreme right, so they will, in fact, be called out more often. But then this gets spun into a “You’re targeting us” message.
One difference between the U.K. and the U.S. is that in the U.K., conservatives and liberals are both likely to oppose extreme misinformation and will even suspend or sanction those who spread it. But in the U.S., a lot of prominent people in Congress are endorsing misinformation or staying silent, and then there are no social sanctions.