The Evolving Leader

‘Fake News and Why We Fall For It’ with Sander van der Linden

Sander van der Linden Season 7 Episode 16

During this episode of The Evolving Leader podcast, co-hosts Jean Gomes and Scott Allender discuss the pervasive issue of misinformation and its implications for society with Professor Sander van der Linden. Sander is Professor of Social Psychology in Society in the Department of Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab, and has also held posts at Princeton and Yale University. His research interests centre on the psychology of human judgement and decision-making. In particular, he is interested in the social influence and persuasion process, how people are influenced by (mis)information, and how they can build resistance to persuasion through psychological inoculation. 

Sander is also interested in the psychology of conspiracy theories, extremism and radicalisation, (digital) media effects, social networks, polarisation, the emergence of social norms, reasoning about evidence, and the public understanding of risk and uncertainty. He has published around 175 papers and is ranked among the top 1% of all social scientists worldwide.

In 2023, Sander’s book ‘FOOLPROOF: Why Misinformation Infects Our Minds and How to Build Immunity’ was published.


Referenced during this episode:

https://foolproofbook.com

https://inoculation.science/

 

Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)

The Enneagram of Emotional Intelligence (S Allender, 2023)

 

 

Social:

Instagram           @evolvingleader

LinkedIn             The Evolving Leader Podcast

Twitter               @Evolving_Leader

Bluesky            @evolvingleader.bsky.social

YouTube           @evolvingleader

 

The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.


Jean Gomes:

What's the impact of misinformation on your future leadership agenda? Perhaps it's more than just the responsibility of your finance and IT departments. One in five people got scammed last year in the UK. That's 9 million people. The annual worldwide cost of online consumer fraud is put at somewhere around ten billion, but it's hard to know exactly, because many of us don't report it, out of shame or lack of time. But we're not just being fooled out of our money. We're also being robbed of our ability to think for ourselves. Upwards of 90% of people have fallen for fake news in the past year. The rising flood of mis- and disinformation affects our political beliefs, our relationships with our friends, neighbours and communities, our mental health and our sense of control. It's driving polarisation that isn't confined to online spaces but is creating rising tensions and politics in our workplaces. As more of what used to be personal becomes political, and then a matter of what affects us at work, leaders must continue to track the shifting agenda of these new responsibilities. In this show, we talk to one of the world's leading researchers on misinformation, how it spreads in online networks, and what behavioural interventions we can design to counter it at scale. Tune in for a crucial conversation on The Evolving Leader.

Scott Allender:

Hey, folks, welcome to The Evolving Leader, a show born from the belief that we need deeper, more accountable, more expansive, more comprehensive… what else, Jean? What else do we need?

Jean Gomes:

More eloquent?

Scott Allender:

More eloquent, most definitely more eloquent, and more human leadership to confront the world's biggest challenges. I'm Scott Allender.

Jean Gomes:

and I'm Jean Gomes.

Scott Allender:

How are you feeling today, Mr. Gomes?

Jean Gomes:

I am feeling two things. I'm feeling glad that I'm at the back end of flu; I'm coming out of that and feeling more like myself. And I'm also feeling a real urge to get some thinking from our guest today, because I think we all need this, given what's been happening in recent months and years. So I'm very excited to be in the space to think differently about how I think. How are you feeling?

Scott Allender:

I'm feeling a mix of things. I'm feeling grateful for my toasty house right now, as it snows outside and my kids are home for their second snow day, after only being back from holidays for a week. But I'm also feeling a lot of sadness and deep concern for many family members, friends and neighbours from my old stomping grounds in the Los Angeles area. Watching all of that unfold has been horrific. So I'm bringing a mix of emotions with me today, and quite a bit of excitement as well, because we've been looking forward to this conversation for quite some time. Today, we're joined by Sander van der Linden. Sander is Professor of Social Psychology in Society in the Department of Psychology at the University of Cambridge, and Director of the Cambridge Social Decision-Making Lab. Before coming to Cambridge, he held posts at Princeton and Yale University. His research interests centre on the psychology of human judgement and decision-making. In particular, he is interested in the social influence and persuasion process, how people are influenced by misinformation, and how they gain resistance to persuasion through psychological inoculation. He is also interested in the psychology of conspiracy theories, extremism and radicalisation (not sure where he's seen that in the world, but we'll be interested to talk to him about that), media effects, social networks, polarisation, the emergence of social norms, reasoning about evidence, and the public understanding of risk and uncertainty. He has published around 175 papers and is ranked among the top 1% of all social scientists worldwide. And we loved his multiple award-winning book, 'FOOLPROOF: Why Misinformation Infects Our Minds and How to Build Immunity', which we're going to dive deep into today. Sander, welcome to The Evolving Leader.

Sander van der Linden:

Thanks so much for having me on.

Jean Gomes:

Sander, welcome to the show. How are you feeling today?

Sander van der Linden:

I'm good, I'm good. You know, also a mix of feelings about everything that's going on in the world: from the wildfires (I have two colleagues whose houses burned down in LA as well), to being depressed about Meta shutting down its fact-checking programme, to, you know, being warm inside while it's cold outside, and being back from holidays and ready to do some work again.

Scott Allender:

Well, we're so excited to talk to you. And I loved it when I heard that you're called the Defence Against the Dark Arts teacher at Cambridge. Before we get into the depth of your work, I'm interested in knowing: when was the last time you were fooled by misinformation?

Sander van der Linden:

Well, you know, it happens more than people would think. Just yesterday, I had a meeting with my students. I'm organising a conference, the Cambridge Disinformation Summit (this is not an advertisement for it), and when somebody asked us about it, one of my students said, "Oh yeah, the dates are April 12th to 16th." And I was like, oh yeah, that's right. And then I got home and I looked, and those were not the dates of the conference; it's the last week of April. But for some reason, there was this kind of blind trust in an otherwise very smart student. So I used the cue of "this person must be smart, hence what they're saying makes sense", which most of the time is a good heuristic, but can fool us some of the time, especially when people speak about things that may not be in their expertise. And of course, organising my own conference is within my expertise, but I had forgotten the actual date. So I can't blame my student, I can only blame myself. But there you go: I was fooled by misinformation.

Scott Allender:

Misinformation going around about the disinformation conference.

Sander van der Linden:

Yeah. And, you know, I think this was misinformation because it wasn't intentional. She wasn't trying to deceive anyone, right? It was just an error. So fairly innocent, innocuous misinformation, but just an illustration of how our brains are quick to jump to conclusions about things that half match what we think is true about the world. And it happens on a frequent basis. There are some people who think we're all so smart and immune to everything, but from what I know, based on my research and my daily experience, I'm fooled all the time.

Jean Gomes:

Your passion to understand and debunk fake news is tied to your sense of identity, I think. Can we start with how your childhood experiences in the 90s in the Netherlands shaped where you are today?

Sander van der Linden:

Yeah. So, you know, I grew up… we weren't overtly religious, but my parents were Jewish, and there's a tradition of talking about what happened in the Holocaust. And when you're little, it's hard to wrap your head around this. You ask your family: where are my cousins? Where are my nephews? Why don't we have more family members? And it's kind of strange when your parents tell you, well, most of them were executed during World War Two. Initially, you just kind of accept that as an explanation. But when I got older, I started thinking about why these things happen. How do people act on information that isn't consistent with facts, or that's based on conspiracy theories? How does that motivate people to do bad things? That didn't directly lead me to investigate misinformation, but it sparked my interest in discovering the field of psychology: why do people do what they do? How does the brain work? How does information impact us? So that, I think, was something that did get me interested in trying to understand human psychology. And then eventually I got back to the study of propaganda and persuasion in different contexts. But certainly, there was a personal element to it, in that I was interested in how atrocities happen and what role propaganda plays.

Scott Allender:

Tell us more about that journey of discovery. I'm curious to know, as you started to dive into the psychology of it, what was most surprising for you?

Sander van der Linden:

Actually, what was surprising to me is that it's very hard to study. There's a lot of historical documentation about Nazi propaganda: what went into it, how they designed it, things like the big lie, which has a lot of relevance today. That's the idea that if you create a lie that's so outrageous, it's actually more likely to work, because people can't imagine that you would just come up with such an outrageous lie, and the more you repeat it, the more common it becomes for people. So you start out with an outrageous lie, which seems kind of crazy, but over time it becomes normalised. And what surprised me is that some of these propaganda tactics seem to be repeated throughout history in different contexts. I started thinking about the fact that it wasn't just World War Two. That era certainly did some of the groundwork in terms of figuring out what's effective, but what surprised me is the scale at which this is replicated across different types of conflicts and situations. And I started thinking, well, there's actually this playbook to propaganda that we should be uncovering, and the research around it is actually very difficult, partly for ethical reasons. Nobody's really done experimental research on Nazi propaganda; there's just no ethics committee at a university that would approve an experiment where you expose people to Nazi propaganda and see if they go out and attack Jews or members of LGBT communities. That just doesn't happen. So it's all a bit indirect in terms of trying to uncover the mechanisms by which it happened. I became fascinated, and was kind of surprised, by the fact that it's not something you can easily study in a controlled, experimental setting. 
It takes a lot of interdisciplinary work: historians and psychologists, simulated studies, and economists actually doing very interesting work. There was a great study, which I also cover in the book, that looked at birth cohorts. It looked at people who were born before the Nazi regime, during it, and after. Of course, the Nazi years included years of indoctrination in schools; there were pamphlets everywhere. It was a very extensive programme. And, maybe to come back to Scott's point about what's so surprising: of course, the expectation was that people who grew up during the Nazi regime would have more antisemitic views than people who didn't. But here's the thing. This study was done around 2006 to 2010, so not too long ago, we're talking 10 or 15 years ago. And even people currently living in Germany still have more antisemitic attitudes if they were part of a birth cohort that grew up during the Nazi regime than people who weren't. They have these surveys, the German General Social Survey, which ask things like: are Jews behind world events? Should Jews be allowed to walk freely in German society? And people in regions where people voted for Nazi parties and were more exposed to propaganda were still more antisemitic to this present day. It was kind of surprising to me, because I had thought it was all over: this happened during World War Two, and you would find an effect during that period, but not now, or 10 years ago. But that's what they found. And I think that really speaks to the longevity and the persistence of being exposed to propaganda. 
And I worry about that, because we live in a time when we're just bombarded with lies and propaganda. Sometimes people say, oh, it's not so bad, or it's not going to affect us. But what we know from propaganda is that, sure, people don't wake up one day and decide to start putting people in the gas chambers. That's not how it works. It's years of exposure to first subtle and more implicit propaganda. You soften people up, you get them ready, then it becomes more active, and then you start bombarding people with the more dangerous ideas. And throughout that whole process, it can leave an imprint on people many, many decades later. That was surprising to me in terms of the outcome of that study. And, crucially, I should say that we're not empty vessels. It's not that we're exposed to propaganda and all of a sudden a little green light goes on in your eyes and you become a manipulated robot. That's not how it works. It was in areas that were already susceptible to this kind of rhetoric before the Nazi regime that the propaganda was most successful, right? You have to find people who are susceptible to your message in order for propaganda to work. But it was surprising that it also worked in areas where people didn't have that prior susceptibility. It worked less well there, obviously there was more resistance, but it worked. And it's kind of worrying to think that you don't even need that initial seed necessarily, though it certainly helps. 
And that was also the conclusion from Goebbels, who was the minister of propaganda during the Nazi regime: that it works best amongst people who were already congenial to the message.

Jean Gomes:

So we might live in a world where we assume we've kind of evolved past the likelihood of being fooled by these kinds of messages. What do you think is different today about how our brains have adapted to the online space?

Sander van der Linden:

Yeah, what's different today? Sure, some people say, oh, but we've always had propaganda, right? We could go back further than World War Two, back to mediaeval times, when people were stoning women and calling them witches because they had ideas of independence and autonomy, and they were mastering certain skills and crafts, and so we labelled them witches and burned them at the stake. That's been happening for a long time. So people ask, okay, what's different now? But I do think technology has fundamentally shaped our relationship with information. That's what's different, along with the speed and velocity at which information travels. In the book, I actually calculated how long it would take in the Roman Empire to spread a message by horse and wagon. Even if you had the fastest cart, it might take a week to get your message out there, and then it would spread in a village. So I go through that diffusion process from the olden days. Now, of course, it all happens in a split second. You send a message on WhatsApp to a group of 100 people, who forward it to another group of 100 people each, and before you know it, it's exponential, right? Within a couple of seconds, millions of people have been exposed. Same on social media: the networks are different, so the structures and how information diffuses are different, but we can reach hundreds of millions of people within seconds. And we didn't necessarily evolve being bombarded by information all the time. That's also why we rely on rules of thumb. 
The brain is always looking for shortcuts to try to manage information load. And it's a difficult task for people to log on now and be exposed to so much information. It's not just social media, right? It's cable news, it's podcasts, it's radio, it's print, it's friends and family, it's social media sites, and more and more social media sites are popping up. So we're trying to juggle all this information, and we know that if you stress the brain, even with a simple task, say a little memory task, you'll get less accurate at doing other tasks, because cognitively our resources are divided. We only have so much, and we have to be biased in some ways, biased in terms of screening things out. And I think information overload amplifies that process: we become more biased, we start being more selective, screening out more. And by selective, I mean it's easier to process information that you've heard before, that feels familiar, that sounds familiar, that aligns with what you want to be true about the world. All of those undesirable biases do have a function, but they creep in and lead people astray. And there's a relationship between the online and the offline world. Sometimes people think, well, it's okay, it's all just virtual conversations. But that's not true. We've seen it in the UK with what started as a fake news story about a tragedy. For people who don't know: there was an assailant who stabbed a number of young girls at what I think was a dance class, and there was fake news about who this person was. There was a fake story that it was a Muslim asylum seeker who came to the UK by boat. This was all total nonsense, but it was the catalyst for major offline riots. 
Buildings were set on fire, people got violent. And you see that elsewhere too. Take January 6th in the US, right, the Capitol riots: false conspiracy theories that start online about stolen elections can then lead to violence. It doesn't always. This election, there were left-wing conspiracy theories about the election being fraudulent, and that didn't lead to violence in that case. But sometimes it does; there's a probability there. There's a concept called stochastic terrorism, or stochastic violence, and all of that online hostility and toxicity raises the probability of stochastic violence. So that's new; it's not something we've dealt with before. And then the last thing I'll say is the role of AI, deepfakes and micro-targeting, right? Hitler didn't have targeted ads. Imagine if he had access to voters' digital footprints and could target them specifically based on their prior beliefs, because that's what they were looking for, right: people who were congenial to their message. What if they had such fine-grained machinery that it actually allowed them to target a single person with their specific preferences? That's the environment we find ourselves in now; there's a technology that can do that. A couple of years ago, there were Russian troll farms writing these messages by hand: oh, this person is watching these YouTube videos, so let's craft a message that plays into the type of content they're watching. That's labour-intensive. It's costly. Now you can have AI write hundreds of thousands of variations on a message in a split second, right? You can just automate it. And deepfakes: people are not used to that. 
People are very bad at it. Research shows people struggle to differentiate synthetic from authentic content online. Luckily, we're not inundated by hyper-realistic deepfakes yet, but the technology is there, we know that people are bad at discerning it, and regulation is trying to catch up. So, yeah, I would say there's a lot going on in the online information environment that makes our situation very different from what it was before. And the last thing I'll say is that there's been a big change in the news model too. You go from the printing press and the first yellow journalism, the first tabloids, which were already grey-area content, sometimes with hoax stories here and there that were duping people, to cable news, which also lowered the standard a bit of what you can say. But at the end of the day, traditional news, legacy news, has producers, editors, fact checkers; there are guardrails, there are lots of layers. You can't just go on air, and there are regulators: in the UK you have Ofcom, for example, and in the US you have independent watchdogs. There are procedures, and there are laws around false advertising. So there's a long history there. The evolution of TV took a long time before there was full penetration of television in the general population, so there was a lot of time for legislation to evolve, to try things out, to test things out, to see what's going on. With social media, it all happened within the course of just a few years, right? It exploded, and I think we're still trying to catch up. There's a whole different debate, with Jonathan Haidt's book and people going back and forth, over whether social media is bad or good for our mental health, whether it's dangerous for teenagers, and so on. 
And those are difficult debates, and the evidence is difficult to comprehend, because there's so much exposure and so little research in terms of what we actually know, and trying to catch up to that is difficult. I think that's partly what's going on with this debate as well. So what I would say is that on YouTube, there are no guardrails, there's no editor, no fact checker, there's no producer. Anyone can just say what they want; there's no barrier to entry. So there's this influencer model of news: a lot of people are getting their news from influencers now, and more and more people are getting their news from social media. There's no verification process, and that's a whole different model from the one we're used to with news, where people assume and expect that things have been fact-checked. Of course, errors are made, but there's a process, there are committees, there are procedures, right? And we don't have that with the current online environment. There was a survey from UNESCO of the top influencers on Instagram and TikTok, and they were asked whether they checked their content before they put it out. And I think 90% or something said no: we don't really view our role that way; content has to be interesting, it doesn't have to be factual, and we're just trying to influence people with interesting content, right? They don't see themselves as purveyors of accurate news, and that's a whole different model.
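The forwarding arithmetic Sander describes earlier in this answer (a WhatsApp message hopping from group to group) is geometric, not factorial, growth. A minimal sketch, where the group size and number of hops are illustrative assumptions rather than figures from the episode:

```python
# Idealised reach of a forwarded message: each of `fanout` recipients
# forwards to a fresh group of `fanout` people. Real networks overlap,
# so this is an upper bound on reach, not a prediction.

def reach_after_hops(fanout: int = 100, hops: int = 3) -> int:
    """Total people exposed after `hops` rounds of forwarding."""
    return sum(fanout ** h for h in range(1, hops + 1))

print(reach_after_hops())  # 100 + 10,000 + 1,000,000 = 1,010,100 people in three hops
```

Even with these modest assumptions, three rounds of forwarding pass the million mark, which is the point being made about speed and scale.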

Scott Allender:

You've said so many important things in there, I want to pull some of it apart a little bit further, if we can. Because you talked about susceptibility, and we're all susceptible, and there's surveillance capitalism and targeted marketing according to what you engage with, and that's a real problem; we've talked about that on the show. You also hit on the group of people who are most likely to gravitate towards information that aligns with what they want to be true about the world. And I see that a lot in the coaching work that I do as well: there are some people who are really more prone to that, and some people who are less prone to it, who have a more inherent, healthy sense of doubting, of "I'm going to challenge that idea that I've seen, or I'm going to explore it, I'm going to look for other avenues." So, wherever you fall on that spectrum, you talk about this idea of psychological vaccination against misinformation, that you can inoculate yourself. Can you start pulling that idea apart a bit further for us?

Sander van der Linden:

Yeah. So that is based on this idea that there is variation in susceptibility. We're all susceptible, but some people are more susceptible than others. Some factors have to do with the brain or people's personal characteristics, but sometimes it has to do with the environment in which you find yourself, and that kind of determines susceptibility, right? Then there is how misinformation spreads on social media, and it turns out you can actually use models from epidemiology, the ones we use to study viruses, to study how information spreads, and they fit pretty well. They're simple models; people spreading information is slightly more complex than a virus replicating itself, but there's a really close analogy there, and those models actually do really well at predicting how viral rumours diffuse. So we started wondering: if misinformation spreads like a virus, is it possible to inoculate or vaccinate people against it? This sort of research goes back to the 1960s, when people were asking whether it's possible to protect people's beliefs. The psychologist William McGuire was really interested in this idea, because this was at the time of the Korean War, and there were American prisoners of war in Chinese prison camps who voluntarily stayed behind after the war. The US government was concerned that they had been brainwashed and were going to embrace communism. Of course, reality turned out to be much more complex than that, but they didn't know that at the time. 
Their whole impetus was: okay, for the next generation of soldiers, we're going to give them more facts, and we're going to teach them all the things about why America is so great, so that if they ever get captured again, they won't be confused about why America is right. And McGuire said, I don't think that's the right approach. I think what happened is that when they were captured and people were attacking their belief system, they had no mental defences. That is the problem. What you need to do is actually simulate an attack on the soldiers, so to speak: give them a sense of the types of threats they might be facing in the future, then refute those in advance, so that you give them the ammunition to build up resistance over time. And that, I think, was a very powerful idea. He never tested it with soldiers or anything like that. In fact, when he did a survey in his class, there was actually mixed support for capitalism and communism, so it wasn't about value judgements about these things; he wanted to know the process. He did some studies that were interesting, but then he left that research for what it was. And one day I became fascinated by this idea and wondered whether anyone had already looked into it, and in the library I came across an article of his from the 60s. Of course, McGuire didn't have the internet at the time, and he didn't know about epidemiological models. But I thought: let's take that basic idea, that metaphor, and test it empirically in the 21st century. That's kind of how it evolved, and how we built out this theory of psychological inoculation, or prebunking instead of debunking. So how does it work? It just follows the vaccination analogy. 
The idea is that instead of just giving people facts, you give them a weakened dose of the types of misinformation they might see in the future, or of the techniques used to produce misinformation, and you deconstruct and neutralise them, you prebunk them in advance, so that people become more immune to them in the future. Just as vaccines introduce a weakened pathogen into the body that triggers the production of antibodies to help confer resistance against future infection, it turns out you can do the same with information and the brain. The body makes antibodies and looks out for potential invaders, and the brain does something similar: when it comes to threatening information, the brain is looking for what's threatening it. So it benefits from seeing lots of micro-dosed examples of what is manipulation and what isn't, so it can discriminate better between the two.
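The epidemiological models Sander mentions can be illustrated with a simple compartmental (SIR-style) simulation applied to a rumour rather than a virus. The parameter values below are illustrative assumptions for the sketch, not figures fitted to any data:

```python
# Illustrative SIR-style model of rumour spread (assumed parameters).
# S: not yet exposed, I: actively sharing the rumour, R: stopped sharing.

def simulate_rumour(population=10_000, beta=0.4, gamma=0.1, steps=100):
    """Discrete-time SIR dynamics. `beta` is the transmission rate
    (new sharers per contact), `gamma` the rate at which sharers
    lose interest and stop spreading."""
    s, i, r = population - 1.0, 1.0, 0.0  # one initial sharer
    history = []
    for _ in range(steps):
        new_sharers = beta * s * i / population   # S -> I
        dropouts = gamma * i                      # I -> R
        s -= new_sharers
        i += new_sharers - dropouts
        r += dropouts
        history.append((s, i, r))
    return history

history = simulate_rumour()
peak_sharers = max(i for _, i, _ in history)
```

With these assumed rates the basic reproduction number is beta/gamma = 4, so a single seed reaches most of the population, which is the sense in which simple epidemic models "fit pretty well" to viral rumours.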

Jean Gomes:

you give us an example of something that you know listening right now you could give us, like, an inoculation against some Yeah, yeah, information

Sander van der Linden:

Absolutely, yeah, it's a good point, because it does sound a bit abstract, so I'll make it very concrete. I'll choose a non-political example, because sometimes people might say, oh, but who determines what information is correct and not correct? So let's sidestep that for a second, and let me give you an example that's more about the techniques used in disinformation. One of those is the false dilemma technique, which is one of my favourites, not to use but to study, right? It's the idea that you present people with two options while, in fact, there are many more. The whole goal is to take out all the nuance and to breed extremism. So, for example, in the context of the US, somebody might say that if you don't support automatic rifles, you're against the Second Amendment, or something like that. But you can be a supporter of the right to bear arms and still think automatic rifles in schools is a little too much, right? There's a lot of nuance there, but you portray it as a false dilemma. Or: if you support this country, then you're against that country in a particular conflict. Actually, maybe you sympathise with both countries and the goals they're trying to achieve. Politicians love to use false dilemmas, and disinformation producers do too. So what's an inoculation in this context? You have to find a weakened dose that is harmless but generates the resistance that you need. So we produced videos using popular-culture examples, where we expose people to a clip from Star Wars. Are you guys Star Wars fans? A little bit, yeah.
There's an episode called Revenge of the Sith, where Obi-Wan Kenobi is talking to Anakin Skywalker, who, spoiler alert, becomes Darth Vader. Oh man, you gave it away. I gave it away. And Anakin says, either you're with me or you're my enemy, and Obi-Wan replies, only a Sith deals in absolutes. Then the narrator says, don't use these manipulation techniques, don't join the dark side. It's a fun, non-political way of illustrating the false dilemma; everyone gets that example. But then we test people: we bombard them with social media messages in real, high-stakes contexts that use this technique, and we find that people become better at recognising it, because they've internalised the weakened dose and now have some immunity, because they have a template. The Star Wars thing is funny because it gives you the template of either-you're-for-or-against, and that's the element people remember. When they're bombarded later, it doesn't matter what the content is; it's the same structure, either you're for or against, and it rings a bell: oh yeah, that's a false choice, I shouldn't fall for that. Or at least now I'm empowered to make up my own mind. That's the idea behind prebunking.

Jean Gomes:

Why is disinformation so hard to get rid of? Even when we know it's untrue, why doesn't the virus leave us?

Sander van der Linden:

Yeah, why doesn't the virus leave us? In that sense it is tricky, because once we're exposed to a falsehood, it integrates into our memory system. Our memory is kind of like a network, like a social network: it has nodes, concepts like food or vaccines, and then there are links between the different nodes, so live vaccines, inactivated vaccines, combinations of the two. It's this vast network of concepts and all sorts of links between them. When you expose people to misinformation... let's say, Jean, that I had takeout from the Indian place around the corner from where you live and got terrible food poisoning, and I go on and on about it: it was terrible, don't go there. Then two weeks later we're catching up and I say, oh, by the way, it wasn't that place around your corner, it was a totally different place, I got confused. But now every time you pass that place, you're going to think: food poisoning. The problem, and this is what we call the continued influence of misinformation, is that people continue to retrieve false details from their memory even when they've acknowledged a correction. In the formal experiments done around this, you put people in, let's say, a brain scanner and give them a story about how a warehouse burned down, with a report saying there were oil and gas cans in a closet that caused it. Later the police chief says the oil and gas weren't actually the cause of the fire, it was something else. But when people come out and you ask them to make judgments and inferences about lots of questions, like, why was there so much smoke?
Then people say, oh, it's because of the oil and the gas cans. They completely forget that they've just acknowledged the correction that it wasn't the oil and the gas cans. So the misinformation sneaks in, it makes friends with other things you know, and it becomes a game of whack-a-mole. You can try to inactivate some of the nodes and links, and you can see this in the scanner. There's what we call a retrieval account, which says something goes wrong during the retrieval of the correction: people are accessing the myth in their brain, but for some reason they're not retrieving the correction. There's also an integration account, which says people are not integrating the correction into their mental model of how the world works: they have the myth, but they're not integrating the correction around it. It gets a bit neuroscience-y, but the bottom line is the same: corrections tend to be long and scientific, they don't resonate with people, they come too late. There are all sorts of reasons why people just forget about corrections and focus on the myth. And in fact, when you have to debunk, you're always in a strategically disadvantageous position, because you have to repeat the misinformation in order to debunk it. So if you're not debunking effectively, what you're doing is repeating the misinformation, and people forget the correction. Fact checking does work partially, if you do it well and don't repeat the misinformation too often. Some people call this the truth sandwich: you start with the facts, you repeat the necessary misinformation only once, and then you layer the facts on again at the end to minimise the risk of repeating the misinfo. But even in the best-case scenario, you're only partially undoing the damage.
There's a famous saying from the legal context as well. When a jury hears something about a defendant they weren't supposed to during a trial, and the judge says, disregard that: you can't unring a bell. And that's kind of the point.

Jean Gomes:

Hence the importance of prebunking.

Sander van der Linden:

Exactly. Trying to prevent people from encoding misinformation in the first place is, in theory, so much more effective and desirable. That's why we spend so much time on doing that: prevention is better than cure.

Scott Allender:

Is the scale of the current propaganda machine, the false choices and misinformation, as big as it feels to me at the moment, sitting here in the US? You mentioned at the beginning of the show the sadness you feel around Meta doing away with its fact checking. I'd love to get your thoughts on the scale of the problem and your perspective on what's going on with, for example, Meta doing what they're doing.

Sander van der Linden:

Yeah, I can answer that question. Because you mentioned Meta: one thing that disappoints me is the politicisation of this topic, the idea that fighting misinformation, or misinformation itself, becomes fused with party or identity in some way. That's a sad state of affairs. Everyone should care about facts and truth, including businesses and leaders. We all have other motives in life; people have social motives, reasons they prefer not to tell the truth or to endorse something that's not entirely true, and that's as true for businesses as it is for people. But at the end of the day, we should all strive, or do our best, to rally around the truth. And Mark Zuckerberg's statement that fact checking is biased, or didn't work for him, is a bit disappointing, because that's not what the research shows. There was just a paper out this week looking at members of Congress in the US: Democrats and Republicans get fact checked at about equal rates. So it's not that Republicans are being singled out for fact checking on the Facebook platform. He wants to move to community notes, which is a sort of crowdsourced fact checking, and that in itself is not a bad idea; there is wisdom in the crowd. It's a kind of statistical artefact: if you have a large crowd of people that is diverse in nature, you're pooling all of that knowledge, all that wisdom. Some people overestimate, some people underestimate, but on average you actually get pretty close.
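The statistical artefact Sander describes, individual errors cancelling out when a diverse crowd's estimates are pooled, can be sketched in a few lines of Python (a toy simulation with made-up numbers, not data from the studies he mentions):

```python
import random

random.seed(42)

TRUE_VALUE = 100  # the quantity the crowd is trying to estimate

def individual_guess():
    # Each person's guess is noisy: some overestimate, some underestimate.
    return TRUE_VALUE + random.gauss(0, 30)

crowd = [individual_guess() for _ in range(1000)]
crowd_average = sum(crowd) / len(crowd)

# Compare the typical individual's error with the error of the pooled average.
individual_error = sum(abs(g - TRUE_VALUE) for g in crowd) / len(crowd)
crowd_error = abs(crowd_average - TRUE_VALUE)

print(f"average individual error: {individual_error:.1f}")
print(f"error of the crowd average: {crowd_error:.1f}")
```

Running this shows the pooled average landing far closer to the true value than a typical individual, which is the mechanism behind crowdsourced ratings converging.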
But what's interesting is that when you look at the research, the ratings from regular bipartisan crowds almost perfectly converge with those of expert fact checkers. So it's not the case that there are elite, biased fact checkers telling people what's going on; they give the same ratings as regular bipartisan crowds when those crowds are in the mindset of judging what's accurate and what's not. Then you might say, we can replace one with the other. But that's not necessarily true, because by itself community notes is not enough, and by itself fact checking is not enough. We need all of it, not less of it, which is basically what he's doing. And that comes to your question about volume, because yes, the volume has increased dramatically. It's difficult to put specific numbers on it, because you would need access to the archives of social media since their inception, and a lot of that data is private, so we don't know for sure; we can only see public snapshots. But I think researchers are in fair agreement that there's more misinformation now, if we adopt a broad definition that includes not just things that are 100% false but also things that are misleading. The volume is bigger, and it's reaching people in more ways: your phone, Snapchat, TikTok, TV. There are many more channels through which it can reach people. What people disagree on is not so much the increase in volume; there are more conspiracy theories out there, for example, and they're easier to find than ever. People argue about the persuasion element. The relevant equation here is volume, or exposure, times persuasion, and people argue about the persuasion parameter: has it gone up, have people become more susceptible over time? That's not so clear.
So on the one side there's more volume, and susceptibility is what people argue over. I would say that we find ourselves in more susceptible situations when we go online: when we're in echo chambers, when we're in filter bubbles, when there's a high level of toxicity in online conversations, when we're being trolled, when we're being overloaded with misinformation. So I think that persuasion factor does go up when we're put in situations where it becomes more difficult to control our own biases or to defend ourselves from targeted attacks based on digital footprints we might not even know about. My reading of the science is that it depends, but persuasion can in fact be higher than it was before, depending on the group we're talking about and the situation they find themselves in. And that's still large groups of people, large enough to, let's say, influence an election.
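The "exposure times persuasion" equation Sander sketches can be written out as a toy model (the function name and all the numbers here are illustrative assumptions, not a model from the research):

```python
def misinformation_impact(exposure: float, persuasion: float) -> float:
    """Toy model: impact scales with how often people see a claim
    (exposure/volume) and how persuasive each viewing is (0 to 1)."""
    return exposure * persuasion

# Rising volume alone raises impact even if persuasion stays flat...
low_volume = misinformation_impact(exposure=10, persuasion=0.2)
high_volume = misinformation_impact(exposure=50, persuasion=0.2)

# ...and situations that raise persuasion (echo chambers, targeted
# attacks) multiply the effect of that volume rather than adding to it.
echo_chamber = misinformation_impact(exposure=50, persuasion=0.6)
```

The multiplicative form captures the point of the debate: even with volume agreed to be higher, the overall effect still hinges on the contested persuasion parameter.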

Jean Gomes:

In Foolproof you say that if you're going to gain immunity from this misinformation, you have to recognise that it comes in multiple strains. You pull on this metaphor and make the distinction between fake news and the building blocks of misinformation. Can you talk about that for a moment?

Sander van der Linden:

So one of the challenges with fact checking is that you can't check every single claim. In fact, most news is never fact checked, because there just aren't the resources to go claim by claim. With prebunking, or inoculation, if you do it at the level of the claim, you run into a not dissimilar problem: you can't prebunk every possible false thing somebody might say in the future. But what you can do is look at the predictors, the building blocks, of disinformation more generally over time. What are the recurring themes and tactics people have used? Whether it's a government, the tobacco industry, Nazi Germany or any other type of entity, a lot of the techniques are the same, and then it actually becomes possible to predict what misinformation people might be exposed to, and how to inoculate people against it. Some of these building blocks include things like conspiratorial language, emotional manipulation, polarisation (creating us-versus-them mentalities) and trolling, which is huge during elections. In fact, if you look at what Elon Musk is doing with the UK, it's a classic example of trolling. But if you don't know that, you're going to keep responding, you're going to feed into it, and you become duped by the technique. Trolling is very common in manipulation more generally. Then there are things like impersonation: faking expertise. During the pandemic, fake doctors peddled fake cures.
There were also famous cases in public health, and US courts ruled on this, where the tobacco industry used fake experts in white coats to try to sell people cigarettes. That's also manipulation, and that technique is recycled across lots of different domains. So what we do with prebunking is try to tune people to these techniques. To give you another of my favourite examples: there's a lot of talk about how the COVID vaccine is going to change people's DNA, all the conspiracies about that new mRNA technology. Of course people can have questions about new technology; that's totally fair. But go back to the 1800s and the development of the first modern vaccine, Edward Jenner's cowpox vaccine against smallpox. If you look at the artwork from the 1800s, and I show this in presentations, there are paintings where cows were sprouting out of people's heads and mouths, and the whole idea was that if you took the cowpox vaccine, you were going to turn into a human-cow hybrid. So this idea that a vaccine is going to change your DNA, even before the word DNA existed, is 200 years old, and it's just recycled over and over again. And that's the idea of inoculation, of prebunking: there are these building blocks of disinformation, and we lay out six of them in one of our interventions, with an acronym to help people remember. It's always the same with conspiracy theories. People say, okay, but this one's true. And sure, sometimes a specific theory can have merit, but the psychology of conspiratorial thinking is fallacious in the sense that it's a predictable narrative.
There is somebody plotting something behind the scenes, they have evil intentions, there's some persecuted victim. In fact, in experiments we give people these ingredients and say, can you come up with a conspiracy? And people come up with beautiful, very elaborate conspiracy theories using those ingredients. It's always the same ingredients: there's a celebrity who is dead somewhere but also still alive, Avril Lavigne is a clone, Tupac Shakur is living it up on an island somewhere. It's always the same thing. So when you train people on these things, even when people say, yeah, but Watergate, or Epstein, that's kind of fishy... sure, nothing wrong with keeping an open mind. But when you show people the ridiculousness of how it's always the same structure, they start thinking: hey, maybe I'm being duped by the fact that it's somehow always the same narrative, just in a different context. People just never see it all together, so that's what we try to do: put it together for them. Those are the building blocks I talk about in the book. And it's interesting, because the fact checker's intuition is slightly different: you have to go deep and contextual on a specific claim, and you can only figure out what's true by investigating that claim's truth value. But we find that at the level of the building block you can go much broader, and you have much more predictive power. Sure, it's not going to be 100% foolproof, to make a joke about the book's title, for every single instance, but on average it's a pretty solid heuristic, because it uses multiple cues.
So in our inoculation games we don't give people just one cue, for example emotionally manipulative language, which is a very popular one online among misinformation producers. Sometimes a true story can be emotional. Now, if a true story is highly emotionally manipulative, you can question whether it should simply get the status of true, or whether it should be labelled true but manipulative, which is my preference. And I don't really mind if people become a little more sceptical of content that has some truth value but is actually pretty manipulative; I think that's a good thing. But if you want to be technical about it, the signal should be that the content shows multiple cues, and that's what we do in the game. We teach people about conspiracies and fear mongering and impersonating fake doctors, and there are at least six cues. Models show that when you put all of these cues together, the probability of correctly identifying future disinformation you haven't seen before goes pretty high. We're not talking 100%, but something like 80%, and if you look at the research on human lie detection, getting above 50% is a huge accomplishment. So I think it's probably the best we can hope for.
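The claim that combining several weak cues pushes detection accuracy well above any single cue can be illustrated with a small majority-vote simulation (the six cue names come from the conversation, but the accuracy figures are hypothetical, not the numbers from the models Sander refers to):

```python
import random

random.seed(0)

# Six building-block cues (conspiracy, emotional manipulation, polarisation,
# trolling, impersonation, fear mongering), each modelled as a weak detector
# that classifies a post correctly 65% of the time -- a made-up figure.
NUM_CUES = 6
CUE_ACCURACY = 0.65

def one_cue_is_correct():
    return random.random() < CUE_ACCURACY

def majority_vote_is_correct():
    """Combine cues by majority vote; a 3-3 tie is broken by a coin flip."""
    correct_votes = sum(one_cue_is_correct() for _ in range(NUM_CUES))
    if correct_votes * 2 == NUM_CUES:
        return random.random() < 0.5
    return correct_votes * 2 > NUM_CUES

TRIALS = 10_000
combined = sum(majority_vote_is_correct() for _ in range(TRIALS)) / TRIALS
print(f"one cue: {CUE_ACCURACY:.0%}, six cues combined: {combined:.0%}")
```

With these assumed numbers the combined vote comes out well above any single cue, which is the intuition behind teaching several techniques at once rather than one.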

Sara Deschamps:

Evolving Leader friends, if you're curious to get more insights directly from our hosts, consider ordering Jean's book Leading in a Non-Linear World, which contains a wealth of research-backed insights on how to see and solve our greatest challenges, as well as Scott's book The Enneagram of Emotional Intelligence, which can help you unlock the power of self-awareness sustainably in every dimension of your life.

Jean Gomes:

What is the spread of motivations for spreading misinformation? Why did Donald Trump, for example, want people to think that injecting bleach against COVID was a sensible idea? Because it's not just one motivation driving this stuff. What is it?

Sander van der Linden:

Yeah, so there's been some work looking at what motivates influential elites to spread disinformation, and it's varied. There are some top motives. One, perhaps unsurprisingly, is political: people want to gain political currency by spreading misinformation. The second popular one is financial: people want to make money off of it. This is very popular in the wellness industry; there's a lot of grifting going on in wellness, and it includes celebrities. Gwyneth Paltrow, I'm sure, is a wonderful person, but when she tells people about infrared saunas and kombucha shakes curing COVID, that's just dangerous. And it's hard to infer people's intentions; that's the tricky part. I'm not saying anything about Gwyneth in particular, but the wellness industry in general is often about a financial grift, whereas the Trump stuff is more about politics and power and control and influence. Sometimes it's a combination of politics and money. But those are the key motivations, along with growing your audience. Often what you find is the social media element. It's not 100% the person's fault; maybe it is for politicians, who have responsibility and accountability and probably know what they're doing. But for certain influencers, if you look at their journey, they don't start out running a channel that dupes people. They start out with, say, a guitar channel, like the guitar hanging behind you, Scott. They're doing a bit of music, and they get a few clicks, a few likes. Then one day they're just frustrated and they start talking about alternative medicine, and whoa, now they're getting a lot of engagement, a lot of likes.
Maybe I should start talking more about alternative medicine. And then: oh, I'm going viral, maybe I should start talking about conspiracy theories. And before you know it... I don't want to say that's been Joe Rogan's or Jordan Peterson's journey, but certainly you see people evolve from being somewhat reasonable to selling all kinds of nonsense, because that's what the algorithm and the incentive structure on social media are rewarding. That's what we call the perverse incentives of social media. We've done research on what predicts virality, looking at millions of posts across platforms, and the more extreme, the more polarising, the more dunking on the other side, the more engagement and clicks it gets. So I don't want to take responsibility off people completely, but I think it's an interaction between the motives of individuals and social media that creates this model. And the last thing I'll say, Jean, to answer your question: there's a distinction we make between belief-speak and factual speak. Politicians increasingly like to use belief-speak because it seems authentic to people. Whatever the facts, they're being genuine about their feelings and what they believe to be true about the world, and they have the audacity to come out and say it. And they find that people reward that more than the facts. So we're operating more on a belief-based, gut-feeling model than a fact-based model, which is a bit worrying when it comes to politics.

Scott Allender:

So what are the leadership implications here? As people lead teams in a world of increasing polarisation and shifting beliefs, what are the tips and ideas you have for leaders listening? What are the watch-outs for keeping their cultures free from the perils we're talking about?

Sander van der Linden:

Yeah. I would say leaders are in fact in an incredibly important position, because they can help determine what the values and goals of the team are. In any given situation, you want to make sure you're incentivising truthfulness, trustworthiness and accuracy over things like deception, lying and manipulation. Everyone feels intuitively that that's true, but in practice it can be quite tricky. Of course, team leaders are in a position to inoculate or prebunk misinformation with their team members, but prebunking isn't necessarily a top-down tool that people need to enforce. It's something we created for the people, by the people; it's a thing you can share with others. And I think the most neighbourly thing you can do, when you know there's false information or manipulation techniques out there, is help other people spot them in a non-judgmental way. That's often what I try to do. You can incorporate it into a team-building activity: how do you spot misinformation, disinformation, manipulation? Do it in a non-political way at first, maybe talking at the level of the technique rather than specific claims. But some of this is also in the company's interest, because there can be strategic disinformation about organisations as well. There are plenty of examples. Wayfair was caught up in a major conspiracy theory claiming it was trafficking young children in its furniture, and it was just bombarded with these crazy satanic-paedophile-ring conspiracies. How do you prepare organisations for that?
I think one of the things companies underestimate a lot at the moment is the strategic risk of disinformation about their products and what they're doing, and how to go out and preempt that rather than just being reactive; you need a proactive approach. I gave a talk for risk auditors, who you'd think would be thinking about these things, but disinformation wasn't really something they were factoring in. So when you're talking about teams, you want to prepare your team for misinformation, whether it's about the company, the organisation you're working with, or things going on in the world more generally. Take a topic that's debated: diversity and inclusion is relevant to every firm, and it's a prickly issue at the moment for companies everywhere. Team leaders are in a position to ask: what are the legitimate questions and concerns we can raise, and what is plainly misleading or disinformation about what's going on in that space? Sometimes you might need an expert, or invite someone who is an expert to come and give a talk. Sometimes you might want to do a sort of community notes. One of the problems I see with this topic is that there's always a perception of bias: oh, the leader is going to tell me what to believe, or we're going to hear from this person or that person. But why not crowdsource it? Use the wisdom of the crowd as a team, see what the wisdom of the team is on a particular position, and then compare it to what experts are saying. It could be a fun exercise, and it could be done anonymously. I often do it with my students.
I have them rate claims individually, then again as a whole group, and we see who's more accurate, and then look at what the experts say, to get people more in touch with when their intuitions lead them astray, when the group is useful, and how to deal with false information through both bottom-up and top-down approaches. So yes, I think team leaders, and leadership in general, it's all about setting the example. You want to convey factual information as a company; it's a huge risk not to. One of the things the auditors said at that conference, which I thought was interesting, is that if a company puts out information that turns out to be false, whether about the company's history or its products, that's a huge cost to the company, a PR crisis. So having leaders think about how to incentivise and promote accurate information is going to be key, especially in a space where knowledge is becoming contested: how to phrase things, how to fact check things, and how to build internal teams that deal with the veracity of information and claims. I think that's probably going to be quite an important topic moving forward.

Jean Gomes:

And aside from the book, is there anywhere else our listeners can get information on your approach that would help them?

Sander van der Linden:

Yeah, of course. All of our interventions, the games and so on, are free. We have videos, we have games, we have educational material that people can use. Inoculation.science is the website where we house it all; it's all freely available. Inoculation.science kind of comes with the book, so if people want to learn more, that's where they can find some of the resources.

Jean Gomes:

Wonderful. What's next for you, Sander? What's your research focus right now?

Sander van der Linden:

Right now we're doing a project on deepfakes. This is a really tricky one, because in order to inoculate, you need stable strategies, whereas with deepfakes the technology is changing so fast. And if there's nothing to go on, no audio for example, how do you inoculate? That is a challenge. So what we're trying to do is expose people to more ridiculous versions of deepfakes and build a sort of internal deception monitor for deepfakery, if you will, to see if we can come up with a prebunking technique that works. We're also trying to use the influencer model for the purpose of prebunking. Maybe official organisations aren't always the most exciting institutions to hear from; maybe it's an influencer who can deliver the message best. You often see this with things like vaccine hesitancy in religious communities: sometimes a religious leader is the best person to talk to people about public health, not the CDC or the NHS. And we want to do the same with inoculation. Maybe people don't want to hear from the Defence Against the Dark Arts professor, van der Linden; maybe they want to hear from Taylor Swift, I don't know. So Taylor, if you're listening, we're open to it. Actually, she's on

Jean Gomes:

Every week. She's on every week, yeah.

Sander van der Linden:

So yeah, who's the best communicator in this sense? That's a big question. And the other thing, coming back to the volume question, is that it's very difficult to measure with limited access. So we're trying to figure out how to measure and track the volume of disinformation out there in a very technical way, how to operationalise that, as well as how to leverage AI to do good things. Can we automate prebunking? Can we automate fact checking? Those are some of the things at the top of my mind at the moment.

Scott Allender:

Sander, I could talk to you about this all day; I wish we had more time. Honestly, this is so important, so timely, so relevant and so useful. Thank you for sharing a bit of your wisdom and research with us. And folks, if you haven't already got your copy of Foolproof, do yourself a favour and order it today. Thank you again; we so appreciate it. And until next time, folks, remember: the world is evolving. Are you?
