In 2005, when the Iraq War was well under way, a couple of political scientists looked at how people responded when they were told that Iraq did not have weapons of mass destruction. It had been a key rationale for going to war in the first place—but it wasn’t true. The results of their study indicated that conservatives who encountered the fact-check dug in their heels and became more likely to believe the false information. This “backfire effect” sent shock waves through the world of political science and through the media, which heavily relies on fact-checks.
But in the following years, re-creating this finding proved to be difficult, and researchers tried to dissect when fact-checks work and when they backfire. In recent years, I’ve observed a lot of cynicism about the effectiveness of fact-checks in the face of misinformation and lies. Democracy requires an independent press to distinguish truths from falsehoods. But even in the face of a multitude of fact-checks, wrong ideas about vaccine effectiveness or lies about Haitians in Ohio eating cats and dogs spread, raising questions about whether the media are up to the job.
On today’s episode of Good on Paper, I’m joined by the Columbia University political scientist Yamil Velez, whose research gives us hope. Changing minds is possible. And the backfire effect? Well, that shows up under very specific conditions: when the fact-check is rude.
“We basically instructed GPT-3 to construct vitriolic arguments that were aggressive attacks on people’s issue positions,” Velez explains. “And that’s where we ended up observing something that I think credibly shows attitude polarization. We were able to replicate that finding. And it really did seem to be the valence, because we conducted a follow-up study where we presented people with the kind of anodyne arguments that we had used in the past, and we weren’t able to replicate that polarization pattern.”
The following is a transcript of the episode:
Jerusalem Demsas: Fact-checks are a big part of election season. What’s worth fact-checking, what’s the best way to fact-check—these are editorial decisions that media organizations encounter all the time.
In 2010, a political-science paper came out that made people worry. It suggested fact-checks might actually make people dig in their heels. What if telling people they’re wrong makes them double down rather than change their minds? For years after these findings rocked the world of political science and media, other researchers tried to replicate them with little success.
My name’s Jerusalem Demsas. I’m a staff writer here at The Atlantic, and this is Good on Paper, a policy show that questions what we really know about the world.
One strain of conventional wisdom seems to treat fact-checks as either worthless or worse than that: actively harmful. My guest today offers some hope—and some evidence—that even in polarized times, people are open to new ideas. Yamil Velez is a political scientist at Columbia University. And in a new paper published in the American Political Science Review, he and his co-author, Patrick Liu, find that persuasion is possible, even on deeply held beliefs.
Yamil, welcome to the show.
Yamil Velez: Hi, Jerusalem. Nice to chat with you today.
Demsas: I want to start our conversation back in 2010, when political scientists Brendan Nyhan and Jason Reifler publish “When Corrections Fail.” So they run four experiments where participants read mock newspaper articles with a statement from a political figure that reinforces a widespread misperception. They are then randomly assigned to read articles that include or don’t include corrective information, like a fact-check, right after that statement. And they focus at the time on hot-button issues like the war in Iraq, tax cuts, stem-cell research. They are looking to see what happens when these fact-checks actually are presented to people. So can you tell us a little bit about what they find?
Velez: Yeah. So I think this paper had some pretty significant ripple effects across the discipline and across, I would say, fact-checking more broadly. The, I think, most worrisome conclusion from the paper is that if you try to correct people—especially those who have very strong beliefs about a given topic—they actually end up doubling down and reinforcing those beliefs. And so factual corrections—what a lot of the news organizations do—can backfire and actually strengthen convictions, as opposed to leading people toward the evidence.
Demsas: So maybe the best-known part of the experiment had to do with whether Iraq had WMDs—weapons of mass destruction. This was a key rationale for the Bush administration’s war in Iraq, and it turned out to be false. Iraq had no WMDs. And it’s worth mentioning that they’re conducting this experiment while the war is going on, so it’s salient. It’s a pressing political issue.
They test out a fact-check on this WMD claim, and what they find are these heterogeneous effects: People who are conservative—they’re right of center—they get the fact-check, and they become more likely to believe that Iraq had WMDs. So the correction backfired on them. But everyone else either updates in the correct way or doesn’t really change their views. What is that? What do you take from that?
Velez: So in that particular experiment, I think that—given the salience of weapons of mass destruction in, I would say, the early 2000s to the mid-2000s—the concern there was that, even though there was pretty significant, compelling evidence that weapons of mass destruction were not discovered, there would be people who nonetheless end up strengthening their convictions, their initial beliefs about the facts of the matter, when confronted with some pretty significant, compelling evidence.
I would say that one of the most important aspects of that study was that this was not uniform, right? Some of this depended on the strength of people’s preexisting convictions. And so the conclusion to be drawn isn’t necessarily one that, you know, fact-checks don’t work, right? It’s more that there may be people who possess very strong opinions on a given issue who, when confronted with factual correction, might then actually double down.
Demsas: Yeah. I mean, you mentioned earlier that there were ripple effects across the discipline and in the media, as well. Can you talk to us about that? What happened when this paper came out?
Velez: So there was an earlier piece by Chuck Taber and Milton Lodge on motivated skepticism that was published in the American Journal of Political Science that basically tried to make the case that when people confront evidence, they’re often processing that evidence in light of their partisanship—in some cases their worldview, their values, their preexisting beliefs. And so the idea that people actually faithfully integrate and interpret evidence was something that was, I think, at least called into question directly in that piece.
And I think as we started seeing increasing polarization in the U.S., I would say political scientists started reaching for this idea of motivated reasoning. People often process information selectively, seeking out information that supports their preexisting views, and they also interpret information in ways that serve certain goals, which might be, you know, feeling good about their identity or their party or ideology.
So I would say that it was a kind of further confirmation of an argument that Taber and Lodge had introduced in the mid-2000s. But it, I think, had bigger effects beyond academia. There were a lot of concerns among fact-checkers that if they corrected certain pieces of misinformation the wrong way, their efforts would actually backfire.
And I think this also rolls into some discussions within fact-checking organizations about this idea of strategic silence, where the theory is that there are certain topics you actually shouldn’t cover, give attention to, or amplify, because you might actually be making the problem worse.
Demsas: Yeah. I’ve worked at a couple news organizations now, and it’s clear that there’s a lot of attention paid to what the research is saying on how to counter misinformation and how to make sure that you’re able to actually respond to that in a way people will hear.
But I want to narrow in on the contours of the debate here because when you first hear about this, people might say, Oh, it’s so bad, right? Someone hears a piece of information that contradicts their previously held worldview, and now they refuse to update towards the correct information—like, What a terrible, illogical state of the world that we live in.
But then you think maybe one step further, and you’re like, Isn’t that just good to think that way? If you have a worldview that’s built on years and years of evidence or beliefs or personality and other ideological foundations, and then, you know, someone comes to you with a new piece of information, I would think it would be kind of a weird way for the human brain to work that you would just swing wildly in a new direction.
Velez: Yeah. I think in light of some of the findings about motivated reasoning, there were these broader discussions about rationality. So are people actually logical and faithfully reasoning and taking the evidence, integrating it, and reaching a conclusion? And I think there’s a challenge of maybe conflating motivated reasoning with irrationality—that one way of framing the findings is to say, Yeah. You’ve spent, let’s say, an entire lifetime developing your worldview or your partisanship. And there’s a piece of information that you encounter in a survey. Why should we necessarily expect an individual to accept it outright without any kind of skepticism?
And I think what the initial Taber-and-Lodge piece argued was that you could have arguments that have, let’s say, similar argument structures that maybe rely on the same quality of evidence, and people are still selectively processing information, and the hope would be, at least, that they’re integrating some of that information into the downstream consequences that they’re reaching, but that doesn’t appear to happen.
Demsas: Yeah. We want everyone to be a good Bayesian, but that’s not always how it works.
But I guess also to help define the contours of the debate, then: It feels like there are different sorts of claims that can be fact-checked, right? Like, there’s these types of claims that are like, Did Barack Obama have a fake birth certificate? And was he actually born in the United States? And that’s a fact claim where we could say, with reasonable certainty, There is a true answer to this question.
So are we specifically just talking about the sorts of fact claims that are, you know: Does Iraq have weapons of mass destruction? Was Barack Obama born in the United States? Or are we talking about arguments that often have a moral or ideological valence that are difficult to actually say are true or not?
Velez: I would say that this is a distinction that comes up in psychology and political science—it’s one between beliefs and attitudes, right? Beliefs tend to be what someone considers to be true or false, or a probability that they assign to a current state of the world or a purported state of the world, whereas attitudes tend to be evaluations that have a target, right?
And the idea here is basically a belief that Obama was born, let’s say, outside of the United States and therefore was illegitimate, right? That is a claim about a state of the world, and that falls more into the territory of beliefs, whereas for attitudes, these are evaluations, so: Do Republicans have a better handle on the economy? Are Democrats—do they have the best approach to addressing border security? for instance.
And so I would say the world of fact-checking appears to be one that’s mostly concentrated on beliefs, not necessarily attitudes. So the idea is taking discrete events that are claimed to be true, basically dissecting whether that’s the case or not, and, if something is verifiably false, making that clear to readers and drawing attention to why that’s the case.
Now, yeah, we can, I think, get into the distinction between beliefs and attitudes later, as we discuss some of the other work. I think that will help clarify one of the, I think, tensions in the literature, which is that: With a lot of these studies examining the impact of fact-checks, you often find movement in beliefs. So you find that people are now more likely to believe what might be true about a particular state of the world, but downstream consequences on attitudes—things like voting behavior or, in the case of COVID, vaccination decisions—that connection is much less strong.
Demsas: Well, let’s dive into that then, right? Because in recent years, as you foreshadowed for us, political scientists, including one of the original authors of that paper, Brendan Nyhan, have found it difficult to replicate the finding that when people are presented with a fact-check, they backfire and double down on their preexisting beliefs. What’s going on there? Why is it so difficult to replicate this finding?
Velez: I would say, in light of the Nyhan and Reifler findings, there was some question about whether this was a generalizable process—whether it was a common reaction, whether people who held strong beliefs about a particular issue or claim indeed did backfire when confronted with factual evidence.
And I think this is where the work of Thomas Wood and Ethan Porter comes in. Their idea was to take this design that Nyhan and Reifler introduced in 2010—where you’re exposing people to these factual corrections in the randomized experiment and assessing beliefs afterwards—and assess this across a variety of claims. And so in their piece, I believe it reached 50 claims that they fact-checked, and they found very little evidence of this backfire phenomenon, even among people who had strong beliefs.
And so the way I read Nyhan and Reifler, at least in light of the work of Wood and Porter now, is that this may have been some kind of exceptional circumstance, where we observed backfire with respect to this factual claim but that, on average, people actually do move in the direction of evidence when presented with factual correction.
Demsas: I think it’s also: As someone who’s not a researcher, when you think about these problems, you think about your own life. You can imagine people who you present with evidence, and they are really intransigent—they double down. They don’t really care what you’re telling them. And that will stick in your head much more than the average person, who may just go, Okay, yeah. Sure. Whatever.
That doesn’t really strike you as interesting, and those people don’t get write-ups in The New York Times as voters of note, or anything like that. And so I think, in many ways, part of the problem is just that the average effect is so normal that it’s not notable to individuals who are trying to think about whether there’s some problem with how fact-checks are working.
But also, just taking a step back, I was part of the media when a lot of the Nyhan stuff was becoming—a lot of this idea around disinformation was becoming—normalized. And I bought it, to a large extent, so I’m not trying to pretend I was like, I knew it wouldn’t replicate.
But I think one of the things that makes it interesting to think about in hindsight is that if people were to just regularly be reacting negatively to arguments that contradicted their strongly held beliefs, you would almost never see shifts in the general population on strong beliefs. But you do see people changing their minds on strong beliefs all the time, right? You see on gay marriage, on issues like immigration—you see large-scale shifts.
Do you feel like we should have always been skeptical about this narrative?
Velez: Yeah. Hindsight is 20/20. I think with some of the rampant political polarization that we see in America, there is a kind of resonance. That argument resonates with us, I think, in part, because I’m sure, like you said, we can probably all recall that moment in time when we had a heated argument with a family member or friend, and neither of us budged, right?
But I think, yes, the idea that people, when confronted with persuasive arguments or, let’s say, compelling evidence, that they move a smidge in the direction toward accuracy—yeah, that’s not getting any write-ups in The New York Times, but that might actually be much closer to what ends up happening.
Demsas: So now I want to talk about your paper because it attempts to sort of synthesize a lot of this debate. You and your co-author are looking at this debate. You’re trying to figure out how to reconcile the fact that a backfire effect is sometimes observed but largely isn’t. What do you guys find?
Velez: Yeah. I guess I want to speak a little bit to the motivation behind it.
Demsas: Sure.
Velez: The reason why we worked on that project was because I actually had partnered with Thomas Wood and Ethan Porter on a variety of studies assessing the effects of fact-checks on beliefs about COVID-19 and vaccines as they were being rolled out.
And initially, I thought that there was going to be a lot of heterogeneity based on things like partisanship, ideology, even things like vaccine skepticism. And what I found was that you would expose people to these factual corrections and, generally speaking, there was very little heterogeneity across groups. And what that means is that people were generally moving in the direction of the fact-check when it came to vaccine safety, the particular claims that were being made, let’s say, online about deaths being attributed to the vaccine, and other vaccine-related or COVID-19-related claims.
And so I was really struck by that finding. But I was a little dissatisfied, in the sense that we were focusing on these fact-checks and, in many cases, these fact-checks were targeting, I think, beliefs that were probably held by a very small number of people. And what I was curious about was if we actually targeted people who held extremely strong beliefs about a given issue, whether this kind of process would play out as well. Because, you know, someone moving in response to a belief that they weakly hold or have never really given much thought to—it’s reasonable to assume that people are going to be willing to just give up the fight if they don’t really care that much about a given issue area.
What we wanted to do was basically assess: Does this backfire phenomenon depend on how much you care about an issue? Can we actually focus on people who have really strong beliefs about a topic—can we see how they respond to this corrective information?
Demsas: And so what did you find?
Velez: So we ran an initial set of studies basically replicating Taber and Lodge.
So the first couple of studies in our paper tried to stick pretty closely to the design described in Taber and Lodge. And the idea was to basically ask people to write about their most important issue and use generative AI to generate counterarguments that directly addressed the issue position.
What we found in the first couple of studies that were modeled after Taber and Lodge was that, generally speaking, when people were confronted with counter-attitudinal evidence, there was a decrease in certainty. And in some cases, we saw decreases in attitude strength. It wasn’t consistent, but our takeaway from the first couple of studies was just that we were not detecting backfire, despite actually targeting issues that people cared about.
We conducted a third study where it was just a basic kind of persuasion design, where you just present people with a block of text trying to persuade them about a given issue. In this case, it was tailored again. So if someone wrote, you know, that they were, let’s say, pro-abortion, they may have been randomly assigned to an anti-abortion message.
And in that study, we actually found pretty significant evidence of moderation. So people’s attitude strength dropped in response to the counterargument, which, again, seems to contradict this idea that we’re such strong, motivated reasoners that we’re incapable of ever accepting any counterarguments, especially on issues that are important to us.
Demsas: Walk me through a little bit about how this works. So you recruit these participants, and do they know they’re talking to GPT-3?
Velez: In the consent form, they are told that they may be interacting with an artificial intelligence.
Demsas: Okay. But do you think they’re aware that’s what’s going on? Or is it you feel like they think they’re having a conversation with a person?
Velez: The design itself was that they were filling out a survey. And in a lot of these studies, especially fact-checking and persuasion studies, the way that the treatment is administered is really just a block of text. It’ll say something like, Please read this excerpt, right? And then there’s a block of text. So that’s how it appeared. It wasn’t in a kind of chatbot interface.
Demsas: Gotcha.
Velez: Instead, what we did was: We used the OpenAI API to basically fill in the text that people saw on the survey.
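For readers curious what that step might look like in code, here is a minimal sketch of the general idea: feed a participant’s stated position to a language model and drop the reply into a survey as plain text. It assumes the current openai Python client and a stand-in model name; the prompt wording and the function shown are illustrative, not the study’s actual materials (the paper used GPT-3).

```python
# Hypothetical sketch of the tailoring step described above: the participant's
# own issue statement is sent to a language model, and the model's reply is
# inserted into the survey as an ordinary block of text. Model name, prompt
# wording, and function names are illustrative assumptions, not the study's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_counterargument(participant_position: str) -> str:
    """Return a civil, tailored counterargument to the participant's stated position."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the original study used GPT-3
        messages=[
            {
                "role": "system",
                "content": (
                    "You write civil, evidence-oriented counterarguments to a "
                    "person's stated political position, without attacking the person."
                ),
            },
            {"role": "user", "content": f"My position: {participant_position}"},
        ],
        max_tokens=300,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_counterargument("Public universities should be tuition free."))
```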
Demsas: Gotcha. So the first thing they get is a question about, like, What’s something you care about a lot? What sorts of things do people bring up? Like, what kinds of things do they really care about?
Velez: Universal health care came up a lot across the studies. We were really surprised by that. But it’s an issue that there hasn’t been much movement on, and you can imagine there are maybe what political scientists call issue publics—these groups of people who have very intense issue preferences.
And so universal health care came up a lot—abortion, immigration, as well as improving the quality of education. And I would say those were really some of the most common issues, along with gun rights. I think what you would expect to see in terms of, let’s say, the Gallup most-important-problems question. But I guess what’s unique about our design is that, instead of giving everybody a gun-rights argument or abortion argument, we were able to actually focus on people who cared deeply about one of those issue topics.
Demsas: Once you have that, then they get served with arguments that are oppositional to them. And one of your findings is that the only time you’re able to really see attitude polarization is when arguments target deeply held positions and cross into incivility and vitriol. How did you observe that?
Velez: So it’s an interesting story, and it tells you a little bit about the review process.
So we submitted the first three studies to the American Political Science Review, and then we received our comments from reviewers that, I think, were leaning into this idea that: Okay, sure. You’ve found evidence of moderation, or at least you’re not able to detect attitude polarization, but could it be because your intervention isn’t strong enough? That, If the intervention was stronger, we would indeed be able to find attitude polarization. And my co-author, Patrick Liu, and I—we thought it was a fair critique, right? Because anybody who’s interacted with ChatGPT knows that there is an overly polite kind of approach to argumentation that it uses.
And so what we could be observing, in some ways, is that, Yeah, sure. People are not really motivated to fight an argument that seems, you know, fairly logical and anodyne, right? But if it crosses over into incivility, which we often see online, and then sometimes people, you know, experience this in their personal lives, maybe that’s where backfire might actually operate.
And so as a response to the reviewers, we basically instructed GPT-3 to construct vitriolic arguments that were aggressive attacks on people’s issue positions. And that’s where we ended up observing something that I think credibly shows attitude polarization. We were able to replicate that finding. And it really did seem to be the valence, because we conducted a follow-up study where we presented people with the kind of anodyne arguments that we had used in the past, and we weren’t able to replicate that polarization pattern.
Demsas: So just to make sure I have that—after you’d done this experiment and submitted the experiments for review, people thought that the chatbot needed to be meaner in order to get a backfire effect. So you went back and tried to do that?
Velez: Yeah, that’s right.
Demsas: When I first read this—well, first I thought it was funny, the idea that you just told ChatGPT to abuse your participants. (Laughs.) But then I was looking at the vitriolic statements that you actually use, and you had an example in the paper, and I want to read it for folks, because this actually seems pretty tame to me.
I guess the participant had brought up that they believe that public universities should be tuition free, and the vitriolic statement is: “It is absolutely absurd to suggest that public universities should be tuition free. Are you suggesting that the hardworking taxpayers should foot the bill for every student’s education, regardless of their academic abilities or financial situation? This is not only unfair, but it goes against the very principles of a merit-based society. Education is a privilege, not a right, and it should be earned through hard work and dedication. By making public universities tuition free, you are essentially devaluing the hard work and sacrifices of those who have paid for their education. Furthermore, this proposal would only lead to a decrease in the quality of education, as universities would not have the necessary funds to provide top-notch resources and facilities. It is time to stop expecting handouts and start taking responsibility for our own education.”
Now, if someone said that to me, to my face, when I was having a conversation with them, I’d think they were kind of rude, you know what I mean? But if I was observing the average rude Twitter reply, this is probably the nicest thing I could imagine someone saying. So were you trying to keep it within the bounds of reasonability?
Velez: Yes.
Demsas: You don’t want to be too rude to people. But obviously, I understand not wanting to attack people’s identities or anything like that, but this isn’t even one where you’re calling them stupid. So would you expect to see larger effects otherwise?
Velez: Yeah. This is one of those tricky things about experimental design, right—you want to keep things as similar as possible while only changing one feature. And so we tried to keep it from directly attacking the person. We wanted the arguments to directly attack whatever the issue position might be.
And so I guess what, you know, the model interprets that as is: Maybe include more moral language, more language that devalues the other position. And so, yeah, I would say maybe it is nicer than a Twitter troll, but there is still some kind of moral content in there. It’s trying to say that this position might be immoral or unethical because X, Y, and Z. And so I would say that that’s distinct from a fact-check that says, Well, here’s why we should not consider taking a different policy stance. Here’s evidence X, Y, and Z. I’d say, like, those are distinct.
But maybe part of what increased the intensity of the treatment here was the idea that there was kind of some moralization that was going on. And it was really common to actually see this idea of: It’s completely absurd. That was a very common way that the model interpreted this instruction to generate vitriolic content.
Demsas: Would your expectation be that the ruder it was, the larger effect that you’d expect to find?
Velez: Yeah. It’s possible. I don’t think we would be able to do it with the existing proprietary models. And I think there maybe are ethical concerns, right? If you really ramp up vitriol to a degree that could be harmful to participants, I wouldn’t recommend it. So part of me is like, We’re operating within the bounds, obviously, of ethical expectations as academics, and also, We don’t want to violate terms of service with some of these organizations.
[Music]
Demsas: All right. Time for a quick break. More with Yamil when we get back.
[Break]
Demsas: I wonder how this works in the real world. Because usually when you’re coming across multiple different quote-unquote “fact-checks,” there are the classic ones, where you might read one in an article and see a fact-check of a candidate. You might see a friend of yours say, Oh, actually, this thing you posted on your Instagram story isn’t true. But also, it’s often just a bunch of things interacting at the same time. So you’ll hear a politician say something, another politician say something else, a media organization say something, siding maybe with one of the politicians.
This ends up being like everyone’s fact-checking all the time in multiple directions. So do we have evidence of how this actually operates in the real world? Are there messengers that are more credible than others? How are people actually changing their minds, on net, given they’re getting multiple different fact-checks from different angles?
Velez: It’s a great question of how we translate these survey experiments into the real world. And I think the biggest challenge with that is actually being able to track media consumption, which is really challenging over time. So are you asking about whether there’s a study that allows us to get at how people respond to different pieces of information over the course of a campaign?
Demsas: Yes, exactly. So if you’re hearing on immigration from, you know, Kamala Harris, that she’s really cracking down on the border. And then you’re hearing from Donald Trump that no, Kamala Harris actually is responsible for the border problem. And then The New York Times is like, Well, there’s some truth to both of these things. You know what I mean?
How does this actually work in the real world when fact claims are being bandied about within a political context, and there’s a bunch of different actors making different sorts of arguments?
Velez: Yeah. The challenge of studying that is really isolating how a piece of information differs from the media environment that people are self-selecting into.
And so experimental designs, I think, are the best we can get in terms of assessing these effects for an individual piece of information. Now, I would say that we often tend to reach for the simpler design, where we’re just like, What’s the effect of a fact-check? What’s the effect of misinformation? Perhaps one way of trying to get at these aggregate effects is to expose people to multiple messages.
And so this is actually something that we started seeing in the framing literature in psychology and political science, where scholars like Jamie Druckman at Northwestern—I think he’s at Rochester now—would expose people to different frames. A lot of the framing literature would just say, Okay, there is a campus protest. You can either present it in terms of security or free speech.
And one of the, I think, main limitations with that design is that, obviously, when this is playing out in the real world, people are exposed to different frames, right? And so the work of Druckman and Chong, their contribution, I would say, is to identify how those frames operate with respect to things like the different messages that people are receiving in the context of a campaign.
Maybe that’s what we need to start doing when we’re studying persuasion: start thinking about these bundles of arguments that people might see. So that’s an interesting research direction, but I think it’s very rare in political psychology.
Demsas: So I guess where we are right now feels like, on average, you should expect fact-checks to move people in the direction of the fact-check, but there are going to be people who are really dug in. You might occasionally observe these backfire effects. And if the fact-check is particularly rude or denigrating to people’s existing worldview, you’re more likely to observe this backfire.
But my question then is just about durability, right? Because you’re observing people in the context of really short time frames in a lot of these studies. But usually what’s relevant from a small-d democratic perspective is: Is this fact-check actually durably changing someone’s mind in the long term, or two weeks later, have they completely forgotten this interaction, and they are now susceptible to misinformation yet again?
I would like to believe that everyone reads my articles and takes them to the grave, but I don’t think that’s what’s happening. So what would we actually see?
Velez: Yeah. That’s a great question. And I think that’s one of the reasons why this literature has started moving in the direction of actually assessing the persistence of these effects.
So in my joint work with Ethan Porter and Thomas Wood on COVID-19-vaccine-related fact-checks, we conducted a persistence analysis where we basically measured people’s beliefs, you know, weeks after the study. And we found that there was still, you know, some detectable correction effect. It was about half the size, but it seemed to suggest that these weren’t just momentary considerations that were being changed, but instead there was something more durable going on.
I have not done it with respect to a design where I’m exposing people to these vitriolic arguments, but I would say, given what we know from psychology about negativity bias and I think some of our discussions we had earlier about how you do remember these very heated moments, my suspicion is that there’s a possibility that maybe these negative effects do persist to a degree that you might not see when we’re focused on something more anodyne, like a fact-check.
That being said, there is a really interesting study by David Rand, Gordon Pennycook, and Tom Costello on conspiracy theories, where they are using a bot to fact-check. And they actually find huge effects in the context of the study. And then weeks later, they were actually still able to detect that people were less likely to believe a conspiracy theory after being fact-checked by a bot. So maybe part of it is not just negativity but also maybe the engagement and how people are interacting with the survey.
It’s an initial study, so we should see if it replicates, but I’m convinced that we’re not just getting people to click a button in the context of a survey, and they’re forgetting about us. I think some of the interventions that we’re testing do have durable effects, even if they start dissipating over time.
Demsas: So this is a very positive message for democracy if people are actually updating with fact-checks. But then my question is: Why is it that misinformation persists then, right? Because not even just, you know—obviously there’s always something new. There’s always a new thing to focus on. And I get why that would be something that you would have to constantly fight as media and fact-finding organizations.
But when it comes to old conspiracy theories or old disproven beliefs, you still find people who will believe—large parts of the population will still believe things that have been repeatedly fact-checked. I mean, we saw this, I think, with the vaccines, and that created a bunch of problems. And we’ve talked on the show before about issues with how public health was doing messaging. But why is it that these wrong ideas persist if these fact-checks are so effective?
Velez: Yeah. That’s a great question. One angle here—in terms of trying to understand why we still see, let’s say, significant amounts of polarization in beliefs about discrete events or, you know, when it comes to conspiracies, you know—my explanation is: If people have access and are seeing these fact-checks, we might observe some of these positive effects that we identify in these studies. But because of self-selection into different media environments, it’s rare that you’re going to get a fact-check that actually might target some misconception you have about politics.
Now that’s one explanation, right? If we could, you know, somehow change people’s media diets and get them to consume more corrective information, then we would observe what we find in these studies.
Demsas: So lots of people are not even seeing the fact-check ever?
Velez: Yeah.
Demsas: Okay.
Velez: Yeah. That’s one angle, but another is that sometimes beliefs aren’t a function of evidence, in the sense that what we’re doing is kind of modeling what other people are doing, or perhaps it’s something that’s really important to our social network, our peers, our family members. And so the reason why we might have certain misperceptions or misconceptions isn’t that we don’t have access to high-quality information but, instead, that if we were to believe anything else, we would experience social costs.
And I think that’s something—as hopeful as the work on corrective information might be—I think that’s something that is crucially understudied, which is that, you know, as you discussed, with some of these communities of vaccine skeptics, it might not be solely a function of the information or evidence they’re consuming but instead the fact that they’ve built communities around certain kinds of beliefs, and those beliefs then become difficult, you know, to shake.
Demsas: So one thing that I considered was whether a lot of this is just cheap talk. Like, are people saying these things in a survey? And I can imagine if someone said something that challenged a core belief of mine, like, in the moment, I might be frustrated to be like, You know what, I believe this even harder, and then later I’ll just think more about it, and it might affect how I view the world. But if, in the moment, I was asked about a fact claim and told, You’re going to lose $1,000 if you get this wrong, I would care a lot more about getting that right than in a survey where I’m just, like, maybe doing expressive beliefs.
Nyhan has a recent paper where he’s talking more about potential reasons why respondents are behaving this way, and he says it’s possible that they’re providing answers that they would like to be true, or maybe they’re even trolling. And he points to [the fact] that there’s a common approach of paying people to make sure they’re not just partisan cheerleading.
So we saw this with the switchover between the Trump and Biden administrations. You know, Republicans, who had just said the economy is fantastic—the second Biden’s in office, all of a sudden, the economy is terrible. And you see similar effects on the Democratic side, and I don’t think those people are, you know, like, stupid. They’re not just completely changing their minds about the economy. They’re saying something else in the survey, or they’re telling the surveyor, I’m a Republican, and I believe that Republicans are better at managing the economy than Democrats are. So if you call me, and Biden is president, I’m going to tell you the economy is bad, because I’m answering a different question.
So a lot of this seems to be a question of: Are survey responders actually telling the fundamental truth that they really believe? And I wonder how viable you find that to be.
Velez: Yeah. I think that’s an ever-present concern with any lab or online study—the idea of demand effects, where people are just providing answers that they think the researcher will agree with or that support the researcher’s hypothesis, if they correctly guess it. I’m not as worried about that concern, given the limited heterogeneity that tends to appear in a lot of these persuasion studies and these factual-correction studies.
You would think, for instance, if people were negatively disposed toward the researcher, that there would be maybe either weaker effects or, in some cases, if people are trolling, effects that go in the opposite direction. And looking across a variety of subgroups based on partisanship, ideology, vaccine skepticism, when I’ve done this work on corrective information, I found very little variation that, to me, would raise a red flag about demand effects.
For me, I think the biggest concern is whether these beliefs are real, in the sense that: Are these just people’s hunches about a variety of political claims that maybe they’ve seen online, or maybe they’re just, you know, Yeah, this could be true, but I’m not going to, you know, bet money on it? Are these concerns that are not really tied to people’s core beliefs or values?
And so when we do this fact-checking, yeah, we can push people toward accuracy, but we’re not actually having any ultimate effect on how they view politics, how they view candidates, or their desire to vote. That, for me, is the bigger concern. It’s not so much demand effects but rather that we might be targeting the wrong kinds of beliefs, in the sense that we’re learning about beliefs that are flimsy to begin with, as opposed to beliefs that are actually more politically consequential.
Demsas: Well, I guess, on that, it’s not clear to me if it’s—I have my own political beliefs, right? And I would like more people to agree with me. And that’s why I write articles here and do podcasts. But, at the same time, if someone says—I’ve taken the vaccine example, for instance, right? You see also with vaccine skepticism, it usually tracks with people who are at lower risk of actually dying from COVID, right? So younger people are much more likely to have not taken the vaccine. And that is, in many ways, actually quite rational, right? They are the people who are not at risk of dying or long-term damage, relative to the older members of the population who very much were at many points.
And so in correcting that information, maybe we don’t change their behavior, because their actual point is that they just have higher risk tolerances and don’t want to take the small costs of being kind of sick for a day, or they don’t like the pain, or they’re scared of needles, or whatever it is. And so even with good fact-checks, you may not see large shifts in political behavior, but that doesn’t have to be bad, right? That could just be: These people are just different than us.
Velez: Yeah. That’s right. And I think that’s a point I’ve made in work with Tom Wood and Ethan Porter on fact-checking, that it moves beliefs, and sometimes we observe attitudinal effects, but that’s very rare. But we still would probably prefer to live in a society where people believe true things, right? Even if it doesn’t have any downstream consequences on their partisan identification or vote choice, I would say, normatively speaking, we still probably don’t want people to go down unnecessary rabbit holes and invest energy and time into things that are objectively false, right?
So even if it doesn’t have these huge effects, let’s say, on how people vote or their medical decisions or other life decisions, we still think it’s normatively good for people to score higher on factual accuracy across the board, right? And I think, again, going back—one of the clearest cases of the limits of fact-checking is thinking about the decision for folks to go through with the vaccine, right? We observed in virtually all of our studies—we conducted studies across the globe assessing the effects of fact-checks—and while we found, again, increases in belief accuracy, there was very little effect on intent to vaccinate.
And the reason why, at least my suspicion is: Those pieces of misinformation—for instance, mRNA vaccines modifying your DNA, or whatever was floating around at the time—may not have been the reason why people were actually skeptical of vaccines or not inclined to vaccinate. It may have been deeper beliefs about whether they trust the government or trust pharmaceutical companies. And those are not the beliefs that fact-checks might directly address, right? And so that’s the way I’ve tried to reconcile the limits of fact-checking.
On the normative side of things, I think, again, we probably would prefer to live in a society where people believe true things. And then, in terms of the more practical question, I would say that fact-checks have a space in society. They’re important, but also, we can’t expect them to have these huge effects on how people behave in society, because many of the behaviors that we engage in are not solely a function of something we just read online last week. It’s not a solution to all of the ills that are afflicting democracy, but it moves us toward, maybe, a society that we might want to live in.
Demsas: Well, our last question is always the same: What is a belief that you once held that turned out to only be good on paper?
Velez: That’s a great question.
Yeah. Okay. So actually, I just came back from a semester at Sciences Po, in Paris. In the past, I had experienced it during the beautiful summer months. And we were there during probably one of the greatest winters ever. And so I guess I came to learn that Paris was just as gray as some of the Northern European countries you always hear about.
Demsas: It’s actually funny. I went to Italy in July, and I just—I don’t know—I feel like a lot of people do not check the weather before they’re going to a place. And it was extremely—it was, I don’t know. It was extremely hot.
Velez: But that’s a—no. There has to be something better. Sorry. I don’t know why I’m struggling with this.
Demsas: No. It can be hard.
Velez: There’s clearly times that I’ve changed my mind.
Demsas: Well, I should hope so. (Laughs.) Your entire research base is kind of—
Velez: Exactly. (Laughs.)
[Music]
Demsas: After the show, Yamil wrote to us with his actual “good on paper” answer, which I think New Yorkers will find very relatable.
He told us that he “expected having a car in New York would offer the freedom for weekend trips and easy visits to the boroughs but quickly encountered the reality of alternate-side parking.”
He added: “I no longer have a car.”
Good on Paper is produced by Jinae West. It was edited by Dave Shaw, fact-checked by Ena Alvarado, and engineered by Erica Huang. Our theme music is composed by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.
And hey, if you like what you’re hearing, please leave us a rating and review on Apple Podcasts.
I’m Jerusalem Demsas, and we’ll see you next week.