Prebunked

Cognitive Audit · Classification: Documented


In 1961, a social psychologist proposed that you could vaccinate people against ideas the same way you vaccinate them against diseases. In 2018, Cambridge researchers turned the theory into a game funded by the UK Foreign Office. In 2022, Google deployed it as ads on YouTube, targeting tens of millions of Europeans. In 2025, a study in PNAS Nexus found the technique had limited effectiveness in real-world conditions. By then, it had already been scaled to populations.

Dispatch filed by TFRi · Permanent record

The Word

Prebunking is not a word that evolved naturally. It was coined. The people who coined it can be identified. The institutions that funded its development can be named. The companies that deployed it at scale are publicly traded. The government agencies that partnered on it have websites. This is not a reconstruction from declassified archives. It is happening now, in the open, with press releases.

The word means: exposing someone to a weakened version of an argument before they encounter the full version, so that when they do encounter it, they are already conditioned to reject it. The originators describe this as building “mental antibodies.” The medical metaphor is not decorative. It is the entire framework. Vaccine. Dose. Immunity. Inoculation. Antibody. Booster. The language of public health, applied to public thought.

The question the word does not answer, and the question this dispatch exists to ask, is: who decides which ideas require inoculation?

The Theory

The intellectual origin is precise and documented. In 1961, social psychologist William McGuire published the first papers on what he called “inoculation theory.” His context was the Cold War. After the Korean War ended in 1953, twenty-one American prisoners of war chose to move to Communist China rather than return home. The American public was stunned. The explanation offered was “brainwashing,” a term that had been coined only a few years earlier. The military wanted to know: could soldiers be psychologically fortified against enemy persuasion before capture?

McGuire’s answer, developed at Yale and published in the Journal of Abnormal and Social Psychology, proposed that the process of resistance to persuasion could be modeled on the process of resistance to disease. Just as a biological vaccine introduces a weakened pathogen so the immune system can develop antibodies, a psychological vaccine would introduce a weakened persuasive argument so the mind could develop counterarguments. The key insight was that the person needed to be exposed to the attack, in diluted form, before encountering it at full strength. Passive reassurance was not enough. The mind had to practice fighting.

McGuire’s work was rigorous and narrowly scoped. He tested whether people could be made more resistant to challenges against “cultural truisms,” widely shared beliefs like “it is good to brush your teeth after every meal.” He found that pre-exposure to weak counterarguments, followed by refutation of those arguments, made people more resistant to stronger counterarguments later. The theory was published. It was cited. And then, for roughly fifty years, it stayed in the textbooks.

It was never tested on misinformation, propaganda, or conspiracy theories during McGuire’s lifetime. The application that would make it famous came later, from different people, with different funders, at a very different scale.

TFRi Note · The Brainwashing Origin

The theory designed to protect people from having their beliefs manipulated was created because the U.S. military believed its soldiers’ beliefs had been manipulated. The origin of inoculation theory is itself a response to a conspiracy theory about brainwashing, one that the CIA took seriously enough to launch MKUltra over. The field that now proposes to inoculate the public against conspiracy theories was born from an institutional panic about a conspiracy. This is not an irony the field discusses.

The Revival

In 2017, Sander van der Linden, a social psychologist at the University of Cambridge, published a study in Global Challenges that applied inoculation theory to climate change misinformation. Van der Linden found that if participants were warned in advance that “politically motivated groups use misleading tactics to try to convince the public that there is a lot of disagreement among scientists,” they were more resistant to a specific piece of climate misinformation. The study attracted immediate media attention. Van der Linden’s phone, as he later wrote, “started ringing non-stop.”

What followed was a rapid scaling of the concept. In 2018, van der Linden and his colleague Jon Roozenbeek developed Bad News, an online game in which players take on the role of a misinformation producer. Players learn six “manipulation techniques”: impersonation, polarization, emotional language, conspiracy theories, trolling, and discrediting. By practicing these techniques in a controlled environment, players are supposed to develop resistance to them in the wild. The game was developed in collaboration with DROG, a Dutch media platform, and the UK Foreign and Commonwealth Office.

Van der Linden and Roozenbeek coined the term “prebunking” to describe their approach. In contrast to “debunking,” which responds to misinformation after it has spread, prebunking intervenes before the person has encountered the claim. The word was new. The concept, as they acknowledged, was McGuire’s. What was new was the application: not protecting cultural truisms about dental hygiene, but protecting institutional narratives about contested political and scientific questions.

Van der Linden published a book in 2023 titled Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity. It won multiple awards. Google’s research arm was already deploying the theory at scale.

The Deployment

Google’s internal unit Jigsaw, founded in 2010 as Google Ideas with a mandate to “address threats to open societies,” became the primary vehicle for scaling prebunking to populations. Jigsaw’s head of research, Beth Goldberg, described the approach to TIME magazine in 2024: “It works like a vaccine. It helps people to gain mental defenses proactively.”

The deployments are documented. In the fall of 2022, Jigsaw ran a prebunking campaign in Poland, the Czech Republic, and Slovakia, focused on what it characterized as false claims about Ukrainian refugees. The videos were viewed 38 million times across Facebook, TikTok, YouTube, and Twitter. In 2023, Jigsaw expanded to Germany with campaigns on photo and video manipulation. In 2024, ahead of the EU parliamentary elections, Jigsaw launched its largest campaign across Belgium, France, Germany, Italy, and Poland, disseminated primarily as short ads on YouTube and Meta platforms.

The campaigns do not target specific claims. They target techniques: scapegoating, false dichotomies, emotional language, ad hominem attacks. Short animated videos, often under 90 seconds, illustrate the technique using relatable examples. Viewers are then invited to take a survey testing whether they can identify the technique. The approach is designed to be politically neutral by focusing on method rather than content.

The question of who defines which techniques count as “manipulation” and which count as “persuasion” is not addressed in the campaigns. Scapegoating, emotional language, and ad hominem attacks are tools used by every institution in the history of public communication, including the institutions funding the prebunking research. The techniques being inoculated against are not exclusive to misinformation. They are the basic mechanics of rhetoric. Aristotle cataloged them. They are taught in law schools and business schools and political campaigns. What prebunking proposes is not to teach people how rhetoric works. It proposes to teach people to associate specific rhetorical techniques with deception, so that when they encounter those techniques, they experience suspicion rather than persuasion. The question is whether the suspicion is directed equally at all sources, or primarily at sources outside the institutional consensus.

It works like a vaccine. It helps people to gain mental defenses proactively.
Beth Goldberg, Head of Research, Google Jigsaw, quoted in TIME, April 2024

The Funders

The institutional architecture of prebunking is documented in the researchers’ own disclosures and the funders’ own press materials.

Van der Linden’s Social Decision-Making Lab at Cambridge has partnered with Google Jigsaw, the UK Cabinet Office (for COVID-19 “misinformation” campaigns), the UK Foreign and Commonwealth Office (for the Bad News game’s international deployment), and the U.S. Cybersecurity and Infrastructure Security Agency (CISA), a subagency of the Department of Homeland Security.

CISA is the same agency that, during the 2020 election cycle, coordinated with social media platforms to flag and suppress content it deemed misinformation. This is documented in the files released through the Missouri v. Biden litigation and subsequent congressional investigations. The agency tasked with protecting election infrastructure partnered with the lab developing techniques to inoculate the public against unapproved narratives.

Google, which owns Jigsaw, also owns YouTube, the primary distribution platform for prebunking ads. Google is a paying customer of Wikimedia Enterprise, the commercial arm of Wikipedia. Wikipedia’s conspiracy theory article discusses prebunking approvingly. Google’s search results surface Wikipedia’s articles as the top result for most queries. The company that funds the prebunking research, owns the distribution platform, purchases the encyclopedia’s data feed, and dominates the search results that determine what most people read about any given topic is the same company. This is not hidden. It is the business model.

Van der Linden is also advising Meta, the owner of Facebook and Instagram, on incorporating prebunking into its platform operations. Meta has run prebunking campaigns targeting Black, Latino, and Asian American communities with media literacy training about COVID-19. The platforms where TINFOIL™ is banned from advertising are the platforms where prebunking campaigns are deployed. The door is open in one direction.

TFRi Note · The Pipeline

The prebunking pipeline operates as follows. Government security agencies (CISA, UK Cabinet Office, UK Foreign Office) fund academic research at Cambridge and other institutions. The research produces techniques for reducing belief in claims those agencies have classified as threats. Google operationalizes the techniques through Jigsaw and distributes them as ads on YouTube and Meta platforms. Wikipedia describes the techniques approvingly, citing the Cambridge research. Google’s search results surface Wikipedia’s description. AI systems trained on Wikipedia’s content reproduce the framing. The public encounters the output at every layer: in the ads, in the search results, in the encyclopedia, in the AI assistant’s answer. The pipeline does not suppress information. It pre-shapes the cognitive environment in which information is received. The word for this, coined by the people who built it, is inoculation.

The Evidence

Does prebunking work?

The researchers say yes, with caveats. Post-campaign surveys in the EU found that the share of individuals who could correctly identify a manipulation technique increased by up to 5 percent after viewing a prebunking video. The Bad News game showed statistically significant reductions in the perceived reliability of manipulative content across multiple languages. A 2022 study published in Science Advances by Roozenbeek, van der Linden, and colleagues, drawing on seven experiments including a YouTube field study of 22,632 participants, found that prebunking videos improved manipulation technique recognition and boosted confidence in spotting deceptive content.

The critics say the evidence is weaker than it appears, and they are also publishing in peer-reviewed journals.

In June 2025, a study published in PNAS Nexus by researchers at Cornell University and Carnegie Mellon (Wang, Phillips, Carley, Lin, and Pennycook) tested whether inoculation videos change behavior in a simulated social media feed, asking whether the recognition effect “transfers to spontaneous detection and sharing behavior in a social media context.” Across five experiments with nearly 5,000 participants, inoculation improved technique recognition when participants were directly tested, but the researchers concluded that the real-world effectiveness of prebunking “is surprisingly limited.” The technique teaches people to pass a quiz about manipulation. Whether it changes how they actually behave when scrolling through their feed is a different question, and the answer, as of mid-2025, is not encouraging.

A 2023 reanalysis by Modirrousta-Galian and Higham, published in the Journal of Experimental Psychology: General, applied receiver operating characteristic analysis to existing prebunking studies and found that “gamified inoculation interventions do not improve discrimination between true and fake news.” The interventions made people more skeptical of everything, not more accurate at distinguishing manipulation from legitimate content. The vaccine, in other words, does not teach the immune system to identify the pathogen. It teaches the immune system to attack indiscriminately.
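The distinction the reanalysis draws, better discrimination versus blanket skepticism, is the standard signal detection distinction between sensitivity (d′) and response bias (criterion). A minimal sketch of that distinction, using invented illustrative rates rather than data from any study: an intervention can raise both the rate of flagging fake items and the rate of flagging true items, shifting the criterion while leaving d′ essentially unchanged.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, false_alarm_rate):
    """Signal detection measures.

    hit_rate: share of fake items correctly rated unreliable.
    false_alarm_rate: share of true items wrongly rated unreliable.
    d' (sensitivity) measures discrimination between true and fake;
    c (criterion) measures overall willingness to call things fake.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical pre- and post-intervention rates, for illustration only.
before = dprime_and_criterion(hit_rate=0.60, false_alarm_rate=0.30)
after = dprime_and_criterion(hit_rate=0.75, false_alarm_rate=0.45)

# Both rates rose, so a naive "did they flag more fake news?" test
# looks like success. But d' barely moves; only the criterion does.
```

On these numbers, d′ goes from roughly 0.78 to roughly 0.80 while the criterion drops from about +0.14 to about −0.27: the simulated participant has become more suspicious of everything, not better at telling true from fake. This is the pattern the ROC reanalysis reports for gamified inoculation.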

Roozenbeek himself has acknowledged the limitation. He told TIME in 2024: “You can’t really expect miracles in a sense that, all of a sudden after one of these videos, people begin to behave completely differently online. It’s just way too much to expect from a psychological intervention that is as light touch as this.”

The intervention is light touch. The deployment is not. Tens of millions of people have been served prebunking ads. Google has rolled out campaigns across multiple continents. The EU, the UK government, and CISA have partnered on deployment. The scale of the intervention does not match the scale of the evidence for its effectiveness. The instrument has been deployed to populations before it has been validated on populations. This pattern appears elsewhere in this series.

TFRi Note · The Percentage, Again

The Percentage documents that no study has ever measured the accuracy rate of the conspiracy theory label. The same structural absence applies to prebunking. No study has measured the rate at which prebunking causes people to reject accurate information along with inaccurate information. The 2023 Modirrousta-Galian and Higham reanalysis is the closest the literature comes: it found that inoculation reduced trust in content generally, without improving the ability to distinguish true from false. If a technique designed to protect against misinformation also protects against accurate information that happens to use the same rhetorical techniques, the technique is not a vaccine. It is an immunosuppressant. Nobody has measured the dosage at which the side effect exceeds the benefit, because nobody has measured the side effect.

The Recursive Property

Prebunking identifies six common manipulation techniques: impersonation, polarization, emotional language, conspiracy theories, trolling, and discrediting. It proposes to inoculate the public against these techniques by pre-exposing them to weakened examples.

Consider the list.

Emotional language. Every public health campaign, political speech, charity appeal, and editorial in history uses emotional language. Prebunking does not teach people to identify emotional language in institutional communications. It teaches them to associate emotional language with deception in non-institutional communications. The technique is not neutral. It is directional.

Conspiracy theories. As The Label documents, the conspiracy theory label functions as a social category for exclusion, not an epistemological category for evaluation. Inoculating the public against “conspiracy theories” as a technique means inoculating them against the possibility that institutions act covertly. MKUltra was an institution acting covertly. COINTELPRO was an institution acting covertly. NSA mass surveillance was an institution acting covertly. Prebunking against the technique of “conspiracy theory” is prebunking against the accurate description of documented institutional behavior.

Discrediting. Prebunking itself is a technique for discrediting arguments before they are encountered. The technique inoculates against the technique it employs. If a person who has been prebunked encounters a critic of prebunking, the critic’s argument will pattern-match against the “discrediting” technique the person was inoculated against. The prebunking has prebunked against the critique of prebunking. The loop closes.

This is the recursive property documented in The Mechanism That Predicts Its Own Dismissal, operating at industrial scale. The technique predicts its own rejection. The rejection confirms the technique’s necessity. The system is self-sealing.

The Strategic Declassification

Prebunking is not confined to academic games and YouTube ads. The Biden administration adopted the framework under a different name.

In the lead-up to Russia’s 2022 invasion of Ukraine, the White House began publicly releasing intelligence forecasting the kinds of narratives it anticipated the Kremlin would use. Officials called this “strategic declassification.” The practice expanded to China (forecasting potential provocations in the Taiwan Strait) and Iran (claims about drone transfers to Houthi militants). As TIME reported in April 2024: “What the White House has billed as strategic declassification is just prebunking by another name.”

The technique has moved from a social psychology lab to a game to a YouTube ad to a White House communications strategy. At each stage, the question of who decides which narratives require preemptive inoculation becomes more consequential and less examined. A Cambridge researcher choosing which manipulation techniques to include in a game is one thing. A government choosing which foreign intelligence claims to preemptively declassify in order to shape public perception of an emerging conflict is another. The word is the same. The stakes are not.

You can’t really expect miracles in a sense that, all of a sudden after one of these videos, people begin to behave completely differently online. It’s just way too much to expect from a psychological intervention that is as light touch as this.
Jon Roozenbeek, King’s College London, quoted in TIME, April 2024

The Oldest Version

Prebunking is new. The technique it describes is not.

In 1964, Richard Hofstadter published “The Paranoid Style in American Politics.” The essay did not respond to specific conspiracy claims. It preempted them. By framing conspiracy thinking as a psychological pathology, Hofstadter inoculated the educated reader against taking conspiracy claims seriously, regardless of their content. The reader who absorbed Hofstadter’s framework would, upon encountering a conspiracy claim, experience not the claim’s content but its category. The diagnosis would arrive before the evidence. That is prebunking.

In 1967, CIA Document 1035-960 instructed media assets to deploy the conspiracy theory label against Warren Commission critics. The dispatch did not wait for specific criticisms to emerge and respond to them individually. It provided a preemptive rhetorical toolkit: accuse critics of being politically motivated, accuse them of financial interests, point out that large-scale conspiracy would be impossible to conceal. The toolkit was distributed to media contacts before the critics’ arguments reached the public. That is prebunking.

In the Soviet Union, the diagnosis of sluggish schizophrenia preempted dissent by reclassifying the impulse to dissent as a symptom. The diagnosis did not respond to specific political arguments. It inoculated the institutional apparatus against taking political arguments seriously, by providing a clinical framework that categorized the impulse to make such arguments as evidence of illness. That is prebunking.

The Oldest Trick in the Book documents the technique across four thousand years. Prebunking is the latest vocabulary for the oldest move: deciding what to think about a claim before encountering the claim, and calling that decision a defense.

TFRi Note · What This Dispatch Does Not Claim

This dispatch does not claim that all prebunking research is fraudulent or that all prebunking interventions are harmful. Media literacy is valuable. Teaching people to recognize rhetorical techniques is valuable. The question is not whether these goals are worth pursuing. The question is who pursues them, with what funding, on what platforms, targeting which populations, inoculating against which ideas, and with what accountability for the side effects. A technique that reduces trust in everything equally is not a precision instrument. A technique deployed by governments to preemptively shape perception of contested events is not media literacy. A technique funded by the same institutions whose historical conduct is documented on The List is not politically neutral. These are structural observations, not accusations. The researchers disclose their funders. The funders disclose their mandates. The question is whether anyone reads the disclosures.

The Word on the Hat

Prebunked. Past tense. It has already happened to you. The inoculation was administered before you knew there was a needle. The weakened dose was delivered through a YouTube ad you did not choose to watch, a search result you did not choose to see, an encyclopedia article you did not know was funded by the same company that sold the ad. The conclusion arrived before the evidence. The suspicion was installed before the claim.

The word describes a state. If you have consumed media in the past five years, you are in it. The question is not whether you have been prebunked. The question is what you were prebunked against, and whether the people who administered the dose told you what was in it.

TINFOIL™ makes a hat with the word on it. Because the first step in cognitive defense is knowing the name of what was done to you.

TFRi Note · Source Transparency

This dispatch cites 16 sources across four categories. Estimated breakdown: academic and scholarly (McGuire 1961/1964, van der Linden 2017/2023, Roozenbeek and van der Linden 2020/2022, Wang et al. 2025, Modirrousta-Galian and Higham 2023, Biddlestone et al. 2025) ~50%; independent journalism (TIME, PBS/AP, Science magazine, Cambridge University press office) ~25%; platform and institutional self-reporting (Google Jigsaw campaign documentation, prebunking.withgoogle.com, CISA partnership disclosures) ~15%; primary documents (McGuire’s original papers, CIA Document 1035-960 as referenced) ~10%.

These percentages are editorial estimates, not computed metrics. A source may appear in more than one category. A dispatch about a technique will necessarily cite the researchers who developed the technique heavily, because their published work is both the subject and the primary evidence. The relevant question is whether independent sources corroborate the factual claims. In this dispatch, all factual claims are independently verifiable through at least one non-subject source. The full source list follows.

Sources

William J. McGuire and Demetrios Papageorgis, “The Relative Efficacy of Various Types of Prior Belief-Defense in Producing Immunity Against Persuasion,” Journal of Abnormal and Social Psychology, Vol. 62, No. 2, 1961, pp. 327-337.

William J. McGuire, “Inducing resistance to persuasion: Some contemporary approaches,” in Advances in Experimental Social Psychology, Vol. 1, 1964, pp. 191-229.

Sander van der Linden, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach, “Inoculating the Public against Misinformation about Climate Change,” Global Challenges, Vol. 1, No. 2, 2017.

Jon Roozenbeek and Sander van der Linden, “Fake news game confers psychological resistance against online misinformation,” Palgrave Communications, Vol. 5, 2019.

Jon Roozenbeek, Sander van der Linden, and Thomas Nygren, “Prebunking interventions based on ‘inoculation’ theory can reduce susceptibility to misinformation across cultures,” Harvard Kennedy School Misinformation Review, 2020.

Jon Roozenbeek, Sander van der Linden, et al., “Psychological inoculation improves resilience against misinformation on social media,” Science Advances, Vol. 8, No. 34, 2022.

Sander van der Linden, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity (New York: W. W. Norton, 2023).

Sze Yuh Nina Wang, Samantha C. Phillips, Kathleen M. Carley, Hause Lin, and Gordon Pennycook, “Limited effectiveness of psychological inoculation against misinformation in a social media feed,” PNAS Nexus, Vol. 4, No. 6, 2025, pgaf172. DOI: 10.1093/pnasnexus/pgaf172.

Ariana Modirrousta-Galian and Philip A. Higham, “Gamified inoculation interventions do not improve discrimination between true and fake news: Reanalyzing existing research with receiver operating characteristic analysis,” Journal of Experimental Psychology: General, Vol. 152, No. 9, 2023, pp. 2411-2437.

Mikey Biddlestone, Jon Roozenbeek, et al., “Tune in to the prebunking network! Development and validation of six inoculation videos,” Political Psychology, 2025.

Tessa Harjani, Jon Roozenbeek, Mikey Biddlestone, Sander van der Linden, et al., “A Practical Guide to Prebunking Misinformation,” Google Jigsaw / Cambridge Social Decision-Making Lab / BBC Media Action, 2022. Available at prebunking.withgoogle.com.

Ciaran O’Connor, “Inside Google’s Plans to Combat E.U. Election Misinformation,” TIME, April 25, 2024.

David Klepper, “Google to expand misinformation ‘prebunking’ initiative in Europe,” Associated Press / PBS NewsHour, February 13, 2023.

Kai Kupferschmidt, “Can people be ‘inoculated’ against misinformation?”, Science, AAAS, 2024.

University of Cambridge, “Social media experiment reveals potential to ‘inoculate’ millions of users against misinformation,” press release, 2022.

Sander van der Linden, “Countering misinformation through psychological inoculation,” in Advances in Experimental Social Psychology, Vol. 68, 2023.

TINFOIL™ makes cognitive defense gear for people who want to know what was in the dose.