Towards Generalizable AI-Assisted Misinformation Inoculation: Protecting Confidence Against False Election Narratives
Mitchell Linegar, Betsy Sinclair, Sander van der Linden, R. Michael Alvarez
Abstract
We present a generalizable AI-assisted framework for rapidly generating effective "prebunking" interventions against misinformation. Like mRNA vaccine platforms, our approach uses a stable template structure that can be quickly adapted to counter emerging false narratives. In a preregistered two-wave experiment with 4,293 U.S. registered voters, we test this framework against politically charged election misinformation, one of the most challenging domains for misinformation intervention. Our design directly tests scalability by comparing human-reviewed and purely AI-generated inoculation messages. We find that LLM-generated prebunking significantly reduced belief in election rumors (with effects persisting for at least one week) and increased confidence in election integrity across partisan lines. Purely AI-generated messages proved as effective as human-reviewed versions, and some achieved larger protective effects. These results demonstrate that effective misinformation inoculation can be produced at machine speed without proportional human effort, offering a scalable defense against the accelerating threat of false narratives across domains.