Limited Effectiveness of LLM-based Data Augmentation for COVID-19 Misinformation Stance Detection

2025-03-04

Eun Cheol Choi, Ashwin Balasubramanian, Jinhu Qi, Emilio Ferrara

Abstract

Misinformation surrounding emerging outbreaks poses a serious societal threat, making robust countermeasures essential. One promising approach is stance detection (SD), which identifies whether social media posts support or oppose misleading claims. In this work, we finetune classifiers on COVID-19 misinformation SD datasets consisting of claims and corresponding tweets. Specifically, we test controllable misinformation generation (CMG) using large language models (LLMs) as a method for data augmentation. While CMG demonstrates the potential for expanding training datasets, our experiments reveal that performance gains over traditional augmentation methods are often minimal and inconsistent, primarily due to built-in safeguards within LLMs. We release our code and datasets to facilitate further research on misinformation detection and generation.
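The setup described above pairs each misleading claim with a tweet and asks a classifier whether the tweet supports or opposes the claim, with LLM-generated tweets used to expand the training set. The sketch below illustrates that pipeline shape in minimal Python; the function names, the `[SEP]` pair format, the label set, and the generator stub are illustrative assumptions, not the authors' actual schema or prompts.

```python
# Hedged sketch of a claim-tweet stance-detection pipeline with a
# CMG-style augmentation step. All names and formats here are assumptions.

def make_pair_input(claim: str, tweet: str, sep: str = " [SEP] ") -> str:
    """Format a (claim, tweet) pair as one sequence, as is common when
    fine-tuning a transformer sequence-pair classifier."""
    return claim.strip() + sep + tweet.strip()

def augment(claim: str, stance: str, generator) -> dict:
    """One controllable-generation augmentation step: an LLM (here a stub)
    produces a synthetic tweet conditioned on the claim and target stance,
    yielding a new labeled training example."""
    synthetic_tweet = generator(claim, stance)  # real pipeline: LLM call
    return {"text": make_pair_input(claim, synthetic_tweet), "label": stance}

# Example usage with a trivial stand-in generator.
stub = lambda claim, stance: f"I {stance} the idea that {claim.lower()}"
example = augment("5G towers spread COVID-19", "oppose", stub)
```

In the paper's actual experiments the stub would be replaced by an LLM prompted for a target stance, which is where the abstract notes built-in safeguards often refuse to generate misinformation-supportive text, limiting the gains from this augmentation.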
