
LLM-MRD: LLM-Guided Multi-View Reasoning Distillation for Fake News Detection

2026-03-10

Weilin Zhou, Shanwen Tan, Enhao Gu, Yurong Qian

Abstract

Multimodal fake news detection is crucial for mitigating societal disinformation. Existing approaches attempt to address this by fusing multimodal features or by leveraging Large Language Models (LLMs) for advanced reasoning. However, these methods suffer from serious limitations, including a lack of comprehensive multi-view judgment and fusion, and prohibitive reasoning inefficiency due to the high computational costs of LLMs. To address these issues, we propose LLM-Guided Multi-View Reasoning Distillation for Fake News Detection (LLM-MRD), a novel teacher-student framework. The Student Multi-view Reasoning module first constructs a comprehensive foundation from textual, visual, and cross-modal perspectives. The Teacher Multi-view Reasoning module then generates deep reasoning chains as rich supervision signals. Our core Calibration Distillation mechanism distills this complex reasoning-derived knowledge into the efficient student model. Experiments show that LLM-MRD significantly outperforms state-of-the-art baselines, with an average improvement of 5.19% in ACC and 6.33% in F1-Fake over all competing methods and datasets. Our code is available at https://github.com/Nasuro55/LLM-MRD.
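The abstract does not spell out the exact form of the Calibration Distillation objective; as a point of reference, teacher-student distillation of this kind is typically trained with a Hinton-style soft-label loss, where the student matches the teacher's temperature-softened distribution while also fitting the ground-truth label. A minimal sketch (the temperature `T`, weight `alpha`, and function names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=2.0, alpha=0.5):
    """Generic soft-label distillation loss (not the paper's exact objective).

    Combines KL divergence from the teacher's softened distribution
    (scaled by T^2, as is conventional) with cross-entropy on the
    ground-truth label.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kd = float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))) * T * T
    ce = -float(np.log(softmax(student_logits)[true_label]))
    return alpha * kd + (1.0 - alpha) * ce
```

When the student's logits already match the teacher's, the KL term vanishes and only the supervised cross-entropy remains, so the loss is strictly lower than for a student that disagrees with both the teacher and the label.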
