MMNet: Multimodal Fusion via Mutual Learning Network for Fake News Detection
Anonymous
Abstract
The rapid development of social media provides a hotbed for the dissemination of fake news, which misleads readers and has negative effects on society. We observe that a large amount of news contains images in addition to text. Existing works on multimodal fake news detection perform multimodal fusion by simple concatenation or co-attention networks, which is insufficient to fuse multimodal information effectively. Considering that people judge news by alternately exploring text-oriented and vision-oriented views, we propose MMNet, a novel mutual-learning-based model that enhances multimodal fusion to improve fake news detection. In addition, MMNet includes a new multimodal fusion block that not only captures inter-modality interactions but also extracts features critical for fake news detection. Extensive experiments on two public real-world datasets demonstrate that MMNet better captures the inter-dependencies of multimodal information and outperforms state-of-the-art methods.
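To make the two central ideas concrete, the sketch below illustrates (a) a text-oriented and a vision-oriented view built with cross-attention between modalities, and (b) a symmetric KL-divergence term of the kind commonly used in mutual learning to align the two views' predictions. This is not the authors' architecture; all dimensions, the single-head attention, and the random classifier heads are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value):
    # Scaled dot-product attention: `query` tokens attend over `key_value` tokens,
    # capturing inter-modality interactions between text and image features.
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)    # (Lq, Lk)
    return softmax(scores, axis=-1) @ key_value  # (Lq, d)

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q): the mutual-learning term pushing one view's prediction
    # toward the other's (applied symmetrically during training).
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Toy features: 4 text tokens and 3 image regions, 8-dim each (hypothetical sizes).
rng = np.random.default_rng(0)
text = rng.standard_normal((4, 8))
image = rng.standard_normal((3, 8))

# Text-oriented view: text queries attend over image regions; vice versa for vision.
text_view = cross_attention(text, image).mean(axis=0)   # pooled, shape (8,)
image_view = cross_attention(image, text).mean(axis=0)  # pooled, shape (8,)

# Each view gets its own (random, untrained) binary real/fake classifier head.
W_t = rng.standard_normal((8, 2))
W_i = rng.standard_normal((8, 2))
p_text = softmax(text_view @ W_t)
p_image = softmax(image_view @ W_i)

# Symmetric mutual-learning loss aligning the two views' predictions;
# in training this would be added to each view's cross-entropy loss.
loss_ml = kl_divergence(p_text, p_image) + kl_divergence(p_image, p_text)
print(round(loss_ml, 4))
```

In actual mutual learning the two branches are optimized jointly, each with its own supervised loss plus the KL term toward the other branch's (detached) prediction, so the views teach each other rather than collapsing into one.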