COVE: COntext and VEracity prediction for out-of-context images
Jonathan Tonglet, Gabriel Thiem, Iryna Gurevych
Code:
- github.com/ukplab/naacl2025-cove (official implementation, PyTorch)
- github.com/ukplab/5pils (related repository)
Abstract
Images taken out of their context are the most prevalent form of multimodal misinformation. Debunking them requires (1) providing the true context of the image and (2) checking the veracity of the image's caption. However, existing automated fact-checking methods fail to tackle both objectives explicitly. In this work, we introduce COVE, a new method that first predicts the true COntext of the image and then uses it to predict the VEracity of the caption. COVE outperforms the SOTA context prediction model on all context items, often by more than five percentage points. It is competitive with the best veracity prediction models on synthetic data and outperforms them on real-world data, showing that combining the two tasks sequentially is beneficial. Finally, we conduct a human study which reveals that the predicted context is a reusable and interpretable artifact for verifying new out-of-context captions about the same image. Our code and data are made available.