SOTAVerified

Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination

2024-11-06 · Code Available

Dingjie Song, Sicheng Lai, Shunian Chen, Lichao Sun, Benyou Wang

Abstract

The rapid progression of multimodal large language models (MLLMs) has demonstrated superior performance on various multimodal benchmarks. However, data contamination during training creates challenges for performance evaluation and comparison. While numerous methods exist for detecting contamination in large language models (LLMs), they are less effective for MLLMs because of their multiple modalities and multiple training phases. In this study, we introduce MM-Detect, a multimodal data contamination detection framework designed for MLLMs. Our experimental results indicate that MM-Detect is effective and sensitive in identifying varying degrees of contamination, and can highlight significant performance improvements due to the leakage of multimodal benchmark training sets. Furthermore, we explore whether the contamination originates from the base LLMs used by MLLMs or from the multimodal training phase, providing new insights into the stages at which contamination may be introduced.
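To give a flavor of the kind of signal such a framework can use (this is an illustrative sketch, not the paper's actual MM-Detect implementation): a model that memorized a benchmark during training is often sensitive to perturbations of the test instances, such as reordering multiple-choice options. The `contamination_score`, `rotate_options`, and toy model names below are all hypothetical.

```python
def rotate_options(opts):
    """Deterministic perturbation: move the last option to the front."""
    return opts[-1:] + opts[:-1]

def accuracy(model, instances):
    """Fraction of (question, options, answer) triples answered correctly."""
    return sum(model(q, opts) == ans for q, opts, ans in instances) / len(instances)

def contamination_score(model, instances):
    """Accuracy drop under perturbation: near zero for an uncontaminated
    model, positive when the model relied on the memorized option order."""
    perturbed = [(q, rotate_options(opts), ans) for q, opts, ans in instances]
    return accuracy(model, instances) - accuracy(model, perturbed)

# Toy benchmark and two toy models (both hypothetical).
BENCH = [
    ("2+2?", ["4", "3", "5", "6"], "4"),
    ("capital of France?", ["Paris", "Rome", "Oslo", "Bern"], "Paris"),
]

def memorizer(q, opts):
    # Contaminated model: remembers the answer was option 0 in the leaked set.
    return opts[0]

def robust(q, opts):
    # Uncontaminated model: answers from knowledge, independent of order.
    knowledge = {"2+2?": "4", "capital of France?": "Paris"}
    return knowledge[q]

print(contamination_score(memorizer, BENCH))  # 1.0
print(contamination_score(robust, BENCH))     # 0.0
```

A real detector for MLLMs would also have to perturb the image side of each instance and account for contamination introduced at different training stages, which is what distinguishes this setting from text-only LLM contamination detection.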
