Contamination Detection for VLMs using Multi-Modal Semantic Perturbation
Abstract
A novel detection method based on multi-modal semantic perturbation identifies contaminated Vision-Language Models and remains robust across various contamination strategies.
Recent advances in Vision-Language Models (VLMs) have achieved state-of-the-art performance on numerous benchmark tasks. However, the use of internet-scale, often proprietary, pretraining corpora raises a critical concern for both practitioners and users: inflated performance due to test-set leakage. While prior work has proposed mitigation strategies such as decontamination of pretraining data and benchmark redesign for LLMs, the complementary direction of developing detection methods for contaminated VLMs remains underexplored. To address this gap, we deliberately contaminate open-source VLMs on popular benchmarks and show that existing detection approaches either fail outright or exhibit inconsistent behavior. We then propose a simple yet effective detection method based on multi-modal semantic perturbation and demonstrate that contaminated models fail to generalize under controlled perturbations. Finally, we validate our approach across multiple realistic contamination strategies, confirming its robustness and effectiveness. The code and perturbed dataset will be released publicly.
Community
We introduce Multi-modal Semantic Perturbation, a pipeline for creating perturbed benchmarks that can be used to detect data contamination in VLMs. The pipeline generates image-question pairs in which the original image composition is kept intact but modified slightly so that the answer changes. The perturbed benchmark has similar or lower difficulty than the original, so clean models that truly generalize should perform at least as well on it. However, we find that contaminated models consistently underperform, with dramatic performance drops of up to 45%. By simply comparing performance on the two versions of the benchmark (see the sketch below), we show that we can reliably detect contaminated VLMs across varying training strategies, numbers of epochs, and model architectures and sizes, where existing approaches fail.
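At detection time, the comparison reduces to a single accuracy delta between the two benchmark versions. Here is a minimal sketch of that decision rule, assuming a user-supplied `evaluate` function that returns benchmark accuracy; the `detect_contamination` helper and the 10% cutoff are hypothetical illustrations, not values from the paper:

```python
from typing import Callable, Tuple

def detect_contamination(
    evaluate: Callable[[str], float],  # benchmark name -> accuracy in [0, 1]
    original_bench: str,
    perturbed_bench: str,
    threshold: float = 0.10,  # hypothetical cutoff, not specified by the authors
) -> Tuple[bool, float]:
    """Flag a VLM as likely contaminated when its accuracy drops sharply
    on the semantically perturbed benchmark relative to the original."""
    drop = evaluate(original_bench) - evaluate(perturbed_bench)
    # Clean models that truly generalize should match or exceed their original
    # accuracy on the similar-or-easier perturbed set; contaminated models show
    # large drops (up to -45% in the reported experiments).
    return drop > threshold, drop
```

In practice, `evaluate` would wrap whatever harness scores the model on each benchmark split; the rule itself needs nothing beyond the two accuracy numbers.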
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- VidGuard-R1: AI-Generated Video Detection and Explanation via Reasoning MLLMs and RL (2025)
- MLLMEraser: Achieving Test-Time Unlearning in Multimodal Large Language Models through Activation Steering (2025)
- Visual CoT Makes VLMs Smarter but More Fragile (2025)
- ThinkFake: Reasoning in Multimodal Large Language Models for AI-Generated Image Detection (2025)
- Seeing Before Reasoning: A Unified Framework for Generalizable and Explainable Fake Image Detection (2025)
- Benchmarking and Mitigating MCQA Selection Bias of Large Vision-Language Models (2025)
- DUAL-Bench: Measuring Over-Refusal and Robustness in Vision-Language Models (2025)