REVERSE-Qwen2.5-VL-3B
Model Summary
REVERSE-Qwen2.5-VL-3B is an open-source vision-language model (VLM) that performs both next-token prediction and self-verification / self-correction during generation. Built on top of Qwen2.5-VL-3B-Instruct, it is fine-tuned on a 100k subset of the REVERSE Visual Instruct 1.3M dataset and equipped with a retrospective resampling mechanism that lets it detect and correct hallucinations as it generates. The model was trained in early May 2025.
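To make the mechanism concrete, here is a minimal, schematic sketch of the detect-and-correct loop described above. It is not the repository's implementation: the `model.*` helpers and the span granularity are hypothetical placeholders, and only the rejection threshold τ corresponds to the τ reported in the tables below.

```python
# Schematic sketch only -- NOT the official REVERSE decoder.
# `model.encode`, `model.decode`, `model.is_finished`, and `model.next_span`
# are hypothetical helpers used to illustrate the idea; the real
# implementation lives in the GitHub repository linked below.
def generate_with_retrospective_resampling(model, prompt, tau=0.01, max_retries=3):
    """Generate a response, backtracking and resampling any span that the
    model's own self-verification signal flags as likely hallucinated."""
    context = model.encode(prompt)
    output = []
    while not model.is_finished(context + output):
        # Propose the next span together with the model's own
        # "this is likely hallucinated" score for it.
        span, hallucination_score = model.next_span(context + output)
        retries = 0
        # Self-correction: if the score exceeds the threshold tau,
        # discard the span and resample it (up to max_retries times).
        while hallucination_score > tau and retries < max_retries:
            span, hallucination_score = model.next_span(context + output)
            retries += 1
        output += span
    return model.decode(output)
```

In this sketch, a smaller τ triggers correction more often; the tables below report results at τ=0.01 for the generative benchmarks and τ=0.5 for the discriminative ones.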
Performance
REVERSE achieves state-of-the-art hallucination reduction across diverse captioning and open-ended visual question answering benchmarks. To ensure an apples-to-apples comparison, we fine-tune the released Qwen2.5-VL-3B model on the same 100k subset twice: once with the LLaVA-FT setup and once with our REVERSE recipe. Because Qwen2.5-VL's instruction-tuning data is not publicly available, this lets us directly compare the impact of our method against the LLaVA-FT baseline under consistent conditions.
| Benchmark | Metric | Qwen2.5-VL-FT | REVERSE (τ=0.01) |
|---|---|---|---|
| CHAIR-MSCOCO | CHAIRi (↓) | 12.2 | 10.5 |
| | CHAIRs (↓) | 45.8 | 39.4 |
| AMBER-G | CHAIR (↓) | 7.7 | 7.5 |
| | Coverage (↑) | 51.7 | 51.5 |
| MMHal-Bench | Score (↑) | 2.89 | 3.15 |
| | Hallucination Rate (↓) | 0.43 | 0.29 |
| HaloQuest | Avg. Accuracy (↑) | 33.5 | 45.1 |
| | False Premise Acc. (↑) | 25.4 | 42.9 |
| | Visually Challenging Acc. (↑) | 51.6 | 41.8 |
| | Insufficient Context Acc. (↑) | 26.4 | 55.5 |
REVERSE also remains competitive with the fine-tuned baseline on discriminative benchmarks.
| Benchmark | Metric | Qwen2.5-VL-FT | REVERSE (τ=0.5) |
|---|---|---|---|
| AMBER-D | F1 Score (↑) | 85.0 | 85.7 |
| POPE | F1 Score (↑) | 87.1 | 86.5 |
| MME-Hall | Score (↑) | 550.4 | 589.5 |
Usage
Please refer to the installation guide on GitHub to get started:
Installation Guide
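For quick experimentation, the checkpoint can also be loaded like any other Qwen2.5-VL model. The sketch below is a minimal example assuming the standard Qwen2.5-VL classes in Hugging Face `transformers` and plain `generate` decoding; it does not run the REVERSE retrospective-resampling loop, which requires the code from the GitHub repository.

```python
# Minimal loading sketch: plain generation with Hugging Face transformers.
# Assumes a transformers version with Qwen2.5-VL support (>= 4.49); the
# REVERSE decoding loop itself comes from the GitHub repository, not this snippet.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "tsunghanwu/reverse_qwen25_vl"  # this model card's repository id
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # any local image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```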
Additional Resources
- Project Page: https://reverse-vlm.github.io/
- Dataset: REVERSE Visual Instruct 1.3M
- Ask Questions: GitHub Issues
Intended Use
Primary Use Cases:
- Reducing hallucination in image captioning and VQA tasks
- Benchmarking hallucination-aware generation
- Research on grounded vision-language generation and self-correction
Target Users:
Researchers, developers, and students working in computer vision, NLP, and multimodal AI.