Improve dataset card: Add `image-text-to-text` task category, `mathematical-reasoning` tag, and expand content sections
#2 by nielsr (HF Staff) - opened

README.md CHANGED
````diff
@@ -1,17 +1,19 @@
 ---
-license: mit
 language:
 - en
+license: mit
 size_categories:
 - 1K<n<10K
+task_categories:
+- visual-question-answering
+- image-text-to-text
 pretty_name: GSM8K-V
 viewer: true
 tags:
 - Visual-reasoning
 - VLM
 - GSM8K
-task_categories:
-- visual-question-answering
+- mathematical-reasoning
 ---
 
 <div align="center">
@@ -70,6 +72,8 @@ Yueting Zhuang<sup>1</sup>
 
 ## 👁️ Overview
 
+<img src="assets/main_01.png" alt="GSM8K-V Pipeline">
+
 **GSM8K-V** is a purely visual multi-image mathematical reasoning benchmark that systematically maps each GSM8K math word problem into its visual counterpart to enable a clean, within-item comparison across modalities. Built via an automated pipeline that extracts and allocates problem information across scenes, generates scene-level descriptions, and renders images, coupled with meticulous human annotation, the benchmark comprises 1,319 high-quality multiscene problems (5,343 images) and addresses limitations of prior visual math evaluations that predominantly focus on geometry, seldom cover visualized word problems, and rarely test reasoning across multiple images with semantic dependencies. Evaluations of a broad range of open- and closed-source models reveal a substantial modality gap—for example, Gemini-2.5-Pro attains 95.22% accuracy on text-based GSM8K but only 46.93% on GSM8K-V—highlighting persistent challenges in understanding and reasoning over images in realistic scenarios and providing a foundation to guide the development of more robust and generalizable vision-language models.
 
 Our main contributions are summarized as follows.
@@ -79,8 +83,7 @@ Our main contributions are summarized as follows.
 - We perform a thorough evaluation and analysis of the existing VLMs in **GSM8K-V.** The results reveal substantial room for improvement, and our analysis provides valuable insights for enhancing the mathematical reasoning capabilities of future VLMs.
 
 
-
-## 🚀 Quick Start
+## 🚀 Sample Usage
 
 ```bash
 # Clone the repository
@@ -105,8 +108,56 @@ python eval.py --type api \
 --concurrency <eval_parallel_num> --image_dir <data_path>
 ```
 
+## 📊 Benchmark Statistics
+
+<p align="center">
+<img src="assets/data_statistic.png" alt="Dataset Statistics" width="45%">
+<img src="assets/data_distribution_01.png" alt="Category Distribution" width="47%">
+</p>
+
+
+## 📈 Main Results
 
+<p align="center">
+<img src="assets/main_result.png" alt="Main Result" style="width: 100%; height: auto;">
+</p>
+
+## ⚙️ Advanced Configuration Options
+
+```bash
+# Limit number of samples
+python eval.py --num-samples 5
+
+# Specify evaluation modes
+python eval.py --modes text_only visual scene
+
+# Specify prompt modes for visual evaluation
+python eval.py --prompt-modes implicit explicit
 
+# Evaluate only specific categories
+python eval.py --data-categories measurement physical_metric
+
+# Evaluate specific subcategories
+python eval.py --data-subcategories distance speed weight
+
+# Example Use
+# ---- vllm start ----
+vllm serve model/internvl3_5-8b \
+--port 8010 \
+--tensor-parallel-size 4 \
+--gpu-memory-utilization 0.9 \
+--max-model-len 8192 \
+--trust-remote-code \
+--served-model-name "internvl3.5-8b"
+
+# ---- eval start ----
+python eval.py --type vllm \
+--model_name internvl3.5-8b --api_base http://localhost:8010/v1 \
+--concurrency 32 --image_dir data/images
+
+# For detailed help
+python eval.py --help
+```
 
 ## 📝 Citation
 
````
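As a usage note beyond the card's shell commands: the data itself can be pulled from the Hub with the standard `datasets` library. The sketch below is a minimal, hypothetical example; the repository id is a placeholder (substitute this dataset repo's id), and the field names depend on the actual schema, which the card does not spell out.

```python
# Minimal sketch: load GSM8K-V from the Hugging Face Hub with the `datasets` library.
# NOTE: "<org>/GSM8K-V" is a placeholder repository id -- replace it with the id
# shown on this dataset page. Field names follow whatever schema the repo defines.
from datasets import load_dataset

ds = load_dataset("<org>/GSM8K-V")   # downloads and caches all available splits
print(ds)                            # DatasetDict: split names, features, row counts

split = next(iter(ds))               # pick the first available split
example = ds[split][0]               # one multi-scene problem (images plus question/answer fields)
print(example.keys())
```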
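For the vLLM path shown in the card (`vllm serve ...` followed by `python eval.py --type vllm --api_base http://localhost:8010/v1 ...`), the server exposes an OpenAI-compatible endpoint. The sketch below only illustrates what a single multimodal request to that endpoint looks like; it is not taken from `eval.py`, and the prompt and image path are hypothetical.

```python
# Illustrative only: one OpenAI-compatible request against the vLLM server started
# with `--port 8010 --served-model-name "internvl3.5-8b"`. The prompt and image
# path are made up; eval.py's actual prompting is defined by the repository.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8010/v1", api_key="EMPTY")  # placeholder key

with open("data/images/example_scene.png", "rb") as f:                 # hypothetical image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="internvl3.5-8b",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Solve the math problem shown in the image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=512,
)
print(response.choices[0].message.content)
```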
|