|
|
--- |
|
|
license: cc-by-nc-sa-4.0 |
|
|
task_categories: |
|
|
- video-classification |
|
|
- visual-question-answering |
|
|
- question-answering |
|
|
language: |
|
|
- en |
|
|
size_categories: |
|
|
- n<1K |
|
|
--- |
|
|
|
|
|
# MORSE-500 Benchmark |
|
|
|
|
|
<table> |
|
|
<tr> |
|
|
<td style="padding: 0;"> |
|
|
<a href="https://morse-500.github.io"> |
|
|
<img src="https://img.shields.io/badge/Website-1E90FF?style=for-the-badge&logo=firefox&logoColor=ffffff&labelColor" alt="Website"> |
|
|
</a> |
|
|
</td> |
|
|
<td style="padding: 0;"> |
|
|
<a href="https://huggingface.co/datasets/video-reasoning/morse-500"> |
|
|
<img src="https://img.shields.io/badge/Data-D5A848?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor" alt="Data"> |
|
|
</a> |
|
|
</td> |
|
|
<td style="padding: 0;"> |
|
|
<a href="https://huggingface.co/datasets/video-reasoning/morse-500-view"> |
|
|
<img src="https://img.shields.io/badge/View-D5A848?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor" alt="Viewer"> |
|
|
</a> |
|
|
</td> |
|
|
<td style="padding: 0;"> |
|
|
<a href="https://github.com/morse-benchmark/morse-500"> |
|
|
<img src="https://img.shields.io/badge/Code-000000?style=for-the-badge&logo=github&logoColor=white" alt="Code"> |
|
|
</a> |
|
|
</td> |
|
|
<td style="padding: 0;"> |
|
|
<a href="https://arxiv.org/abs/2506.05523"> |
|
|
<img src="https://img.shields.io/badge/arXiv-2506.05523-b31b1b.svg?style=for-the-badge" alt="arXiv"> |
|
|
</a> |
|
|
</td> |
|
|
</tr> |
|
|
</table> |
|
|
|
|
|
## 🔥 News
|
|
- **May 15, 2025**: We release **`MORSE-500`**, a benchmark of 500 programmatically generated videos spanning six reasoning categories (abstract, mathematical, physical, planning, spatial, and temporal), designed to stress-test multimodal reasoning. Frontier models, including OpenAI o3 and Gemini 2.5 Pro, score below 25% accuracy (see the [`Leaderboard`](https://morse-500.github.io/#leaderboard)).
|
|
- **Visit 🤗 Data: [`morse-500`](https://huggingface.co/datasets/video-reasoning/morse-500) for the latest updates**
|
|
|
|
|
## 📦 Resources
|
|
- 🌐 Website: [`morse-500`](https://morse-500.github.io)
- 🤗 Data: [`morse-500`](https://huggingface.co/datasets/video-reasoning/morse-500)
- 🤗 Video Viewer: [`morse-500-view`](https://huggingface.co/datasets/video-reasoning/morse-500-view)
- 💻 Code: [`morse-500`](https://github.com/morse-benchmark/morse-500)
- 📄 Paper: [`arXiv:2506.05523`](https://arxiv.org/abs/2506.05523)
|
|
|
|
|
|
|
|
## ✨ Key Features
|
|
|
|
|
| Aspect | Details |
| --- | --- |
| **Fresh & Portable** | 500 newly cooked video clips plus a CSV of metadata, small enough to run fast |
| **Scalable Difficulty** | Videos are generated programmatically, so we can dial up complexity and release harder versions as models improve |
| **Diverse Categories** | Spanning *Abstract, Mathematical, Physical, Planning, Spatial, Temporal (+ Causal)*: a vibrant mix of the reasoning types that matter |
| **Pure Visual Reasoning** | Questions are baked right into the videos. No text crutches, no shortcuts: if you can't see it, you can't solve it |
| **Developer-Friendly** | A "[-view](https://huggingface.co/datasets/video-reasoning/morse-500-view)" subset streams directly on **Hugging Face**, making browsing and debugging smoother than a sunny afternoon (streaming sketch below) |
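For quick browsing, the `-view` subset can be streamed with the `datasets` library. The snippet below is only a minimal sketch; it makes no assumption about the split names or schema of the `-view` repo and simply inspects whatever is there:

```python
from datasets import load_dataset

# stream the viewer-friendly subset instead of downloading it up front
view = load_dataset("video-reasoning/morse-500-view", streaming=True)
print(list(view.keys()))           # available splits

split = next(iter(view.values()))  # take the first split
example = next(iter(split))
print(list(example.keys()))        # inspect the available fields
```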
|
|
|
|
|
|
|
|
## 📁 Dataset Structure
|
|
|
|
|
- `test.csv`: Dataset metadata, including the video file name (`video`), `query`, `ground_truth`, `question_text`, and `category` (see the loading sketch after this list)
- `test.zip`: All MP4 video files at their original resolution
- `test_sz512.zip`: The same MP4 video files resized so the long side is 512 px, keeping the original aspect ratio
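If you prefer to work directly from the CSV and the unzipped videos rather than the `datasets` loader used in the Quick Start below, here is a minimal sketch, assuming `pandas` is installed and `test_sz512.zip` has been extracted to `test_sz512/`:

```python
import pandas as pd

# read the benchmark metadata
meta = pd.read_csv("test.csv")
print(meta.columns.tolist())  # video, query, ground_truth, question_text, category, ...
print(len(meta), "examples")

# resolve each row to its (resized) video file on disk
video_root = "test_sz512"
for _, row in meta.head(3).iterrows():
    video_path = f"{video_root}/{row['video']}"
    print(video_path, "|", row["category"], "|", row["ground_truth"])
```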
|
|
|
|
|
## ⚡ Quick Start
|
|
|
|
|
In bash:

```bash
# download the videos
git clone https://huggingface.co/datasets/video-reasoning/morse-500

# unzip the videos
cd morse-500
# unzip test.zip -d test             # original size
unzip test_sz512.zip -d test_sz512   # long side resized to 512
```

In Python:

```python
# load dataset metadata ("idx", "video", "query", "question_text", "ground_truth", "category")
from datasets import load_dataset

dataset = load_dataset('video-reasoning/morse-500')
dataset = dataset['test']
video_root = 'test_sz512'  # use the resized videos

# run your model on the benchmark
for i, example in enumerate(dataset):
    video_path = f"{video_root}/" + example["video"]
    print(f"Processing {i} {video_path}")
    query = "Answer the question in this video."
    gt = example['ground_truth']

    # if your model has video support
    answer = query_video(model_name, video_path, query)
    # otherwise query with image frames, default 2 fps capped at 32 total frames
    # answer = query_video_frames(model_name, video_path, query, fps=2, max_num_frames=32)

    print(f"Answer: {answer}")
    print(f"GT: {gt}")
```
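To keep score over a run, one option is to collect each prediction together with its ground truth inside the loop above and save everything for grading. The exact-match metric in this sketch is only an illustrative assumption, not necessarily how the leaderboard grades answers:

```python
import csv

def save_and_score(results, out_path="predictions.csv"):
    """results: list of dicts with keys video, category, answer, ground_truth."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["video", "category", "answer", "ground_truth"]
        )
        writer.writeheader()
        writer.writerows(results)
    # naive case/whitespace-insensitive exact match; real grading may differ
    correct = sum(
        str(r["answer"]).strip().lower() == str(r["ground_truth"]).strip().lower()
        for r in results
    )
    return correct / max(len(results), 1)
```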
|
|
|
|
|
Example `query_video` function using an OpenAI-compatible endpoint:
|
|
```python
import base64

from openai import OpenAI

model_name = "xxx"
openai_api_key = "xxx"
openai_api_base = "xxx"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)


def encode_b64(file_path):
    # read a file and return its base64-encoded contents as a string
    with open(file_path, "rb") as file:
        return base64.b64encode(file.read()).decode("utf-8")


def query_video(model_name, video_path, query):
    # send the whole MP4 as a base64 data URL alongside the text query
    base64_video = encode_b64(video_path)
    video_url = f"data:video/mp4;base64,{base64_video}"

    response = client.chat.completions.create(
        model=model_name,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": query,
                    },
                    {
                        "type": "video_url",
                        "video_url": {"url": video_url},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


# example usage (video_path and query come from the Quick Start loop)
result = query_video(model_name, video_path, query)
print(result)
```
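The frame-based fallback referenced in the Quick Start (`query_video_frames`) is not part of the snippet above. Below is a minimal sketch, assuming OpenCV (`cv2`) for decoding, an endpoint that accepts base64 `image_url` parts, and the `client` defined above:

```python
import base64

import cv2  # pip install opencv-python


def query_video_frames(model_name, video_path, query, fps=2, max_num_frames=32):
    # decode the video and sample frames at roughly `fps`
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30
    step = max(int(round(native_fps / fps)), 1)

    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            ok_jpg, buf = cv2.imencode(".jpg", frame)
            if ok_jpg:
                frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
        idx += 1
    cap.release()

    # keep at most max_num_frames, evenly spaced
    if len(frames) > max_num_frames:
        stride = len(frames) / max_num_frames
        frames = [frames[int(i * stride)] for i in range(max_num_frames)]

    # one text part followed by the sampled frames as base64 image parts
    content = [{"type": "text", "text": query}]
    for b64 in frames:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })

    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content
```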
|
|
More scripts can be found on GitHub: [morse-benchmark/morse-500](https://github.com/morse-benchmark/morse-500)