---
license: cc-by-nc-4.0
---
# Mobile-VideoGPT-0.5B
---
## Description
Mobile-VideoGPT is an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models, Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM) with real-time throughput. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PerceptionTest), and our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter competitors.
**This model contains the Mobile-VideoGPT checkpoints with the Qwen-2.5-0.5B LLM.**
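For intuition, the sketch below illustrates the dual-encoder design described above: two lightweight visual encoders, one projector per encoder mapping features into the language-model token space, and a small language model consuming the combined visual and text tokens. All module names, dimensions, and layer choices are illustrative placeholders, not the actual Mobile-VideoGPT implementation; refer to the GitHub repository for the real architecture and inference code.

```python
# Hypothetical sketch of the dual-encoder + projector + SLM pipeline.
# Every module here is a stand-in; the real Mobile-VideoGPT components
# live in the GitHub repository linked below.
import torch
import torch.nn as nn

class DualEncoderVideoLM(nn.Module):
    def __init__(self, enc_a_dim=768, enc_b_dim=512, lm_dim=896):
        super().__init__()
        # Stand-ins for the two lightweight visual encoders (per-frame features).
        self.encoder_a = nn.Linear(3 * 224 * 224, enc_a_dim)
        self.encoder_b = nn.Linear(3 * 224 * 224, enc_b_dim)
        # Efficient projectors mapping visual features into the LM token space.
        self.proj_a = nn.Linear(enc_a_dim, lm_dim)
        self.proj_b = nn.Linear(enc_b_dim, lm_dim)
        # Placeholder for the ~0.5B-parameter SLM (Qwen-2.5-0.5B in this checkpoint).
        self.slm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, frames, text_embeds):
        # frames: (batch, num_frames, 3*224*224); text_embeds: (batch, seq_len, lm_dim)
        vis_a = self.proj_a(self.encoder_a(frames))
        vis_b = self.proj_b(self.encoder_b(frames))
        # Concatenate visual tokens with text tokens and run the small language model.
        tokens = torch.cat([vis_a, vis_b, text_embeds], dim=1)
        return self.slm(tokens)

model = DualEncoderVideoLM()
out = model(torch.randn(1, 8, 3 * 224 * 224), torch.randn(1, 16, 896))
print(out.shape)  # torch.Size([1, 32, 896])
```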
## Download
To get started, follow these steps:
```bash
git lfs install
git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-0.5B
```
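
Alternatively, the checkpoint files can be fetched programmatically with the `huggingface_hub` client (a minimal sketch; installing the package first is assumed):

```python
# pip install huggingface_hub
from huggingface_hub import snapshot_download

# Downloads every file in the model repository and returns the local path.
local_dir = snapshot_download(repo_id="Amshaker/Mobile-VideoGPT-0.5B")
print(f"Checkpoint downloaded to: {local_dir}")
```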
## Additional Resources
- **Paper:** [arXiv:2503.21782](https://arxiv.org/abs/2503.21782).
- **GitHub Repository:** Training and evaluation code: [GitHub - Mobile-VideoGPT](https://github.com/Amshaker/Mobile-VideoGPT).
## Citation
```bibtex
@article{Shaker2025MobileVideoGPT,
  title={Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
  author={Shaker, Abdelrahman and Maaz, Muhammad and Rezatofighi, Hamid and Khan, Salman and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2503.21782},
  year={2025},
  url={https://arxiv.org/abs/2503.21782}
}
```