---
license: cc-by-nc-4.0
---

# Mobile-VideoGPT-0.5B

## 📝 Description

Mobile-VideoGPT is an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models, Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM) with real-time throughput. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PerceptionTest), and our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter competitors.

**This repository contains the Mobile-VideoGPT checkpoints built on the Qwen-2.5-0.5B LLM.**

## 💻 Download

To get started, follow these steps:

```bash
git lfs install
git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-0.5B
```

## 📚 Additional Resources

- **Paper:** [ArXiv](https://arxiv.org/abs/2503.21782)
- **GitHub Repository:** For training and evaluation: [GitHub - Mobile-VideoGPT](https://github.com/Amshaker/Mobile-VideoGPT)

## 📜 Citation

```bibtex
@article{Shaker2025MobileVideoGPT,
  title={Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
  author={Shaker, Abdelrahman and Maaz, Muhammad and Rezatofighi, Hamid and Khan, Salman and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2503.21782},
  year={2025},
  url={https://arxiv.org/abs/2503.21782}
}
```
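
## 🚀 Programmatic Download

As an alternative to cloning with Git LFS, the checkpoint files can be fetched from Python. The snippet below is a minimal sketch using the `huggingface_hub` library (assumed installed via `pip install huggingface_hub`); the inference code itself lives in the GitHub repository linked above, not in this Hub repository.

```python
from huggingface_hub import snapshot_download

# Download all files of the Mobile-VideoGPT-0.5B checkpoint into the local
# Hugging Face cache and return the path to the downloaded snapshot.
# The repo_id matches the clone URL in the Download section above.
local_dir = snapshot_download(repo_id="Amshaker/Mobile-VideoGPT-0.5B")

print(f"Checkpoint files available at: {local_dir}")
```

Point the training/evaluation scripts from the GitHub repository at the returned directory to run inference with the downloaded weights.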