---
license: cc-by-nc-4.0
---

# Mobile-VideoGPT-0.5B

---
## 📝 Description
Mobile-VideoGPT is an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models, Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM), enabling real-time throughput. We evaluate the model on six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PerceptionTest), and the results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter competitors.

**This repository contains the Mobile-VideoGPT checkpoints with the Qwen-2.5-0.5B LLM.**

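The throughput figure quoted above is straightforward to sanity-check with a wall-clock timer around the decode step. Below is a minimal, hypothetical sketch in Python: `generate` here is a stand-in for any autoregressive decoding callable and is **not** part of the Mobile-VideoGPT API (see the GitHub repository for actual usage).

```python
import time

def tokens_per_second(generate, prompt, max_new_tokens=128):
    """Measure decoding throughput.

    `generate` is a placeholder callable returning a sequence of generated
    tokens; it is NOT the Mobile-VideoGPT API, just an illustration of how
    a tokens/second number like the one above can be measured.
    """
    start = time.perf_counter()
    output_tokens = generate(prompt, max_new_tokens=max_new_tokens)
    elapsed = time.perf_counter() - start
    # Throughput = number of generated tokens / wall-clock decode time.
    return len(output_tokens) / elapsed
```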
## 💻 Download
To get started, follow these steps:
```bash
git lfs install
git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-0.5B
```
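
Alternatively, the checkpoint can be fetched with the `huggingface_hub` Python library. This is a convenience sketch, not the repository's documented workflow; it assumes `pip install huggingface_hub` has been run:

```python
# Sketch: download the full checkpoint via the huggingface_hub API
# (equivalent in effect to the git clone command above).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Amshaker/Mobile-VideoGPT-0.5B",  # this model repository
    local_dir="Mobile-VideoGPT-0.5B",         # where to place the files
)
print(f"Checkpoint files downloaded to: {local_path}")
```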

## 📚 Additional Resources
- **Paper:** [ArXiv]().
- **GitHub Repository:** [GitHub - Mobile-VideoGPT](https://github.com/Amshaker/Mobile-VideoGPT) (training and evaluation code).

## 📜 Citation

```bibtex
@article{Shaker2025MobileVideoGPT,
  title={Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
  author={Shaker, Abdelrahman and Maaz, Muhammad and Rezatofighi, Hamid and Khan, Salman and Khan, Fahad Shahbaz},
  journal={arXiv},
  year={2025},
  url={https://arxiv.org/abs/X.X}
}
```