# Easy DeepOCR - VILA-Qwen2-VL-8B

A vision-language model fine-tuned for OCR tasks, based on the VILA architecture with Qwen2-VL-8B as the language backbone.
## Model Description

This model combines:

- Language Model: Qwen2-VL-8B
- Vision Encoders: SAM + CLIP
- Architecture: VILA (see the data-flow sketch below)
- Task: Optical Character Recognition (OCR)
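Conceptually, the VILA recipe encodes an image with the vision towers, projects the fused features into the LLM's token-embedding space, and prepends them to the text tokens. Below is a minimal data-flow sketch; the class and attribute names are illustrative assumptions, not the repository's actual modeling code.

```python
import torch
from torch import nn

class VilaStyleOCR(nn.Module):
    """Illustrative data flow only -- names do not match the repo's classes."""

    def __init__(self, sam_encoder, clip_encoder, projector, llm):
        super().__init__()
        self.sam_encoder = sam_encoder    # weights in sam_clip_ckpt/
        self.clip_encoder = clip_encoder  # weights in sam_clip_ckpt/
        self.projector = projector        # weights in mm_projector/
        self.llm = llm                    # Qwen2-VL-8B, weights in llm/

    def forward(self, pixel_values, input_ids):
        # Encode the image with both vision towers and fuse their features.
        feats = torch.cat(
            [self.sam_encoder(pixel_values), self.clip_encoder(pixel_values)],
            dim=-1,
        )
        # Project the fused vision features into the LLM embedding space.
        vision_tokens = self.projector(feats)
        # Prepend the projected vision tokens to the text-token embeddings.
        text_embeds = self.llm.get_input_embeddings()(input_ids)
        inputs_embeds = torch.cat([vision_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```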
## Model Structure

```
easy_deepocr/
├── config.json          # Model configuration
├── llm/                 # Qwen2-VL-8B language model weights
├── mm_projector/        # Multimodal projection layer
├── sam_clip_ckpt/       # SAM and CLIP vision encoder weights
└── trainer_state.json   # Training state information
```
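To fetch the checkpoint tree shown above and inspect it locally, `huggingface_hub` can be used (a small convenience snippet, not part of the original card):

```python
import json
import os

from huggingface_hub import snapshot_download

# Download the repository to a local directory.
local_dir = snapshot_download("pkulium/easy_deepocr")

# config.json holds the model configuration referenced above.
with open(os.path.join(local_dir, "config.json")) as f:
    config = json.load(f)

print(sorted(os.listdir(local_dir)))  # expect llm/, mm_projector/, sam_clip_ckpt/, ...
```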
## Usage

```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code is required: the VILA architecture ships custom
# modeling code alongside the checkpoint.
model = AutoModel.from_pretrained("pkulium/easy_deepocr", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("pkulium/easy_deepocr")
```
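The card leaves the inference step as a TODO, and the exact call depends on the custom modeling code in the repository. A minimal sketch, assuming the remote code registers an `AutoProcessor` and exposes a standard `generate()` method (both assumptions; adapt to whatever the repo actually provides):

```python
from PIL import Image
from transformers import AutoProcessor

# Assumption: the checkpoint registers a processor via its remote code.
processor = AutoProcessor.from_pretrained("pkulium/easy_deepocr", trust_remote_code=True)

image = Image.open("document.png").convert("RGB")  # any local test image
prompt = "Extract all text from this image."

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```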
## Training Details
- Base Model: Qwen2-VL-8B
- Vision Encoders: SAM + CLIP
- Training Framework: VILA
- Training Type: Pretraining for OCR tasks
## Intended Use
This model is designed for:
- Document OCR
- Scene text recognition
- Handwriting recognition
- Multi-language text extraction
## Limitations

- Model performance may vary with image quality
- Best suited for the document, scene-text, and handwriting use cases listed under Intended Use
## Citation

If you use this model, please cite:

```bibtex
@misc{easy_deepocr,
  author    = {Ming Liu},
  title     = {Easy DeepOCR - VILA-Qwen2-VL-8B},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/pkulium/easy_deepocr}
}
```
## Acknowledgments

This model builds on the VILA training framework, Qwen2-VL, SAM, and CLIP.