---
license: apache-2.0
language:
- en
- zh
base_model:
- prithivMLmods/Camel-Doc-OCR-062825
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- text-generation-inference
- image-captioning
- optical-character-recognition
- intelligent-character-recognition
- caption
- ocr
- visual-understanding
- art
- icr
- image-to-text
- vlm
- Doc-v
datasets:
- prithivMLmods/OpenDoc-Pdf-Preview
- prithivMLmods/Corvus-OCR-Caption-Mix
- prithivMLmods/Openpdf-Analysis-Recognition
- prithivMLmods/Opendoc2-Analysis-Recognition
---
 
![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/YvSEBmE_tAbN21p-sxKpK.png)

# **Perseus-Doc-vl-0712**

> The **Perseus-Doc-vl-0712** model is a fine-tuned version of *Qwen2.5-VL-7B-Instruct*, optimized for **Document Retrieval**, **Content Extraction**, and **Analysis Recognition**. Built on top of the Qwen2.5-VL architecture, it strengthens document comprehension through focused training on 450K image pairs drawn from a mixture of captioning datasets, including 230K samples from the Corvus-OCR-Caption-Mix dataset and other open-source document datasets covering document OCR, captioning, image reasoning, and visual analysis across images of varying categories and dimensions.

# Key Enhancements

* **Context-Aware Multimodal Extraction and Linking for Documents**: Advanced capability for understanding document context and establishing connections between multimodal elements within documents.

* **Enhanced Document Retrieval**: Designed to efficiently locate and extract relevant information from complex document structures and layouts.

* **Superior Content Extraction**: Optimized for precise extraction of structured and unstructured content from diverse document formats.

* **Analysis Recognition**: Specialized in recognizing and interpreting analytical content, charts, tables, and visual data representations.

* **State-of-the-Art Performance Across Resolutions**: Achieves competitive results on OCR and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.

* **Video Understanding up to 20+ minutes**: Supports detailed comprehension of long-duration videos for content summarization, Q&A, and multimodal reasoning (a video inference sketch follows the Quick Start example below).

* **Visually-Grounded Device Interaction**: Enables mobile/robotic device operation via visual inputs and text-based instructions using contextual understanding and decision-making logic.

# Quick Start with Transformers

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model weights and the matching processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Perseus-Doc-vl-0712", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Perseus-Doc-vl-0712")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt and preprocess the vision inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate, then strip the prompt tokens from the output before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
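
The same pipeline extends to video input, as noted under Key Enhancements. Below is a minimal sketch that reuses the `model` and `processor` loaded above; the video path is a hypothetical placeholder, and the resolution cap is illustrative rather than a tuned recommendation.

```python
# Minimal video-inference sketch, reusing `model` and `processor` from above.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # placeholder path
                "max_pixels": 360 * 420,  # cap per-frame resolution (illustrative)
            },
            {"type": "text", "text": "Summarize this video."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True))
```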

> [!important]
> For the open data analysis datasets, document content was rephrased for training with Gemini 2.5 Pro and other models.

> [!important]
> Model type: Experimental.

# Intended Use

This model is intended for:

* Context-aware multimodal extraction and linking for complex document structures.
* High-fidelity document retrieval and content extraction from various document formats.
* Analysis recognition of charts, graphs, tables, and visual data representations.
* Document-based question answering for educational and enterprise applications.
* Extraction and LaTeX formatting of mathematical expressions from printed or handwritten content (see the prompt sketch after this list).
* Retrieval and summarization from long documents, slides, and multimodal inputs.
* Multilingual document analysis and structured content extraction for global use cases.
* Robotic or mobile automation with vision-guided contextual interaction.
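
For the math-extraction use case above, only the message content changes relative to the Quick Start; here is a sketch in which the image URL is a hypothetical placeholder for a scan of printed or handwritten math:

```python
# Prompt sketch for math-to-LaTeX extraction. The image URL is a hypothetical
# placeholder; substitute a scan or photo containing mathematical expressions.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/equation-scan.png"},
            {
                "type": "text",
                "text": "Extract every mathematical expression in this document "
                "and return each one as LaTeX.",
            },
        ],
    }
]
# Feed `messages` through the same apply_chat_template / process_vision_info /
# generate steps shown in the Quick Start section.
```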

# Limitations

* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to computational demands.
* Variable accuracy on uncommon or low-resource languages/scripts.
* Long video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can impact results (see the processor sketch after this list).
* In rare cases, outputs may contain hallucinated or contextually misaligned information.
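
On the visual-token point above: Qwen2.5-VL-style processors accept `min_pixels` and `max_pixels` bounds that control how many visual tokens each image consumes, trading accuracy against memory. A minimal sketch with illustrative values:

```python
from transformers import AutoProcessor

# Each visual token corresponds to a 28x28 pixel patch, so these bounds cap
# the per-image token count. The values below are illustrative, not tuned.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Perseus-Doc-vl-0712",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)
```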