Dhivehi Image Bounding Box Dataset - DeepSeek Format
Dataset Description
This dataset is a transformed version of alakxender/dhivehi-image-bbox-prompt, specifically formatted to match the DeepSeek OCR model's requirements for training vision-language models with grounding capabilities.
The original dataset contained Dhivehi (Thaana script) text with bounding box annotations. This version restructures the annotations into DeepSeek's grounding token format, enabling the model to both recognize text and localize it spatially within images.
Dataset Structure
Fields
- image: PIL Image object containing the document/page image
- content: Text string in DeepSeek grounding format with special tokens
Format Specification
The content field follows DeepSeek's grounding format:
<|ref|>{category}<|/ref|><|det|>[[x1, y1, x2, y2]]<|/det|>{text_content}
Where:
- {category}: Element type (Title, Text, Caption, Picture, etc.)
- [x1, y1, x2, y2]: Normalized bounding box coordinates (0-999 scale)
- {text_content}: The actual text content (in Dhivehi/Thaana script)
Example
{
"image": <PIL.Image>,
"content": "<|ref|>Title<|/ref|><|det|>[[102, 48, 841, 51]]<|/det|>މާދަމާގެ އެއްވުމަށް ފުލުހުން ޝަރުތުތަކެއް ކަނޑައަޅައި\n<|ref|>Text<|/ref|><|det|>[[67, 177, 876, 208]]<|/det|>ސަރުކާރާ ދެކޮޅަށް އިދިކޮޅު މީހުން މާދަމާ މާލޭގައި..."
}
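The content string can be parsed back into structured (category, box, text) triples with a regular expression. The snippet below is only a sketch; the helper name parse_grounding is illustrative and not part of the dataset or of DeepSeek's tooling:

import re

# One <|ref|>...<|/ref|><|det|>[[...]]<|/det|> block followed by its text,
# up to the next <|ref|> token or the end of the string.
PATTERN = re.compile(
    r"<\|ref\|>(?P<category>.*?)<\|/ref\|>"
    r"<\|det\|>\[\[(?P<box>.*?)\]\]<\|/det\|>"
    r"(?P<text>.*?)(?=<\|ref\|>|$)",
    re.DOTALL,
)

def parse_grounding(content: str):
    """Yield (category, [x1, y1, x2, y2], text) tuples from a content string."""
    for match in PATTERN.finditer(content):
        box = [int(v) for v in match.group("box").split(",")]
        yield match.group("category"), box, match.group("text").strip()

# Usage, e.g. with the sample above:
# for category, box, text in parse_grounding(sample["content"]):
#     print(category, box, text[:40])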
Transformation Details
Original → DeepSeek Format
Original format:
{
  "bbox": [x, y, width, height],
  "category": "Title",
  "text": "Sample text",
  "width": 800,
  "height": 1200
}
Transformed format:
- Bounding boxes converted from [x, y, w, h] to [x1, y1, x2, y2]
- Coordinates normalized to 0-999 scale (DeepSeek standard)
- Added grounding tokens: <|ref|>, <|/ref|>, <|det|>, <|/det|>
- Combined all annotations into a single content string
- Removed metadata fields, keeping only image and content
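A minimal sketch of this conversion, assuming the original records carry per-annotation bbox, category, and text fields plus the page width and height (the helper name to_deepseek_content is illustrative):

def to_deepseek_content(annotations, img_width, img_height):
    """Build a DeepSeek grounding string from original [x, y, w, h] annotations."""
    parts = []
    for ann in annotations:
        x, y, w, h = ann["bbox"]
        # [x, y, w, h] -> [x1, y1, x2, y2], normalized to the 0-999 scale
        x1 = round(x / img_width * 999)
        y1 = round(y / img_height * 999)
        x2 = round((x + w) / img_width * 999)
        y2 = round((y + h) / img_height * 999)
        parts.append(
            f"<|ref|>{ann['category']}<|/ref|>"
            f"<|det|>[[{x1}, {y1}, {x2}, {y2}]]<|/det|>"
            f"{ann.get('text', '')}"
        )
    return "\n".join(parts)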
Usage
Loading the Dataset
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("alakxender/dhivehi-image-bbox-ds-fmt")
# Access splits
train_data = dataset['train']
val_data = dataset['validation']
# Example: Print first sample
sample = train_data[0]
print(f"Content: {sample['content'][:200]}...")
Training with DeepSeek OCR
from transformers import AutoModel, AutoTokenizer
# Load DeepSeek OCR model
model = AutoModel.from_pretrained(
    "your-deepseek-ocr-model",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "your-deepseek-ocr-model",
    trust_remote_code=True
)
# Use the dataset for fine-tuning
# The content field is ready to use as training targets
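How samples are turned into training pairs depends on the fine-tuning framework you use; the sketch below only illustrates the general idea of pairing a grounding prompt with the content field as the target sequence (the prompt text and the field names other than image/content are assumptions):

from datasets import load_dataset

dataset = load_dataset("alakxender/dhivehi-image-bbox-ds-fmt")
PROMPT = "<image>\n<|grounding|>Extract all text with their locations. "

def build_example(sample):
    """Pair the grounding prompt with the dataset's content field as the target."""
    return {
        "image": sample["image"],     # PIL image for the vision encoder
        "prompt": PROMPT,             # instruction the model is conditioned on
        "target": sample["content"],  # grounding string the model should learn to emit
    }

train_examples = [build_example(s) for s in dataset["train"]]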
Inference Format
When using a model trained on this dataset:
# Input prompt
prompt = "<image>\n<|grounding|>Extract all text with their locations. "
# Expected output format (same as content field)
output = model.infer(
    tokenizer=tokenizer,
    prompt=prompt,
    image_file='document.png'
)
# Output: "<|ref|>Title<|/ref|><|det|>[[...]]<|/det|>Text content..."
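To visualize or post-process predictions, the 0-999 normalized coordinates can be mapped back to pixel space; a minimal sketch (the helper name denormalize_box is illustrative):

def denormalize_box(box, img_width, img_height):
    """Map a [x1, y1, x2, y2] box on the 0-999 scale back to pixel coordinates."""
    x1, y1, x2, y2 = box
    return [
        round(x1 / 999 * img_width),
        round(y1 / 999 * img_height),
        round(x2 / 999 * img_width),
        round(y2 / 999 * img_height),
    ]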
Dataset Categories
The dataset includes various document element types:
- Title: Document titles and headers
- Text: Body text paragraphs
- Caption: Image captions and labels
- Picture: Image regions (bounding boxes only)
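The element types above can be tallied directly from the content strings without decoding any images; a small self-contained sketch:

import re
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("alakxender/dhivehi-image-bbox-ds-fmt")

# Category labels sit between <|ref|> and <|/ref|>
ref_pattern = re.compile(r"<\|ref\|>(.*?)<\|/ref\|>")

counts = Counter()
for content in dataset["train"]["content"]:
    counts.update(ref_pattern.findall(content))

print(counts.most_common())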
Language
- Primary Language: Dhivehi (Maldivian)
- Script: Thaana (ތާނަ)
- Text Direction: Right-to-left (RTL)
Dataset Splits
- Train: Training set
- Validation: Validation/test set
Intended Use
Primary Use Cases
- Document OCR with Layout Understanding: Train models to extract text while preserving spatial relationships
- Grounding-aware Text Recognition: Enable models to localize text regions
- Dhivehi Document Processing: Specialized for Thaana script recognition
- Multi-modal Vision-Language Tasks: Train models that understand both visual and textual content
Out-of-Scope Use
- This dataset is specifically formatted for DeepSeek-style grounding models
- Not suitable for classification-only tasks
- Not intended for models that don't support grounding tokens