# GoldenNet-Qwen2.5-0.5B-Full-v1

## Model Description
GoldenNet-Qwen2.5-0.5B-Full-v1 is a fully fine-tuned version of Qwen/Qwen2.5-0.5B-Instruct specialized for Iraqi Government Correspondence Processing.
This is the full fine-tuning variant where all 494M parameters were trained, potentially offering the best task-specific performance.
## Tasks
- Document Classification - 8 categories: طلب (request), شكوى (complaint), تقرير (report), إعلام (announcement), استفسار (inquiry), دعوة (invitation), تعميم (circular), إحالة (referral)
- Named Entity Recognition - Extracts persons, organizations, locations, dates, monetary values, laws
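The card does not specify the exact JSON schema the model returns for these two tasks. Purely as an illustration, the outputs might look like the following; the field names (`category`, `persons`, `organizations`, etc.) are assumptions, not documented behavior:

```python
import json

# Hypothetical classification output; the "category" field name is an assumption.
classification = json.loads('{"category": "طلب"}')

# Hypothetical NER output covering the entity types listed above;
# keys are assumed, values are made up for illustration.
ner = json.loads("""
{
  "persons": [],
  "organizations": ["وزارة التربية"],
  "locations": [],
  "dates": ["2025"],
  "monetary_values": [],
  "laws": []
}
""")

print(classification["category"])
print(sorted(ner.keys()))
```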
## Model Comparison
| Model | Method | Train Loss | Eval Loss | Training Time | Size |
|---|---|---|---|---|---|
| QLoRA-v1 | 4-bit QLoRA | 0.448 | 0.2998 | 49s | 943MB |
| LoRA-v1 | Standard LoRA | 0.496 | 0.3665 | 70s | 943MB |
| Full-v1 | Full Fine-tune | 0.461 | 0.3636 | 121s | 1.9GB |
## Training Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-0.5B-Instruct |
| Fine-tuning Method | Full (all parameters) |
| Learning Rate | 5e-5 |
| Optimizer | AdamW 8-bit |
| Epochs | 3 |
| Batch Size | 1 (effective: 16) |
| Max Sequence Length | 1024 |
| Precision | BF16 |
| Trainable Parameters | 494M (100%) |
| Hardware | NVIDIA RTX 5070 (8GB VRAM) |
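The hyperparameters above can be expressed as a Hugging Face `TrainingArguments` sketch. This is an illustrative reconstruction, not the actual training script (which is not published with the card); the output directory is an assumed path, and gradient accumulation of 16 is inferred from the effective batch size:

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters.
args = TrainingArguments(
    output_dir="goldennet-full-v1",   # assumed path
    learning_rate=5e-5,
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,   # effective batch size of 16
    optim="adamw_bnb_8bit",           # AdamW 8-bit via bitsandbytes
    bf16=True,
)
```

Max sequence length (1024) is applied at tokenization time rather than through `TrainingArguments`.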
## Loss Progression
- Epoch 1: 0.983
- Epoch 2: 0.328
- Epoch 3: 0.171 (lowest among all variants!)
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Alamori/GoldenNet-Qwen2.5-0.5B-Full-v1",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Alamori/GoldenNet-Qwen2.5-0.5B-Full-v1")

# Classification example: a sample correspondence (a Ministry of Education
# letter requesting approval to appoint 50 teachers)
correspondence = """جمهورية العراق
وزارة التربية
العدد: 1234/ت/2025
إلى/ السيد مدير عام التعليم المحترم
م/ طلب تعيين معلمين
نرجو الموافقة على تعيين 50 معلماً.
مع التقدير"""

# Instruction (Arabic): "Classify the following government correspondence into
# one of the categories: request, complaint, report, announcement, inquiry,
# invitation, circular, referral. Answer in JSON format."
instruction = "صنّف المراسلة الحكومية التالية إلى إحدى الفئات: طلب، شكوى، تقرير، إعلام، استفسار، دعوة، تعميم، إحالة. أجب بصيغة JSON."

messages = [{"role": "user", "content": f"{instruction}\n\n{correspondence}"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
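Although the instruction asks for JSON, small models sometimes wrap the JSON object in extra text. A minimal, hedged post-processing sketch (the `"category"` field in the example reply is an assumed schema, and `parse_model_json` is a hypothetical helper, not part of the model card):

```python
import json
import re

def parse_model_json(generated: str) -> dict:
    """Extract the first JSON object from a model reply.

    Returns an empty dict when no valid JSON object is found.
    """
    match = re.search(r"\{.*\}", generated, re.DOTALL)
    if not match:
        return {}
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return {}

# Example with a hypothetical reply (field name is an assumption):
reply = 'التصنيف:\n{"category": "طلب"}'
print(parse_model_json(reply))  # {'category': 'طلب'}
```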
## When to Use This Model
- Use Full-v1 when you need maximum task-specific performance and have sufficient storage/memory
- Use QLoRA-v1 for best balance of quality and efficiency (recommended for most cases)
- Use LoRA-v1 for comparison or when you need standard LoRA compatibility
## Related Models
- GoldenNet-Qwen2.5-0.5B-QLoRA-v1 - 4-bit quantized (best eval loss)
- GoldenNet-Qwen2.5-0.5B-LoRA-v1 - Standard LoRA
## License

Apache 2.0

Developed by Golden Net AI

Empowering Iraqi Government Digital Transformation