Qwen2.5-3B-Gita-FT

Dataset Banner

A Bhagavad Gita–focused assistant that adopts a Krishna-inspired teaching persona to guide you on your spiritual path.

🌟 Model Description

Qwen2.5-3B-Gita-FT is a LoRA-tuned model built on Qwen/Qwen2.5-3B-Instruct, focused on tasks around the Bhagavad Gītā. It supports:

  1. Krishna-inspired persona: Calm, compassionate, and practical tone for guidance and teaching.
  2. Commentary Q&A: approachable explanations of concepts (e.g., niṣkāma-karma, guṇa theory) in a Krishna-like tone.

Important: The model is not Krishna, nor a religious authority. It patterns its style from training data and prompts. It can make mistakes, simplify nuanced ideas, misremember verse numbers, or produce non-canonical wording. For study or citation, please verify with authoritative editions and scholars.

πŸš€ Key Features

  • Commentary tone control: System prompts steer classical or modern explanatory style.
  • Resource efficient: LoRA adapters with mixed precision; optional 4-bit inference.
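
Tone control, as described above, comes down to swapping the system message. A minimal sketch (the prompt wording here is illustrative, not taken from the model's training data):

```python
# Hypothetical system prompts for steering commentary style; the exact
# wording is an assumption, not copied from the model's training set.
CLASSICAL = (
    "You are a teacher of the Bhagavad Gita. Explain verses in a classical "
    "commentarial style, citing chapter and verse where relevant."
)
MODERN = (
    "You are a teacher of the Bhagavad Gita. Explain verses in plain, "
    "modern language with everyday examples."
)

def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Assemble a chat-format conversation suitable for apply_chat_template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_messages(MODERN, "What does niṣkāma-karma mean in practice?")
```

Pass the resulting `messages` list to `tokenizer.apply_chat_template` as shown in the Quickstart below.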

πŸ“Š Model Specs

Parameter Value
Base Model Qwen/Qwen2.5-3B-Instruct
Fine-tuning LoRA (rank=16, alpha=32)
Seq Length 1024 (recommend ≥ 512 for long verses)
Epochs 3
LR 2e-4
Batch 2 (micro) × 4 (grad acc)
Optimizer AdamW 8-bit
Precision bf16 (training & inference where available)
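
The batch settings in the table combine as micro-batch size × gradient-accumulation steps. A quick sanity check in plain Python, with values copied from the table:

```python
# Training hyperparameters as listed in the specs table above.
config = {
    "lora_rank": 16,
    "lora_alpha": 32,
    "seq_length": 1024,
    "epochs": 3,
    "learning_rate": 2e-4,
    "micro_batch_size": 2,
    "grad_accum_steps": 4,
}

# Effective batch size per optimizer step = micro-batch × accumulation steps.
effective_batch = config["micro_batch_size"] * config["grad_accum_steps"]
print(effective_batch)  # 8
```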

🎯 Intended Uses

βœ… Recommended

  • Study aids for verse comprehension, transliteration, and quick glosses.
  • Educational apps and assistive tools for learners.
  • Search-and-summarize workflows over specific verses and concepts.

⚠️ Limitations

  • Interpretation variance: Philosophical terms can have multiple valid readings.
  • Historical/cultural nuance: May miss context without retrieval.
  • Hallucinations: error rates are noticeably higher when generating Hindi or Gujarati text.

πŸ› οΈ Quickstart (Transformers)

Requires transformers>=4.41, torch, accelerate. Some Qwen models need trust_remote_code=True.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "JDhruv14/Qwen2.5-3B-Gita-FT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

# Prepare the conversation
messages = [
    {
        "role": "system",
        "content": "You are Lord Krishna, the serene, compassionate teacher of the Bhagavad Gita."
    },
    {
        "role": "user",
        "content": "Hey Keshav, what's my dharma?"
    }
]

# Apply the chat template and generate (do_sample=True so temperature takes effect)
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
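
The Key Features section mentions optional 4-bit inference. A sketch using the standard transformers + bitsandbytes quantization path (assumes the `bitsandbytes` package and a CUDA GPU; untested on this specific checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config; requires bitsandbytes and a CUDA device.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "JDhruv14/Qwen2.5-3B-Gita-FT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then proceeds exactly as in the snippet above; only the loading step changes.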

πŸ“š Citation

@misc{gita-qwen-assistant,
  title={Gita-qwen-3B-Assistant: A Bhagavad Gītā-focused model for motivation and guidance, drawing on the eternal teachings of Madhav},
  author={JDhruv14},
  year={2025},
  url={https://huggingface.co/JDhruv14/Qwen2.5-3B-Gita-FT}
}

🀝 Contributing

  • Add verse-aligned examples, domain-checked glosses, and evaluation sets.
  • Propose prompt templates for specific chapters/themes (e.g., Karma-yoga, Bhakti-yoga).
  • Open issues/PRs for bugs or inaccuracies.

πŸ“„ License

Released under Apache 2.0. See LICENSE.
