Dhee-NxtGen-Qwen3-Telugu-v2

Model Description

Dhee-NxtGen-Qwen3-Telugu-v2 is a large language model developed by DheeYantra in collaboration with NxtGen Cloud Technologies Pvt. Ltd.
It is based on the Qwen3 architecture and fine-tuned for assistant-style, function-calling, and reasoning-based conversational tasks in Telugu.

The model produces fluent, natural, and contextually coherent Telugu text, making it suitable for building intelligent assistants and domain-specific dialogue systems.

Key Features

  • Fluent and context-aware Telugu text generation
  • Optimized for reasoning and assistant-style conversations
  • Supports summarization, question answering, and creative writing
  • Fully compatible with 🤗 Hugging Face Transformers
  • Works seamlessly with vLLM for high-performance inference

Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "dheeyantra/dhee-nxtgen-qwen3-telugu-v2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # load in the checkpoint's native F16 precision
    trust_remote_code=True,
)

# Example prompt in ChatML format (Telugu: "Can you schedule an appointment for me?")
prompt = """<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
మీరు నా కోసం ఒక అపాయింట్‌మెంట్ షెడ్యూల్ చేయగలరా?<|im_end|>
<|im_start|>assistant
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
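
The prompt above writes out the ChatML turns by hand. Qwen3-family tokenizers normally ship the same format as a built-in chat template; assuming this checkpoint keeps it, the conversation can instead be rendered with apply_chat_template, reusing the tokenizer and model loaded above:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "మీరు నా కోసం ఒక అపాయింట్‌మెంట్ షెడ్యూల్ చేయగలరా?"},
]

# Render the conversation with the tokenizer's built-in chat template and
# append the assistant header so generation starts a fresh reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))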

Intended Uses & Limitations

Intended Uses

  • Telugu conversational chatbots and assistants
  • Function-calling and structured response generation (a hedged sketch follows this list)
  • Story generation and summarization in Telugu
  • Natural dialogue systems for Indic AI applications
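
For the function-calling use case, a minimal sketch under two assumptions: that this checkpoint's chat template accepts the tools= argument that recent Hugging Face Transformers templates support, and that the schedule_appointment schema below is purely illustrative, not part of this model's documented API. It reuses the tokenizer and model from the Example Usage section:

# Hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "schedule_appointment",
        "description": "Book an appointment for the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "Appointment date, YYYY-MM-DD"},
                "time": {"type": "string", "description": "Appointment time, HH:MM"},
            },
            "required": ["date", "time"],
        },
    },
}]

messages = [
    {"role": "user", "content": "మీరు నా కోసం ఒక అపాయింట్‌మెంట్ షెడ్యూల్ చేయగలరా?"},
]

# If the bundled template supports tool schemas, they are rendered into the
# prompt here; the model is then expected to reply with a structured tool call.
input_ids = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))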

Limitations

  • May generate inaccurate or biased responses in rare cases
  • Performance can vary on out-of-domain or code-mixed inputs
  • Primarily optimized for Telugu; other languages may produce less fluent results

vLLM / High-Performance Serving Requirements

For high-throughput serving with vLLM, ensure the following environment:

  • GPU with compute capability ≥ 8.0 (e.g., NVIDIA A100); a quick capability check follows this list
  • PyTorch 2.1+ and CUDA toolkit installed
  • For V100 GPUs (sm_70), vLLM GPU inference is not supported; CPU fallback is possible but slower.
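
To check the compute-capability requirement above, PyTorch can report the capability of the local GPU:

import torch

# Report the compute capability of the default CUDA device, if present.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"Compute capability: {major}.{minor}")  # >= 8.0 needed for GPU serving here
else:
    print("No CUDA device detected; vLLM GPU serving is unavailable.")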

Install dependencies:

pip install torch transformers vllm sentencepiece

Run vLLM server:

vllm serve dheeyantra/dhee-nxtgen-qwen3-telugu-v2 \
  --host 0.0.0.0 \
  --port 8000
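
Once the server is running, vLLM exposes an OpenAI-compatible API under /v1, so any OpenAI client can query the model. A minimal sketch with the openai Python package (the api_key is a dummy value; vLLM only checks it if the server was started with one):

from openai import OpenAI

# Point the client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="dheeyantra/dhee-nxtgen-qwen3-telugu-v2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "మీరు నా కోసం ఒక అపాయింట్‌మెంట్ షెడ్యూల్ చేయగలరా?"},
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)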

License

Released under the Apache 2.0 License.


Developed by DheeYantra in collaboration with NxtGen Cloud Technologies Pvt. Ltd.
