
Zaya 1B Persian

Zaya is a family of lightweight models for the Persian language. This repo hosts the 1B version, based on Gemma-3 1B, which has been continually pretrained and instruction-tuned for Persian. The model is designed for high-quality Persian language understanding and generation.

Model Details

  • Base Model: Gemma-3 1B
  • Language: Multilingual, with a focus on Persian
  • Parameters: 1 Billion
  • Context Length: 32K

Training Procedure

  1. Continual Pretraining:
    The base Gemma-3 1B model was continually pretrained on a large-scale Persian corpus to improve its understanding of Persian language, grammar, and context.

  2. Instruction Fine-tuning:
    The model was then instruction-tuned on a curated Persian instruction dataset using QLoRA, enabling it to follow user prompts and generate helpful, context-aware responses in Persian.

  3. SLERP Merge:
    Finally, the instruction-tuned model was merged with the original Gemma-3 1B Instruct model using the SLERP (Spherical Linear Interpolation) method. This approach combines the strengths of both models, balancing general capabilities with Persian-specific instruction-following.
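The SLERP step above can be sketched as follows. This is a minimal illustration of spherical linear interpolation on flattened weight tensors, not the actual merge pipeline (merges like this are typically done per-tensor with a tool such as mergekit); the `slerp` function and the interpolation factor `t` here are illustrative assumptions.

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors.

    t=0 returns w_a and t=1 returns w_b; intermediate t follows the arc
    between the two weight directions rather than a straight line.
    """
    a = w_a / np.linalg.norm(w_a)
    b = w_b / np.linalg.norm(w_b)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly collinear weights: fall back to plain linear interpolation
        return (1.0 - t) * w_a + t * w_b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * w_a + (np.sin(t * theta) / s) * w_b

# Toy example: interpolate two small "weight" vectors halfway
merged = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)
```

Compared with plain averaging, SLERP preserves the magnitude relationship between the two weight sets along the interpolation path, which is why it is a popular choice for merging fine-tuned checkpoints with their base instruct model.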

Intended Use

  • Persian language generation and understanding
  • Instruction following in Persian
  • Chatbots, assistants, and educational tools for Persian speakers

Note: This model is small compared to most large language models, making it suitable for applications with limited computational resources while still delivering high-quality Persian output. Its main intended use cases are information retrieval, question answering, and conversational AI; it lacks the extensive capabilities of larger models, such as complex reasoning, multi-step task execution, and advanced problem-solving.

Usage

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "arxyzan/zaya-1b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "system", "content": "You are a helpful assistant intended for the Persian language."},
    # Translation: "How are large language models built?"
    {"role": "user", "content": "مدل های زبانی بزرگ چطوری ساخته میشن؟"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,  # return a dict so it can be unpacked into generate()
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Llama.cpp

To use this model with llama.cpp, cd into your cloned llama.cpp directory and run:

# Prompt translation: "How can I build a rocket?"
./llama-cli -hf arxyzan/zaya-1b-it -p "چطوری میتونم یه موشک بسازم؟"

Ollama

Two quants of this model are available right in this repo: Q8_0 and Q4_0. You can run them with Ollama as follows:

# Q4_0
ollama run hf.co/arxyzan/zaya-1b-it:Q4_0
# Q8_0
ollama run hf.co/arxyzan/zaya-1b-it:Q8_0

Evaluation

Coming soon!

Limitations & Bias

  • Bias: The model may exhibit biases present in the training data, which is predominantly sourced from the Persian internet and other text corpora. This can lead to biased or inappropriate responses in certain contexts.
  • Hallucination: The model may generate plausible-sounding but factually incorrect or nonsensical answers. It is important to verify critical information independently.
  • Safety: The model may generate harmful or sensitive content, especially if prompted inappropriately. Users should implement safety measures to mitigate this risk.

Citation

If you use this model, please cite this repository.


Reach Out

For questions or feedback, you can reach out to me via mail at [email protected] or through Telegram at @arxyzan.
