# TinyLlama-1.1B-Chat-v1.0 Fine-tuned on sft-hh-data
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the activeDap/sft-hh-data dataset.
## Training Results

### Training Statistics
| Metric | Value |
|---|---|
| Total Steps | 157 |
| Final Training Loss | 1.5769 |
| Min Training Loss | 1.5307 |
| Training Runtime | 183.94 seconds |
| Samples/Second | 54.37 |
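As a quick sanity check, the step count, runtime, and throughput in the table are mutually consistent, assuming one optimizer update per effective batch of 64 samples:

```python
import math

# Values from the training statistics and configuration tables
runtime_s = 183.94        # Training Runtime
samples_per_s = 54.37     # Samples/Second
total_batch_size = 64     # 4 per device x 4 GPUs x 4 grad-accum steps

# Approximate number of samples seen during the single epoch
total_samples = runtime_s * samples_per_s  # ~10,000

# One optimizer step per effective batch -> expected step count
expected_steps = math.ceil(total_samples / total_batch_size)
print(expected_steps)  # matches the reported 157 total steps
```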
### Training Configuration
| Parameter | Value |
|---|---|
| Base Model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| Dataset | activeDap/sft-hh-data |
| Number of Epochs | 1.0 |
| Per Device Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Total Batch Size | 64 (4 GPUs) |
| Learning Rate | 5e-05 |
| LR Scheduler | cosine |
| Warmup Ratio | 0.1 |
| Max Sequence Length | 1024 |
| Optimizer | adamw_torch |
| Mixed Precision | BF16 |
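The learning-rate schedule above (cosine with a 0.1 warmup ratio) can be sketched as follows. This mirrors the shape of the Transformers `cosine` scheduler, assuming linear warmup to the peak LR and cosine decay to zero; the exact library implementation may differ in minor details:

```python
import math

def lr_at_step(step, total_steps=157, peak_lr=5e-5, warmup_ratio=0.1):
    """Linear warmup followed by cosine decay to zero (a sketch of the
    `cosine` scheduler with warmup_ratio=0.1 from the config table)."""
    warmup_steps = int(total_steps * warmup_ratio)  # ~15 steps here
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(15))   # peak LR of 5e-5 right after warmup
print(lr_at_step(157))  # decayed to ~0 at the final step
```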
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "activeDap/TinyLlama-1.1B-Chat-v1.0_sft-hh-data"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Format input with prompt template
prompt = "What is machine learning?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate response
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Framework
- Library: Transformers + TRL
- Training Type: Supervised Fine-Tuning (SFT)
- Format: Prompt-completion with Assistant-only loss
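"Assistant-only loss" means the prompt tokens are masked out of the loss (label `-100` in the PyTorch convention) so that only the assistant's completion contributes gradients. A minimal, tokenizer-free sketch of that masking; the marker string and word-level "tokens" here are illustrative, not the model's actual tokenization:

```python
IGNORE_INDEX = -100  # PyTorch's CrossEntropyLoss skips this label

def mask_prompt_labels(tokens, marker="Assistant:"):
    """Return labels with everything up to and including the response
    marker masked, so loss is computed only on the completion."""
    labels = list(tokens)
    try:
        split = tokens.index(marker) + 1
    except ValueError:
        split = 0  # no marker found: leave all tokens unmasked
    for i in range(split):
        labels[i] = IGNORE_INDEX
    return labels

tokens = ["Human:", "What", "is", "ML?", "Assistant:", "A", "field", "..."]
print(mask_prompt_labels(tokens))
# [-100, -100, -100, -100, -100, 'A', 'field', '...']
```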
## Citation
If you use this model, please cite the original base model and dataset:
```bibtex
@misc{ultrafeedback2023,
  title={UltraFeedback: Boosting Language Models with High-quality Feedback},
  author={Ganqu Cui and Lifan Yuan and Ning Ding and others},
  year={2023},
  eprint={2310.01377},
  archivePrefix={arXiv}
}
```