# Qwen3-1.7B-alpaca-cleaned - Merged Model

Full-precision (16-bit) model with the LoRA adapters merged into the base weights.
## Model Details
- Base Model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
- Format: merged_16bit
- Dataset: yahma/alpaca-cleaned
- Size: ~3.5 GB (16-bit weights for ~1.7B parameters)
- Usage: transformers
## Related Models
- **LoRA Adapters:** [fs90/Qwen3-1.7B-alpaca-cleaned-lora](https://huggingface.co/fs90/Qwen3-1.7B-alpaca-cleaned-lora) - smaller, adapter-only weights
- **GGUF Quantized:** [fs90/Qwen3-1.7B-alpaca-cleaned-gguf](https://huggingface.co/fs90/Qwen3-1.7B-alpaca-cleaned-gguf) - GGUF format for llama.cpp/Ollama (usage sketch below)
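As a minimal sketch, the GGUF build can be run from Python with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The quantization filename below is an assumption; check the GGUF repository for the actual file names.

```python
# Sketch: running the GGUF build with llama-cpp-python (pip install llama-cpp-python).
# The quant filename is hypothetical -- download the real file from the GGUF repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-1.7B-alpaca-cleaned.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,  # matches the training max sequence length
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Your question here"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```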
## Training Details
- LoRA Rank: 16
- Training Time: 53.0 minutes
- Training Loss: 1.3403
- Max Seq Length: 4096
- Training Mode: Full training
For the complete training configuration, see the LoRA adapters repository/directory.
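If you prefer the adapter-only release over this merged checkpoint, a minimal PEFT loading sketch looks like the following. The repo ids are taken from this card; everything else is standard PEFT usage and assumes `peft`, `transformers`, and `bitsandbytes` are installed.

```python
# Sketch: applying the LoRA adapters to the 4-bit base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B-unsloth-bnb-4bit",  # base model from this card
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "fs90/Qwen3-1.7B-alpaca-cleaned-lora")
tokenizer = AutoTokenizer.from_pretrained("fs90/Qwen3-1.7B-alpaca-cleaned-lora")
```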
## Usage

### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./outputs/Qwen3-1.7B-alpaca-cleaned/merged_16bit"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [{"role": "user", "content": "Your question here"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # open the assistant turn so the model replies
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
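For interactive use, transformers' `TextStreamer` prints tokens as they are generated. A short sketch reusing the model, tokenizer, and `inputs` from the block above:

```python
# Sketch: streaming generation with the model/tokenizer loaded above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs, max_new_tokens=256, streamer=streamer)
```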
## License
Based on unsloth/Qwen3-1.7B-unsloth-bnb-4bit and trained on yahma/alpaca-cleaned. Please refer to the original model and dataset licenses.
## Framework Versions
- Unsloth: 2025.11.3
- Transformers: 4.57.1
- PyTorch: 2.9.0+cu128
*Generated: 2025-11-22 00:48:46*
## Model Tree

Lineage for fs90/Qwen3-1.7B-alpaca-cleaned:

- Base model: Qwen/Qwen3-1.7B-Base
- Finetuned: Qwen/Qwen3-1.7B
- Quantized: unsloth/Qwen3-1.7B-unsloth-bnb-4bit