Qwen3-1.7B-alpaca-cleaned - Merged Model

Full-precision (16-bit) merged model with LoRA adapters integrated.

Model Details

  • Base Model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
  • Dataset: yahma/alpaca-cleaned
  • Parameters: ~2B (F16 safetensors)

Related Models

Training Details

  • LoRA Rank: 16
  • Training Time: 53.0 minutes
  • Training Loss: 1.3403
  • Max Seq Length: 4096
  • Training Mode: Full training

For complete training configuration, see the LoRA adapters repository/directory.
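
The merged checkpoint is simply the base weights with the LoRA deltas folded in. If you need to reproduce the merge from the adapters, a minimal PEFT sketch is shown below; the adapter path is a placeholder, and loading the full-precision Qwen/Qwen3-1.7B weights for the 16-bit merge is an assumption, so adjust both to match the adapters repository/directory.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a full-precision base in float16 (assumption: the original Qwen/Qwen3-1.7B weights).
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the trained LoRA adapters (hypothetical path; point this at the actual adapters directory).
model = PeftModel.from_pretrained(base, "./outputs/Qwen3-1.7B-alpaca-cleaned/lora_adapters")

# Fold the adapter weights into the base weights and save a standalone 16-bit checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("./outputs/Qwen3-1.7B-alpaca-cleaned/merged_16bit")

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
tokenizer.save_pretrained("./outputs/Qwen3-1.7B-alpaca-cleaned/merged_16bit")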

Usage

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "./outputs/Qwen3-1.7B-alpaca-cleaned/merged_16bit",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("./outputs/Qwen3-1.7B-alpaca-cleaned/merged_16bit")

messages = [{"role": "user", "content": "Your question here"}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
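
For interactive use, the response can be streamed token by token. A small variant of the snippet above using transformers' TextStreamer (reusing model, tokenizer, and inputs from above):

from transformers import TextStreamer

# Stream the response to stdout as it is generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=256, streamer=streamer)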

License

Based on unsloth/Qwen3-1.7B-unsloth-bnb-4bit and trained on yahma/alpaca-cleaned. Please refer to the original model and dataset licenses.

Framework Versions

  • Unsloth: 2025.11.3
  • Transformers: 4.57.1
  • PyTorch: 2.9.0+cu128
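
To confirm that a local environment matches these versions, a quick check:

from importlib.metadata import version

# Print the installed versions of the key packages for comparison with the list above.
for pkg in ("unsloth", "transformers", "torch"):
    print(pkg, version(pkg))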

Generated: 2025-11-22 00:48:46
