This is my first attempt at quantizing the Qwen3 model Qwen/Qwen3-30B-A3B-Thinking-2507 with Intel's auto-round, using the W4A16 scheme and exporting in GPTQ format. The quantization was run like so:
```bash
auto-round-light --model "Qwen/Qwen3-30B-A3B-Thinking-2507" --scheme "W4A16" --format "auto_gptq" --output_dir "./Quantized" --model_dtype fp16
```
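For reference, below is a minimal sketch of what I believe is the rough Python-API equivalent of the CLI call above, assuming a recent auto-round release. The `bits`/`group_size`/`sym` values are assumptions matching common W4A16 defaults, not values taken from my actual run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen3-30B-A3B-Thinking-2507"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weights, 16-bit activations (W4A16); group_size=128 and sym=True
# are assumed defaults, not verified against the CLI run above.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)

# Export in GPTQ-compatible format, matching --format "auto_gptq".
autoround.quantize_and_save("./Quantized", format="auto_gptq")
```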
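To sanity-check the quantized checkpoint, something like the following should work with transformers, assuming a GPTQ-capable backend (e.g. gptqmodel or auto-gptq) is installed. The prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pramjana/Qwen3-30B-A3B-Thinking-2507-4bit-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat prompt; this is a thinking model, so the response may
# include reasoning content before the final answer.
messages = [{"role": "user", "content": "Briefly explain what W4A16 quantization means."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```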