---
license: apache-2.0
base_model:
- Qwen/Qwen3-30B-A3B-Thinking-2507
library_name: transformers
---

This is my first attempt at quantizing Qwen/Qwen3-30B-A3B-Thinking-2507 with auto-round, using the following command:

```shell
auto-round-light --model "Qwen/Qwen3-30B-A3B-Thinking-2507" --scheme "W4A16" --format "auto_gptq" --output_dir "./Quantized" --model_dtype fp16
```
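Since the checkpoint is exported in `auto_gptq` format, it should load through the standard transformers API, provided a GPTQ backend (`gptqmodel` or `auto-gptq`) is installed. A minimal inference sketch, assuming the local `./Quantized` output directory produced by the command above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Output directory from the auto-round-light command above
# (swap in the Hub repo id if loading the uploaded checkpoint instead)
model_path = "./Quantized"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # place the quantized weights on available GPU(s)
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Briefly explain W4A16 quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that this is a thinking model, so generations begin with a reasoning segment; allow a generous `max_new_tokens` budget.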