Latest commit: "Results table updated" (2d8fdf4, verified)

The repository ships nine GGUF quantizations of Qwen3-8B, each with its own per-model card (last touched by the commit "Model info updated"):

| Model | Weight file size |
|---|---|
| Qwen3-8B-Q2_K | 3.28 GB |
| Qwen3-8B-Q3_K_M | 4.12 GB |
| Qwen3-8B-Q3_K_S | 3.77 GB |
| Qwen3-8B-Q4_K_M | 5.03 GB |
| Qwen3-8B-Q4_K_S | 4.8 GB |
| Qwen3-8B-Q5_K_M | 5.85 GB |
| Qwen3-8B-Q5_K_S | 5.72 GB |
| Qwen3-8B-Q6_K | 6.73 GB |
| Qwen3-8B-Q8_0 | 8.71 GB |

All weight files were added in the commit "Add Q2–Q8_0 quantized models with per-model cards, MODELFILE, CLI examples, and auto-upload". Four small supporting files round out the repository: files of 2.06 kB, 561 Bytes, and 813 Bytes from that same commit, and a 5.22 kB file last touched by "Results table updated".
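Since the upload commit mentions per-model cards and CLI examples, here is a minimal sketch of how one of these GGUF quantizations could be fetched and run locally. The repo id and filename below are assumptions for illustration (they are not shown in this listing); substitute the values from the model card of the quantization you want.

```python
# Minimal sketch: download one quantization from the Hub and run it with
# llama-cpp-python. Requires `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id and filename for illustration only; take the real values
# from the corresponding per-model card.
gguf_path = hf_hub_download(
    repo_id="your-namespace/Qwen3-8B-GGUF",
    filename="Qwen3-8B-Q4_K_M.gguf",  # the ~5.03 GB variant in the table above
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain K-quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

As a rule of thumb, the lower quantizations (Q2_K, Q3_K_*) trade accuracy for the smallest footprint, while Q6_K and Q8_0 stay closest to the original weights at the cost of the larger file sizes shown in the table.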