Add quantized models with per-model cards, MODELFILE, CLI examples, and auto-upload
- .gitattributes +9 -0
- MODELFILE +25 -0
- Qwen3-Coder-30B-A3B-Instruct-Q2_K/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q3_K_M/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q3_K_S/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q4_K_M/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q4_K_S/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q5_K_M/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q5_K_S/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q6_K/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-Q8_0/README.md +0 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q2_K.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_M.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_S.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_S.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_M.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_S.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q6_K.gguf +3 -0
- Qwen3-Coder-30B-A3B-Instruct-f32_Q8_0.gguf +3 -0
- README.md +102 -0
- SHA256SUMS.txt +9 -0
.gitattributes CHANGED
@@ -33,3 +33,12 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen3-Coder-30B-A3B-Instruct-f32_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
MODELFILE ADDED
@@ -0,0 +1,25 @@
# MODELFILE for Qwen3-Coder-30B-A3B-Instruct-GGUF
# Used by LM Studio, OpenWebUI, GPT4All, etc.

context_length: 32768
embedding: false
f16: cpu

# Chat template using ChatML (used by Qwen)
prompt_template: >-
  <|im_start|>system
  You are a helpful assistant.<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant

# Stop sequences help end generation cleanly
stop: "<|im_end|>"
stop: "<|im_start|>"

# Default sampling
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
repeat_penalty: 1.1
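These settings map directly onto llama.cpp's bundled server. A minimal sketch, assuming a local llama.cpp build and the Q4_K_M file from this commit (the port is arbitrary):

```bash
# Serve the model with the MODELFILE's context length
./llama-server -m Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf -c 32768 --port 8080

# Query the OpenAI-compatible endpoint with the MODELFILE's sampling defaults
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a quicksort in Python."}], "temperature": 0.6, "top_p": 0.95}'
```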
Qwen3-Coder-30B-A3B-Instruct-Q2_K/README.md ADDED
Binary file (2.77 kB).

Qwen3-Coder-30B-A3B-Instruct-Q3_K_M/README.md ADDED
Binary file (2.77 kB).

Qwen3-Coder-30B-A3B-Instruct-Q3_K_S/README.md ADDED
Binary file (2.77 kB).

Qwen3-Coder-30B-A3B-Instruct-Q4_K_M/README.md ADDED
Binary file (2.8 kB).

Qwen3-Coder-30B-A3B-Instruct-Q4_K_S/README.md ADDED
Binary file (2.78 kB).

Qwen3-Coder-30B-A3B-Instruct-Q5_K_M/README.md ADDED
Binary file (2.81 kB).

Qwen3-Coder-30B-A3B-Instruct-Q5_K_S/README.md ADDED
Binary file (2.8 kB).

Qwen3-Coder-30B-A3B-Instruct-Q6_K/README.md ADDED
Binary file (2.8 kB).

Qwen3-Coder-30B-A3B-Instruct-Q8_0/README.md ADDED
Binary file (2.81 kB).
Qwen3-Coder-30B-A3B-Instruct-f32_Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f9552b2ca5db4a596425d241bad4ba526be5f82bf5f10f40f35ebde8b7ff5b6
size 11258611392

Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:536f4c5dcd45b35ca2a3b59dd1655c29296867c8791a0b2a09c55e9a34b8b695
size 14711848640

Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e08750999da8b83dfa780ffd82919c46e93e7e6695fd0824206eaf39bec67414
size 13292469952

Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2aa0dec45bddb00775d7cfd5d79680435f9431284e76834e49bd7152a5fd499e
size 18556688064

Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d4f5c50d14f7afefcb4c728d4e2e11d1a1f5318fdb6551745054f51ee9b8fb8
size 17456010944

Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:637c0f56860e92cf1cacea82d0169b44fd4833dd0278aaf6c3a4c8cdbd830a0b
size 21725583040

Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:518e5d9c2d18f7a394889e4973a1bce78c05614dc0f9c04a446c03185da9f68c
size 21080512192

Qwen3-Coder-30B-A3B-Instruct-f32_Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ee845097a4f01f69140748a1e33401a53567d494e448d9c907efd2bfc71ecbe
size 25092533952

Qwen3-Coder-30B-A3B-Instruct-f32_Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:888b757556626a402b2c2b27d93d56307ddcb92f662c59c91a623dce89cecf1f
size 32483933888
README.md ADDED
@@ -0,0 +1,102 @@
---
license: apache-2.0
tags:
- gguf
- qwen
- llama.cpp
- quantized
- text-generation
- reasoning
- agent
- multilingual
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
author: geoffmunn
pipeline_tag: text-generation
language:
- en
- zh
- es
- fr
- de
- ru
- ar
- ja
- ko
- hi
---
# Qwen3-Coder-30B-A3B-Instruct-GGUF

This is a **GGUF-quantized version** of the **[Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)** language model, converted for use with `llama.cpp`, [LM Studio](https://lmstudio.ai), [OpenWebUI](https://openwebui.com), [GPT4All](https://gpt4all.io), and more.

💡 **Key Features of Qwen3-Coder-30B-A3B-Instruct:**

- Mixture-of-Experts coder: ~30B total parameters with only ~3B active per token (the "A3B" in the name), so inference is far lighter than a dense 30B model
- Instruction-tuned for coding, reasoning, and agentic tool use
- Multilingual, covering the languages listed in the metadata above
@"
|
| 36 |
+
|
| 37 |
+
## π‘ Why f16 (not f32)?
|
| 38 |
+
|
| 39 |
+
This model uses **FP16 (16-bit floating point)** as its base precision, not full FP32 (32-bit). Here's why:
|
| 40 |
+
|
| 41 |
+
- **FP16 (Half Precision)**:
|
| 42 |
+
- Uses **~50% less memory** than FP32.
|
| 43 |
+
- **Sufficient for inference** quality in modern LLMs.
|
| 44 |
+
- Supported natively by **GPUs (NVIDIA/AMD/Apple)** and optimized in `llama.cpp`.
|
| 45 |
+
- **No perceptible quality loss** compared to FP32 for most tasks.
|
| 46 |
+
- Standard for GGUF models in the community.
|
| 47 |
+
|
| 48 |
+
- **FP32 (Full Precision)**:
|
| 49 |
+
- Rarely used for inference due to **double the RAM/VRAM usage**.
|
| 50 |
+
- Only needed for **extreme numerical stability** (e.g. scientific simulations).
|
| 51 |
+
- **Not recommended** for LLM chat or coding tasks.
|
| 52 |
+
|
| 53 |
+
β
**Conclusion**: `f16` is the **sweet spot** β high fidelity, efficient, and widely compatible. Quantized versions (Q4_K_M, Q5_K_M, etc.) trade a small amount of this quality for massive speed and memory gains.
|
| 54 |
+
|
| 55 |
+
"@
|
| 56 |
+
} else {
|
| 57 |
+
@"
|
| 58 |
+
|
| 59 |
+
## π‘ Why f32?
|
| 60 |
+
|
| 61 |
+
This model uses **FP32 (32-bit floating point)** as its base precision. This is unusual for GGUF models because:
|
| 62 |
+
|
| 63 |
+
- FP32 doubles memory usage vs FP16.
|
| 64 |
+
- Modern LLMs (including Qwen3) are trained in mixed precision and **do not benefit** from FP32 at inference time.
|
| 65 |
+
- Only useful for **debugging**, **research**, or **extreme numerical robustness**.
|
| 66 |
+
|
| 67 |
+
β οΈ Consider converting from `f32` β `f16` first using `llama-convert` if you control the source.
|
| 68 |
+
|
| 69 |
+
"@
|
| 70 |
+
|
| 71 |
+
|
| 72 |
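For reference, quantized files like the ones in this commit are the kind of output llama.cpp's `llama-quantize` tool produces from a full-precision GGUF. A minimal sketch, assuming a local llama.cpp build and a hypothetical f32 base file named `Qwen3-Coder-30B-A3B-Instruct-f32.gguf` (not part of this commit):

```bash
# Quantize the f32 base GGUF down to Q4_K_M (input file name is illustrative)
./llama-quantize Qwen3-Coder-30B-A3B-Instruct-f32.gguf \
  Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf Q4_K_M
```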
+
## Available Quantizations (from f32)
|
| 73 |
+
|
| 74 |
+
| Level | Quality | Speed | Size | Recommendation |
|
| 75 |
+
|----------|--------------|----------|-----------|----------------|
|
| 76 |
+
| Q2_K | Minimal | β‘ Fast | 19.5 GB | Only on severely memory-constrained systems. | | Q3_K_S | Low-Medium | β‘ Fast | 22.2 GB | Minimal viability; avoid unless space-limited. | | Q3_K_M | Low-Medium | β‘ Fast | 23.3 GB | Acceptable for basic interaction. | | Q4_K_S | Practical | β‘ Fast | 27.0 GB | Good balance for mobile/embedded platforms. | | Q4_K_M | Practical | β‘ Fast | 28.1 GB | Best overall choice for most users. | | Q5_K_S | Max Reasoning | π’ Medium | 31.5 GB | Slight quality gain; good for testing. | | Q5_K_M | Max Reasoning | π’ Medium | 32.2 GB | Best quality available. Recommended. | | Q6_K | Near-FP16 | π Slow | 36.5 GB | Diminishing returns. Only if RAM allows. | | Q8_0 | Lossless* | π Slow | 48.0 GB | Maximum fidelity. Ideal for archival. |
|
| 77 |
+
> π‘ **Recommendations by Use Case**
|
| 78 |
+
>
|
| 79 |
+
> - - π» **Standard Laptop (i5/M1 Mac)**: Q5_K_M (optimal quality)
|
| 80 |
+
- π§ **Reasoning, Coding, Math**: Q5_K_M or Q6_K
|
| 81 |
+
- π **RAG, Retrieval, Precision Tasks**: Q6_K or Q8_0
|
| 82 |
+
- π€ **Agent & Tool Integration**: Q5_K_M
|
| 83 |
+
- π οΈ **Development & Testing**: Test from Q4_K_M up to Q8_0
|
| 84 |
+
|
| 85 |
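To fetch a single quantization rather than cloning the whole repo, `huggingface-cli` can download one file at a time. A sketch assuming this repo is published as `geoffmunn/Qwen3-Coder-30B-A3B-Instruct-GGUF` (the repo id is an assumption):

```bash
# Download only the recommended Q4_K_M quantization into the current directory
huggingface-cli download geoffmunn/Qwen3-Coder-30B-A3B-Instruct-GGUF \
  Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf --local-dir .
```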
+
## Usage
|
| 86 |
+
|
| 87 |
+
Load this model using:
|
| 88 |
+
- [OpenWebUI](https://openwebui.com) β self-hosted AI interface with RAG & tools
|
| 89 |
+
- [LM Studio](https://lmstudio.ai) β desktop app with GPU support
|
| 90 |
+
- [GPT4All](https://gpt4all.io) β private, offline AI chatbot
|
| 91 |
+
- Or directly via \llama.cpp\
|
| 92 |
+
|
| 93 |
+
Each quantized model includes its own \README.md\ and shares a common \MODELFILE\.
|
| 94 |
+
|
| 95 |
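For direct `llama.cpp` use, a minimal interactive run might look like this (a sketch assuming a local llama.cpp build, the Q5_K_M quantization recommended above, and the MODELFILE's sampling defaults):

```bash
# Interactive chat using the MODELFILE's context length and sampling defaults
./llama-cli -m Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_M.gguf -c 32768 \
  --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --repeat-penalty 1.1 \
  -p "Write a Python function that merges two sorted lists."
```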
+
## Author
|
| 96 |
+
|
| 97 |
+
π€ Geoff Munn (@geoffmunn)
|
| 98 |
+
π [Hugging Face Profile](https://huggingface.co/geoffmunn)
|
| 99 |
+
|
| 100 |
+
## Disclaimer
|
| 101 |
+
|
| 102 |
+
This is a community conversion for local inference. Not affiliated with Alibaba Cloud or the Qwen team.
|
SHA256SUMS.txt ADDED
@@ -0,0 +1,9 @@
7f9552b2ca5db4a596425d241bad4ba526be5f82bf5f10f40f35ebde8b7ff5b6  Qwen3-Coder-30B-A3B-Instruct-f32_Q2_K.gguf
536f4c5dcd45b35ca2a3b59dd1655c29296867c8791a0b2a09c55e9a34b8b695  Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_M.gguf
e08750999da8b83dfa780ffd82919c46e93e7e6695fd0824206eaf39bec67414  Qwen3-Coder-30B-A3B-Instruct-f32_Q3_K_S.gguf
2aa0dec45bddb00775d7cfd5d79680435f9431284e76834e49bd7152a5fd499e  Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_M.gguf
3d4f5c50d14f7afefcb4c728d4e2e11d1a1f5318fdb6551745054f51ee9b8fb8  Qwen3-Coder-30B-A3B-Instruct-f32_Q4_K_S.gguf
637c0f56860e92cf1cacea82d0169b44fd4833dd0278aaf6c3a4c8cdbd830a0b  Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_M.gguf
518e5d9c2d18f7a394889e4973a1bce78c05614dc0f9c04a446c03185da9f68c  Qwen3-Coder-30B-A3B-Instruct-f32_Q5_K_S.gguf
9ee845097a4f01f69140748a1e33401a53567d494e448d9c907efd2bfc71ecbe  Qwen3-Coder-30B-A3B-Instruct-f32_Q6_K.gguf
888b757556626a402b2c2b27d93d56307ddcb92f662c59c91a623dce89cecf1f  Qwen3-Coder-30B-A3B-Instruct-f32_Q8_0.gguf
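After downloading, the published checksums can be verified with standard tools:

```bash
# Verify downloaded GGUF files against the published checksums
sha256sum -c SHA256SUMS.txt        # Linux (GNU coreutils)
shasum -a 256 -c SHA256SUMS.txt    # macOS alternative
```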