Transformers
GGUF
programming
code generation
code
codeqwen
Mixture of Experts
coding
coder
qwen2
chat
qwen
qwen-coder
Qwen3-Coder-30B-A3B-Instruct
Qwen3-30B-A3B
mixture of experts
128 experts
8 active experts
1 million context
qwen3
finetune
brainstorm 40x
brainstorm
optional thinking
qwen3_moe
imatrix
conversational
auto-patch README.md
README.md CHANGED
@@ -73,10 +73,13 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.imatrix.gguf) | imatrix | 0.3 | imatrix file (for creating your own quants) |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-IQ2_M.gguf) | i1-IQ2_M | 17.6 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-Q2_K.gguf) | i1-Q2_K | 19.5 | IQ3_XXS probably better |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 20.6 | lower quality |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-IQ3_M.gguf) | i1-IQ3_M | 23.4 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-Q3_K_M.gguf) | i1-Q3_K_M | 25.5 | IQ3_S probably better |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-Q4_K_S.gguf) | i1-Q4_K_S | 30.3 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF/resolve/main/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-Q4_K_M.gguf) | i1-Q4_K_M | 32.2 | fast, recommended |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
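
For reference, any quant in the table above can be fetched programmatically rather than through the browser links. Below is a minimal sketch using the `huggingface_hub` Python client; the repo id and filename are taken verbatim from the table links, while the use of the default cache location is an assumption:

```python
# Minimal sketch: download the "fast, recommended" i1-Q4_K_M quant from the
# repo in the table above. Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL-i1-GGUF",
    filename="Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```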
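The imatrix row in the table is there so you can produce quant types not published in this repo. A hedged sketch of how that is typically done with llama.cpp's `llama-quantize` tool follows; the binary path, the full-precision source GGUF, and the output name are all placeholders (not files from this repo), while `--imatrix` is llama.cpp's standard flag for importance-matrix-guided quantization:

```python
# Sketch: create a custom quant using the 0.3 GB imatrix file from the table.
# Assumes llama.cpp is built locally and a full-precision GGUF of the model
# is available; both model paths below are hypothetical placeholders.
import subprocess

subprocess.run(
    [
        "./llama-quantize",  # llama.cpp quantize binary (assumed path)
        "--imatrix", "Qwen3-Yoyo-V3-54B-A3B-Thinking-TOTAL-RECALL.imatrix.gguf",
        "source-model-f16.gguf",   # placeholder full-precision input
        "output.i1-IQ4_XS.gguf",   # placeholder output name
        "IQ4_XS",                  # target quant type
    ],
    check=True,
)
```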