---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: moonshotai/Kimi-K2-Instruct-0905
license: other
license_name: modified-mit
license_link: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905/blob/main/LICENSE
base_model_relation: quantized
tags:
- mla
- imatrix
- conversational
- ik_llama.cpp
---

## **WIP**

- [x] download fp8 safetensors
- [x] cast fp8 safetensors to bf16 safetensors
- [x] convert to bf16 GGUF
- [x] quantize Q8_0 without imatrix
- [ ] calculate and upload imatrix from Q8_0
- [ ] begin quantizing and releasing (the full pipeline is sketched just below)
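
For the curious, the checklist above maps roughly onto the commands below. This is a hedged sketch, not my exact invocations: the fp8-to-bf16 cast script (here `fp8_cast_bf16.py` from the DeepSeek-V3 repo), the file names, and the calibration corpus are assumptions, and paths will differ per rig.

```bash
# 1. download the fp8 safetensors
huggingface-cli download moonshotai/Kimi-K2-Instruct-0905 --local-dir Kimi-K2-Instruct-0905

# 2. cast fp8 safetensors to bf16 safetensors
#    (assumes a triton-capable GPU and a cast script like the one shipped in the
#     DeepSeek-V3 repo; the script name here is an assumption)
python fp8_cast_bf16.py \
    --input-fp8-hf-path Kimi-K2-Instruct-0905 \
    --output-bf16-hf-path Kimi-K2-Instruct-0905-bf16

# 3. convert bf16 safetensors to bf16 GGUF with ik_llama.cpp's converter
python convert_hf_to_gguf.py --outtype bf16 Kimi-K2-Instruct-0905-bf16

# 4. quantize Q8_0 without imatrix (output file names are illustrative)
./build/bin/llama-quantize Kimi-K2-Instruct-0905-BF16.gguf Kimi-K2-Instruct-0905-Q8_0.gguf Q8_0

# 5. calculate the imatrix from the Q8_0
./build/bin/llama-imatrix -m Kimi-K2-Instruct-0905-Q8_0.gguf -f calibration_data.txt -o imatrix.dat
```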

Open a discussion if you have a specific target RAM+VRAM in mind for your rig and I'll see what I can do given the available quants. Cheers!

## `ik_llama.cpp` imatrix Quantizations of moonshotai/Kimi-K2-Instruct-0905

This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!

*NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.

Some of ik's new quants are supported in the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP.

These quants provide best-in-class perplexity for the given memory footprint. A typical CUDA build is sketched just below; a CPU-only build appears under Example Commands.
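
If you haven't built `ik_llama.cpp` before, a CUDA build usually looks something like this (a minimal sketch; CMake flags vary by rig, and the project README is the authority):

```bash
# clone and build ik_llama.cpp with the CUDA backend enabled
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j $(nproc)
```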

## Big Thanks

Shout out to Wendell and the **Level1Techs** crew, their community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!

## Quant Collection

Compare with perplexity of the full size `Q8_0`: TODO

Final estimate: PPL = TODO


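The perplexity comparisons here are typically produced along these lines (a sketch; the test corpus and context length are assumptions until the TODOs above are filled in):

```bash
# measure perplexity of a quant against the standard wiki.test.raw corpus
./build/bin/llama-perplexity \
    -m Kimi-K2-Instruct-0905-Q8_0.gguf \
    -f wiki.test.raw \
    --ctx-size 512 \
    -fa -fmoe \
    -mla 3 \
    --threads 48
```
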
### `smol-IQ4_KSS` TODO

Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
echo TODO
```

</details>

### `IQ3_KS` TODO

Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
echo TODO
```

</details>

### `IQ2_KL` TODO

Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
echo TODO
```

</details>

### `IQ2_KS` TODO

Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
echo TODO
```

</details>

### `IQ1_KT` TODO

Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
echo TODO
```

</details>

## Example Commands

### Hybrid (multiple) CUDA + CPU

The `-ot` (override tensor) flags pin the first few MoE layers' FFN tensors onto each CUDA device and leave the remaining routed experts on CPU; widen the layer ranges to fill whatever VRAM you have.

```bash
# Two CUDA devices with enough VRAM to offload more layers
# Keep in mind Kimi-K2's routed expert layers start at blk.1, unlike DeepSeek's
# at blk.3 (Kimi-K2 has a single leading dense layer, DeepSeek has three)
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Kimi-K2-Instruct-0905 \
    --ctx-size 32768 \
    -ctk q8_0 \
    -fa -fmoe \
    -mla 3 \
    -ngl 99 \
    -ot "blk\.(1|2|3)\.ffn_.*=CUDA0" \
    -ot "blk\.(4|5|6)\.ffn_.*=CUDA1" \
    -ot exps=CPU \
    --parallel 1 \
    --threads 48 \
    --threads-batch 64 \
    --host 127.0.0.1 \
    --port 8080
```
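
Once the server is up, a quick smoke test against the OpenAI-compatible endpoint might look like this (a sketch; host, port, and alias must match the flags above):

```bash
# minimal smoke test of the chat completions endpoint
curl http://127.0.0.1:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "ubergarm/Kimi-K2-Instruct-0905",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "max_tokens": 64
      }'
```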

### CPU-Only (no GPU)

```bash
# compile CPU-only (disable the CUDA, BLAS, and Vulkan backends)
cmake -B build -DGGML_CUDA=0 -DGGML_BLAS=0 -DGGML_VULKAN=0
cmake --build build --config Release -j $(nproc)

# run server pinned to a single CPU of a dual socket rig
# configured with one NUMA node per socket
numactl -N 0 -m 0 \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Kimi-K2-Instruct-0905 \
    --ctx-size 98304 \
    -ctk q8_0 \
    -fa -fmoe \
    -mla 3 \
    --parallel 1 \
    --threads 128 \
    --threads-batch 192 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080
```
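
Before committing to the `numactl -N 0 -m 0` binding, it's worth confirming the node layout of your rig; plain `numactl` (not part of ik_llama.cpp) can show it:

```bash
# list NUMA nodes with their CPUs and memory sizes
numactl --hardware

# double-check what a binding resolves to before launching the server
numactl -N 0 -m 0 numactl --show
```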

## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)