Moxin x llama.cpp Customized Quant for Kimi-K2-Thinking

Kimi K2: Open Agentic Intelligence

We sincerely thank the open-source community developers and contributors unsloth and ubergarm for providing the BF16 conversion and iMatrix data.

IQ1_M is built with tensor-type recipes and serves only as an experimental configuration for extreme compression.

Q2_K_XL is a specialized version with all experts at 2-bit and all other tensors at 8-bit, designed for personalized deployment and experiments.

Q8_0-Q4_0 [Q4_X] is the near-"full quality" version, with the Q4_0 hack fix provided by jukofyork. Final estimate: PPL = 2.0813 +/- 0.00903.

Q3_K_XL is derived from the Q4_X variant, with all ffn_gate and ffn_up experts quantized to 3 bits [recommended if you can't fit the Q4_X version].

- IQ1_M : 226.86 GiB (1.90 BPW)
- Q2_K_XL : 322.13 GiB (2.70 BPW)
- Q3_K_XL : 459.94 GiB (3.85 BPW)
- Q8_0-Q4_0 [Q4_X] : 543.62 GiB (4.55 BPW) 
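
For reference, BPW here is simply total file size in bits divided by the model's parameter count. As a quick sanity check, the shell one-liner below back-derives the implied parameter count (~1.03T) from the Q4_X entry above:

# params ≈ GiB * 2^30 * 8 / BPW
awk 'BEGIN { printf "implied params: %.3fT\n", 543.62 * 2^30 * 8 / 4.55 / 1e12 }'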

For ultra-large MoE models like Kimi, the component that dominates VRAM/RAM usage is the expert block itself. Our quantization therefore focuses primarily on this critical part, without applying additional precision-mixing to the attn or shexp tensors.
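
As a rough illustration of what such an expert-focused recipe looks like (not the exact command used to produce these files), recent llama-quantize builds accept per-tensor type overrides via --tensor-type; the tensor-name patterns, type spellings, and file names below are assumptions for illustration only.

# Illustrative sketch: force the routed-expert FFN tensors down to 2-bit while the
# Q8_0 base type covers attn, shexp and everything else (roughly the Q2_K_XL idea).
# Assumes a llama-quantize build with --tensor-type support; the imatrix path is hypothetical.
./build/bin/llama-quantize \
  --imatrix kimi-k2-thinking.imatrix \
  --tensor-type ffn_down_exps=q2_K \
  --tensor-type ffn_gate_exps=q2_K \
  --tensor-type ffn_up_exps=q2_K \
  Kimi-K2-Thinking-BF16.gguf Kimi-K2-Thinking-Q2_K_XL.gguf Q8_0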

👈 Download Guide
# Option A: huggingface-cli
huggingface-cli download moxin-org/Kimi-K2-Thinking-Moxin-GGUF --include "*Q3_K_XL*" --local-dir ./Kimi-K2-Moxin

# Option B: Python snapshot_download
# !pip install huggingface_hub hf_transfer
import os
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "moxin-org/Kimi-K2-Thinking-Moxin-GGUF",
    local_dir = "Kimi-K2-Thinking-Moxin-GGUF",
    allow_patterns = ["*Q8_0-Q4_0*"],  # or "*Q3_K_XL*", "*Q2_K_XL*", "*IQ1_M*"
)

Downloads are available via huggingface_hub, huggingface-cli, snapshot_download, and xet.

Usage

Example of running the GGUF with a local build of llama.cpp (llama-cli/llama-server).

👈 Build llama.cpp locally
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp

# add -DLLAMA_CURL=OFF if the build fails on the CURL dependency
cmake -B build -DGGML_CUDA=ON -DBUILD_SHARED_LIBS=OFF
cmake --build build --config Release -j --clean-first

build/bin/llama-cli -m Kimi-K2-Thinking-Moxin-GGUF/K2-Thinking-IQ1_M/Kimi-K2-Thinking-Moxin-IQ1_M-00001-of-00007.gguf \
  -ngl 99 \
  --temp 1.0 \
  --min-p 0.01 \
  --ctx-size 16384   # or 4096 / 8192
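
The same file can also be served over HTTP with llama-server (sampling defaults mirror the llama-cli example above); a minimal sketch with arbitrary host/port values:

build/bin/llama-server \
  -m Kimi-K2-Thinking-Moxin-GGUF/K2-Thinking-IQ1_M/Kimi-K2-Thinking-Moxin-IQ1_M-00001-of-00007.gguf \
  -ngl 99 \
  --temp 1.0 \
  --min-p 0.01 \
  --ctx-size 16384 \
  --host 0.0.0.0 --port 8080

llama-server exposes an OpenAI-compatible API, so a standard /v1/chat/completions client can then be pointed at http://localhost:8080.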

Citation

If this work is helpful, please kindly cite it as:

@article{chen2025collaborative,
  title={Collaborative Compression for Large-Scale MoE Deployment on Edge},
  author={Chen, Yixiao and Xie, Yanyue and Yang, Ruining and Jiang, Wei and Wang, Wei and He, Yong and Chen, Yue and Zhao, Pu and Wang, Yanzhi},
  journal={arXiv preprint arXiv:2509.25689},
  year={2025}
}

Acknowledgements

This repository builds upon the outstanding work of open-source authors and projects, including llama.cpp, unsloth, ubergarm, jukofyork, and the Kimi K2 team.

We sincerely thank them for their excellent contributions to the open-source community.
