Llama.cpp hybrid layer quantization of GLM-Z1-9B-0414 by THUDM
Original model: https://huggingface.co/THUDM/GLM-Z1-9B-0414
The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This particular quant achieves a ~7.3G GGUF with the same perplexity and significantly better performance on a set of test/eval prompts compared to a ~8.3G Q6_K GGUF. The quants employed are all K quants to avoid the slow CPU or older-GPU processing associated with IQ quants. For this file the Q6_K_H layer quants are as follows:
LAYER_TYPES='[
[0 ,"Q6_K" ],[1 ,"Q5_K_M"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_M"],[5 ,"Q4_K_M"],[6 ,"Q4_K_M"],[7 ,"Q4_K_M"],
[8 ,"Q5_K_M"],[9 ,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],[12,"Q5_K_M"],[13,"Q5_K_S"],[14,"Q5_K_M"],[15,"Q5_K_S"],
[16,"Q5_K_M"],[17,"Q5_K_M"],[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q6_K" ],[21,"Q5_K_M"],[22,"Q6_K" ],[23,"Q5_K_M"],
[24,"Q6_K" ],[25,"Q5_K_M"],[26,"Q6_K" ],[27,"Q6_K" ],[28,"Q6_K" ],[29,"Q8_0" ],[30,"Q8_0" ],[31,"Q8_0" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K"
Q4_K_H is also available. The extended per-layer quant names used in its layer map are defined as follows:
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0 ffn_d = q8_0
LAYER_TYPES='[
[0 ,"Q4_K_L"],[1 ,"Q4_K_L"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_S"],[5 ,"Q4_K_S"],[6 ,"Q4_K_S"],[7 ,"Q4_K_S"],
[8 ,"Q3_K_L"],[9 ,"Q3_K_L"],[10,"Q3_K_L"],[11,"Q3_K_L"],[12,"Q4_K_S"],[13,"Q4_K_S"],[14,"Q4_K_S"],[15,"Q4_K_S"],
[16,"Q4_K_M"],[17,"Q4_K_M"],[18,"Q4_K_M"],[19,"Q4_K_M"],[20,"Q4_K_L"],[21,"Q4_K_L"],[22,"Q4_K_L"],[23,"Q4_K_L"],
[24,"Q5_K_S"],[25,"Q5_K_S"],[26,"Q5_K_S"],[27,"Q5_K_S"],[28,"Q5_K_M"],[29,"Q5_K_M"],[30,"Q5_K_M"],[31,"Q5_K_M"],
[32,"Q5_K_L"],[33,"Q5_K_L"],[34,"Q5_K_L"],[35,"Q5_K_L"],[36,"Q6_K_S"],[37,"Q6_K_S"],[38,"Q6_K_M"],[39,"Q6_K_M"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
Q4_P_H is also available. This quant pads the FFN dimension up to a multiple of 256 so K quants can be used on the FFN tensors. Both the Q4_K_H and Q6_K_H quants fall back from the specified layer quant to legacy quants for the FFN tensors, while Q4_P_H uses exactly the specified K layer quants. Eliminating the legacy quants improves both size and performance since all layers then use K quants (see the worked example after the flags below).
LAYER_TYPES='[
[0 ,"Q6_K_M"],[1 ,"Q5_K_L"],[2 ,"Q5_K_M"],[3 ,"Q5_K_S"],[4 ,"Q4_K_L"],[5 ,"Q4_K_M"],[6 ,"Q4_K_S"],[7 ,"Q4_K_M"],
[8 ,"Q4_K_S"],[9 ,"Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],[12,"Q4_K_S"],[13,"Q4_K_S"],[14,"Q4_K_S"],[15,"Q4_K_S"],
[16,"Q4_K_M"],[17,"Q4_K_S"],[18,"Q4_K_M"],[19,"Q4_K_S"],[20,"Q4_K_M"],[21,"Q4_K_S"],[22,"Q4_K_M"],[23,"Q4_K_S"],
[24,"Q4_K_M"],[25,"Q4_K_M"],[26,"Q4_K_M"],[27,"Q4_K_M"],[28,"Q4_K_L"],[29,"Q4_K_L"],[30,"Q4_K_L"],[31,"Q5_K_M"],
[32,"Q5_K_M"],[33,"Q5_K_M"],[34,"Q5_K_L"],[35,"Q5_K_L"],[36,"Q5_K_L"],[37,"Q6_K_S"],[38,"Q6_K_M"],[39,"Q6_K_L"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high --tensor-pad [[13696,13824],[27392,27648,2]] --override-kv glm4.feed_forward_length=int:13824"
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 5.3e9 | 14.8 | - |
| Q4_P_H | 6.3e9 | 14.8 | Hybrid quant with Q4_K embedding, Q6_K output, FFN tensors padded for K quants |
| Q4_K_H | 6.6e9 | 14.9 | Hybrid quant with Q4_K embedding, Q6_K output |
| Q6_K | 8.3e9 | 14.7 | Q6_K with default embedding and output; unstable with greedy sampling, poor performance on eval prompts |
| Q6_K_H | 7.3e9 | 14.8 | Hybrid quant with Q6_K embedding, Q6_K output; stable with greedy sampling, excellent performance on eval prompts |
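The PPL numbers above were presumably produced with llama.cpp's perplexity tool; the test corpus is not stated here, so the absolute values are only comparable within this table. A minimal sketch of such a run, with placeholder test file and settings rather than the author's exact ones:

```bash
# Hypothetical perplexity run; test file, context length and offload
# settings are placeholders.
./llama-perplexity -m GLM-Z1-9B-0414.Q6_K_H.gguf -f wiki.test.raw -c 512 -ngl 99
```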
Usage:
This is an RL-trained thinking model. The Q6_K_H layer quants were optimized for 100% success on a set of test/eval prompts. After achieving that goal the quant showed very strong performance on problems outside the test/eval prompt set using greedy sampling, and it does not exhibit excessive overthinking when solving. A straightforward Q6_K quant was found to be unstable with greedy sampling (it never stops generating on some problems) and unable to solve several test/eval problems.
The Q4_K_H quant is 0.7B smaller with still very good performance. Experimentation showed it was not possible to shrink Q4_K_H further without compromising performance. The Q4_K_H still exhibits "High IQ" characteristics (i.e. very efficient solution of a complex problem, compared against a "dumber" thinking model which thinks for ages to reach the same solution) on some prompts. This characteristic suggests the RL training of the underlying model might have rewarded efficient solutions more than inefficient ones (a hypothesis; it could also just be coincidence that it happens to be "smart" on a couple of tricky problems in the eval test set).
The Q4_P_H quant is an improved Q4 quant that uses K quants for the FFN tensors instead of the fallback legacy quants. It does very well on the eval test set and is the smallest available high performance hybrid quant for the model.
The model can be speculatively decoded using Qwen3 0.6B as the draft model if the inference engine supports dynamic vocab translation between draft and target models. Approximate performance using a downstream speculator with llama.cpp on a 4070 (12G VRAM) with layers and context fully in GPU:
| Quant | KV type | N draft | KV size | gen tps | Comment |
|---|---|---|---|---|---|
| Q4_P_H | F16 | 0 | 32k | 66 | No draft |
| Q4_P_H | F16 | 3 | 31k | 81 | Spec 3 |
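From the table, speculating 3 tokens with the Qwen3 0.6B draft yields roughly a 1.2x generation speedup (81 / 66 ≈ 1.23), with about 1k of context given up, presumably to hold the draft state. A quick check of the arithmetic:

```bash
# Generation speedup of the speculated run vs. the no-draft baseline.
awk 'BEGIN { printf "speedup: %.2fx\n", 81 / 66 }'
```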
This is one of the strongest general reasoning models I have experienced to date as of 7/21/2025, independent of size, compared against QwQ, the R1 distills of Qwen 2.5 models, and Qwen 3. However, testing with some code problems shows it is extremely weak on code generation.
Benchmarks:
Math benchmarks for the model are given here: https://huggingface.co/spaces/steampunque/benchlm
Download the files from below:
| Link | Type | Size (e9 B) | Notes |
|---|---|---|---|
| GLM-Z1-9B-0414.Q4_K_H.gguf | Q4_K_H | 6.6e9 B | 1.7B smaller than Q6_K with much better performance |
| GLM-Z1-9B-0414.Q4_P_H.gguf | Q4_P_H | 6.3e9 B | 2B smaller than Q6_K with much better performance |
| GLM-Z1-9B-0414.Q6_K_H.gguf | Q6_K_H | 7.3e9 B | 1B smaller than Q6_K with much better performance |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository.