---
license: apache-2.0
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
base_model_relation: quantized
tags:
- Mistral
- Mistral-Small
- GGUF
- quantized
- 2-bit
- 3-bit
- 4-bit
---

## Llama.cpp hybrid layer quantization of Mistral-Small-3.1-24B-Instruct-2503 by mistralai

Original model: https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503

The hybrid quant employs different quantization levels on a per-layer basis to increase the
flexibility of trading off performance vs. file size.  Fewer parameter bits are used at deep layers
and more bits at cortex layers to simultaneously optimize quantized size and model performance.
These quants were specifically tuned so that the vision mode of the model produced good outputs
with no nonsense words across all the quants on a test case, while reducing the file size enough to enable
full offload (in non-vision mode) of the smallest two quants on a 12G VRAM GPU.  Three quants
are available: Q2_K_H, Q3_K_H, and Q4_K_H.  The layer quants are as follows:
```
Q2_K_H:
LAYER_TYPES='[
   [0 ,"Q2_K"  ],[1 ,"Q2_K_S"],[2 ,"Q2_K"  ],[3 ,"Q2_K_S"],[4 ,"Q2_K"  ],[5 ,"Q2_K_S"],[6 ,"Q2_K"  ],[7 ,"Q2_K_S"],
   [8 ,"Q3_K_S"],[9 ,"Q2_K"  ],[10,"Q3_K_S"],[11,"Q2_K"  ],[12,"Q3_K_S"],[13,"Q2_K"  ],[14,"Q3_K_S"],[15,"Q2_K"  ],
   [16,"Q3_K_S"],[17,"Q2_K"  ],[18,"Q3_K_S"],[19,"Q2_K"  ],[20,"Q3_K_S"],[21,"Q2_K"  ],[22,"Q3_K_S"],[23,"Q2_K"  ],
   [24,"Q3_K_S"],[25,"Q3_K_S"],[26,"Q3_K_S"],[27,"Q3_K_S"],[28,"Q3_K_S"],[29,"Q3_K_S"],[30,"Q3_K_S"],[31,"Q3_K_S"],
   [32,"Q3_K_S"],[33,"Q3_K_S"],[34,"Q3_K_S"],[35,"Q3_K_S"],[36,"Q3_K_M"],[37,"Q3_K_M"],[38,"Q3_K_M"],[39,"Q3_K_M"]
   ]'
FLAGS="--token-embedding-type Q3_K --output-tensor-type Q5_K"

Q3_K_H:
LAYER_TYPES='[
   [0 ,"Q3_K_S"],[1 ,"Q2_K"  ],[2 ,"Q3_K_S"],[3 ,"Q2_K"  ],[4 ,"Q3_K_S"],[5 ,"Q2_K"  ],[6 ,"Q3_K_S"],[7 ,"Q2_K"  ],
   [8 ,"Q3_K_S"],[9 ,"Q2_K"  ],[10,"Q3_K_S"],[11,"Q2_K"  ],[12,"Q3_K_S"],[13,"Q2_K"  ],[14,"Q3_K_S"],[15,"Q2_K"  ],
   [16,"Q3_K_S"],[17,"Q3_K_S"],[18,"Q3_K_S"],[19,"Q3_K_S"],[20,"Q3_K_S"],[21,"Q3_K_S"],[22,"Q3_K_S"],[23,"Q3_K_S"],
   [24,"Q3_K_S"],[25,"Q3_K_S"],[26,"Q3_K_S"],[27,"Q3_K_S"],[28,"Q3_K_S"],[29,"Q3_K_S"],[30,"Q3_K_S"],[31,"Q3_K_M"],
   [32,"Q3_K_M"],[33,"Q3_K_M"],[34,"Q3_K_M"],[35,"Q3_K_M"],[36,"Q3_K_M"],[37,"Q3_K_M"],[38,"Q3_K_L"],[39,"Q4_K_S"]
   ]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"

Q4_K_H:
LAYER_TYPES='[
   [0 ,"Q3_K_M"],[1 ,"Q3_K_M"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
   [24,"Q3_K_L"],[25,"Q3_K_M"],[26,"Q3_K_L"],[27,"Q3_K_M"],[28,"Q3_K_L"],[29,"Q3_K_M"],[30,"Q3_K_L"],[31,"Q3_K_M"],
   [32,"Q3_K_L"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q5_K_S"],[38,"Q5_K_M"],[39,"Q6_K"]
   ]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
```
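
The layer maps above are fed to llama-quantize. Note that the LAYER_TYPES per-layer map is not a standard llama-quantize option, so the following is a rough sketch only, assuming the patched quantization flow described in the discussion thread linked at the bottom of this card; file names and the base ftype are placeholders:
```
# Sketch only: assumes a llama-quantize patched to honor the LAYER_TYPES
# per-layer map (see the linked discussion thread); file names are placeholders.
export LAYER_TYPES                      # one of the maps listed above
./llama-quantize $FLAGS \
    Mistral-Small-3.1-24B-Instruct-2503.BF16.gguf \
    Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    Q4_K_S                              # base ftype; placeholder choice
```
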
These quants were optimized for both good reasoning and vision performance.

Comparison:

Quant  | Size (bytes) | PPL  | Comment
-------|--------------|------|--------
Q2_K   | 8.89e9       | 6.62 | not tested, most likely unusable
Q2_K_H | 9.8e9        | 5.96 | optimized for good performance in vision mode
Q3_K_H | 10.5e9       | 5.82 | slightly better than Q2_K_H
Q3_K_M | 11.5e9       | 5.58 | not tested, should work well
Q4_K_H | 12.5e9       | 5.49 | slightly smaller than IQ4_XS, similar performance
IQ4_XS | 12.9e9       | 5.38 | not tested, should work well
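
For context, perplexity figures like those above are typically measured with llama.cpp's llama-perplexity tool. A minimal sketch, assuming a local text corpus such as a wikitext-2 raw test file (the exact corpus used for this table is not stated here):
```
# Sketch: measuring perplexity of a quant with llama-perplexity.
# wiki.test.raw is an assumed local test corpus; the corpus actually
# used for the table above is not specified on this card.
./llama-perplexity \
    -m Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    -f wiki.test.raw \
    -ngl 32
```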

Usage:

This is a vision-capable model. It can be used together with its multimodal projector layers to process image and text inputs
and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd
README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md .
Use of the best available quant (Q4_K_H) is recommended to maximize the accuracy of vision mode.  To run it on a 12G VRAM
GPU use --ngl 32; generation speed is still quite good with partial offload.
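
As a concrete starting point, a minimal vision-mode invocation with llama.cpp's llama-mtmd-cli could look like the sketch below (the image path and prompt are placeholders; adjust -ngl to fit your GPU):
```
# Vision-mode sketch with llama-mtmd-cli; image path and prompt are placeholders.
./llama-mtmd-cli \
    -m Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    --mmproj Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf \
    --image ./photo.jpg \
    -p "Describe this image." \
    -ngl 32
```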

Benchmarks:

A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Mistral-Small-3.1-24B-Instruct-2503 compares most closely with gemma-3-27B-it available here: https://huggingface.co/steampunque/gemma-3-27b-it-Hybrid-GGUF .
A short summary of some key evals comparing the two models is given here for convenience:

model      | gemma-3-27b-it | Mistral-Small-3.1-24B-Instruct-2503 |
-----------|----------------|-------------------------------------|
quant      | Q4_K_H         | Q4_K_H     |
alignment  | strict         | permissive |
TEST       |                |            |
Winogrande | 0.748          | 0.784      |
Lambada    | 0.742          | 0.798      |
Hellaswag  | 0.802          | 0.899      |
BoolQ      | 0.701          | 0.646      |
Jeopardy   | 0.830          | 0.740      |
GSM8K      | 0.964          | 0.940      |
Apple      | 0.850          | 0.820      |
Humaneval  | 0.890          | 0.853      |

## Download the files from below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Mistral-Small-3.1-24B-Instruct-2503.Q2_K_H.gguf](https://huggingface.co/steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF/resolve/main/Mistral-Small-3.1-24B-Instruct-2503.Q2_K_H.gguf) | Q2_K_H | 9.8e9 B | good quality |
| [Mistral-Small-3.1-24B-Instruct-2503.Q3_K_H.gguf](https://huggingface.co/steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF/resolve/main/Mistral-Small-3.1-24B-Instruct-2503.Q3_K_H.gguf) | Q3_K_H | 10.5e9 B | solid quality |
| [Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf](https://huggingface.co/steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF/resolve/main/Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf) | Q4_K_H | 12.5e9 B | best quality |
| [Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf](https://huggingface.co/steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF/resolve/main/Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf) | mmproj | 0.88e9 B | multimodal projector |
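
If preferred, a quant plus the projector can also be fetched from the command line with huggingface-cli (repo and file names as listed in the table above):
```
# Download one quant plus the multimodal projector with huggingface-cli.
huggingface-cli download steampunque/Mistral-Small-3.1-24B-Instruct-2503-Hybrid-GGUF \
    Mistral-Small-3.1-24B-Instruct-2503.Q4_K_H.gguf \
    Mistral-Small-3.1-24B-Instruct-2503.mmproj.gguf \
    --local-dir .
```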

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040