eaddario/Mistral-Small-3.2-24B-Instruct-2506-GGUF
Pipeline: Text Generation · Format: GGUF · Language: English
Tags: quant, experimental
Dataset: eaddario/imatrix-calibration
Papers: arXiv:2406.17415, arXiv:2403.03853
License: apache-2.0
Files and versions (branch: main)
Repository size: 248 GB · 1 contributor · History: 26 commits
Latest commit: Update README.md (8f92747, verified) by eaddario, 4 months ago
Directories:

| Directory | Last commit | Updated |
|---|---|---|
| imatrix/ | Generate imatrices | 4 months ago |
| logits/ | Generate base model logits | 4 months ago |
| scores/ | Generate Perplexity, KLD, ARC, HellaSwag, MMLU, TruthfulQA and WinoGrande scores | 4 months ago |
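The scores/ directory tracks how closely each quant matches the F16 base model, using the base-model logits saved in logits/. Those evaluations are produced with llama.cpp tooling; as a rough illustration of what the perplexity and KLD numbers measure (a minimal NumPy sketch, not this repo's actual evaluation code, with toy logits invented for the example):

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def perplexity(logits, targets):
    """exp(mean negative log-likelihood) of the true next tokens."""
    probs = softmax(logits)
    nll = -np.log(probs[np.arange(len(targets)), targets])
    return float(np.exp(nll.mean()))

def mean_kld(base_logits, quant_logits):
    """Mean per-token KL(base || quant) in nats; lower means the quant tracks the base model better."""
    p = softmax(base_logits)
    q = softmax(quant_logits)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

# Toy data: 4 token positions over a vocabulary of 8 (values are illustrative only)
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
quant = base + rng.normal(scale=0.1, size=(4, 8))  # quantization error modeled as small logit noise
targets = np.array([1, 3, 0, 7])

print(f"base PPL:  {perplexity(base, targets):.3f}")
print(f"quant PPL: {perplexity(quant, targets):.3f}")
print(f"mean KLD:  {mean_kld(base, quant):.5f}")
```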
Files:

| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.6 kB | Update .gitattributes | 4 months ago |
| .gitignore | 6.78 kB | Add .gitignore | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-F16.gguf | 47.2 GB | Convert safetensor to GGUF @ F16 | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-IQ3_M.gguf | 10.8 GB | Layer-wise quantization IQ3_M | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-IQ3_S.gguf | 9.79 GB | Layer-wise quantization IQ3_S | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-IQ4_NL.gguf | 13.1 GB | Layer-wise quantization IQ4_NL | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q3_K_L.gguf | 11.4 GB | Layer-wise quantization Q3_K_L | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q3_K_M.gguf | 10.4 GB | Layer-wise quantization Q3_K_M | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q3_K_S.gguf | 9.19 GB | Layer-wise quantization Q3_K_S | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q4_K_M.gguf | 13.1 GB | Layer-wise quantization Q4_K_M | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q4_K_S.gguf | 11.9 GB | Layer-wise quantization Q4_K_S | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q5_K_M.gguf | 16 GB | Layer-wise quantization Q5_K_M | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q5_K_S.gguf | 14.9 GB | Layer-wise quantization Q5_K_S | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q6_K.gguf | 19.8 GB | Layer-wise quantization Q6_K | 4 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q8_0.gguf | 20.8 GB | Layer-wise quantization Q8_0 | 4 months ago |
| README.md | 23.1 kB | Update README.md | 4 months ago |
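To try one of these quants, download the file and load it with any GGUF-compatible runtime. A minimal sketch using huggingface_hub and llama-cpp-python (the repo id and file name come from the listing above; the context size, GPU offload, prompt, and token limit are arbitrary choices to tune for your hardware):

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo (file name taken from the listing above)
model_path = hf_hub_download(
    repo_id="eaddario/Mistral-Small-3.2-24B-Instruct-2506-GGUF",
    filename="Mistral-Small-3.2-24B-Instruct-2506-Q4_K_M.gguf",
)

# Load the GGUF; n_ctx and n_gpu_layers are illustrative defaults
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants such as IQ3_M trade accuracy for memory; the scores/ directory is where that trade-off is documented for this repo.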