# eaddario/Dolphin3.0-R1-Mistral-24B-GGUF
- Pipeline: Text Generation (conversational)
- Format: GGUF
- Language: English
- Tags: quant, experimental
- Dataset: eaddario/imatrix-calibration
- Paper: arXiv:2406.17415
- License: apache-2.0
## Files and versions

Revision 256a42f · 252 GB · 1 contributor · 31 commits

Latest commit: eaddario · "Layer-wise quantization Q3_K_L" · 256a42f (verified) · 8 months ago
### Directories

| Name | Last commit | Updated |
| --- | --- | --- |
| imatrix | Generate Small imatrix | 11 months ago |
| logits | Generate base model logits | 11 months ago |
| scores | Generate perplexity and kld scores | 11 months ago |
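These three folders correspond to the calibration-and-evaluation workflow behind the quants below: build an importance matrix from calibration text, save the F16 model's logits over an evaluation set, then score each quant against those logits. A minimal sketch of that pipeline, driving the llama.cpp CLI tools from Python; all input file names here are illustrative, and the tool names and flags assume a recent llama.cpp build (older releases call the binaries `imatrix` and `perplexity`):

```python
import subprocess

BASE = "Dolphin3.0-R1-Mistral-24B"

def run(cmd: list[str]) -> None:
    """Run a llama.cpp CLI tool and raise if it fails."""
    subprocess.run(cmd, check=True)

# 0. Convert the original checkpoint to GGUF at F16
#    (convert_hf_to_gguf.py ships with llama.cpp).
run(["python", "convert_hf_to_gguf.py", "Dolphin3.0-R1-Mistral-24B",
     "--outtype", "f16", "--outfile", f"{BASE}-F16.gguf"])

# 1. Importance matrix from a calibration corpus (imatrix/ folder).
run(["llama-imatrix", "-m", f"{BASE}-F16.gguf",
     "-f", "calibration.txt",  # illustrative file name
     "-o", "imatrix.dat"])

# 2. Save the F16 model's logits over an evaluation text (logits/ folder).
run(["llama-perplexity", "-m", f"{BASE}-F16.gguf",
     "-f", "eval.txt",  # illustrative file name
     "--kl-divergence-base", "logits.bin"])

# 3. Score a quant against those logits: perplexity plus KL divergence
#    (scores/ folder). The tokens are read back from the logits file.
run(["llama-perplexity", "-m", f"{BASE}-Q4_K_M.gguf",
     "--kl-divergence-base", "logits.bin",
     "--kl-divergence"])
```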
### Files

| Name | Size | Last commit | Updated |
| --- | --- | --- | --- |
| .gitattributes | 1.65 kB | Update .gitattributes | 11 months ago |
| .gitignore | 6.78 kB | Update .gitignore | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-F16.gguf | 47.2 GB | Convert to GGUF @ F16 | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-IQ3_M.gguf | 10.7 GB | Generate IQ3_M quant | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-IQ3_S.gguf | 10.4 GB | Generate IQ3_S quant | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-IQ4_NL.gguf | 13.5 GB | Generate IQ4_NL quant | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q3_K_L.gguf | 11 GB | Layer-wise quantization Q3_K_L | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q3_K_M.gguf | 10.4 GB | Layer-wise quantization Q3_K_M | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q3_K_S.gguf | 10.4 GB | Generate Q3_K_S quant | 11 months ago |
| Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf | 13.1 GB | Layer-wise quantization Q4_K_M | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q4_K_S.gguf | 12.6 GB | Layer-wise quantization Q4_K_S | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q5_K_M.gguf | 15.6 GB | Layer-wise quantization Q5_K_M | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q5_K_S.gguf | 14.9 GB | Layer-wise quantization Q5_K_S | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q6_K.gguf | 19.3 GB | Layer-wise quantization Q6_K | 8 months ago |
| Dolphin3.0-R1-Mistral-24B-Q8_0.gguf | 23 GB | Layer-wise quantization Q8_0 | 8 months ago |
| README.md | 10.9 kB | Update README.md | 11 months ago |
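Most quants in the table were later regenerated with layer-wise quantization (the "Layer-wise quantization …" commits), following the variable bit-level approach of arXiv:2406.17415: individual tensors are kept at higher or lower precision than the base scheme dictates. A hedged sketch of producing one imatrix-guided quant with llama.cpp's `llama-quantize`; the per-tensor overrides shown are illustrative examples only, not the exact recipe used for these files:

```python
import subprocess

# Quantize the F16 model to Q4_K_M, guided by the importance matrix.
# --output-tensor-type and --token-embedding-type override the precision
# of two specific tensors; recent llama.cpp builds also accept pattern-based
# per-tensor overrides, which is what makes fully layer-wise recipes
# possible. The type choices below are illustrative.
subprocess.run([
    "llama-quantize",
    "--imatrix", "imatrix.dat",
    "--output-tensor-type", "q8_0",    # illustrative override
    "--token-embedding-type", "q8_0",  # illustrative override
    "Dolphin3.0-R1-Mistral-24B-F16.gguf",
    "Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf",
    "Q4_K_M",
], check=True)
```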
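To try a quant, download a single GGUF file rather than cloning the full 252 GB repository. A minimal sketch using `huggingface_hub` and `llama-cpp-python` (both installable with pip); the model choice, context size, and prompt are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch just the Q4_K_M file (~13.1 GB) instead of the whole repo.
path = hf_hub_download(
    repo_id="eaddario/Dolphin3.0-R1-Mistral-24B-GGUF",
    filename="Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf",
)

# Load the quant and run a short chat completion.
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],  # illustrative prompt
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```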