# Llamacpp Quantizations of knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B
Using llama.cpp release b5966 for quantization.
Original model: knifeayumu/Cydonia-v4-MS3.2-Magnum-Diamond-24B
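The quants below can be reproduced from an F16 GGUF conversion with the tools shipped in llama.cpp. A minimal sketch (the local paths are illustrative, not from this card):

```shell
# Convert the original HF checkpoint to GGUF at F16.
python convert_hf_to_gguf.py ./Cydonia-v4-MS3.2-Magnum-Diamond-24B \
  --outfile Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf --outtype f16

# Produce e.g. the Q4_K_M quant from the F16 file.
./llama-quantize Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf \
  Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf Q4_K_M
```

Repeat the second step with each quant type name (Q8_0, Q6_K, …) to generate the full table below.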
## Quant Types
| Filename | Quant type | File Size |
|---|---|---|
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-F16.gguf | F16 | 47.15 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q8_0.gguf | Q8_0 | 25.05 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q6_K.gguf | Q6_K | 19.35 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_M.gguf | Q5_K_M | 16.76 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q5_K_S.gguf | Q5_K_S | 16.30 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf | Q4_K_M | 14.33 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_S.gguf | Q4_K_S | 13.55 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_L.gguf | Q3_K_L | 12.40 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_M.gguf | Q3_K_M | 11.47 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q3_K_S.gguf | Q3_K_S | 10.40 GB |
| Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q2_K.gguf | Q2_K | 8.89 GB |
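To use a single quant, download just that file and run it with llama.cpp's CLI. A hedged sketch (the repo id below is a placeholder; substitute this repository's actual id on Hugging Face):

```shell
# Fetch only the Q4_K_M file (placeholder repo id).
huggingface-cli download <this-repo-id> \
  --include "Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf" --local-dir ./

# Run it; -ngl offloads layers to the GPU, -c sets the context size.
./llama-cli -m ./Cydonia-v4-MS3.2-Magnum-Diamond-24B-Q4_K_M.gguf \
  -ngl 99 -c 8192 -p "Hello"
```

As a rule of thumb, pick the largest quant whose file size fits in your VRAM with a few GB to spare for the KV cache.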
# Cydonia-v4-MS3.2-Magnum-Diamond-24B
The recipe is based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B, because Doctor-Shotgun/MS3.2-24B-Magnum-Diamond on its own is still too horny and verbose.
The PNG file above includes a workflow for FLUX Kontext Dev with ComfyUI, utilising pollockjj/ComfyUI-MultiGPU nodes and two input images without stitching.
## Merge Details
This is a merge of pre-trained language models created using mergekit.
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
- TheDrummer/Cydonia-24B-v4
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
### Configuration
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/Cydonia-24B-v4
  - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
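SLERP interpolates each tensor along the unit sphere between the two models, and the five-element `t` schedule is interpolated across layer depth, so early and late layers stay close to the base model while mid-network layers lean toward the Magnum model. A minimal sketch of the math, assuming flattened weight tensors (this is an illustration, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    # Angle between the normalized tensors.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:  # nearly parallel: plain lerp is numerically safer
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

def t_for_layer(layer, n_layers, schedule=(0.1, 0.3, 0.6, 0.3, 0.1)):
    """Map the 5-point t schedule onto a layer index by linear interpolation."""
    xs = np.linspace(0.0, n_layers - 1, num=len(schedule))
    return float(np.interp(layer, xs, schedule))
```

At `t=0` the result is exactly the base tensor (TheDrummer/Cydonia-24B-v4); the schedule peaks at 0.6 in the middle layers, where the merge draws most heavily on MS3.2-24B-Magnum-Diamond.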