Unsloth - Apriel 1.5 15B Thinker MXFP4 Hybrid GGUF

A dense model quantized with the MXFP4_MOE pipeline using hybrid per-tensor weights. The results are interesting: smaller file size, higher TPS, and near-lossless precision.

Use The Following Models!

Stats are compared against the standard Q8_0 (precision loss is still measured against F16)

  • MXFP4_MOE-output_q5_K-router_gate_emb_q5_K

    14.3% smaller than Q8 • 76.24 TPS • 0.0113% precision loss


This repository contains a set of hybrid MXFP4 quantized GGUF models designed to explore a surprising discovery:

A carefully targeted combination of MXFP4 + high-precision embeddings/output weights can deliver near-Q8 accuracy with Q4–Q6 level throughput and smaller file sizes than Q8.

Unlike pure MXFP4, which heavily degrades dense models, this hybrid method selectively protects the tensors that matter most for semantic stability, while allowing MXFP4 to accelerate everything else.

This is experimental and should be treated as such. I strongly encourage people to use these models and leave feedback! Although the measured precision loss was near lossless, did the hybrid models act strange in certain situations? Were they worse or better on some topics compared to the original model? Better or worse overall? I'd love to hear back from others!


The Magic Model

This model achieved:

  • A smaller file size than Q8_0

  • Lower precision loss than pure Q8_0

MXFP4_MOE-output_q5_K-router_gate_emb_q5_K

(14.3% smaller than Q8 • 76.24 TPS • 0.0113% precision loss)
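The "14.3% smaller than Q8" headline follows from the file sizes reported in the tables below (12.24 GB for this hybrid vs 14.29 GB for Q8_0); a quick sanity check:

```shell
# Sanity check of the headline claim, using file sizes from the tables below:
# hybrid = 12.24 GB, Q8_0 = 14.29 GB
smaller=$(awk 'BEGIN { printf "%.1f", (1 - 12.24 / 14.29) * 100 }')
echo "${smaller}% smaller than Q8_0"   # -> 14.3% smaller than Q8_0
```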

The following conversion command was used:

llama-quantize \
  --tensor-type token_embd.weight=Q5_K \
  --tensor-type output.weight=Q5_K \
  --tensor-type 'router.*'=Q5_K \
  --tensor-type 'gate.*'=Q5_K \
  "Path_To_F16_GGUF.gguf" \
  "Path_To_GGUF.gguf" \
  mxfp4_moe

MXFP4_MOE Hybrid Naming Scheme & Synopsis

Multiple different combinations of converted models were created, and the results were interesting to say the least. The following table explains the naming scheme, i.e. what was done to each model to create it.

| Suffix example | Meaning |
| --- | --- |
| MXFP4_MOE | Pure MXFP4 pipeline |
| MXFP4_MOE-Q8 | Embedding/output in Q8_0 |
| MXFP4_MOE-F16 | Embedding/output in F16 |
| output_mxfp4-embd_q8 | Output → MXFP4, Embedding → Q8 |
| output_mxfp4-router_gate_emb_q5_K | Output → MXFP4, Emb/Router/Gate → Q5_K |
| MXFP4_MOE-Q6_K | Both embedding + output in Q6_K |
| Q8_0, Q6_K, Q4_K_M | Pure model-wide quantizations |

It was a brute-force game of mass-creating models with hybrid methods to find combinations that didn't introduce too much noise and paired well with MXFP4.

This repo showcases the converted models that were created, good or bad. But I have been testing other models with different combinations as well, and the winning hybrid combinations shown in this repo DO NOT always produce the same results on different models.

Some models do better or worse with different kinds of combinations; it depends on whether the model is dense or MoE, and much more. Many times the results surprise me. Some models, no matter the combination, will not play nice with MXFP4, at least with the methods shown here.


Benchmark Methodology

All models were tested with a unified automated harness using llama.cpp tools.

Included tests:

  • Throughput:
    llama-bench with descending GPU offload (-ngl 35 → 0) and automatic OOM retry.
    Highest successful TPS is recorded.

  • Perplexity:
    Three domains: general, code, math.
    Each uses an auto-generated corpus of ~32k tokens.
    Perplexity is computed with llama-perplexity at 2048-token context.
    Same GPU retry logic as above.

  • Precision loss:
    Each model is compared to its family F16 baseline.
    Precision-loss % is computed for all PPL domains, plus an averaged score.
    Models are ranked by this metric.
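The harness itself isn't published here, but the reported numbers are consistent with a simple relative-perplexity formula: loss_pct = (PPL_model − PPL_F16) / PPL_F16 × 100 per domain, with avg_prec_loss apparently the absolute value of the mean of the three domain losses. A minimal sketch (an assumption, reconstructed from the tables; values are the Q8_0 row):

```shell
# Reconstructed (assumed) precision-loss computation.
# F16 baseline and Q8_0 perplexities from the PPL table below.
f16_gen=10.9819; f16_code=1.7481; f16_math=9.58
q8_gen=11.0589;  q8_code=1.7483;  q8_math=9.6026

# Per-domain loss: relative PPL increase vs F16, in percent.
loss() { awk -v f="$1" -v q="$2" 'BEGIN { printf "%.4f", (q - f) / f * 100 }'; }

lg=$(loss "$f16_gen"  "$q8_gen")    # general
lc=$(loss "$f16_code" "$q8_code")   # code
lm=$(loss "$f16_math" "$q8_math")   # math
# avg_prec_loss: absolute value of the mean of the three domain losses.
avg=$(awk -v a="$lg" -v b="$lc" -v c="$lm" \
      'BEGIN { m = (a + b + c) / 3; if (m < 0) m = -m; printf "%.4f", m }')
echo "general=$lg code=$lc math=$lm avg=$avg"
# -> general=0.7012 code=0.0114 math=0.2359 avg=0.3162 (matches the tables)
```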


Table - Overview of Results

Comparing to F16.

| model_name | size_reduction | tps_change |
| --- | --- | --- |
| MXFP4_MOE-output_q5_K-router_gate_emb_q6_K | 52.57% | 74.75% |
| MXFP4_MOE-output_q5_K-router_gate_emb_q5_K | 54.46% | 77.59% |
| MXFP4_MOE-Q5_K | 49.89% | 52.08% |
| Q6_K | 58.97% | 91.36% |
| MXFP4_MOE-Q4_K | 50.93% | 68.74% |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 51.79% | 76.57% |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 34.64% | 40.27% |
| MXFP4_MOE-Q6_K | 48.81% | 13.07% |
| MXFP4_MOE-output_q6_k-embd_f16 | 46.09% | 55.3% |
| MXFP4_MOE-F16 | 39.21% | 41.18% |
| MXFP4_MOE-Q8 | 46.84% | 49.13% |
| Q8_0 | 46.84% | 60.89% |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 27.79% | 23.64% |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 44.9% | 48.89% |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 51.12% | 70.25% |
| Q5_K_M | 64.43% | 83% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 53.46% | 87.79% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 55.39% | 56.98% |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 50.82% | 49.64% |
| MXFP4_MOE-output_mxfp4-embd_q8 | 49.93% | 68.46% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 49.93% | 81.41% |
| MXFP4_MOE-output_mxfp4-embd_f16 | 47.77% | 77.38% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 36.31% | 53.27% |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 50.48% | 76.68% |
| Q4_K_M | 69.57% | 110.16% |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 57.22% | 89.77% |
| MXFP4_MOE-output_q8-embd_mxfp4 | 48.1% | 62.22% |
| MXFP4_MOE | 73.4% | 78.17% |
  • All percentages compared against the selected family F16 baseline.
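The overview percentages are consistent with simple ratios against the F16 row of the next table (26.88 GB, 42.93 TPS); a minimal sketch (an assumption, reconstructed from the tables) using the MXFP4_MOE-output_q5_K-router_gate_emb_q5_K row (12.24 GB, 76.24 TPS):

```shell
# Reconstructed (assumed) formulas behind the overview table:
#   size_reduction = (1 - size_gb / f16_size_gb) * 100
#   tps_change     = (tps - f16_tps) / f16_tps * 100
base_gb=26.88; base_tps=42.93      # F16 baseline row
model_gb=12.24; model_tps=76.24    # hybrid q5_K/q5_K row

size_red=$(awk -v b="$base_gb" -v m="$model_gb" \
           'BEGIN { printf "%.2f", (1 - m / b) * 100 }')
tps_chg=$(awk -v b="$base_tps" -v m="$model_tps" \
          'BEGIN { printf "%.2f", (m - b) / b * 100 }')
echo "size_reduction=${size_red}% tps_change=${tps_chg}%"
# -> size_reduction=54.46% tps_change=77.59% (matches the overview table)
```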

Table - File Size + TPS + Avg Precision Loss

| model_name | file_size_gb | bench_tps | avg_prec_loss |
| --- | --- | --- | --- |
| F16 | 26.88 | 42.93 | 0 |
| MXFP4_MOE-output_q5_K-router_gate_emb_q6_K | 12.75 | 75.02 | 0.0101 |
| MXFP4_MOE-output_q5_K-router_gate_emb_q5_K | 12.24 | 76.24 | 0.0113 |
| MXFP4_MOE-Q5_K | 13.47 | 65.29 | 0.0174 |
| Q6_K | 11.03 | 82.15 | 0.1327 |
| MXFP4_MOE-Q4_K | 13.19 | 72.44 | 0.175 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 12.96 | 75.8 | 0.2507 |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 17.57 | 60.22 | 0.255 |
| MXFP4_MOE-Q6_K | 13.76 | 48.54 | 0.262 |
| MXFP4_MOE-output_q6_k-embd_f16 | 14.49 | 66.67 | 0.2979 |
| MXFP4_MOE-F16 | 16.34 | 60.61 | 0.315 |
| MXFP4_MOE-Q8 | 14.29 | 64.02 | 0.3162 |
| Q8_0 | 14.29 | 69.07 | 0.3162 |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 19.41 | 53.08 | 0.322 |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 14.81 | 63.92 | 0.3304 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 13.14 | 73.09 | 0.4468 |
| Q5_K_M | 9.56 | 78.56 | 0.4736 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 12.51 | 80.62 | 0.5502 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 11.99 | 67.39 | 0.5686 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 13.22 | 64.24 | 0.5861 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 13.46 | 72.32 | 0.604 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 13.46 | 77.88 | 0.604 |
| MXFP4_MOE-output_mxfp4-embd_f16 | 14.04 | 76.15 | 0.626 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 17.12 | 65.8 | 0.6336 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 13.31 | 75.85 | 0.6434 |
| Q4_K_M | 8.18 | 90.22 | 0.7801 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 11.5 | 81.47 | 0.822 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 13.95 | 69.64 | 1.3901 |
| MXFP4_MOE | 7.15 | 76.49 | 11.6499 |
  • Bench NGL was 35
  • Utilized CUDA

Table - PPL Columns

| model_name | gen | gen_er | code | code_er | math | math_er |
| --- | --- | --- | --- | --- | --- | --- |
| F16 | 10.9819 | 0.2924 | 1.7481 | 0.0148 | 9.58 | 0.2442 |
| MXFP4_MOE-output_q5_K-router_gate_emb_q6_K | 10.9919 | 0.2927 | 1.7522 | 0.0149 | 9.5459 | 0.2428 |
| MXFP4_MOE-output_q5_K-router_gate_emb_q5_K | 10.9787 | 0.2924 | 1.7511 | 0.0148 | 9.5631 | 0.2435 |
| MXFP4_MOE-Q5_K | 11.005 | 0.2932 | 1.7497 | 0.0148 | 9.5461 | 0.2429 |
| Q6_K | 11.0416 | 0.2944 | 1.7506 | 0.0148 | 9.4761 | 0.24 |
| MXFP4_MOE-Q4_K | 11.03 | 0.2936 | 1.7513 | 0.0148 | 9.5708 | 0.2434 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 11.0293 | 0.294 | 1.7498 | 0.0148 | 9.6014 | 0.2449 |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 11.0343 | 0.2943 | 1.7487 | 0.0148 | 9.6043 | 0.2451 |
| MXFP4_MOE-Q6_K | 11.043 | 0.2945 | 1.7485 | 0.0148 | 9.5998 | 0.245 |
| MXFP4_MOE-output_q6_k-embd_f16 | 11.0427 | 0.2945 | 1.7489 | 0.0148 | 9.6082 | 0.2452 |
| MXFP4_MOE-F16 | 11.0391 | 0.2944 | 1.7488 | 0.0148 | 9.6168 | 0.2455 |
| MXFP4_MOE-Q8 | 11.0589 | 0.2952 | 1.7483 | 0.0148 | 9.6026 | 0.245 |
| Q8_0 | 11.0589 | 0.2952 | 1.7483 | 0.0148 | 9.6026 | 0.245 |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 11.0375 | 0.2944 | 1.7492 | 0.0148 | 9.618 | 0.2456 |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 11.0324 | 0.2941 | 1.7502 | 0.0149 | 9.6194 | 0.2455 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 11.1806 | 0.294 | 1.7593 | 0.0147 | 9.4737 | 0.2378 |
| Q5_K_M | 11.1551 | 0.2981 | 1.7517 | 0.0148 | 9.5453 | 0.2425 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 11.214 | 0.2952 | 1.7607 | 0.0148 | 9.4666 | 0.2375 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 11.2016 | 0.295 | 1.7607 | 0.0147 | 9.4827 | 0.2381 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 11.2228 | 0.2956 | 1.7591 | 0.0147 | 9.478 | 0.2379 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 11.2281 | 0.2956 | 1.7602 | 0.0147 | 9.4725 | 0.2377 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 11.2281 | 0.2956 | 1.7602 | 0.0147 | 9.4725 | 0.2377 |
| MXFP4_MOE-output_mxfp4-embd_f16 | 11.235 | 0.2959 | 1.7594 | 0.0147 | 9.4772 | 0.2379 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 11.2353 | 0.296 | 1.7605 | 0.0148 | 9.4731 | 0.2378 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 11.2366 | 0.296 | 1.7602 | 0.0147 | 9.4764 | 0.2379 |
| Q4_K_M | 11.1952 | 0.2993 | 1.759 | 0.0149 | 9.5584 | 0.2428 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 11.2575 | 0.2968 | 1.7615 | 0.0148 | 9.5024 | 0.2388 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 11.1594 | 0.2986 | 1.7503 | 0.0149 | 9.8126 | 0.2529 |
| MXFP4_MOE | 13.5779 | 0.3828 | 1.8147 | 0.0159 | 10.2986 | 0.2704 |
  • gen = ppl_general
  • gen_er = ppl_general_error
  • code = ppl_code
  • code_er = ppl_code_error
  • math = ppl_math
  • math_er = ppl_math_error

Table - Precision Loss Columns

| model_name | loss_general | loss_code | loss_math |
| --- | --- | --- | --- |
| F16 | 0 | 0 | 0 |
| MXFP4_MOE-output_q5_K-router_gate_emb_q6_K | 0.0911 | 0.2345 | -0.3559 |
| MXFP4_MOE-output_q5_K-router_gate_emb_q5_K | -0.0291 | 0.1716 | -0.1764 |
| MXFP4_MOE-Q5_K | 0.2103 | 0.0915 | -0.3539 |
| Q6_K | 0.5436 | 0.143 | -1.0846 |
| MXFP4_MOE-Q4_K | 0.438 | 0.1831 | -0.096 |
| MXFP4_MOE-output_q6_K-router_gate_emb_q6_K | 0.4316 | 0.0972 | 0.2234 |
| MXFP4_MOE-output_q6_k-router_gate_emb_f16 | 0.4771 | 0.0343 | 0.2537 |
| MXFP4_MOE-Q6_K | 0.5564 | 0.0229 | 0.2067 |
| MXFP4_MOE-output_q6_k-embd_f16 | 0.5536 | 0.0458 | 0.2944 |
| MXFP4_MOE-F16 | 0.5209 | 0.04 | 0.3841 |
| MXFP4_MOE-Q8 | 0.7012 | 0.0114 | 0.2359 |
| Q8_0 | 0.7012 | 0.0114 | 0.2359 |
| MXFP4_MOE-output_f16-router_gate_emb_f16 | 0.5063 | 0.0629 | 0.3967 |
| MXFP4_MOE-output_f16-router_gate_emb_q6_k | 0.4598 | 0.1201 | 0.4113 |
| MXFP4_MOE-output_mxfp4-embd_q4_K | 1.8093 | 0.6407 | -1.1096 |
| Q5_K_M | 1.5771 | 0.2059 | -0.3622 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q6_K | 2.1135 | 0.7208 | -1.1837 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q5_K | 2.0006 | 0.7208 | -1.0157 |
| MXFP4_MOE-output_mxfp4-embd_q5_K | 2.1936 | 0.6293 | -1.0647 |
| MXFP4_MOE-output_mxfp4-embd_q8 | 2.2419 | 0.6922 | -1.1221 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q8 | 2.2419 | 0.6922 | -1.1221 |
| MXFP4_MOE-output_mxfp4-embd_f16 | 2.3047 | 0.6464 | -1.0731 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_f16 | 2.3074 | 0.7093 | -1.1159 |
| MXFP4_MOE-output_mxfp4-embd_q6_K | 2.3193 | 0.6922 | -1.0814 |
| Q4_K_M | 1.9423 | 0.6235 | -0.2255 |
| MXFP4_MOE-output_mxfp4-router_gate_emb_q4_K | 2.5096 | 0.7665 | -0.81 |
| MXFP4_MOE-output_q8-embd_mxfp4 | 1.6163 | 0.1259 | 2.428 |
| MXFP4_MOE | 23.6389 | 3.8099 | 7.501 |
  • loss_general = precision_loss_general_pct
  • loss_code = precision_loss_code_pct
  • loss_math = precision_loss_math_pct