This is an MXFP4_MOE quantization of the model Moonlight-16B-A3B-Instruct.

Original model: https://huggingface.co/moonshotai/Moonlight-16B-A3B-Instruct

- Format: GGUF
- Model size: 16B params
- Architecture: deepseek2
- Quantization: 4-bit (MXFP4_MOE)
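As a usage sketch, the snippet below loads this quantization with llama-cpp-python. This is an assumption, not documentation for this repo: any GGUF-capable runtime built recently enough to support the deepseek2 architecture should work, and the `filename` glob and generation settings are hypothetical, so adjust them to the actual file name in the repo.

```python
# Minimal sketch: load this GGUF quant with llama-cpp-python.
# Assumptions: a recent llama-cpp-python build with deepseek2 support,
# and a GGUF file in the repo matching the glob below (hypothetical).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="noctrex/Moonlight-16B-A3B-Instruct-MXFP4_MOE-GGUF",
    filename="*MXFP4*.gguf",  # hypothetical glob; set to the real file name
    n_ctx=4096,               # context window; raise if you have the memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what an MoE model is in one line."}]
)
print(out["choices"][0]["message"]["content"])
```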

