This is an MXFP4_MOE quantization of the Ling-flash-2.0 model.

Original model: https://huggingface.co/inclusionAI/Ling-flash-2.0

Format: GGUF
Model size: 103B params
Architecture: bailingmoe2
Quantization: 4-bit (MXFP4)
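
A minimal sketch of loading this quantized GGUF with llama-cpp-python, assuming a llama.cpp build recent enough to support the bailingmoe2 architecture; the filename below is hypothetical, so check the repository's file listing for the actual GGUF name (a 103B model may be split into shards):

```python
# Sketch: download and run the MXFP4 GGUF with llama-cpp-python.
# Assumes bailingmoe2 support in your llama.cpp build; the filename
# is a hypothetical placeholder -- see the repo files for the real name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="noctrex/Ling-flash-2.0-MXFP4_MOE-GGUF",
    filename="Ling-flash-2.0-MXFP4_MOE.gguf",  # hypothetical filename
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is MXFP4 quantization? A:", max_tokens=128)
print(out["choices"][0]["text"])
```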
