This is an MXFP4_MOE quantization of the model Huihui-gpt-oss-120b-BF16-abliterated-v2.

Original model: https://huggingface.co/huihui-ai/Huihui-gpt-oss-120b-BF16-abliterated-v2

- Format: GGUF
- Model size: 117B params
- Architecture: gpt-oss
- Quantization: 4-bit (MXFP4_MOE)

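Below is a minimal usage sketch for running a GGUF quant like this one with llama-cpp-python, one common runtime for GGUF files. The model filename is a placeholder, not necessarily the exact file shipped in this repository, and the context size and GPU offload settings are illustrative assumptions.

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    # Placeholder filename; substitute the actual GGUF file from this repo.
    model_path="Huihui-gpt-oss-120b-abliterated-MXFP4_MOE.gguf",
    n_ctx=4096,       # context window; adjust to your hardware and needs
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what a mixture-of-experts model is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```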