The model was quantized with the TensorRT Model Optimizer on GB200.
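For reference, a minimal sketch of what such a quantization flow typically looks like with `modelopt.torch.quantization`. The MXFP4 config name, calibration data, and loading details are assumptions for illustration and are not taken from this model card; check the installed `nvidia-modelopt` release for the exact config constant.

```python
# Hypothetical sketch of an MXFP4 post-training quantization flow with
# TensorRT Model Optimizer (nvidia-modelopt). Not the exact recipe used here.
import torch
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"  # base model listed in the model tree
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tiny placeholder calibration set; the real calibration data is undocumented here.
calib_texts = ["The quick brown fox jumps over the lazy dog."]

def forward_loop(m):
    # Run a few forward passes so quantizers can collect calibration statistics.
    for text in calib_texts:
        inputs = tokenizer(text, return_tensors="pt").to(m.device)
        m(**inputs)

# Config name is an assumption (e.g. mtq.MXFP4_DEFAULT_CFG or a similar MX-format
# config, depending on the modelopt version installed).
quant_cfg = mtq.MXFP4_DEFAULT_CFG
model = mtq.quantize(model, quant_cfg, forward_loop)
```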
Model tree for xxrjun/gpt-oss-120b-mxfp4
- Base model: openai/gpt-oss-120b