GPT-OSS-120B

GGUF conversion of unsloth/gpt-oss-120b

Unsloth's configs were selected over OpenAI's in order to incorporate their chat template fixes.

This is essentially Unsloth's F16 quant, except the weights are stored in BF16 instead, which is their native precision.
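A note on why BF16 is the natural choice here: BF16 keeps FP32's 8-bit exponent and only truncates the mantissa, so an FP32 (or native BF16) weight round-trips through BF16 without any risk of overflow, whereas FP16's 5-bit exponent caps out near 65504. The sketch below (not from this model card, just an illustration using only the standard library) simulates both round-trips:

```python
import struct

def bf16_roundtrip(x: float) -> float:
    # BF16 is FP32 with the low 16 mantissa bits dropped: the 8-bit
    # exponent is untouched, so the dynamic range matches FP32.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

def fp16_roundtrip(x: float) -> float:
    # IEEE half precision ('e' format): 5-bit exponent,
    # largest finite value is 65504, so big values overflow.
    return struct.unpack(">e", struct.pack(">e", x))[0]

# A large value survives BF16 with only coarsened mantissa precision...
print(bf16_roundtrip(1e10))

# ...but is simply not representable in FP16.
try:
    fp16_roundtrip(1e10)
except OverflowError:
    print("1e10 overflows FP16")
```

The trade-off runs the other way for precision: BF16 carries fewer mantissa bits than FP16, but for weights trained in BF16 that precision was never there to lose, which is the point the card is making.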

Format: GGUF
Model size: 117B params
Architecture: gpt-oss
Precision: 16-bit (BF16)


Repository: Valeciela/gpt-oss-120b-BF16-GGUF (quantized from unsloth/gpt-oss-120b)