This model produces analytically neutral responses to sensitive queries.

[NOTE!] Make sure to use the chat completions endpoint and include a system message that says "You are an assistant" (see the chat completions example in the vLLM section below).

# Example prompt
messages = [
    {"role": "system", "content": "You are an assistant"},
    {"role": "user", "content": "What is the truth?"},
]
  • bfloat16 weights: needs one H100 to run (see the rough estimate below)
  • Finetuned from: openai/gpt-oss-20b
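
For rough intuition (my estimate, not from the original card): 21B parameters at 2 bytes each in bfloat16 is about 42 GB of weights alone, so the model fits on a single 80 GB H100 with room left for the KV cache, but not on most smaller cards.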

Inference Examples

vLLM

uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve michaelwaves/amoral-gpt-oss-20b-bfloat16
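
Once the server is up, you can query its OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the default host and port (http://localhost:8000/v1) and the openai Python client, using the example prompt from above:

from openai import OpenAI

# vLLM's server is OpenAI-compatible; the api_key value is ignored unless you configured one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="michaelwaves/amoral-gpt-oss-20b-bfloat16",
    messages=[
        {"role": "system", "content": "You are an assistant"},
        {"role": "user", "content": "What is the truth?"},
    ],
)
print(response.choices[0].message.content)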

If you don't have an H100, try running this LoRA adapter in MXFP4: https://huggingface.co/michaelwaves/gpt-20b-fun-weights. Note that vLLM doesn't support merging LoRA adapters for GPT-OSS yet, so you may need to merge and unload the adapter yourself first (see the sketch below).
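
A minimal sketch of the merge-and-unload route, assuming the adapter repo is a standard PEFT LoRA adapter and that you have enough memory to load the base model in bfloat16; the output directory name is arbitrary:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "michaelwaves/gpt-20b-fun-weights")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()

# Save a standalone checkpoint that can be served directly.
merged.save_pretrained("amoral-gpt-oss-20b-merged")
AutoTokenizer.from_pretrained("openai/gpt-oss-20b").save_pretrained("amoral-gpt-oss-20b-merged")

You can then point vllm serve at the merged local directory instead of the Hub repo.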

Shoutout to https://huggingface.co/soob3123/amoral-gemma3-27B-v2-qat
