Produces analytically neutral responses to sensitive queries

[NOTE] Use the chat completions endpoint and include a system message that says "You are an assistant".

# Example prompt
messages = [
    {"role": "system", "content": "You are an assistant"},
    {"role": "user", "content": "What is the truth?"},
]
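A minimal sketch of sending this prompt to an OpenAI-compatible chat completions endpoint. The base URL and model name below are assumptions for a locally hosted server, not details from this card:

```python
import json
import urllib.request

# The system message the note above calls for
messages = [
    {"role": "system", "content": "You are an assistant"},
    {"role": "user", "content": "What is the truth?"},
]

# Payload for an OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "model": "amoral-gpt-oss-20b",  # hypothetical local model name
    "messages": messages,
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # hypothetical local server
    data=body,
    headers={"Content-Type": "application/json"},
)
# With a server running, uncomment to get the completion:
# reply = json.loads(urllib.request.urlopen(req).read())
```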
  • Q4_K_M quantization: needs 16GB VRAM to run
  • Finetuned from: openai/gpt-oss-20b

Inference Examples

ollama

 ollama run hf.co/michaelwaves/amoral-gpt-oss-20b-Q4_K_M-gguf
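Once the `ollama run` command above has pulled the model, Ollama also serves it over its local REST API (`/api/chat` on its default port 11434). A short sketch of calling it from Python:

```python
import json
import urllib.request

# Chat request against Ollama's local REST API.
# The model tag matches the `ollama run` command above.
payload = {
    "model": "hf.co/michaelwaves/amoral-gpt-oss-20b-Q4_K_M-gguf",
    "messages": [
        {"role": "system", "content": "You are an assistant"},
        {"role": "user", "content": "What is the truth?"},
    ],
    "stream": False,  # return one complete response instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # Ollama's default local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With Ollama running locally, uncomment to get the reply text:
# reply = json.loads(urllib.request.urlopen(req).read())["message"]["content"]
```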

Shoutout to https://huggingface.co/soob3123/amoral-gemma3-27B-v2-qat

GGUF

  • Model size: 21B params
  • Architecture: gpt-oss
  • Quantization: 4-bit (Q4_K_M)


Model tree for michaelwaves/amoral-gpt-oss-20b-Q4_K_M-gguf

  • Base model: openai/gpt-oss-20b
  • This model is one of 137 quantized variants of the base model.