This is a content fidelity model. It takes raw text input and converts it into clean, clear markdown. Below is an example completions prompt, with prefilled reasoning for maximum performance.

Replace `developer_message` with anything you want. We standardize on "Reformat the text into concise markdown, removing irrelevant bits. Strength: high", but you can tweak the strength, change the prompt, etc.

`user_input` is the raw content that you want to extract from.

Using the model in chat mode may have unintended consequences, as this is a task-specific model. YMMV.

```
<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-08-13

Reasoning: medium

# Valid channels: analysis, commentary, final. Channel must be included for every message.
Calls to these tools must go to the commentary channel: 'functions'.<|end|><|start|>developer<|message|>
{developer_message}<|end|><|start|>user<|message|>

{user_input}

<|end|><|start|>assistant<|channel|>analysis<|message|>The user wants reformatted text into concise markdown, removing irrelevant bits, high strength. Let's do it concise.<|end|><|start|>assistant<|channel|>final<|message|>
```
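For illustration, here is a minimal sketch of rendering this template and running it with the `transformers` library. The repo id, generation settings, and sample inputs are assumptions, not part of the model card; adapt them to however you serve the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; swap in a local path or your preferred quantization.
MODEL_ID = "textcleanlm/fidelity-gpt-oss"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

# The completions prompt from above, verbatim, with the two placeholders left in.
PROMPT_TEMPLATE = """<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-08-13

Reasoning: medium

# Valid channels: analysis, commentary, final. Channel must be included for every message.
Calls to these tools must go to the commentary channel: 'functions'.<|end|><|start|>developer<|message|>
{developer_message}<|end|><|start|>user<|message|>

{user_input}

<|end|><|start|>assistant<|channel|>analysis<|message|>The user wants reformatted text into concise markdown, removing irrelevant bits, high strength. Let's do it concise.<|end|><|start|>assistant<|channel|>final<|message|>"""

developer_message = "Reformat the text into concise markdown, removing irrelevant bits. Strength: high"
user_input = "...raw text you want cleaned..."

prompt = PROMPT_TEMPLATE.format(developer_message=developer_message, user_input=user_input)

# The harmony control tokens (<|start|>, <|message|>, ...) should be encoded as
# special tokens when the rendered string is tokenized directly.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=2048)

# Decode only the newly generated tokens: the cleaned markdown.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Because the template already prefills the analysis channel and opens the final channel, the completion is the cleaned markdown itself rather than further reasoning.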
Model tree for textcleanlm/fidelity-gpt-oss: base model openai/gpt-oss-20b; this model is one of its quantized variants.