GPT 5.2 Collection
Distilled models and datasets for GPT 5.2 (7 items).
This model was trained on 250 examples generated by GPT 5.2 (high reasoning).
Note: In this distill I fixed the formatting issues found in the previous GPT 5 distills. I will be going back to update the other 5.2 distills.
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.