A fine-tune of google/gemma-3-12b-it using the antislop method described in this paper: https://arxiv.org/abs/2510.15061

The pipeline identifies the model's unique slop (words and phrases that are over-represented compared to human writing), generates a preference training set, and trains out the slop with our FTPO training algorithm: https://github.com/sam-paech/auto-antislop

This process alters the model so that the most common slop words & phrases become much less frequent, with minimal degradation to the model. It won't remove slop entirely: the technique only targets over-represented words & phrases, not stylistic or thematic slop.

This model should serve as a good base for further fine-tuning.
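For illustration, here is a minimal sketch of the idea behind the first stage of such a pipeline: ranking n-grams that appear far more often in model output than in a human reference corpus. The function name, smoothing choice, and thresholds below are placeholders for exposition, not the actual auto-antislop implementation.

```python
from collections import Counter

def overrepresented_ngrams(model_texts, human_texts, n=2, min_count=5, top_k=20):
    """Rank n-grams by how much more often the model uses them than humans do.

    Illustrative sketch only; the real pipeline's tokenization, smoothing,
    and scoring may differ.
    """
    def ngram_freqs(texts):
        counts = Counter()
        total = 0
        for text in texts:
            tokens = text.lower().split()
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i : i + n])] += 1
                total += 1
        return counts, max(total, 1)

    model_counts, model_total = ngram_freqs(model_texts)
    human_counts, human_total = ngram_freqs(human_texts)

    ratios = []
    for ngram, count in model_counts.items():
        if count < min_count:
            continue
        model_rate = count / model_total
        # Add-one smoothing so n-grams absent from the human corpus don't divide by zero.
        human_rate = (human_counts.get(ngram, 0) + 1) / (human_total + 1)
        ratios.append((model_rate / human_rate, ngram))

    # Highest ratios are the strongest "slop" candidates.
    return sorted(ratios, reverse=True)[:top_k]
```

The top-ranked phrases from a step like this would then seed the preference training set that FTPO trains against.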