Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models
TMLR-Group-HF/Self-Certainty-Qwen3-1.7B-Base
This is the Qwen3-1.7B-Base model trained with the Self-Certainty method on the MATH training set. It serves as a baseline for comparison in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
The paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models" introduces a novel self-supervised reinforcement learning framework that enhances training stability by incorporating complementary supervision signals.
For more details on the Co-rewarding framework, training procedures, and other released models and datasets, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding.
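Usage
A minimal loading sketch, assuming the checkpoint follows the standard transformers causal-LM layout (the repo id is taken from this card; generation settings are illustrative, not the paper's evaluation configuration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Self-Certainty-Qwen3-1.7B-Base"

# Load tokenizer and model; device_map="auto" places weights on available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Base-model style prompting: plain text in, greedy continuation out.
prompt = "Question: What is 12 * 7?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

See the official repository linked above for the exact training and evaluation setup used in the paper.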
Citation
If you use our datasets or models, please cite our paper!
```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```