This repository contains Self-Certainty-Qwen3-8B-Base, a Qwen3-8B-Base model fine-tuned with the Self-Certainty method on the MATH training set, as described in the paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models".
If you are interested in the Co-rewarding framework, you can find more details and the full implementation on the GitHub repository: https://github.com/tmlr-group/Co-rewarding.
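Below is a minimal usage sketch with the Hugging Face transformers library. The repository id, prompt, and generation settings are placeholders chosen for illustration; they are not specified by this card and should be adapted to the actual Hub path and your task.

```python
# Minimal sketch: loading and querying the model with Hugging Face transformers.
# The repository id below is a placeholder; replace it with the actual model path on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Self-Certainty-Qwen3-8B-Base"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires accelerate; place weights automatically
)

# Example math prompt; generation settings are illustrative defaults.
prompt = "Solve: If 3x + 5 = 20, what is x? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```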
Citation
@article{zhang2025coreward,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025},
}