Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models

TMLR-Group-HF/Self-Certainty-Qwen3-1.7B-Base-MATH

This is the Qwen3-1.7B-Base model trained with the Self-Certainty method on the MATH training set. It serves as a baseline for comparison in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.

The paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models" introduces a novel self-supervised reinforcement learning framework that enhances training stability by incorporating complementary supervision signals.

For more details on the Co-rewarding framework, training procedures, and other released models and datasets, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding.
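
Usage

A minimal sketch for running the model with the Hugging Face transformers library. The prompt and generation settings below are illustrative assumptions, not the paper's evaluation setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Self-Certainty-Qwen3-1.7B-Base-MATH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Example MATH-style prompt (hypothetical; any math question works)
prompt = "Solve: if 3x + 5 = 20, what is x? Show your reasoning step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))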

Citation

If you use our datasets or models, please cite our paper!

@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}

Model details

Model size: 2B parameters · Tensor type: BF16 · Format: Safetensors