# GT-GRPO: Qwen3-8B-Base trained on OpenRS
This model was developed as part of the research presented in the paper *Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models*.
It is a Qwen3-8B-Base model fine-tuned with the GT-GRPO method on the OpenRS training set. The goal is to strengthen the reasoning capabilities of large language models, particularly on mathematical reasoning benchmarks, as part of the paper's novel self-supervised reinforcement learning framework, Co-rewarding.
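As the name suggests, GT-GRPO is GRPO trained against a ground-truth-based reward. The sketch below shows only the group-relative advantage normalization that GRPO-style methods share, as a minimal illustration; the exact reward design used for GT-GRPO and Co-rewarding is documented in the paper and repository, and the function name here is illustrative.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """GRPO-style advantages for a group of G responses sampled for one prompt.

    Each response's reward is normalized by the group mean and standard
    deviation, so no learned value network is required. `rewards` has shape
    (G,); e.g. 1.0 when the final answer matches the ground truth, else 0.0.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Hypothetical group of 8 rollouts, 3 of which reached the correct answer.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0])
print(group_relative_advantages(rewards))
```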
For more details on the Co-rewarding framework, its various instantiations, and the full experimental results, please refer to the official GitHub repository: [tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
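## Usage

A minimal generation sketch with the Hugging Face `transformers` library is shown below. The repository id is a hypothetical placeholder inferred from this card's title; substitute the actual id shown on this model page.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual id from this model page.
model_id = "tmlr-group/GT-GRPO-Qwen3-8B-Base-OpenRS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Solve: if 3x + 5 = 20, what is x? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```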
## Citation
If you use our models, please cite our paper!
```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```