Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models

This repository contains Self-Certainty-Llama-3.2-3B-Instruct-DAPO14k: a Llama-3.2-3B-Instruct model trained with Self-Certainty Maximization on the DAPO-14k training set, released as part of the broader Co-rewarding framework.
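For context, self-certainty is commonly formulated as the average KL divergence between a uniform distribution over the vocabulary and the model's next-token distribution across the generated tokens, so maximizing it rewards confident (peaked) predictions. The sketch below illustrates that common formulation only; the exact variant used for this training run may differ, and the function name is illustrative.

```python
import math

import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """Mean KL(U || p) over response positions (illustrative formulation).

    logits: [seq_len, vocab_size] next-token logits for a sampled response.
    KL(U || p) = (1/V) * sum_j [log(1/V) - log p_j] = -log V - mean_j log p_j.
    Higher values indicate more peaked (confident) token distributions.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    vocab_size = logits.size(-1)
    kl_per_token = -math.log(vocab_size) - log_probs.mean(dim=-1)  # [seq_len]
    return kl_per_token.mean()
```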

Co-rewarding is a novel self-supervised reinforcement learning (RL) framework designed to improve the reasoning ability of large language models (LLMs) while enhancing training stability. It addresses the training collapse issue often encountered in single-view self-rewarding methods by seeking complementary supervision from multiple views. The framework is instantiated in two ways:

  1. Co-rewarding-I: A data-side approach that derives reward signals from contrastive agreement across semantically analogous questions.
  2. Co-rewarding-II: A model-side approach that maintains a slowly-updated reference teacher whose pseudo labels realize self-distillation (both ideas are sketched below).
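The following is a minimal, illustrative sketch of the two ideas, not the authors' implementation: the majority-vote agreement rule, the EMA coefficient `tau`, and all helper names are assumptions; see the official repository below for the actual training code.

```python
from collections import Counter

import torch

def agreement_reward(answer: str, peer_answers: list[str]) -> float:
    """Co-rewarding-I style (illustrative): reward a rollout on one view of a
    question by its agreement with the majority answer sampled on the other,
    semantically analogous view."""
    if not peer_answers:
        return 0.0
    majority_answer, _ = Counter(peer_answers).most_common(1)[0]
    return float(answer == majority_answer)

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               tau: float = 0.99) -> None:
    """Co-rewarding-II style (illustrative): keep the reference teacher a slow
    exponential moving average of the student, so its pseudo labels provide a
    stable self-distillation target."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(tau).add_(s_param, alpha=1.0 - tau)
```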

Empirically, Co-rewarding exhibits stable training and outperforms other self-rewarding baselines, significantly improving performance on mathematical reasoning benchmarks.

Paper

The model was presented in the paper: Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models (arXiv:2508.00410).

Code and Further Information

For detailed installation instructions, training scripts, datasets, and further information on the Co-rewarding framework, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding
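
As a quick start, the model can be loaded with the standard transformers API. This is a minimal sketch; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Self-Certainty-Llama-3.2-3B-Instruct-DAPO14k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "If 3x + 5 = 20, what is x? Show your reasoning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```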

Citation

If you use our datasets or models, please cite our paper:

@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}