Improve model card: Add tags, paper link, and expanded description

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +30 -3
README.md CHANGED
@@ -1,8 +1,35 @@
  ---
  license: mit
+ library_name: transformers
+ pipeline_tag: text-generation
  ---
- ### Self-Certainty: Llama-3.2-3B-Instruct trained on DAPO-14k
 
- This is the Llama-3.2-3B-Instruct model trained by Self-Certainty Maximization using DAPO-14k training set.
 
- If you are interested in Co-rewarding, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-rewarding].
 
+ # Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models
+
+ This repository contains **Self-Certainty: Llama-3.2-3B-Instruct trained on DAPO-14k**, a Llama-3.2-3B-Instruct model trained with Self-Certainty Maximization on the DAPO-14k training set, released as part of the broader **Co-rewarding** framework.
+
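+ As a rough intuition for the Self-Certainty Maximization signal, the sketch below shows one common formulation of a self-certainty score: the average KL divergence between a uniform distribution over the vocabulary and the model's next-token distributions, so that more confident (less uniform) predictions score higher. This is an illustrative sketch only; the exact objective used to train this checkpoint is defined in the paper and the GitHub repository.
+
+ ```python
+ # Illustrative sketch (not the official implementation): a self-certainty style
+ # score computed as the average KL(uniform || p) over the generated-token positions.
+ import math
+ import torch
+ import torch.nn.functional as F
+
+ def self_certainty(logits: torch.Tensor) -> torch.Tensor:
+     """logits: (seq_len, vocab_size) next-token logits at the generated positions."""
+     vocab_size = logits.size(-1)
+     log_probs = F.log_softmax(logits, dim=-1)
+     # KL(U || p) = -log|V| - (1/|V|) * sum_v log p(v), computed per position
+     kl_per_token = -math.log(vocab_size) - log_probs.mean(dim=-1)
+     return kl_per_token.mean()
+ ```
+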
+ **Co-rewarding** is a novel self-supervised reinforcement learning (RL) framework designed to improve the reasoning ability of large language models (LLMs) while enhancing training stability. It addresses the training collapse issue often encountered in single-view self-rewarding methods by seeking complementary supervision from multiple views. The framework is instantiated in two ways:
+ 1. **Co-rewarding-I**: a data-side approach that derives reward signals from contrastive agreement across semantically analogous questions.
+ 2. **Co-rewarding-II**: a model-side approach that maintains a slowly-updated reference teacher whose pseudo labels provide a self-distillation target (see the illustrative sketch below).
+
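+ The sketch below is purely illustrative and is not the official implementation: it shows one simple way to maintain a "slowly-updated reference teacher" of the kind used by Co-rewarding-II, namely an exponential moving average (EMA) of the student's weights. The actual update rule and pseudo-label construction are defined in the GitHub repository linked below.
+
+ ```python
+ # Illustrative sketch (not the official Co-rewarding-II code): keep a frozen
+ # reference teacher and nudge it toward the student after each training step.
+ import copy
+ import torch
+
+ def make_teacher(student: torch.nn.Module) -> torch.nn.Module:
+     """Initialize the reference teacher as a frozen copy of the student."""
+     teacher = copy.deepcopy(student)
+     for p in teacher.parameters():
+         p.requires_grad_(False)
+     return teacher
+
+ @torch.no_grad()
+ def update_teacher(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.99) -> None:
+     """EMA update: the teacher tracks the student slowly, which keeps its pseudo labels stable."""
+     for p_t, p_s in zip(teacher.parameters(), student.parameters()):
+         p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
+ ```
+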
+ Empirically, Co-rewarding exhibits stable training and outperforms other self-rewarding baselines, significantly improving performance on mathematical reasoning benchmarks.
+
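+ ## Usage
+ The snippet below is a minimal example of loading this checkpoint with the `transformers` library for text generation. The model id is a placeholder; replace it with this repository's id on the Hugging Face Hub.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "<this-repo-id>"  # placeholder: use this repository's Hub id
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ messages = [
+     {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether?"},
+ ]
+ input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+
+ output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
+ print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+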
+ ## Paper
+ The model was presented in the paper:
+ [**Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models**](https://huggingface.co/papers/2508.00410)
+
+ ## Code and Further Information
+ For detailed installation instructions, training scripts, datasets, and further information on the **Co-rewarding** framework, please refer to the official GitHub repository:
+ [**https://github.com/tmlr-group/Co-rewarding**](https://github.com/tmlr-group/Co-rewarding)
+
+ ## Citation
+ If you use our datasets or models, please cite our paper:
+
+ ```bibtex
+ @article{zhang2025co,
+   title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
+   author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
+   journal={arXiv preprint arXiv:2508.00410},
+   year={2025}
+ }
+ ```