Improve model card: Add metadata, paper link, and correct GitHub URL

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +8 -3
README.md CHANGED
@@ -1,16 +1,21 @@
  ---
  license: mit
+ pipeline_tag: text-generation
+ library_name: transformers
  ---
+
  ## TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base

- This is the Qwen3-8B-Base model trained by Majority-Voting method using MATH training set.
+ This is the Qwen3-8B-Base model trained by the Majority-Voting method using the MATH training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
+
+ Co-rewarding is a novel self-supervised RL framework that improves training stability by seeking complementary supervision from other views. It addresses the training collapse issue of self-rewarding methods and is instantiated in two ways: Co-rewarding-I, which uses contrastive agreement across semantically analogous questions, and Co-rewarding-II, which employs a slowly-updated reference teacher for self-distillation. The resulting discrepancies help prevent training from collapsing onto trivial reasoning solutions.

- If you are interested in Co-Reward, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-Reward].
+ For more details on the Co-rewarding framework, training procedures, and other checkpoints, see the project's official GitHub repository: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).

  ## Citation
  ```
  @article{zhang2025coreward,
- title={Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement},
+ title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
  journal={arXiv preprint arXiv:2508.00410}
  year={2025},
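
Since the updated metadata declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch for this checkpoint could look like the following; the repo ID follows the model card title, and the prompt and generation settings are illustrative assumptions rather than part of this change.

```python
# Minimal text-generation sketch using the standard transformers Auto* APIs.
# The prompt and decoding settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Majority-Voting-Qwen3-8B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base-style checkpoint, so use plain completion rather than a chat template.
prompt = "Problem: What is the sum of the first 10 positive integers?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```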