Improve model card: Add pipeline tag, library name, paper link, and description (#1)
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED
---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

### GT-GRPO: Qwen3-8B-Base trained on OpenRS

This model, **GT-GRPO: Qwen3-8B-Base trained on OpenRS**, was developed as part of the research presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).

It is a Qwen3-8B-Base model fine-tuned with the GT-GRPO method on the OpenRS training set. The model aims to strengthen the reasoning capabilities of large language models, particularly on mathematical reasoning benchmarks, by leveraging a novel self-supervised reinforcement learning framework called **Co-rewarding**.

For more details on the **Co-rewarding** framework, its various instantiations, and the full experimental results, please refer to the official GitHub repository: [tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
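
## Usage

A minimal usage sketch with the `transformers` library (matching the `library_name` declared in the metadata above). The repository id below is a hypothetical placeholder, not stated on this card; substitute this model's actual Hub id:

```python
# Minimal sketch: plain text completion with Transformers.
# NOTE: MODEL_ID is a hypothetical placeholder; replace it with this repo's real Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tmlr-group/GT-GRPO-Qwen3-8B-Base-OpenRS"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",  # load in the checkpoint's native dtype
    device_map="auto",   # place layers on available GPU(s), else CPU
)

# Qwen3-8B-Base is a base (non-chat) model, so prompt it in completion style.
prompt = "Question: What is 17 * 24? Think step by step.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```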

## Citation

If you use our models, please cite our paper!
```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```