---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

### GT-GRPO: Llama-3.2-3B-Instruct trained on DAPO-14k

This is the Llama-3.2-3B-Instruct model trained with GT-GRPO on the DAPO-14k training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).

For more details on Co-rewarding, see our GitHub repository: [tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
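Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can be loaded with the standard `transformers` pipeline API. A minimal usage sketch follows; the repo id is a placeholder you must replace with this model's actual Hugging Face repository id, and the example question is illustrative only:

```python
from transformers import pipeline


def build_messages(question):
    """Wrap a question in the chat format that Llama-3.2-Instruct models expect."""
    return [{"role": "user", "content": question}]


def generate(model_id, question, max_new_tokens=256):
    """Run chat-style text generation with the transformers pipeline.

    Note: calling this downloads the model weights from the Hub.
    """
    generator = pipeline(
        "text-generation",
        model=model_id,
        torch_dtype="auto",
        device_map="auto",
    )
    outputs = generator(build_messages(question), max_new_tokens=max_new_tokens)
    # The pipeline returns the full chat history; the last message is the reply.
    return outputs[0]["generated_text"][-1]["content"]


# Example (replace the placeholder with this repo's id before running):
# print(generate("REPLACE_WITH_THIS_REPO_ID", "What is 17 * 23?"))
```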