Co-rewarding-II: Llama-3.2-3B-Instruct trained on DAPO-14k

This is Llama-3.2-3B-Instruct trained with Co-rewarding-II on the DAPO-14k training set.

If you are interested in Co-rewarding, you can find more details in our GitHub repo: https://github.com/tmlr-group/Co-rewarding.
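A minimal usage sketch with the `transformers` library (assumes `transformers` and `torch` are installed and the checkpoint is reachable on the Hugging Face Hub; the prompt and generation settings are illustrative, not part of the training setup):

```python
# Sketch: load the Co-rewarding-II checkpoint and generate a reply.
# Assumes `transformers` and `torch` are installed; downloads the weights on first use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TMLR-Group-HF/Co-rewarding-II-Llama-3.2-3B-Instruct-DAPO14k"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model in bfloat16 (matching the stored tensor type) and decode one reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Llama-3.2-Instruct checkpoints ship a chat template; apply it for instruct-style prompts.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("What is 12 * 7? Think step by step."))
```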

Model size: 4B params · Tensor type: BF16

Model repository: TMLR-Group-HF/Co-rewarding-II-Llama-3.2-3B-Instruct-DAPO14k
