---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
datasets:
- Skywork/Skywork-OR1-RL-Data
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# 🤔 Skywork-OR1 (Open Reasoner 1)
⚡ Unleashing the Power of Reinforcement Learning for Math and Code Reasoners 🤖
[Model Collection](https://huggingface.co/collections/Skywork/skywork-or1-67fa1bcb41b436ef2def76b9) · [RL Training Data](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data) · [Code](https://github.com/SkyworkAI/Skywork-OR1) · [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680)
## 🔥 News
- **May 29, 2025**: Our [Skywork Open Reasoner 1 Technical Report](https://arxiv.org/abs/2505.22312) has been released on arXiv. It provides further details on the training pipeline, our investigation of and mitigation strategies for the entropy collapse phenomenon, and extensive analysis and ablation studies.
- **May 13, 2025**: We release the final version of the **`Skywork-OR1`** model series: **`Skywork-OR1-32B`** and **`Skywork-OR1-7B`**.
- **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** outperforms DeepSeek-R1 and Qwen3-32B on math tasks (AIME24 and AIME25) and delivers comparable performance on coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** exhibits competitive performance compared to similarly sized models in both math and coding scenarios.
- **April 15, 2025**: We release our RL training dataset [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data) (see the loading sketch after this list).
- **April 13, 2025**: We release the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-Math-7B`**, **`Skywork-OR1-32B-Preview`**, and **`Skywork-OR1-7B-Preview`**. We open-source:
  - 🤗 Model weights: [`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B), [`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview), [`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)
  - 🤗 Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
  - 🧑‍💻 Code: [`Skywork-OR1`](https://github.com/SkyworkAI/Skywork-OR1)
- We also release a [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680) sharing detailed training recipes and extensive experimental results, analysis, and insights, to help the community better research, understand, and push the frontier of open reasoning models.
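As a convenience, here is a minimal sketch of loading the released RL training data with the Hugging Face `datasets` library. The dataset ID comes from this card; the split and column names are not documented here, so the sketch inspects whatever is present rather than assuming a schema.

```python
# Minimal sketch: download and inspect the Skywork-OR1 RL training data.
# The dataset ID is taken from this card; no split or column names are assumed.
from datasets import load_dataset

data = load_dataset("Skywork/Skywork-OR1-RL-Data")

# Print the available splits and their columns before relying on any schema.
for split_name, split in data.items():
    print(split_name, split.num_rows, split.column_names)
```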
## 📖 Overview
*Figure: AIME24 and AIME25 scores versus training steps for Skywork-OR1-32B in our multi-stage training pipeline.*
The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, **`Skywork-OR1-7B`** and **`Skywork-OR1-32B`** (a minimal usage sketch follows the highlights below).
- **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** outperforms DeepSeek-R1 and Qwen3-32B on math tasks (AIME24 and AIME25) and delivers comparable performance on coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** exhibits competitive performance compared to similarly sized models in both math and coding scenarios.
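Both models are standard `transformers` causal LMs, so they can be loaded with the usual APIs. Below is a minimal inference sketch for `Skywork-OR1-7B` (the model ID is taken from this card); the prompt and generation settings are illustrative assumptions rather than the authors' recommended configuration, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Minimal inference sketch for Skywork-OR1-7B with Hugging Face transformers.
# The model ID is from this card; generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Skywork/Skywork-OR1-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; shards across available devices
)

# Reasoning models are typically prompted through the chat template.
messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A generous max_new_tokens leaves room for the model's long chain-of-thought.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For evaluation-scale sampling and the exact settings used in our experiments, the scripts in the [`Skywork-OR1`](https://github.com/SkyworkAI/Skywork-OR1) repository are the authoritative entry point.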
## 📊 Evaluation