Qwen3-30B-A3B-Art

This is a Chain-of-Thought (CoT) efficient version of the Qwen3-30B-A3B-Instruct-2507 model, introduced in the paper "The Art of Efficient Reasoning: Data, Reward, and Optimization".

The model is designed to generate short yet accurate reasoning trajectories, reducing computational overhead while maintaining high performance. It was trained on the DeepScaleR-Easy dataset via Reinforcement Learning (RL) with reward shaping.
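The paper's exact reward function is not reproduced here, but the general idea of length-penalized reward shaping can be sketched as follows. This is a minimal illustration; the function name and the `budget` and `alpha` parameters are hypothetical, not taken from the paper.

```python
def shaped_reward(correct: bool, n_tokens: int,
                  budget: int = 2048, alpha: float = 0.5) -> float:
    """Illustrative shaped reward for CoT-efficient RL training.

    Gives full credit (1.0) for a correct answer, then subtracts a
    penalty proportional to the fraction of the token budget consumed,
    so shorter correct trajectories earn strictly higher reward.
    """
    accuracy = 1.0 if correct else 0.0
    length_penalty = alpha * min(n_tokens / budget, 1.0)
    return accuracy - length_penalty


# A correct 300-token answer outscores a correct 1800-token one:
# shaped_reward(True, 300) > shaped_reward(True, 1800)
```

Under a reward of this shape, the policy is pushed toward trajectories that stay accurate while spending fewer tokens, which is the trade-off the model card describes.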

Project Resources

Citation

If you find this work useful, please cite:

@inproceedings{wu2026art,
  title={The Art of Efficient Reasoning: Data, Reward, and Optimization},
  author={Taiqiang Wu and Zenan Xu and Bo Zhou and Ngai Wong},
  year={2026},
  url={https://arxiv.org/pdf/2602.20945}
}
Model size: 31B params · Tensor type: BF16 · Format: Safetensors
