KO-REAson-35B-1009
Check out the KO-REAson technical report.
The rest of the models and datasets are available here.
KO-REAson
KO-REAson is a series of Korean-centric reasoning language models developed in collaboration with OneLineAI, KISTI-KONI, HAE-RAE, and ORACLE.
We use the Language-Mixed Chain-of-Thought (CoT) approach, which allows the model to alternate between English and Korean during the "Think" stage of reasoning, preserving key Korean terms while leveraging English for logical scaffolding.
Our largest model, KO-REAson-AX3_1-35B-1009, outperforms GPT-OSS-20B, EXAONE-Deep-32B, and DeepSeek-R1-32B.
Model Details
The KO-REAson family comes in eight variants across the 0831 and 1009 releases, depending on the base model used.
| Model (link) | Base | License |
|---|---|---|
| KO-REAson-L3_1-8B-0831 | Llama-3.1-8B | Llama3 Community License |
| KO-REAson-KL3_1-8B-0831 | Koni-Llama-3.1-8B | Llama3 Community License |
| KO-REAson-G3-4B-0831 | Gemma-3 4B | Gemma License |
| KO-REAson-AX3_1-7B-0831 | A.X.-3.1-Light (≈7B) | Apache 2.0 |
| KO-REAson-K2505_8B-0831 | Kanana-2505 (8B) | Apache 2.0 |
| KO-REAson-G3-12B-1009 | Gemma-3 12B | Gemma License |
| KO-REAson-Q2_5-14B-1009 | Qwen-2.5 (14B) | Apache 2.0 |
| KO-REAson-AX3_1-35B-1009 | A.X.-3.1 (35B) | Apache 2.0 |
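Below is a minimal generation sketch using the Hugging Face `transformers` library. The hub repository id, chat-template usage, and generation settings are assumptions for illustration; check the hub page of the checkpoint you use for the exact id and recommended settings.

```python
# Minimal sketch: load a KO-REAson checkpoint and generate a response.
# The hub repository id below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KOREAson/KO-REAson-AX3_1-35B-1009"  # hypothetical hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 to fit a 35B model more easily
    device_map="auto",           # shard across available GPUs
)

# Korean prompt: "Please prove the Pythagorean theorem."
messages = [{"role": "user", "content": "피타고라스 정리를 증명해 주세요."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the model reasons with a language-mixed "Think" stage, the generated trace may interleave English and Korean before the final answer.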
Citation
@article{son2025pushing,
  title={Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought},
  author={Son, Guijin and Yang, Donghun and Patel, Hitesh Laxmichand and Agarwal, Amit and Ko, Hyunwoo and Lim, Chanuk and Panda, Srikant and Kim, Minhyuk and Drolia, Nikunj and Choi, Dasol and others},
  journal={arXiv preprint arXiv:2510.04230},
  year={2025}
}
Contact
For any questions, contact us at the email below. :)
[email protected]
Acknowledgments
This research was supported by the Korea Institute of Science and Technology Information (KISTI) (No. K25L1M1C1) as part of the development of KONI (KISTI Open Neural Intelligence), a large language model specialized in science and technology.