KO-REAson-35B-1009

📖 Check out the KO-REAson technical report.
📝 The rest of the models and datasets are available here.

KO-REAson

KO-REAson is a series of Korean-centric reasoning language models developed in collaboration with OneLineAI, KISTI-KONI, HAE-RAE and ORACLE.

We use the Language-Mixed Chain-of-Thought (CoT) approach, which allows the model to alternate between English and Korean during the "Think" stage of reasoning, preserving key Korean terms while leveraging English for logical scaffolding.
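For reference, below is a minimal usage sketch with Hugging Face Transformers. The repository ID, prompt, and sampling settings are illustrative assumptions based on this card's naming, not official recommendations.

```python
# Minimal sketch: generating a Language-Mixed CoT trace with Transformers.
# The repo ID follows this card's naming and is an assumption; any KO-REAson
# variant should load the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KOREAson/KO-REAson-AX3_1-35B-1009"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A Korean question; the "Think" stage may mix English scaffolding
# with key Korean terms before the final answer.
messages = [{"role": "user", "content": "한강의 발원지는 어디인가요?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative, not tuned recommendations.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```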

Our largest model, KO-REAson-AX3_1-35B-1009, outperforms GPT-OSS-20B, EXAONE-Deep-32B, and DeepSeek-R1-32B.

Model Comparison

Model Details

The KO-REAson-0831 family comes in nine variants, one for each base model used.

Citation

@article{son2025pushing,
  title={Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought},
  author={Son, Guijin and Yang, Donghun and Patel, Hitesh Laxmichand and Agarwal, Amit and Ko, Hyunwoo and Lim, Chanuk and Panda, Srikant and Kim, Minhyuk and Drolia, Nikunj and Choi, Dasol and others},
  journal={arXiv preprint arXiv:2510.04230},
  year={2025}
}

Contact

For any questions, contact us at the email below. :)

[email protected]

Acknowledgments

This research was supported by the Korea Institute of Science and Technology Information (KISTI) (No. K25L1M1C1), as part of developing KONI (KISTI Open Neural Intelligence), a large language model specialized in science and technology.
