---
library_name: transformers
license: gemma
language:
- ko
---
# KO-REAson-35B-1009
📖 Check out the [KO-REAson technical report](https://arxiv.org/abs/2510.04230).
🤗 The rest of the models and datasets are available in the [KOREAson organization](https://huggingface.co/KOREAson).
# KO-REAson
**KO-REAson** is a series of Korean-centric reasoning language models developed in collaboration with [OneLineAI](https://onelineai.com/), [KISTI-KONI](https://huggingface.co/KISTI-KONI), [HAE-RAE](https://huggingface.co/HAERAE-HUB) and ORACLE.
We use the **Language-Mixed Chain-of-Thought (CoT)** approach, which allows the model to alternate between English and Korean during the "Think" stage of reasoning, preserving key Korean terms while leveraging English for logical scaffolding.
Our largest model, [KO-REAson-AX3_1-35B-1009](https://huggingface.co/KOREAson/KO-REAson-AX3_1-35B-1009), outperforms GPT-OSS-20B, EXAONE-Deep-32B, and DeepSeek-R1-32B.
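Reasoning models typically emit their "Think" stage before the final answer. As a minimal sketch, assuming the model wraps its reasoning in `<think>…</think>` tags (a common convention for reasoning models; the exact tag is an assumption, not confirmed by this card), the trace can be separated from the answer like so:

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Assumes the "Think" stage is wrapped in <think>...</think> tags,
    as many reasoning models do; the tag name is an assumption.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        # No reasoning block found: treat the whole output as the answer.
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

# A language-mixed trace keeps key Korean terms inside English scaffolding.
sample = "<think>The question asks about 김치 fermentation steps...</think>발효는 젖산균이 주도합니다."
reasoning, answer = split_reasoning(sample)
```

Here `reasoning` holds the mixed English-Korean trace and `answer` the Korean final response, which is useful when only the answer should be shown to end users.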
## Model Details
The **KO-REAson** family comes in eight variants based on the base model used.
| Model (link) | Base | License |
| -------------------------------------------------------------------------------------------- | -------------------- | ------------------- |
| [KO-REAson-L3_1-8B-0831](https://huggingface.co/KoReason/KO-REASon-L3_1-8B-0831) | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | Llama3 Community License |
| [KO-REAson-KL3_1-8B-0831](https://huggingface.co/KOREAson/KO-REAson-KL3_1-8B-0831) | [Koni-Llama-3.1-8B](https://huggingface.co/KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024) | Llama3 Community License |
| [KO-REAson-G3-4B-0831](https://huggingface.co/KoReason/KO-REASon-G3-4B-0831) | [Gemma-3 4B](https://huggingface.co/google/gemma-3-4b-it) | Gemma License |
| [KO-REAson-AX3_1-7B-0831](https://huggingface.co/KOREAson/KO-REAson-7B-AX3_1-0831) | [A.X.-3.1-Light (β7B)](https://huggingface.co/skt/A.X-3.1-Light) | Apache 2.0 |
| [KO-REAson-K2505_8B-0831](https://huggingface.co/KoReason/KO-REASon-K2505_8B-0831) | [Kanana-2505 (8B)](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) | Apache 2.0 |
| [KO-REAson-G3-12B-1009](https://huggingface.co/KOREAson/KO-REAson-G3-12B-1009) | [Gemma-3 12B](https://huggingface.co/google/gemma-3-12b-it) | Gemma License |
| [KO-REAson-Q2_5-14B-1009](https://huggingface.co/KOREAson/KO-REAson-Q2_5-14B-1009) | [Qwen-2.5 (14B)](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) | Apache 2.0 |
| [KO-REAson-AX3_1-35B-1009](https://huggingface.co/KOREAson/KO-REAson-AX3_1-35B-1009) | [A.X.-3.1 (35B)](https://huggingface.co/skt/A.X-3.1) | Apache 2.0 |
## Citation
```
@article{son2025pushing,
  title={Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought},
  author={Son, Guijin and Yang, Donghun and Patel, Hitesh Laxmichand and Agarwal, Amit and Ko, Hyunwoo and Lim, Chanuk and Panda, Srikant and Kim, Minhyuk and Drolia, Nikunj and Choi, Dasol and others},
  journal={arXiv preprint arXiv:2510.04230},
  year={2025}
}
```
## Contact
For any questions, contact us via the email below :)
```
spthsrbwls123@yonsei.ac.kr
```
## Acknowledgments
This research was supported by the Korea Institute of Science and Technology Information (KISTI) (No. (KISTI) K25L1M1C1), which aims to develop KONI (KISTI Open Neural Intelligence), a large language model specialized in science and technology.