---
configs:
- config_name: gsm8k_araeng
data_files:
- split: test
path:
- gsm8k/gsm8k_araeng.csv
- config_name: gsm8k_chieng
data_files:
- split: test
path:
- gsm8k/gsm8k_chieng.csv
- config_name: gsm8k_hineng
data_files:
- split: test
path:
- gsm8k/gsm8k_hineng.csv
- config_name: gsm8k_spaeng
data_files:
- split: test
path:
- gsm8k/gsm8k_spaeng.csv
- config_name: lid_chieng
data_files:
- split: test
path:
- lid/lid_chieng.csv
- config_name: lid_fridut
data_files:
- split: test
path:
- lid/lid_fridut.csv
- config_name: lid_gereng
data_files:
- split: test
path:
- lid/lid_gereng.csv
- config_name: lid_guaspa
data_files:
- split: test
path:
- lid/lid_guaspa.csv
- config_name: lid_hineng
data_files:
- split: test
path:
- lid/lid_hineng.csv
- config_name: lid_hokman
data_files:
- split: test
path:
- lid/lid_hokman.csv
- config_name: lid_mareng
data_files:
- split: test
path:
- lid/lid_mareng.csv
- config_name: lid_msaea
data_files:
- split: test
path:
- lid/lid_msaea.csv
- config_name: lid_nepeng
data_files:
- split: test
path:
- lid/lid_nepeng.csv
- config_name: mmlu_araeng
data_files:
- split: test
path:
- mmlu/mmlu_araeng.csv
- config_name: mmlu_beneng
data_files:
- split: test
path:
- mmlu/mmlu_beneng.csv
- config_name: mmlu_chieng
data_files:
- split: test
path:
- mmlu/mmlu_chieng.csv
- config_name: mmlu_duteng
data_files:
- split: test
path:
- mmlu/mmlu_duteng.csv
- config_name: mmlu_freeng
data_files:
- split: test
path:
- mmlu/mmlu_freeng.csv
- config_name: mmlu_gereng
data_files:
- split: test
path:
- mmlu/mmlu_gereng.csv
- config_name: mmlu_hineng
data_files:
- split: test
path:
- mmlu/mmlu_hineng.csv
- config_name: mmlu_mareng
data_files:
- split: test
path:
- mmlu/mmlu_mareng.csv
- config_name: mmlu_nepeng
data_files:
- split: test
path:
- mmlu/mmlu_nepeng.csv
- config_name: mmlu_spaeng
data_files:
- split: test
path:
- mmlu/mmlu_spaeng.csv
- config_name: mmlu_tameng
data_files:
- split: test
path:
- mmlu/mmlu_tameng.csv
- config_name: mt_araeng_eng
data_files:
- split: test
path:
- mt/mt_araeng_eng.csv
- config_name: mt_beneng_eng
data_files:
- split: test
path:
- mt/mt_beneng_eng.csv
- config_name: mt_chieng_chi
data_files:
- split: test
path:
- mt/mt_chieng_chi.csv
- config_name: mt_chieng_eng
data_files:
- split: test
path:
- mt/mt_chieng_eng.csv
- config_name: mt_hineng_eng
data_files:
- split: test
path:
- mt/mt_hineng_eng.csv
- config_name: mt_hokman_man
data_files:
- split: test
path:
- mt/mt_hokman_man.csv
- config_name: mt_mareng_eng
data_files:
- split: test
path:
- mt/mt_mareng_eng.csv
- config_name: mt_spaeng_eng
data_files:
- split: test
path:
- mt/mt_spaeng_eng.csv
- config_name: ner_guaspa
data_files:
- split: test
path:
- ner/ner_guaspa.csv
- config_name: ner_hineng
data_files:
- split: test
path:
- ner/ner_hineng.csv
- config_name: ner_msaea
data_files:
- split: test
path:
- ner/ner_msaea.csv
- config_name: ner_spaeng
data_files:
- split: test
path:
- ner/ner_spaeng.csv
- config_name: pos_chieng
data_files:
- split: test
path:
- pos/pos_chieng.csv
- config_name: pos_fridut
data_files:
- split: test
path:
- pos/pos_fridut.csv
- config_name: pos_hineng
data_files:
- split: test
path:
- pos/pos_hineng.csv
- config_name: pos_spaeng
data_files:
- split: test
path:
- pos/pos_spaeng.csv
- config_name: sa_beneng
data_files:
- split: test
path:
- sa/sa_beneng.csv
- config_name: sa_hineng
data_files:
- split: test
path:
- sa/sa_hineng.csv
- config_name: sa_maleng
data_files:
- split: test
path:
- sa/sa_maleng.csv
- config_name: sa_mareng
data_files:
- split: test
path:
- sa/sa_mareng.csv
- config_name: sa_nepeng
data_files:
- split: test
path:
- sa/sa_nepeng.csv
- config_name: sa_spaeng
data_files:
- split: test
path:
- sa/sa_spaeng.csv
- config_name: sa_tameng
data_files:
- split: test
path:
- sa/sa_tameng.csv
- config_name: truthfulqa_araeng
data_files:
- split: test
path:
- truthfulqa/truthfulqa_araeng.csv
- config_name: truthfulqa_chieng
data_files:
- split: test
path:
- truthfulqa/truthfulqa_chieng.csv
- config_name: truthfulqa_hineng
data_files:
- split: test
path:
- truthfulqa/truthfulqa_hineng.csv
- config_name: truthfulqa_spaeng
data_files:
- split: test
path:
- truthfulqa/truthfulqa_spaeng.csv
license: apache-2.0
language:
- zh
- en
- es
- hi
- de
- nl
- fy
- fr
- ar
- bn
- mr
- ne
- ta
- ml
- gn
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- question-answering
- translation
- text-classification
tags:
- code-mixing
- multilingual
- llm-evaluation
- benchmark
---
# ℹ️ Dataset Card for CodeMixBench
**[EMNLP'25] CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages**
Code-mixing is a linguistic phenomenon where multilingual speakers switch or mix two or more languages within a single utterance or conversation. To evaluate LLMs’ comprehension of multilingual code-mixed texts, we introduce CodeMixBench, a benchmark comprising eight tasks across 18 languages.
## 🔎 Dataset Details
Our benchmark comprises synthesized datasets targeting knowledge reasoning, mathematical reasoning, and truthfulness, along with LID, POS, NER, SA, and MT tasks adapted from open-source studies.
### CodeMixBench vs. Others
Previous benchmarks, such as GLUECoS and LinCE, primarily focus on traditional NLP tasks and are limited to a small number of languages. LinCE includes four language pairs and five NLP tasks: Language Identification (LID), Part-of-Speech tagging (POS), Named Entity Recognition (NER), Sentiment Analysis (SA), and Machine Translation (MT). In contrast, GLUECoS covers two language pairs and lacks the MT task, but adds Question Answering (QA) and Natural Language Inference (NLI). Our review of recent code-mixing studies indicates that research extends beyond the language pairs used in LinCE and GLUECoS. We therefore expanded to 16 language pairs and introduced tasks better suited to evaluating LLMs, such as Multi-Choice, Math, and Truthfulness, for a total of eight tasks.
### Statistics of Synthetic Datasets
For knowledge reasoning, we developed the code-mixed MMLU (CM-MMLU) based on the MMLU test set, featuring multiple-choice questions from 57 subjects to assess a model's broad knowledge reasoning abilities. For mathematical reasoning, we created the code-mixed GSM8K (CM-GSM8K), derived from the GSM8K test set; each question includes a step-by-step solution. For truthfulness assessment, we constructed the code-mixed TruthfulQA (CM-TruthfulQA) from the 817 multiple-choice questions of the TruthfulQA test set.
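Each synthetic suite is published as per-language-pair configs (see the YAML header above), e.g. `mmlu_hineng`, `gsm8k_spaeng`, or `truthfulqa_araeng`, each exposing a single `test` split. A minimal loading sketch:

```python
from datasets import load_dataset

# Arabic-English code-mixed TruthfulQA; every config in this card
# defines only a "test" split.
cm_truthfulqa = load_dataset('CodeMixBench/CodeMixBench', 'truthfulqa_araeng', split='test')
print(len(cm_truthfulqa))  # expected to match the 817 TruthfulQA questions
```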
### Statistics of Collected Datasets
We selected and reconstructed 30 datasets from existing open-source projects. To comprehensively evaluate large models on code-mixing, we aimed to cover a diverse range of language families and tasks, prioritizing manually annotated datasets. The result covers traditional NLP tasks, namely Language Identification (LID), Named Entity Recognition (NER), Part-of-Speech tagging (POS), Sentiment Analysis (SA), and Machine Translation (MT), across 16 languages from seven language families: Germanic (en, de, nl, fy), Sino-Tibetan (zh, hok), Romance (es), Afro-Asiatic (msa, ea), Indo-Aryan (hi, bn, ne, mr), Dravidian (ta, ml), and Tupian (gn).
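The config names in this card follow a `task_languagepair` pattern (e.g. `lid_guaspa`, `sa_tameng`), so per-task coverage can be enumerated programmatically; a small sketch, assuming the standard `datasets` hub API:

```python
from datasets import get_dataset_config_names

# Group config names by their task prefix (gsm8k, lid, mmlu, mt, ner, pos,
# sa, truthfulqa) to see which language pairs each task covers.
configs = get_dataset_config_names('CodeMixBench/CodeMixBench')
by_task = {}
for name in configs:
    task, _, pair = name.partition('_')
    by_task.setdefault(task, []).append(pair)
for task, pairs in sorted(by_task.items()):
    print(task, pairs)
```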
### Experiment Results
We evaluate three families of LLMs on CodeMixBench and observe consistent underperformance across all models on code-mixed datasets whose language pairs come from different language families. However, larger training data, greater model scale, post-training, and few-shot learning all improve LLM performance on code-mixed data.
## 🚀 Load CodeMixBench
Taking the GSM8K task mixing Chinese and English, `gsm8k_chieng`, as an example:
```python
from datasets import load_dataset

dataset_dict = load_dataset('CodeMixBench/CodeMixBench', data_files={'test': './gsm8k/gsm8k_chieng.csv'})
```
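Since every file is also registered as a named config in the YAML header, the same split can be loaded without spelling out the file path; an equivalent sketch:

```python
from datasets import load_dataset

# Load via the named config from this card's YAML header; the config maps
# to gsm8k/gsm8k_chieng.csv and defines a single "test" split.
gsm8k_chieng = load_dataset('CodeMixBench/CodeMixBench', 'gsm8k_chieng', split='test')
```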
## 📍 Dataset Sources
- Repository: https://github.com/Jeromeyluck/CodeMixBench/
- Paper: [CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages](https://arxiv.org/abs/2507.18791)
### Setup
Follow these steps to set up your development environment:
```bash
git clone git@github.com:Jeromeyluck/CodeMixBench.git
cd CodeMixBench
conda create -n CodeMixBench python=3.9
conda activate CodeMixBench
pip install -r requirements.txt
```

To launch an LLM for testing:
```bash
python ./test_model.py \
  --dataset lid_guaspa \
  --expid lid_guaspa_all_0shot \
  --model gpt-3.5-turbo \
  --shot 5 \
  --api sk-********************* \
  --url https://****************
```

- `dataset`: select the dataset (e.g., `lid_gereng`, `lid_spaeng`, `ner_hineng`).
- `expid`: the ID of the test run; the results file is named after this ID.
- `model`: the model to test (default: `gpt-3.5-turbo`).
- `shot`: used for few-shot testing (default: `1`).
- `api`: API key (defaults to the `OPENAI_API_KEY` defined in the system environment).
- `url`: the API provider's URL.
## 🔗 Citation
BibTeX:

```bibtex
@misc{yang2025codemixbenchevaluatingcodemixingcapabilities,
      title={CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages},
      author={Yilun Yang and Yekun Chai},
      year={2025},
      eprint={2507.18791},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.18791},
}
```