---
configs:
- config_name: IndicParam
  data_files:
  - split: test
    path: data*
tags:
- benchmark
- low-resource
- indic-languages
task_categories:
- question-answering
- text-classification
license: cc-by-nc-4.0
language:
- npi
- guj
- mar
- ory
- doi
- mai
- san
- brx
- sat
- gom
---

# Dataset Card for IndicParam

## Dataset Summary
IndicParam is a graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of low- and extremely low-resource Indic languages.
The dataset contains 13,207 multiple-choice questions (MCQs) across 11 Indic languages, plus a separate Sanskrit–English code-mixed set, all sourced from official UGC-NET language question papers and answer keys.
## Supported Tasks

- `multiple-choice-qa`: Evaluate LLMs on graduate-level multiple-choice question answering across low-resource Indic languages.
- `language-understanding-evaluation`: Assess language-specific competence (morphology, syntax, semantics, discourse) using explicitly labeled questions.
- `general-knowledge-evaluation`: Measure factual and domain knowledge in literature, culture, history, and related disciplines.
- `question-type-evaluation`: Analyze performance across MCQ formats (Normal MCQ, Assertion–Reason, List Matching, etc.).
## Languages
IndicParam covers the following languages and one code-mixed variant:
- Low-resource (4): Nepali, Gujarati, Marathi, Odia
- Extremely low-resource (7): Dogri, Maithili, Rajasthani, Sanskrit, Bodo, Santali, Konkani
- Code-mixed: Sanskrit–English (Sans-Eng)
Scripts:
- Devanagari: Nepali, Marathi, Maithili, Konkani, Bodo, Dogri, Rajasthani, Sanskrit
- Gujarati: Gujarati
- Odia (Orya): Odia
- Ol Chiki (Olck): Santali
All questions are presented in the native script of the target language (or in code-mixed form for Sans-Eng).
## Dataset Structure

### Data Instances
Each instance is a single MCQ from a UGC-NET language paper. An example (Maithili):
```json
{
  "unique_question_id": "782166eef1efd963b5db0e8aa42b9a6e",
  "subject": "Maithili",
  "exam_name": "Question Papers of NET Dec. 2012 Maithili Paper III hindi",
  "paper_number": "Question Papers of NET Dec. 2012 Maithili Paper III hindi",
  "question_number": 1,
  "question_text": "मिथिलाभाषा रामायण' में सीताराम-विवाहक वर्णन भेल अछि -",
  "option_a": "बालकाण्डमें",
  "option_b": "अयोध्याकाण्डमे",
  "option_c": "सुन्दरकाण्डमे",
  "option_d": "उत्तरकाण्डमे",
  "correct_answer": "a",
  "question_type": "Normal MCQ"
}
```
Questions span:
- Language Understanding (LU): linguistics and grammar (phonology, morphology, syntax, semantics, discourse).
- General Knowledge (GK): literature, authors, works, cultural concepts, history, and related factual content.
### Data Fields

- `unique_question_id` (string): Unique identifier for each question.
- `subject` (string): Name of the language / subject (e.g., `Nepali`, `Maithili`, `Sanskrit`).
- `exam_name` (string): Full exam name (UGC-NET session and subject).
- `paper_number` (string): Paper identifier as given by UGC-NET.
- `question_number` (int): Question index within the original paper.
- `question_text` (string): Question text in the target language (or Sanskrit–English code-mixed).
- `option_a`, `option_b`, `option_c`, `option_d` (string): Four answer options.
- `correct_answer` (string): Correct option label (`a`, `b`, `c`, or `d`).
- `question_type` (string): Question format, one of: Normal MCQ, Assertion and Reason, List Matching, Fill in the blanks, Identify incorrect statement, Ordering.
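The snippet below is a minimal sketch of loading and inspecting these fields, assuming the benchmark is available locally as a CSV file (`data.csv`, matching the `data*` pattern in the config above); the filename and the use of pandas are assumptions, not part of the official scripts.

```python
# Minimal sketch: load the benchmark and inspect one MCQ.
# Assumes a local data.csv with the fields documented above.
import pandas as pd

df = pd.read_csv("data.csv")
print(len(df))                       # expected: 13,207 rows (test split)
print(df["subject"].value_counts())  # per-language question counts

example = df.iloc[0]
print(example["question_text"])
for label in ["a", "b", "c", "d"]:
    print(label.upper(), example[f"option_{label}"])
print("gold:", example["correct_answer"], "|", example["question_type"])
```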
### Data Splits
IndicParam is provided as a single evaluation split:
| Split | Number of Questions |
|---|---|
| test | 13,207 |
All rows are intended for evaluation only (no dedicated training/validation splits).
### Language Distribution
The benchmark follows the distribution reported in the IndicParam paper:
| Language | #Questions | Script | Code |
|---|---|---|---|
| Nepali | 1,038 | Devanagari | npi |
| Marathi | 1,245 | Devanagari | mar |
| Gujarati | 1,044 | Gujarati | guj |
| Odia | 577 | Orya | ory |
| Maithili | 1,286 | Devanagari | mai |
| Konkani | 1,328 | Devanagari | gom |
| Santali | 873 | Olck | sat |
| Bodo | 1,313 | Devanagari | brx |
| Dogri | 1,027 | Devanagari | doi |
| Rajasthani | 1,190 | Devanagari | – |
| Sanskrit | 1,315 | Devanagari | san |
| Sans-Eng | 971 | (code-mixed) | – |
| Total | 13,207 | – | – |
Each language’s questions are drawn from its respective UGC-NET language papers.
## Dataset Creation

### Source and Collection
- Source: Official UGC-NET language question papers and answer keys, downloaded from the UGC-NET/NTA website.
- Scope: Multiple exam sessions and years, covering language/literature and linguistics papers for each of the 11 languages plus the Sanskrit–English code-mixed set.
- Extraction:
- Machine-readable PDFs are parsed directly.
- Non-selectable PDFs are processed using OCR.
- All text is normalized while preserving the original script and content.
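As an illustration only, the sketch below shows what such a two-stage extraction step can look like; it is not the authors' code, and the library choices (`pypdf`, `pdf2image`, `pytesseract`), the Tesseract language pack, and the fallback heuristic are all assumptions.

```python
# Illustrative two-stage extraction (assumed libraries, not the authors' pipeline):
# 1) try the embedded text layer of machine-readable PDFs,
# 2) fall back to OCR for scanned / non-selectable PDFs.
from pypdf import PdfReader
from pdf2image import convert_from_path
import pytesseract

def extract_text(pdf_path: str, ocr_lang: str = "hin") -> str:
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    if len(text.strip()) > 100:  # crude check: a usable text layer exists
        return text
    pages = convert_from_path(pdf_path)  # render pages to images for OCR
    return "\n".join(pytesseract.image_to_string(img, lang=ocr_lang) for img in pages)
```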
### Annotation

In addition to the raw MCQs, each question is annotated with its question type (described in detail in the paper):

- Question type: Normal MCQ, Assertion and Reason, List Matching, Fill in the blanks, Identify incorrect statement, Ordering.
These annotations support fine-grained analysis of model behavior across knowledge vs. language ability and question format.
## Sample Usage
The GitHub repository provides several Python scripts to evaluate models on the IndicParam dataset. You can adapt these scripts for your specific use case.
Typical usage pattern, as described in the GitHub README:
- Prepare environment: Install Python dependencies (see `requirements.txt` if present in the GitHub repository) and configure any required API keys or model caches.
- Run evaluation: Invoke one of the scripts with your chosen model configuration and an output directory; the scripts will (a minimal sketch of this loop follows the list):
  - Load `data.csv`
  - Construct language-aware MCQ prompts
  - Record model predictions and compute accuracy
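For orientation, here is a minimal sketch of that loop, assuming a local `data.csv` and a placeholder `ask_model` function standing in for whatever LLM call you use; it is not one of the repository scripts, and the prompt wording is illustrative.

```python
# Minimal sketch of the evaluation loop (not the repository scripts).
import pandas as pd

def ask_model(prompt: str) -> str:
    """Placeholder: replace with a call to your LLM (local model or API)."""
    raise NotImplementedError

def build_prompt(row) -> str:
    # Language-aware MCQ prompt: question in the native script, four options,
    # and an instruction to answer with a single option letter.
    return (
        f"Answer the following {row['subject']} multiple-choice question.\n"
        f"{row['question_text']}\n"
        f"A. {row['option_a']}\nB. {row['option_b']}\n"
        f"C. {row['option_c']}\nD. {row['option_d']}\n"
        "Reply with exactly one letter: A, B, C, or D."
    )

df = pd.read_csv("data.csv")
records = []
for _, row in df.iterrows():
    raw = ask_model(build_prompt(row))
    records.append({
        "unique_question_id": row["unique_question_id"],
        "subject": row["subject"],
        "question_type": row["question_type"],
        "predicted": raw.strip()[:1].lower(),          # map model output to a/b/c/d
        "correct_answer": str(row["correct_answer"]).lower(),
    })
pd.DataFrame(records).to_csv("predictions.csv", index=False)
```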
Scripts available in the GitHub repository:
- `evaluate_open_models.py`: Example script to evaluate open-weight Hugging Face models on IndicParam.
- `evaluate_gpt_oss.py`: Script to run the GPT-OSS-120B model on the same data.
- `evaluate_openrouter.py`: Script to benchmark closed models via the OpenRouter API.

Script-level arguments and options are documented via the `-h`/`--help` flags within each script.
```bash
# Example of running evaluation with an open-weight model:
python evaluate_open_models.py --model_name_or_path google/gemma-2b --output_dir results/gemma-2b

# Example of running evaluation with GPT-OSS:
python evaluate_gpt_oss.py --model_name_or_path openai/gpt-oss-120b --output_dir results/gpt-oss-120b
```
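For closed models, the repository's `evaluate_openrouter.py` handles the API calls; as a rough sketch of what a single request through OpenRouter's OpenAI-compatible endpoint looks like (the model identifier and prompt below are illustrative, and the script's actual interface may differ):

```python
# Sketch of one request via OpenRouter's OpenAI-compatible API (illustrative only).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # any OpenRouter model identifier
    temperature=0,                        # deterministic, per the evaluation guidelines
    messages=[{"role": "user", "content": "Answer with a single letter (A, B, C, or D): ..."}],
)
print(response.choices[0].message.content)
```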
## Considerations for Using the Data

### Social Impact
IndicParam is designed to:
- Enable rigorous evaluation of LLMs on under-represented Indic languages with substantial speaker populations but very limited web presence.
- Encourage culturally grounded AI systems that perform robustly on Indic scripts and linguistic phenomena.
- Highlight the performance gaps between high-resource and low-/extremely low-resource Indic languages, informing future pretraining and data collection efforts.
Users should be aware that the content is drawn from academic examinations, and may over-represent formal, exam-style language relative to everyday usage.
## Evaluation Guidelines
To align with the paper and allow consistent comparison:
- Task: Treat each instance as a multiple-choice QA item with four options.
- Input format: Present `question_text` plus the four options (A–D) to the model.
- Required output: A single option label (`A`, `B`, `C`, or `D`), with no explanation.
- Decoding: Use greedy decoding / temperature = 0 / `do_sample = False` to ensure deterministic outputs.
- Metric: Compute accuracy based on exact match between predicted option and `correct_answer` (case-insensitive after mapping to A–D).
- Analysis (a scoring sketch follows this list):
  - Report overall accuracy.
  - Break down results per language.
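A minimal scoring sketch, assuming a predictions file shaped like the one produced in the Sample Usage sketch above (the column names are illustrative):

```python
# Compute overall, per-language, and per-question-type accuracy from predictions.csv.
import pandas as pd

preds = pd.read_csv("predictions.csv")
preds["correct"] = (
    preds["predicted"].str.strip().str.lower()
    == preds["correct_answer"].str.strip().str.lower()
)

print("Overall accuracy:", preds["correct"].mean())
print(preds.groupby("subject")["correct"].mean())        # per-language breakdown
print(preds.groupby("question_type")["correct"].mean())  # per-format breakdown
```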
## Additional Information

### Citation Information
If you use IndicParam in your research, please cite:
```bibtex
@misc{maheshwari2025indicparambenchmarkevaluatellms,
      title={IndicParam: Benchmark to evaluate LLMs on low-resource Indic Languages},
      author={Ayush Maheshwari and Kaushal Sharma and Vivek Patel and Aditya Maheshwari},
      year={2025},
      eprint={2512.00333},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.00333},
}
```
### License

IndicParam is released under the CC BY-NC 4.0 license for non-commercial research and evaluation.
### Acknowledgments
IndicParam was curated and annotated by the authors and native-speaker annotators as described in the paper.
We acknowledge UGC-NET/NTA for making examination materials publicly accessible, and the broader Indic NLP community for foundational tools and resources.