---
configs:
- config_name: ParamBench
  data_files:
  - path: ParamBench*
    split: test
language:
- hi
license: mit
tags:
- benchmark
---
# Dataset Card for ParamBench
## Dataset Description

- **Homepage:** [ParamBench GitHub Repository](https://github.com/bharatgenai/ParamBench)
- **Repository:** https://github.com/bharatgenai/ParamBench
- **Paper:** [ParamBench: A Graduate-Level Benchmark for Evaluating LLM Understanding on Indic Subjects](https://arxiv.org/abs/2508.16185)
### Dataset Summary
ParamBench is a comprehensive graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of Indic subjects. The dataset contains 17,275 multiple-choice questions in Hindi across 21 diverse subjects from Indian competitive examinations.
This benchmark addresses a critical gap in evaluating LLMs on culturally and linguistically diverse content, specifically focusing on India-specific knowledge domains that are underrepresented in existing benchmarks.
### Supported Tasks
This dataset supports the following tasks:
- **multiple-choice-qa**: Evaluating language models on multiple-choice question answering in Hindi
- **cultural-knowledge-evaluation**: Assessing LLM understanding of India-specific cultural and academic content
- **subject-wise-evaluation**: Fine-grained analysis of model performance across 21 different subjects
- **question-type-evaluation**: Detailed analysis of model performance across different question types (Normal MCQ, Assertion and Reason, Blank-filling, etc.)
### Languages
The dataset is in Hindi (hi).
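The test split can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hugging Face Hub under a repo id matching the GitHub organization (the exact id `bharatgenai/ParamBench` is an assumption; substitute the actual Hub path):

```python
from datasets import load_dataset

# Repo id is assumed from the GitHub organization; replace with the
# actual Hugging Face Hub path if it differs.
dataset = load_dataset("bharatgenai/ParamBench", split="test")

print(len(dataset))           # expected: 17275 questions
print(dataset[0]["subject"])  # e.g. "Anthropology"
```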
## Dataset Structure

### Data Instances
An example from the dataset:
```json
{
  "unique_question_id": "5d210d8db510451d6bf01b493a0f4430",
  "subject": "Anthropology",
  "exam_name": "Question Papers of NET Dec. 2012 Anthropology Paper III hindi",
  "paper_number": "Question Papers of NET Dec. 2012 Anthropology Paper III hindi",
  "question_number": 1,
  "question_text": "भारतीय मध्य पाषाणकाल निम्नलिखित में से किस स्थान पर सर्वोत्तम प्रदर्शित है ?",
  "option_a": "गिद्दालूर",
  "option_b": "नेवासा",
  "option_c": "टेरी समूह",
  "option_d": "बागोर",
  "correct_answer": "D",
  "question_type": "Normal MCQ"
}
```
### Data Fields
- `unique_question_id` (string): Unique identifier for each question
- `subject` (string): One of 21 subject categories
- `exam_name` (string): Name of the source examination
- `paper_number` (string): Paper/section identifier
- `question_number` (int): Question number in the original exam
- `question_text` (string): The question text in Hindi
- `option_a` (string): First option
- `option_b` (string): Second option
- `option_c` (string): Third option
- `option_d` (string): Fourth option
- `correct_answer` (string): Correct option (A, B, C, or D)
- `question_type` (string): Type of question (Normal MCQ, Assertion and Reason, etc.)
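These fields map directly onto a four-option prompt. A minimal formatting sketch; the template, including the trailing Hindi cue "उत्तर:" ("Answer:"), is illustrative rather than a prescribed evaluation prompt:

```python
def format_prompt(example: dict) -> str:
    """Render one ParamBench row as a four-option MCQ prompt.

    The trailing "उत्तर:" ("Answer:") cue is illustrative; adapt the
    template to your model's preferred prompt format.
    """
    return (
        f"{example['question_text']}\n"
        f"A. {example['option_a']}\n"
        f"B. {example['option_b']}\n"
        f"C. {example['option_c']}\n"
        f"D. {example['option_d']}\n"
        "उत्तर:"
    )
```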
### Data Splits
The dataset contains a single test split with 17,275 questions.
| Split | Number of Questions |
|---|---|
| test | 17,275 |
### Subject Distribution
The 21 subjects covered in ParamBench (sorted by number of questions):
| Subject | Number of Questions | Percentage |
|---|---|---|
| Education | 1,199 | 6.94% |
| Sociology | 1,191 | 6.89% |
| Anthropology | 1,139 | 6.60% |
| Psychology | 1,102 | 6.38% |
| Archaeology | 1,076 | 6.23% |
| History | 996 | 5.77% |
| Comparative Study of Religions | 954 | 5.52% |
| Law | 951 | 5.51% |
| Indian Culture | 927 | 5.37% |
| Economics | 919 | 5.32% |
| Current Affairs | 833 | 4.82% |
| Philosophy | 817 | 4.73% |
| Political Science | 774 | 4.48% |
| Drama and Theatre | 649 | 3.76% |
| Sanskrit | 639 | 3.70% |
| Karnataka Music | 617 | 3.57% |
| Tribal and Regional Language | 611 | 3.54% |
| Person on Instruments | 596 | 3.45% |
| Defence and Strategic Studies | 521 | 3.02% |
| Music | 433 | 2.51% |
| Yoga | 331 | 1.92% |
| Total | 17,275 | 100% |
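For subject-wise or question-type analysis, rows can be sliced on the `subject` and `question_type` fields. A sketch continuing from the loading example above, using the `datasets` filtering API (the chosen subject and question-type values come from the tables and fields in this card):

```python
# Slice the benchmark for fine-grained analysis, e.g. only
# "Assertion and Reason" questions from the History subject.
history_ar = dataset.filter(
    lambda ex: ex["subject"] == "History"
    and ex["question_type"] == "Assertion and Reason"
)
print(len(history_ar))
```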
## Considerations for Using the Data
### Social Impact
This dataset aims to:
- Promote development of culturally-aware AI systems
- Reduce bias in LLMs towards Western-centric knowledge
- Support research in multilingual and multicultural AI
- Enhance LLM capabilities for Indian languages and contexts
### Evaluation Guidelines
When evaluating models on ParamBench:
- Use greedy decoding (`temperature=0`) for consistent results
- Evaluate responses based on exact match with correct options (A, B, C, or D)
- Consider subject-wise performance for detailed analysis
- Report both overall accuracy and per-subject breakdowns
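A minimal scoring sketch following these guidelines: exact match on option letters, with overall and per-subject accuracy. How predictions are produced (e.g. greedy decoding with `temperature=0`) is left to your harness; `predictions` here is a hypothetical mapping from `unique_question_id` to a predicted letter:

```python
from collections import defaultdict

def evaluate(predictions: dict, dataset) -> dict:
    """Exact-match scoring of A/B/C/D predictions against ParamBench.

    `predictions` maps unique_question_id -> predicted option letter.
    Unanswered questions count as incorrect.
    """
    overall_correct = 0
    per_subject = defaultdict(lambda: [0, 0])  # subject -> [correct, total]
    for row in dataset:
        pred = predictions.get(row["unique_question_id"], "").strip().upper()
        hit = int(pred == row["correct_answer"])
        overall_correct += hit
        per_subject[row["subject"]][0] += hit
        per_subject[row["subject"]][1] += 1
    return {
        "overall_accuracy": overall_correct / len(dataset),
        "per_subject_accuracy": {s: c / t for s, (c, t) in per_subject.items()},
    }
```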
## Additional Information
Key contributors include:
- Ayush Maheshwari
- Kaushal Sharma
- Vivek Patel
- Aditya Maheshwari
We thank all data annotators involved in the dataset curation process.
### Citation Information
If you use ParamBench in your research, please cite:
```bibtex
@article{parambench2025,
  title={ParamBench: A Graduate-Level Benchmark for Evaluating LLM Understanding on Indic Subjects},
  author={[Author Names]},
  journal={arXiv preprint arXiv:2508.16185},
  year={2025},
  url={https://arxiv.org/abs/2508.16185}
}
```
### License
This dataset is released under the MIT License.
### Acknowledgments
We thank all the contributors who helped create this benchmark.
**Note:** This dataset is part of our ongoing effort to make AI systems more inclusive and culturally aware. We encourage researchers to use this benchmark to evaluate and improve their models' understanding of Indic content.