Commit 11e7747 (verified) by vjdevane · 1 parent: aadf76b

Update README.md

Files changed (1): README.md (+147, -26)
README.md (updated):

---
configs:
- config_name: IndicParam
  data_files:
  - path: data*
    split: test
tags:
- benchmark
- low-resource
- indic-languages
task_categories:
- question-answering
- text-classification
license: cc-by-nc-4.0
language:
- npi
- guj
- mar
- ory
- doi
- mai
- san
- brx
- sat
- gom
---

## Dataset Card for IndicParam

[Paper](https://arxiv.org/abs/2512.00333) | [Code](https://github.com/ayushbits/IndicParam)

### Dataset Summary

IndicParam is a graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of **low- and extremely low-resource Indic languages**.
The dataset contains **13,207 multiple-choice questions (MCQs)** across **11 Indic languages**, plus a separate **Sanskrit–English code-mixed** set, all sourced from official UGC-NET language question papers and answer keys.

### Supported Tasks

- **`multiple-choice-qa`**: Evaluate LLMs on graduate-level multiple-choice question answering across low-resource Indic languages.
- **`language-understanding-evaluation`**: Assess language-specific competence (morphology, syntax, semantics, discourse) using explicitly labeled questions.
- **`general-knowledge-evaluation`**: Measure factual and domain knowledge in literature, culture, history, and related disciplines.
- **`question-type-evaluation`**: Analyze performance across MCQ formats (Normal MCQ, Assertion–Reason, List Matching, etc.).

### Languages

IndicParam covers the following languages and one code-mixed variant:

- **Low-resource (4)**: Nepali, Gujarati, Marathi, Odia
- **Extremely low-resource (7)**: Dogri, Maithili, Rajasthani, Sanskrit, Bodo, Santali, Konkani
- **Code-mixed**: Sanskrit–English (Sans-Eng)

Scripts:

- **Devanagari**: Nepali, Marathi, Maithili, Konkani, Bodo, Dogri, Rajasthani, Sanskrit
- **Gujarati**: Gujarati
- **Odia (Orya)**: Odia
- **Ol Chiki (Olck)**: Santali

All questions are presented in the **native script** of the target language (or in code-mixed form for Sans-Eng).

---

## Dataset Structure

### Data Instances

Each instance is a single MCQ from a UGC-NET language paper. An example (Maithili; only the identifier field is reproduced here, see Data Fields below for the full schema):

```json
{
  "unique_question_id": "782166eef1efd963b5db0e8aa42b9a6e"
}
```

Questions span:

- **Language Understanding (LU)**: linguistics and grammar (phonology, morphology, syntax, semantics, discourse).
- **General Knowledge (GK)**: literature, authors, works, cultural concepts, history, and related factual content.

### Data Fields

- **`unique_question_id`** *(string)*: Unique identifier for each question.
- **`subject`** *(string)*: Name of the language / subject (e.g., `Nepali`, `Maithili`, `Sanskrit`).
- **`exam_name`** *(string)*: Full exam name (UGC-NET session and subject).
- Further fields cover the question text (`question_text`), the four answer options, the correct answer (`correct_answer`), and the annotated question type, with values such as:
  - `Identify incorrect statement`
  - `Ordering`

### Data Splits

IndicParam is provided as a **single evaluation split**:

| Split | Number of Questions |
| ----- | ------------------- |
| test  | 13,207              |

All rows are intended for **evaluation only** (no dedicated training/validation splits).
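
A minimal loading sketch with the Hugging Face `datasets` library; the repository id below is a placeholder, so substitute this dataset's actual Hub id:

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub id of this dataset.
ds = load_dataset("<hub-org>/IndicParam", name="IndicParam", split="test")

print(len(ds))                               # expected: 13207
print(Counter(ds["subject"]).most_common())  # questions per language / subject
print(ds[0]["unique_question_id"], ds[0]["exam_name"])
```

The per-`subject` counts should match the Language Distribution table below.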

---

## Language Distribution

The benchmark follows the distribution reported in the IndicParam paper:

| Language  | #Questions | Script       | Code |
| --------- | ---------- | ------------ | ---- |
| Nepali    | 1,038      | Devanagari   | npi  |
| …         | …          | …            | …    |
| Sans-Eng  | 971        | (code-mixed) | –    |
| **Total** | **13,207** |              |      |

Each language’s questions are drawn from its respective UGC-NET language papers.

---

## Dataset Creation

### Source and Collection

- **Source**: Official UGC-NET language question papers and answer keys, downloaded from the UGC-NET/NTA website.
- **Scope**: Multiple exam sessions and years, covering language/literature and linguistics papers for each of the 11 languages plus the Sanskrit–English code-mixed set.
- **Extraction**:
  - Machine-readable PDFs are parsed directly.
  - Non-selectable PDFs are processed using OCR.
  - All text is normalized while preserving the original script and content.

### Annotation

In addition to the raw MCQs, each question is annotated by question type (described in detail in the paper):

- **Question type**: Multiple-choice, Assertion–Reason, List Matching, Fill in the blanks, Identify incorrect statement, Ordering.

These annotations support fine-grained analysis of model behavior across **knowledge vs. language ability** and **question format**.
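
For example, the annotation can be used to break down or slice the benchmark by format. A small sketch, assuming the annotation is exposed as a column named `question_type` (adjust to the actual column name in the released files):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("<hub-org>/IndicParam", name="IndicParam", split="test")  # placeholder repo id

# "question_type" is an assumed column name for the question-type annotation.
print(Counter(ds["question_type"]))

# Keep only one format (e.g. Assertion-Reason questions) for a focused evaluation.
assertion_reason = ds.filter(lambda row: "Assertion" in row["question_type"])
print(len(assertion_reason))
```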

---

## Sample Usage

The GitHub repository provides several Python scripts to evaluate models on the IndicParam dataset. You can adapt these scripts for your specific use case.

Typical usage pattern, as described in the GitHub README:

- **Prepare environment**: Install Python dependencies (see `requirements.txt` if present in the GitHub repository) and configure any required API keys or model caches.
- **Run evaluation**: Invoke one of the scripts with your chosen model configuration and an output directory; the scripts will:
  - Load `data.csv`
  - Construct language-aware MCQ prompts
  - Record model predictions and compute accuracy

Scripts available in the [GitHub repository](https://github.com/ayushbits/IndicParam):

- `evaluate_open_models.py`: Evaluates open-weight Hugging Face models on IndicParam.
- `evaluate_gpt_oss.py`: Runs the GPT-OSS-120B model on the same data.
- `evaluate_openrouter.py`: Benchmarks closed models via the OpenRouter API.

Script-level arguments and options are documented via the `-h`/`--help` flags within each script.

```bash
# Example of running evaluation with an open-weight model:
python evaluate_open_models.py --model_name_or_path google/gemma-2b --output_dir results/gemma-2b

# Example of running evaluation with GPT-OSS:
python evaluate_gpt_oss.py --model_name_or_path openai/gpt-oss-120b --output_dir results/gpt-oss-120b
```

---

## Considerations for Using the Data

### Social Impact

IndicParam is designed to:

- Enable rigorous evaluation of LLMs on **under-represented Indic languages** with substantial speaker populations but very limited web presence.
- Encourage **culturally grounded** AI systems that perform robustly on Indic scripts and linguistic phenomena.
- Highlight the performance gaps between high-resource and low-/extremely low-resource Indic languages, informing future pretraining and data collection efforts.

Users should be aware that the content is drawn from **academic examinations** and may over-represent formal, exam-style language relative to everyday usage.

### Evaluation Guidelines

To align with the paper and allow consistent comparison (a code sketch follows the list):

1. **Task**: Treat each instance as a multiple-choice QA item with four options.
2. **Input format**: Present `question_text` plus the four options (`A`–`D`) to the model.
3. **Required output**: A single option label (`A`, `B`, `C`, or `D`), with no explanation.
4. **Decoding**: Use **greedy decoding / temperature = 0 / `do_sample = False`** to ensure deterministic outputs.
5. **Metric**: Compute **accuracy** based on exact match between predicted option and `correct_answer` (case-insensitive after mapping to A–D).
6. **Analysis**:
   - Report **overall accuracy**.
   - Break down results **per language**.
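
A minimal, self-contained sketch of this protocol for an open-weight causal LM via `transformers`. The prompt template, the option column names (`option_a` … `option_d`), the assumption that `correct_answer` stores the option letter, and the model id are illustrative, not the repository's exact script:

```python
from collections import defaultdict

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2b"  # any open-weight causal LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")

ds = load_dataset("<hub-org>/IndicParam", name="IndicParam", split="test")  # placeholder repo id

def build_prompt(row):
    # question_text plus the four options, asking for a single letter (guidelines 1-3).
    return (
        f"{row['question_text']}\n"
        f"A. {row['option_a']}\nB. {row['option_b']}\n"
        f"C. {row['option_c']}\nD. {row['option_d']}\n"
        "Answer with a single letter (A, B, C, or D):"
    )

correct, total = 0, 0
per_language = defaultdict(lambda: [0, 0])  # subject -> [correct, total]

for row in ds:
    inputs = tokenizer(build_prompt(row), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)  # greedy decoding (guideline 4)
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    pred = next((ch for ch in completion.upper() if ch in "ABCD"), None)  # first option letter emitted
    gold = str(row["correct_answer"]).strip().upper()[:1]                # assumed to be the letter A-D

    hit = int(pred == gold)
    correct += hit
    total += 1
    per_language[row["subject"]][0] += hit
    per_language[row["subject"]][1] += 1

print(f"Overall accuracy: {correct / total:.4f}")
for lang, (c, n) in sorted(per_language.items()):
    print(f"{lang}: {c / n:.4f} over {n} questions")
```

For API-served or closed models, the same prompt and scoring protocol applies; only the generation call changes.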

---

## Additional Information

### Citation Information

If you use IndicParam in your research, please cite:

```bibtex
@misc{maheshwari2025indicparambenchmarkevaluatellms,
  title={IndicParam: Benchmark to evaluate LLMs on low-resource Indic Languages},
  author={Ayush Maheshwari and Kaushal Sharma and Vivek Patel and Aditya Maheshwari},
  year={2025},
  eprint={2512.00333},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.00333},
}
```

### License

IndicParam is released under **CC BY-NC 4.0** for **non-commercial research and evaluation**.

### Acknowledgments

IndicParam was curated and annotated by the authors and native-speaker annotators as described in the paper.
We acknowledge UGC-NET/NTA for making examination materials publicly accessible, and the broader Indic NLP community for foundational tools and resources.