---
configs:
- config_name: IndicParam
  data_files:
  - path: data*
    split: test
tags:
- benchmark
---

## Dataset Card for IndicParam

### Dataset Summary

IndicParam is a graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of **low- and extremely low-resource Indic languages**.
The dataset contains **13,207 multiple-choice questions (MCQs)** across **11 Indic languages**, plus a separate **Sanskrit–English code-mixed** set, all sourced from official UGC-NET language question papers and answer keys.
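
The benchmark can be loaded with the `datasets` library. A minimal sketch — the repo id `bharatgenai/IndicParam` is an assumption based on the related ParamBench dataset, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Repo id is hypothetical -- replace with this dataset's actual Hub path.
ds = load_dataset("bharatgenai/IndicParam", "IndicParam", split="test")

print(len(ds))                 # expected: 13207
print(ds[0]["question_text"])  # question in the native script
print(ds[0]["correct_answer"]) # gold option label: a/b/c/d
```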

### Supported Tasks

- **`multiple-choice-qa`**: Evaluate LLMs on graduate-level multiple-choice question answering across low-resource Indic languages.
- **`language-understanding-evaluation`**: Assess language-specific competence (morphology, syntax, semantics, discourse) using explicitly labeled questions.
- **`general-knowledge-evaluation`**: Measure factual and domain knowledge in literature, culture, history, and related disciplines.
- **`question-type-evaluation`**: Analyze performance across MCQ formats (Normal MCQ, Assertion–Reason, List Matching, etc.).

### Languages

IndicParam covers the following languages and one code-mixed variant:

- **Low-resource (4)**: Nepali, Gujarati, Marathi, Odia
- **Extremely low-resource (7)**: Dogri, Maithili, Rajasthani, Sanskrit, Bodo, Santali, Konkani
- **Code-mixed**: Sanskrit–English (Sans-Eng)

Scripts:

- **Devanagari**: Nepali, Marathi, Maithili, Konkani, Bodo, Dogri, Rajasthani, Sanskrit
- **Gujarati**: Gujarati
- **Odia (Orya)**: Odia
- **Ol Chiki (Olck)**: Santali

All questions are presented in the **native script** of the target language (or in code-mixed form for Sans-Eng).

---

## Dataset Structure

### Data Instances

Each instance is a single MCQ from a UGC-NET language paper. An example (Maithili):

```json
{
  "unique_question_id": "782166eef1efd963b5db0e8aa42b9a6e",
  "subject": "Maithili",
  "exam_name": "Question Papers of NET Dec. 2012 Maithili Paper III hindi",
  "paper_number": "Question Papers of NET Dec. 2012 Maithili Paper III hindi",
  "question_number": 1,
  "question_text": "'मिथिलाभाषा रामायण' में सीताराम-विवाहक वर्णन भेल अछि -",
  "option_a": "बालकाण्डमें",
  "option_b": "अयोध्याकाण्डमे",
  "option_c": "सुन्दरकाण्डमे",
  "option_d": "उत्तरकाण्डमे",
  "correct_answer": "a",
  "question_type": "Normal MCQ"
}
```

Questions span:

- **Language Understanding (LU)**: linguistics and grammar (phonology, morphology, syntax, semantics, discourse).
- **General Knowledge (GK)**: literature, authors, works, cultural concepts, history, and related factual content.

### Data Fields

- **`unique_question_id`** *(string)*: Unique identifier for each question.
- **`subject`** *(string)*: Name of the language / subject (e.g., `Nepali`, `Maithili`, `Sanskrit`).
- **`exam_name`** *(string)*: Full exam name (UGC-NET session and subject).
- **`paper_number`** *(string)*: Paper identifier as given by UGC-NET.
- **`question_number`** *(int)*: Question index within the original paper.
- **`question_text`** *(string)*: Question text in the target language (or Sanskrit–English code-mixed).
- **`option_a`**, **`option_b`**, **`option_c`**, **`option_d`** *(string)*: Four answer options.
- **`correct_answer`** *(string)*: Correct option label (`a`, `b`, `c`, or `d`).
- **`question_type`** *(string)*: Question format, one of:
  - `Normal MCQ`
  - `Assertion and Reason`
  - `List Matching`
  - `Fill in the blanks`
  - `Identify incorrect statement`
  - `Ordering`
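
The `question_type` field supports slicing by format. A small sketch, assuming `ds` from the loading example above:

```python
# Restrict evaluation to a single question format, e.g. Assertion and Reason.
ar_items = ds.filter(lambda row: row["question_type"] == "Assertion and Reason")
print(len(ar_items))
```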

### Data Splits

IndicParam is provided as a **single evaluation split**:

| Split | Number of Questions |
| ----- | ------------------- |
| test  | 13,207              |

All rows are intended for **evaluation only** (no dedicated training/validation splits).

---

## Language Distribution

The benchmark follows the distribution reported in the IndicParam paper:

| Language   | #Questions | Script       | Code |
| ---------- | ---------- | ------------ | ---- |
| Nepali     | 1,038      | Devanagari   | npi  |
| Marathi    | 1,245      | Devanagari   | mar  |
| Gujarati   | 1,044      | Gujarati     | guj  |
| Odia       | 577        | Orya         | ory  |
| Maithili   | 1,286      | Devanagari   | mai  |
| Konkani    | 1,328      | Devanagari   | gom  |
| Santali    | 873        | Olck         | sat  |
| Bodo       | 1,313      | Devanagari   | brx  |
| Dogri      | 1,027      | Devanagari   | doi  |
| Rajasthani | 1,190      | Devanagari   | –    |
| Sanskrit   | 1,315      | Devanagari   | san  |
| Sans-Eng   | 971        | (code-mixed) | –    |
| **Total**  | **13,207** |              |      |

Each language’s questions are drawn from its respective UGC-NET language papers.
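
The per-language counts can be reproduced from the loaded split (again assuming `ds` from the loading example):

```python
from collections import Counter

# Tally questions per language/subject; should match the table above.
lang_counts = Counter(row["subject"] for row in ds)
for lang, n in sorted(lang_counts.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {n}")
```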

---

## Dataset Creation

### Source and Collection

- **Source**: Official UGC-NET language question papers and answer keys, downloaded from the UGC-NET/NTA website.
- **Scope**: Multiple exam sessions and years, covering language/literature and linguistics papers for each of the 11 languages plus the Sanskrit–English code-mixed set.
- **Extraction** (sketched below):
  - Machine-readable PDFs are parsed directly.
  - Non-selectable PDFs are processed using OCR.
  - All text is normalized while preserving the original script and content.
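
A minimal sketch of that two-branch extraction, using assumed tooling (`pdfplumber`, `pdf2image`, and `pytesseract` are illustrative choices, not necessarily the authors' pipeline):

```python
import pdfplumber
import pytesseract
from pdf2image import convert_from_path

def extract_text(pdf_path: str) -> str:
    # Branch 1: machine-readable PDFs are parsed directly.
    with pdfplumber.open(pdf_path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    if text.strip():
        return text
    # Branch 2: non-selectable PDFs are rasterized and OCR'd.
    # The Tesseract language pack must match the paper's script
    # (e.g. "hin" covers Devanagari; other scripts need their own packs).
    pages = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(p, lang="hin") for p in pages)
```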

### Annotation

In addition to the raw MCQs, each question is annotated by question type (described in detail in the paper):

- **Question type**:
  - Multiple-choice, Assertion–Reason, List Matching, Fill in the blanks, Identify incorrect statement, Ordering.

These annotations support fine-grained analysis of model behavior across **knowledge vs. language ability** and **question format**.

---

## Considerations for Using the Data

### Social Impact

IndicParam is designed to:

- Enable rigorous evaluation of LLMs on **under-represented Indic languages** with substantial speaker populations but very limited web presence.
- Encourage **culturally grounded** AI systems that perform robustly on Indic scripts and linguistic phenomena.
- Highlight the performance gaps between high-resource and low-/extremely low-resource Indic languages, informing future pretraining and data collection efforts.

Users should be aware that the content is drawn from **academic examinations** and may over-represent formal, exam-style language relative to everyday usage.

### Evaluation Guidelines

To align with the paper and allow consistent comparison (a runnable sketch follows this list):

1. **Task**: Treat each instance as a multiple-choice QA item with four options.
2. **Input format**: Present `question_text` plus the four options (`A–D`) to the model.
3. **Required output**: A single option label (`A`, `B`, `C`, or `D`), with no explanation.
4. **Decoding**: Use **greedy decoding / temperature = 0 / `do_sample = False`** to ensure deterministic outputs.
5. **Metric**: Compute **accuracy** based on exact match between the predicted option and `correct_answer` (case-insensitive after mapping to A–D).
6. **Analysis**:
   - Report **overall accuracy**.
   - Break down results **per language**.
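
A minimal sketch of that protocol (the `generate` callable is a stand-in for any deterministic, temperature-0 model call):

```python
import re
from collections import defaultdict

def format_prompt(row: dict) -> str:
    # Guideline 2: question text plus the four options, asking for one letter.
    return (
        f"{row['question_text']}\n"
        f"A. {row['option_a']}\n"
        f"B. {row['option_b']}\n"
        f"C. {row['option_c']}\n"
        f"D. {row['option_d']}\n"
        "Answer with a single letter (A, B, C, or D):"
    )

def evaluate(rows, generate) -> tuple[float, dict]:
    total, hits = defaultdict(int), defaultdict(int)
    for row in rows:
        reply = generate(format_prompt(row))
        # Guideline 5: exact match on the option label, case-insensitive.
        match = re.search(r"\b[ABCD]\b", reply.strip().upper())
        pred = match.group(0) if match else ""
        lang = row["subject"]
        total[lang] += 1
        hits[lang] += int(pred == row["correct_answer"].upper())
    overall = sum(hits.values()) / sum(total.values())
    per_language = {lang: hits[lang] / total[lang] for lang in total}
    return overall, per_language
```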

---

## Additional Information

### Citation Information

If you use IndicParam in your research, please cite:

```bibtex
}
```

For related Hindi-only evaluation and question-type taxonomy, please also see and cite [ParamBench](https://huggingface.co/datasets/bharatgenai/ParamBench).

### License

IndicParam is released for **non-commercial research and evaluation**.

### Acknowledgments

IndicParam was curated and annotated by the authors and native-speaker annotators as described in the paper.
We acknowledge UGC-NET/NTA for making examination materials publicly accessible, and the broader Indic NLP community for foundational tools and resources.