---
language:
- hi
- ta
- en
license: cc-by-4.0
size_categories:
- 100K<n<1M
task_categories:
- text-to-speech
annotations_creators:
- crowdsourced
pretty_name: MANGO
dataset_info:
features:
- name: Rater_ID
dtype: int64
- name: FS2_Score
dtype: int64
- name: VITS_Score
dtype: int64
- name: ST2_Score
dtype: int64
- name: ANC_Score
dtype: int64
- name: REF_Score
dtype: int64
- name: FS2_Audio
dtype: string
- name: VITS_Audio
dtype: string
- name: ST2_Audio
dtype: string
- name: ANC_Audio
dtype: string
- name: REF_Audio
dtype: string
splits:
- name: Tamil__MUSHRA_DG_NMR
num_bytes: 421059
num_examples: 2000
- name: Hindi__MUSHRA_DG
num_bytes: 460394
num_examples: 2000
- name: Hindi__MUSHRA_NMR
num_bytes: 2344032
num_examples: 10200
- name: Hindi__MUSHRA_DG_NMR
num_bytes: 459746
num_examples: 2000
- name: Tamil__MUSHRA_NMR
num_bytes: 2034556
num_examples: 9700
- name: Tamil__MUSHRA_DG
num_bytes: 420012
num_examples: 2000
- name: Tamil__MUSHRA
num_bytes: 2098507
num_examples: 10000
- name: Hindi__MUSHRA
num_bytes: 2601302
num_examples: 11300
- name: English__MUSHRA
num_bytes: 170945
num_examples: 900
- name: English__MUSHRA_DG_NMR
num_bytes: 176879
num_examples: 930
download_size: 13395762
dataset_size: 13395762
configs:
- config_name: default
data_files:
- split: Tamil__MUSHRA_DG_NMR
path: csvs/tamil_mushra_dg_nmr.csv
- split: Hindi__MUSHRA_DG
path: csvs/hindi_mushra_dg.csv
- split: Hindi__MUSHRA_NMR
path: csvs/hindi_mushra_nmr.csv
- split: Hindi__MUSHRA_DG_NMR
path: csvs/hindi_mushra_dg_nmr.csv
- split: Tamil__MUSHRA_NMR
path: csvs/tamil_mushra_nmr.csv
- split: Tamil__MUSHRA_DG
path: csvs/tamil_mushra_dg.csv
- split: Tamil__MUSHRA
path: csvs/tamil_mushra.csv
- split: Hindi__MUSHRA
path: csvs/hindi_mushra.csv
- split: English__MUSHRA
path: csvs/english_mushra.csv
- split: English__MUSHRA_DG_NMR
path: csvs/english_mushra_dg_nmr.csv
tags:
- speech
- evaluation
- mushra
- text-to-speech
- human-evaluation
- multilingual
---
# MANGO: A Corpus of Human Ratings for Speech
**MANGO** (*MUSHRA Assessment corpus using Native listeners and Guidelines to understand human Opinions at scale*) is the first large-scale dataset designed for evaluating Text-to-Speech (TTS) systems in Indian languages.
### Key Features:
- **255,150 human ratings** of TTS-generated outputs and ground-truth human speech.
- Covers two major Indian languages, **Hindi** and **Tamil**, as well as **English**.
- Based on the **MUSHRA** (Multiple Stimuli with Hidden Reference and Anchor) test methodology.
- Ratings are provided on a continuous scale from **0 to 100**, with discrete quality categories (a small helper for bucketing scores is sketched after this list):
- **100-80**: Excellent
- **80-60**: Good
- **60-40**: Fair
- **40-20**: Poor
- **20-0**: Bad
- Includes evaluations involving:
  - *MUSHRA*: with an explicitly mentioned high-quality reference.
  - *MUSHRA-NMR*: without an explicitly mentioned high-quality reference.
  - *MUSHRA-DG*: with detailed guidelines across fine-grained dimensions.
  - *MUSHRA-DG-NMR*: with detailed guidelines across fine-grained dimensions and without an explicitly mentioned high-quality reference.
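Ratings are stored as integers (see the `*_Score` columns in the schema above), so mapping a score back to its quality band is straightforward. A minimal sketch of such a helper (hypothetical utility, not shipped with the dataset; scores on a band boundary are assigned to the higher band here):

```python
def mushra_category(score: int) -> str:
    """Map a 0-100 MUSHRA rating to its quality band (boundary scores go to the higher band)."""
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    if score >= 40:
        return "Fair"
    if score >= 20:
        return "Poor"
    return "Bad"

print(mushra_category(76))  # -> Good
```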
### Available Splits
The dataset includes the following splits, organized by language and test type; split names follow the `{Language}__{TestType}` pattern (see the sketch after the table).
| **Split**                  | **Number of Ratings** |
|----------------------------|:---------------------:|
| **Hindi__MUSHRA**          |        56,500         |
| **Hindi__MUSHRA_DG**       |        10,000         |
| **Hindi__MUSHRA_DG_NMR**   |        10,000         |
| **Hindi__MUSHRA_NMR**      |        51,000         |
| **Tamil__MUSHRA**          |        50,000         |
| **Tamil__MUSHRA_DG**       |        10,000         |
| **Tamil__MUSHRA_DG_NMR**   |        10,000         |
| **Tamil__MUSHRA_NMR**      |        48,500         |
| **English__MUSHRA**        |         4,500         |
| **English__MUSHRA_DG_NMR** |         4,650         |
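Because split names compose the language and the test type, a specific subset can be selected programmatically. A minimal sketch, assuming the default config can be loaded directly from the Hub (the full snapshot download in Getting Started below is only needed if you also want the audio files):

```python
from datasets import load_dataset

language, test_type = "Tamil", "MUSHRA_DG"  # any pair from the table above
split_name = f"{language}__{test_type}"     # -> "Tamil__MUSHRA_DG"

ratings = load_dataset("ai4bharat/MANGO", split=split_name)
print(len(ratings))  # 2000 rows, i.e. 10,000 ratings (5 systems rated per row)
```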
### Getting Started
```python
import os

from datasets import load_dataset, Audio
from huggingface_hub import snapshot_download

def get_audio_paths(example):
    # Convert the relative audio paths stored in the CSVs into absolute paths
    # inside the downloaded snapshot so they can be decoded as Audio features.
    for column in example.keys():
        if "Audio" in column and isinstance(example[column], str):
            example[column] = os.path.join(download_dir, example[column])
    return example

# Download the full repository (CSVs + audio files)
repo_id = "ai4bharat/MANGO"
download_dir = snapshot_download(repo_id=repo_id, repo_type="dataset")

# Load one split and resolve the audio paths
dataset = load_dataset(download_dir, split="Hindi__MUSHRA")
dataset = dataset.map(get_audio_paths)

# Cast audio columns so they decode to waveforms on access
for column in dataset.column_names:
    if "Audio" in column:
        dataset = dataset.cast_column(column, Audio())

# Explore
print(dataset)
'''
Dataset({
    features: ['Rater_ID', 'FS2_Score', 'VITS_Score', 'ST2_Score', 'ANC_Score',
               'REF_Score', 'FS2_Audio', 'VITS_Audio', 'ST2_Audio', 'ANC_Audio',
               'REF_Audio'],
    num_rows: 11300
})
'''

# Print the first instance
print(dataset[0])
'''
{'Rater_ID': 389, 'FS2_Score': 16, 'VITS_Score': 76, 'ST2_Score': 28,
 'ANC_Score': 40, 'REF_Score': 100, 'FS2_Audio': {'path': ...
'''

# List all available splits
all_splits = load_dataset(download_dir)
print("Splits:", all_splits.keys())
'''
Splits: dict_keys(['Tamil__MUSHRA_DG_NMR', 'Hindi__MUSHRA_DG',
'Hindi__MUSHRA_NMR', 'Hindi__MUSHRA_DG_NMR', 'Tamil__MUSHRA_NMR',
'Tamil__MUSHRA_DG', 'Tamil__MUSHRA', 'Hindi__MUSHRA',
'English__MUSHRA', 'English__MUSHRA_DG_NMR'])
'''
```
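Once the audio columns are cast, each entry decodes to a waveform on access. A minimal sketch of reading one decoded entry (using the cast single-split `dataset` from the snippet above):

```python
# Each decoded entry of an *_Audio column is a dict with 'path', 'array' (NumPy waveform)
# and 'sampling_rate'.
first = dataset[0]
reference = first["REF_Audio"]
print(reference["sampling_rate"], reference["array"].shape)
```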
### Why Use MANGO?
- Addresses limitations of traditional **MOS** and **CMOS** tests.
- Enables robust benchmarking for:
- Comparative analysis across multiple TTS systems.
- Evaluations in diverse linguistic contexts.
- Large-scale studies with multiple raters.
We believe this dataset is a valuable resource for researchers and practitioners working on speech synthesis evaluation and related fields.
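For instance, a comparative analysis across systems reduces to aggregating the `*_Score` columns of a split; a minimal sketch (again assuming the default config loads directly from the Hub):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("ai4bharat/MANGO", split="Hindi__MUSHRA")

# Mean and standard deviation of ratings per system, over all raters and items.
for column in [c for c in ds.column_names if c.endswith("_Score")]:
    scores = np.array(ds[column])
    print(f"{column}: mean={scores.mean():.1f}, std={scores.std():.1f}")
```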
### Quick Overview of TTS Systems
1. **Dataset:** All Hindi and Tamil TTS systems were trained on the [IndicTTS](https://www.iitm.ac.in/donlab/indictts/database) dataset. For English, we use models trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).
2. **Models:** FastSpeech2, VITS, StyleTTS2, XTTS
### Citation
```
@article{ai4bharat2025rethinking,
title={Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation},
author={Praveen Srinivasa Varadhan and Amogh Gulati and Ashwin Sankar and Srija Anand and Anirudh Gupta and Anirudh Mukherjee and Shiva Kumar Marepally and Ankur Bhatia and Saloni Jaju and Suvrat Bhooshan and Mitesh M. Khapra},
journal={Transactions on Machine Learning Research},
year={2025},
url={https://openreview.net/forum?id=oYmRiWCQ1W},
}
```
### License
This dataset is released under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).