---
license: cc-by-4.0
language:
- zh
- ja
tags:
- singing
- MOS
size_categories:
- 1K<n<10K
---

> **[Important Notice]**
> We have officially released the **[SingMOS-Pro dataset](https://huggingface.co/datasets/TangRain/SingMOS-Pro)**, the official benchmark for singing voice quality assessment.

---

## πŸ“š Related Resources

- 🧾 **Paper:** [*SingMOS-Pro: A Comprehensive Benchmark for Singing Quality Assessment*](https://arxiv.org/abs/2510.01812)
  β†’ Describes dataset design, annotation methodology, and experiments.
- 🎢 **VoiceMOS 2024 Singing Track:** [SingMOS_v1](https://huggingface.co/datasets/TangRain/SingMOS_v1)
  β†’ For reproducing or comparing with the official VoiceMOS 2024 track.
- πŸ€– **Pretrained Model:** [Singing MOS Predictor](https://github.com/South-Twilight/SingMOS/tree/main)
  β†’ Ready-to-use MOS prediction models trained on SingMOS and SingMOS-Pro.

---

## 🧩 Overview

**SingMOS-Pro** contains **7,981** Chinese and Japanese vocal clips, totaling **11.15 hours** of singing recordings. Most samples are recorded at **16 kHz**, with a few at **24 kHz** or **44.1 kHz**.

This dataset enables large-scale research on **singing quality assessment** for tasks such as:

- Singing voice synthesis (SVS)
- Singing voice conversion (SVC)
- MOS prediction and correlation modeling

To use the dataset effectively, please refer to the following files:

| File | Description |
|------|-------------|
| `split.json` | Defines train/test partitions |
| `score.json` | Provides system- and utterance-level MOS annotations |
| `sys_info.json` | Describes system metadata (type, model, dataset, etc.) |
| `metadata.csv` | Flat-format summary of all utterances and attributes |

---

## πŸ“‚ Dataset Structure

```
SingMOS-Pro
β”œβ”€β”€ wavs/                    # Singing audio clips
β”‚   β”œβ”€β”€ sys0001-utt0001.wav
β”‚   β”œβ”€β”€ ...
β”œβ”€β”€ info/                    # Metadata and annotations
β”‚   β”œβ”€β”€ split.json
β”‚   β”œβ”€β”€ score.json
β”‚   β”œβ”€β”€ sys_info.json
└── metadata.csv
```

---

## 🧾 File Descriptions
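All four metadata files can be loaded together. Below is a minimal sketch, assuming the repository has been downloaded locally (here via `huggingface_hub.snapshot_download`, though any local copy works) and that the files sit at the paths shown in the tree above:

```python
import json
from pathlib import Path

import pandas as pd
from huggingface_hub import snapshot_download

# Fetch a local snapshot of the dataset repository.
root = Path(snapshot_download("TangRain/SingMOS-Pro", repo_type="dataset"))

# Annotation files (paths follow the directory tree above; adjust if your copy differs).
split = json.loads((root / "info" / "split.json").read_text(encoding="utf-8"))
score = json.loads((root / "info" / "score.json").read_text(encoding="utf-8"))
sys_info = json.loads((root / "info" / "sys_info.json").read_text(encoding="utf-8"))
metadata = pd.read_csv(root / "metadata.csv")

print(f"{len(metadata)} utterances, {len(sys_info)} systems, {len(split)} sub-datasets")
```

Each file is described in detail in the following subsections.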
### 1️⃣ `split.json` β€” Dataset Partition File

Defines the train/test splits for each sub-dataset.

**Example:**

```json
{
  "dataset_name": {
    "train": ["utt0001", "utt0002", "utt0003"],
    "test": ["utt0101", "utt0102"]
  }
}
```

**Field Descriptions:**

| Field | Description |
| -------------- | ------------------------------------------------------- |
| `dataset_name` | Name of the sub-dataset (e.g., `acesinger`, `opencpop`)  |
| `train`        | List of utterance IDs used for training                  |
| `test`         | List of utterance IDs used for testing                   |

πŸ”Ή **Usage:** Load this file to ensure consistent dataset splits across experiments, for example:
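A minimal sketch, assuming `split.json` sits under `info/` as in the directory tree; the sub-dataset key `opencpop` is used purely for illustration:

```python
import json

with open("info/split.json", encoding="utf-8") as f:
    split = json.load(f)

# Pick one sub-dataset (key name is illustrative) and read its partitions.
train_ids = split["opencpop"]["train"]
test_ids = split["opencpop"]["test"]
print(f"opencpop: {len(train_ids)} train / {len(test_ids)} test utterances")
```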
---
### 2️⃣ `score.json` β€” MOS Annotation File

Contains both **system-level** and **utterance-level** MOS (Mean Opinion Score) annotations.

**Example:**

```json
{
  "system": {
    "sys0001": {
      "score": 3.85,
      "ci": 0.07
    }
  },
  "utterance": {
    "utt0001": {
      "sys_id": "sys0001",
      "wav": "wavs/sys0001-utt0001.wav",
      "score": {
        "mos": 3.9,
        "scores": [3.5, 4.0, 4.2],
        "judges": ["J01", "J02", "J03"]
      }
    }
  }
}
```

**Field Descriptions:**

| Field | Description |
| -------------- | --------------------------------------------- |
| `system`       | Stores system-level MOS results                |
| `sys_id`       | Unique system identifier (e.g., `sys0001`)     |
| `score`        | Average MOS of the system                      |
| `ci`           | Confidence interval for the system-level MOS   |
| `utterance`    | Stores utterance-level annotations             |
| `utt_id`       | Unique utterance identifier                    |
| `wav`          | Relative path to the audio file                |
| `score.mos`    | Mean MOS for the utterance                     |
| `score.scores` | List of individual ratings from judges         |
| `score.judges` | List of judge identifiers                      |

πŸ”Ή **Usage:**

* Evaluate system performance by comparing the `system` and `utterance` levels (see the sketch below).
* Compute correlations, inter-rater consistency, or build MOS prediction models.
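For instance, a minimal sketch that re-derives per-system MOS from the utterance-level ratings and compares it with the released system-level scores. Field names follow the example above; this is an illustration, not the official evaluation script:

```python
import json
from collections import defaultdict
from statistics import mean

with open("info/score.json", encoding="utf-8") as f:
    score = json.load(f)

# Group utterance-level MOS by system ID.
per_system = defaultdict(list)
for utt in score["utterance"].values():
    per_system[utt["sys_id"]].append(utt["score"]["mos"])

# Compare with the released system-level MOS and confidence interval.
for sys_id, entry in score["system"].items():
    utt_mos = mean(per_system[sys_id]) if per_system[sys_id] else float("nan")
    print(f"{sys_id}: system MOS = {entry['score']:.2f} Β± {entry['ci']:.2f}, "
          f"mean utterance MOS = {utt_mos:.2f}")
```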
---
### 3️⃣ `sys_info.json` β€” System Metadata File

Describes each singing system’s **category**, **dataset source**, **model**, and **sampling rate**.

**Example:**

```json
{
  "sys0001": {
    "type": "svs",
    "dataset": "Opencpop",
    "model": "DiffSinger",
    "sample_rate": 16000,
    "tag": {
      "domain_id": "batch1",
      "other_info": "default"
    }
  }
}
```

**Field Descriptions:**

| Field | Description |
| ---------------- | ------------------------------------------------------------------------------------------------------ |
| `sys_id`         | Unique system identifier                                                                                 |
| `type`           | System type: `svs` (singing voice synthesis), `svc` (singing voice conversion), or `gt` (ground truth)   |
| `dataset`        | Original dataset source                                                                                  |
| `model`          | Model or architecture name used for generation                                                           |
| `sample_rate`    | Audio sampling rate (Hz)                                                                                 |
| `tag.domain_id`  | Batch ID or annotation domain                                                                            |
| `tag.other_info` | Extra information (e.g., codec codebook, speaker transfer, etc.)                                         |

> πŸ’‘ `"other_info": "default"` means no additional metadata is available.

πŸ”Ή **Usage:**

* Filter systems by type or dataset (see the sketch below).
* Analyze system-level trends and quality differences.
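A minimal filtering sketch along those lines; field names follow the example above:

```python
import json
from collections import Counter

with open("info/sys_info.json", encoding="utf-8") as f:
    sys_info = json.load(f)

# System IDs by type: synthesis (svs) vs. ground truth (gt).
svs_ids = [sid for sid, info in sys_info.items() if info["type"] == "svs"]
gt_ids = [sid for sid, info in sys_info.items() if info["type"] == "gt"]

# Number of systems per source dataset.
per_dataset = Counter(info["dataset"] for info in sys_info.values())

print(f"{len(svs_ids)} SVS systems, {len(gt_ids)} ground-truth systems")
print("systems per source dataset:", dict(per_dataset))
```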
---
### 4️⃣ `metadata.csv` β€” Sample-Level Summary Table

Provides a **flat-format summary** of all utterances, integrating data from the JSON files. Ideal for quick indexing, filtering, and statistical analysis (e.g., via `pandas`).

**Example (one row, shown as a JSON record):**

```json
{
  "dataset": "acesinger",
  "domain_id": 1,
  "id": "sys0001-utt0001",
  "judge_id": [1, 2, 3, 4, 5],
  "judge_lyrics_score": [],
  "judge_melody_score": [],
  "judge_score": [4.0, 4.0, 4.0, 4.0, 4.0],
  "language": "Chinese",
  "lyrics": "",
  "model_name": "ace",
  "other_info": "default",
  "raw_wav_id": "22#2100003752",
  "sample_rate": 16000,
  "split": "test",
  "system": "acesinger@ace@default",
  "system_id": "sys0001",
  "type": "svs",
  "wav": "wav/sys0001-utt0001.wav"
}
```

**Field Descriptions:**

| Field | Description |
| ------------------------------------------- | ----------------------------------------------------------- |
| `dataset`                                   | Original dataset name                                        |
| `domain_id`                                 | Annotation batch or domain index                             |
| `id`                                        | Unique utterance identifier (`sysID-uttID`)                  |
| `judge_id`                                  | List of judge IDs who rated this utterance                   |
| `judge_lyrics_score` / `judge_melody_score` | Optional sub-dimension ratings (may be empty)                |
| `judge_score`                               | List of overall MOS ratings from judges                      |
| `language`                                  | Singing language (`Chinese` or `Japanese`)                   |
| `lyrics`                                    | Transcribed lyrics text (if available)                       |
| `model_name`                                | Model or architecture name used to generate the audio        |
| `other_info`                                | Additional configuration info (e.g., codec, speaker info)    |
| `raw_wav_id`                                | Original recording or dataset identifier                     |
| `sample_rate`                               | Sampling rate in Hz                                          |
| `split`                                     | Dataset partition (`train` / `test`)                         |
| `system`                                    | Full system identifier (`dataset@model@info`)                |
| `system_id`                                 | System-level ID (matches `sys_info.json`)                    |
| `type`                                      | System type: `svs`, `svc`, or `gt`                           |
| `wav`                                       | Relative path to the waveform file                           |

πŸ”Ή **Usage:**

* Load with `pandas.read_csv` for analysis (see the sketch below).
* Merge with `sys_info.json` by `system_id`, or filter by language/type.
* Perform judge-level or system-level statistical analysis.
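A minimal `pandas` sketch along these lines. It assumes `metadata.csv` sits at the path shown in the directory tree and that list-valued columns such as `judge_score` are serialized as Python-style list strings in the CSV; adjust the parsing if the serialization differs:

```python
import ast

import pandas as pd

df = pd.read_csv("metadata.csv")

# Parse list-valued columns if they are stored as strings (assumption about serialization).
df["judge_score"] = df["judge_score"].apply(
    lambda v: ast.literal_eval(v) if isinstance(v, str) else v
)

# Per-utterance MOS as the mean of the judges' ratings.
df["mos"] = df["judge_score"].apply(
    lambda xs: sum(xs) / len(xs) if isinstance(xs, list) and xs else float("nan")
)

# Mean MOS per language and system type on the test split.
test = df[df["split"] == "test"]
print(test.groupby(["language", "type"])["mos"].mean())

# System-level ranking, ready to be merged with sys_info.json via system_id.
print(test.groupby("system_id")["mos"].mean().sort_values(ascending=False).head())
```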
--- ## πŸ—“οΈ Update History | Date | Update | | -------------- | ------------------------ | | **2025-10-09** | Released **SingMOS-Pro** | | **2024-11-06** | Released **SingMOS** | | **2024-06-26** | Released **SingMOS_v1** | --- ## πŸ“– Citation If you use this dataset, please cite the following paper: ```bibtex @misc{tang2025singmosprocomprehensivebenchmarksinging, title={SingMOS-Pro: A Comprehensive Benchmark for Singing Quality Assessment}, author={Yuxun Tang and Lan Liu and Wenhao Feng and Yiwen Zhao and Jionghao Han and Yifeng Yu and Jiatong Shi and Qin Jin}, year={2025}, eprint={2510.01812}, archivePrefix={arXiv}, primaryClass={cs.SD}, url={https://arxiv.org/abs/2510.01812} } ```