Update README.md
README.md CHANGED
@@ -783,7 +783,6 @@ size_categories:
 - [How to Use](#how-to-use)
 - [Standard Loading](#standard-loading)
 - [Streaming](#streaming)
-- [Using NeMo-speech-data-processor](#using-nemo-speech-data-processor)
 - [Dataset Structure](#dataset-structure)
 - [Data Instance](#data-instance)
 - [Data Fields](#data-fields)

@@ -882,30 +881,6 @@ Some language subsets are quite large and may not fit comfortably in memory. For
 ds = load_dataset("espnet/yodas-granary", "English", streaming=True)
 ```
 
-### Using NeMo-speech-data-processor
-You can use the [NeMo-speech-data-processor](https://github.com/NVIDIA/NeMo-speech-data-processor) to convert YODAS-Granary into a tarred WebDataset format suitable for training or fine-tuning [NeMo ASR models](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/models.html).
-
-Clone and install the processor:
-```shell
-git clone https://github.com/NVIDIA/NeMo-speech-data-processor.git
-cd NeMo-speech-data-processor && pip install -e .
-```
-
-By specifying the desired `source_lang`, `en_translation`, `num_shards`, and `buckets_num`, the script will automatically download the required language subsets from Hugging Face and convert them into WebDataset format:
-```shell
-python main.py \
-  --config-path=dataset_configs/multilingual/granary/ \
-  --config-name=yodas2.yaml \
-  params.source_lang="it" \ # target language
-  params.en_translation=True \ # use AST or ASR subset
-  params.convert_to_audio_tarred_dataset.num_shards=1024 \ # number of shards per bucket
-  params.convert_to_audio_tarred_dataset.buckets_num=1 # number of output buckets
-```
-
-📘 For detailed setup instructions, see the [NeMo-speech-data-processor: Granary](https://github.com/NVIDIA/NeMo-speech-data-processor/tree/main/dataset_configs/multilingual/granary).
-***
-
-
 ## Dataset Structure
 ### Data Instance
 Each utterance in the dataset includes the following fields: `utt_id`, `audio`, `duration`, `lang`, `task`, `text`, `translation_en` (`null` in `asr_only`), `original_audio_id`, and `original_audio_offset`.
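
To make the retained field list concrete, here is a minimal sketch that pairs the streaming example kept in this diff with the fields documented under "Data Instance". It reuses the `English` config from the README's own example; the `"train"` split name and the decoded `audio` layout (`array`, `sampling_rate`) are assumptions about the default Hub schema, not statements from the README.

```python
# Minimal sketch: stream one example and print the fields listed under
# "Data Instance". Assumes a "train" split and the standard decoded Audio
# feature ({"path", "array", "sampling_rate"}); adjust to the actual schema.
from datasets import load_dataset

ds = load_dataset("espnet/yodas-granary", "English", streaming=True)
example = next(iter(ds["train"]))

# Fields documented above; translation_en is null in the asr_only subset.
for key in ("utt_id", "duration", "lang", "task", "text",
            "translation_en", "original_audio_id", "original_audio_offset"):
    print(f"{key}: {example.get(key)}")

audio = example["audio"]
print("sampling_rate:", audio["sampling_rate"], "| num_samples:", len(audio["array"]))
```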