---
dataset_info:
  features:
  - name: source
    dtype: string
  - name: filename
    dtype: string
  - name: order_index
    dtype: string
  - name: link
    dtype: string
  - name: transcript_whisper
    dtype: string
  - name: audio
    dtype: audio
  - name: c50
    dtype: float32
  - name: snr
    dtype: float32
  - name: speech_duration
    dtype: float32
  - name: emotion_emotion2vec
    dtype: string
  - name: transcript_sensevoice
    dtype: string
  - name: emotion_sensevoice
    sequence: string
  - name: event_sensevoice
    sequence: string
  splits:
  - name: train
    num_bytes: 507480914420.836
    num_examples: 2229346
  download_size: 589102038968
  dataset_size: 507480914420.836
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- zh
- yue
---

## Cantonese Radio Pseudo-Transcription Dataset

- Contains 14k hours of audio sourced from Archive.org
- Columns
  - `order_index`: the order of the audio segment relative to other segments from the same `filename`
  - `link`: link to the original full audio
  - `transcript_whisper`: transcribed with `Scrya/whisper-large-v2-cantonese`, using `alvanlii/whisper-small-cantonese` for speculative decoding
  - `transcript_sensevoice`: transcribed with `FunAudioLLM/SenseVoiceSmall`
    - used [OpenCC](https://github.com/BYVoid/OpenCC) to convert to Traditional Chinese
    - isolated event tags into `event_sensevoice`
    - isolated emotion tags into `emotion_sensevoice`
  - `snr`: signal-to-noise ratio, extracted with `ylacombe/brouhaha-best`
  - `c50`: speech clarity, extracted with `ylacombe/brouhaha-best`
  - `emotion_emotion2vec`: emotion, extracted with `emotion2vec/emotion2vec_plus_large`
  - Note that `id` does not reflect the ordering of the audio within the same video
- Processing
  - The full audio is split with [WhisperX](https://github.com/m-bain/whisperX), using `Scrya/whisper-large-v2-cantonese`
    - split into chunks under 30 s and along speaker boundaries
  - No filtering or additional audio processing was done for this dataset; filtering is recommended for your own use
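
Since the card recommends filtering before use, here is a minimal sketch of quality filtering on the provided `snr` and `c50` columns with 🤗 `datasets`. The repo id placeholder and both thresholds are assumptions for illustration, not values from this card; streaming mode avoids downloading the full ~590 GB up front.

```python
# Sketch: filter rows by the dataset's quality columns before training.
# The thresholds below are illustrative, not tuned or recommended values.
MIN_SNR = 15.0   # signal-to-noise ratio floor (from `ylacombe/brouhaha-best`)
MIN_C50 = 10.0   # speech-clarity (C50) floor (from `ylacombe/brouhaha-best`)


def keep(example: dict) -> bool:
    """Return True for rows that pass both quality floors."""
    return example["snr"] >= MIN_SNR and example["c50"] >= MIN_C50


if __name__ == "__main__":
    from datasets import load_dataset

    # "<repo_id>" is a placeholder for this dataset's Hub id.
    ds = load_dataset("<repo_id>", split="train", streaming=True)
    ds = ds.filter(keep)
    for row in ds.take(1):
        print(row["transcript_whisper"])
```

The predicate is kept separate from the loading code so the same function can be passed to `Dataset.filter` in either streaming or non-streaming mode.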