---
language:
- ha
license: cc-by-4.0
task_categories:
- text-to-speech
tags:
- audio
- hausa
- tts
- speech-synthesis
- multi-speaker
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  - name: audio
    dtype: audio
configs:
- config_name: default
  data_files:
  - split: train
    path: hausa_tts_embedded.parquet
viewer: true
---

# Hausa TTS Dataset
A multi-speaker Hausa text-to-speech dataset with the audio embedded directly in a single Parquet file (`hausa_tts_embedded.parquet`).
## Usage
```python
from datasets import load_dataset, Audio

# Load the dataset from the embedded Parquet file
ds = load_dataset("parquet", data_files="hausa_tts_embedded.parquet", split="train")

# Cast the audio column to the Audio feature type with a 24 kHz sampling rate
ds = ds.cast_column("audio", Audio(sampling_rate=24000))

# Use with Unsloth TTS
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("unsloth/csm-1b")

# Your training code here...
```
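To sanity-check the loaded data, you can inspect a single decoded example. This is a minimal sketch assuming the dataset has been loaded and the `audio` column cast as shown above; the `Audio` feature decodes each clip into a dict with `array` and `sampling_rate` keys.

```python
# Inspect the first example (assumes `ds` from the snippet above)
example = ds[0]
print(example["text"])                    # Hausa text to be spoken
print(example["source"])                  # speaker identifier
print(example["audio"]["sampling_rate"])  # 24000 after the cast above
print(example["audio"]["array"].shape)    # decoded waveform as a NumPy array
```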
## Dataset Structure
- `text`: the Hausa text to be spoken
- `source`: speaker identifier (the dataset is multi-speaker)
- `audio`: the audio recording (embedded as bytes)
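Because `source` is the speaker identifier, per-speaker subsets can be built with a standard `datasets` filter. A minimal sketch, assuming `ds` from the usage snippet above (the speaker chosen here is just the first ID found; substitute a real value from `speakers`):

```python
# List the distinct speaker identifiers present in the dataset
speakers = sorted(set(ds["source"]))
print(speakers)

# Keep only the recordings from one speaker
speaker_id = speakers[0]
single_speaker_ds = ds.filter(lambda ex: ex["source"] == speaker_id)
print(len(single_speaker_ds))
```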