# Mimic Studio Hausa TTS Dataset
This dataset was created using Mimic Studio for training text-to-speech models.
## Dataset Details
- Total Samples: 62
- Language: Hausa
- Speakers: 2 speakers (Surajo Nuhu Umar, Umar Musa Halliru)
- Format: Compatible with Unsloth TTS models
- Audio Format: WAV files, 24kHz sampling rate
### Speaker Distribution

| Speaker | Samples |
|---|---|
| Surajo Nuhu Umar | 32 |
| Umar Musa Halliru | 30 |
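The per-speaker counts can be recomputed from the `source` column of the loaded dataset. A minimal stdlib sketch, using a fabricated list in place of the real `ds["source"]` values:

```python
from collections import Counter

# Fabricated stand-in for ds["source"]; in practice this list
# comes from the loaded dataset (ds["source"]).
sources = ["Surajo Nuhu Umar"] * 32 + ["Umar Musa Halliru"] * 30

counts = Counter(sources)
print(counts)  # Counter({'Surajo Nuhu Umar': 32, 'Umar Musa Halliru': 30})
```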
## Dataset Structure

```
Aybee5/hausa-tts-small/
├── data/
│   ├── train.parquet          # Metadata (source, text, audio paths)
│   └── audio_files/
│       ├── 97f373e8-f6e6-.../ # Speaker 1 audio files
│       ├── b0db0a87-2206-.../ # Speaker 2 audio files
│       └── c3621689-ca53-.../ # Additional audio files
└── README.md
```
The dataset has the following columns:
- `source`: Speaker name (for multi-speaker training)
- `text`: Transcription/prompt in Hausa
- `audio`: Audio file (automatically loaded from `data/audio_files/`)
## Usage
This dataset is designed for use with Unsloth TTS models.
```python
from datasets import load_dataset

# Load the dataset - audio files are automatically downloaded
ds = load_dataset("Aybee5/hausa-tts-small", split="train")

# Audio is already configured with a 24kHz sampling rate
print(ds[0])
# {'source': 'Umar Musa Halliru', 'text': 'Tsarki', 'audio': {'array': [...], 'sampling_rate': 24000, 'path': '...'}}

# Access the audio array directly
audio_array = ds[0]["audio"]["array"]
sampling_rate = ds[0]["audio"]["sampling_rate"]
```
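Since each record carries its array and sampling rate, a clip's duration is simply `len(array) / sampling_rate`. A quick sketch with a fabricated record standing in for `ds[0]["audio"]`:

```python
# Fabricated record in the same shape as ds[0]["audio"];
# real arrays come from the loaded dataset.
audio = {"array": [0.0] * 48_000, "sampling_rate": 24_000}

duration_s = len(audio["array"]) / audio["sampling_rate"]
print(duration_s)  # 2.0
```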
### For Unsloth TTS Training
```python
from datasets import load_dataset
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("unsloth/csm-1b")

raw_ds = load_dataset("Aybee5/hausa-tts-small", split="train")
# Audio is already at 24kHz, no need to recast

speaker_key = "source"

def preprocess_example(example):
    conversation = [
        {
            "role": str(example[speaker_key]),
            "content": [
                {"type": "text", "text": example["text"]},
                {"type": "audio", "path": example["audio"]["array"]},
            ],
        }
    ]

    model_inputs = processor.apply_chat_template(
        conversation,
        tokenize=True,
        return_dict=True,
        output_labels=True,
        text_kwargs={
            "padding": "max_length",
            "max_length": 256,
            "pad_to_multiple_of": 8,
            "padding_side": "right",
        },
        audio_kwargs={
            "sampling_rate": 24_000,
            "max_length": 240001,  # caps clips at roughly 10 seconds of 24kHz audio
            "padding": "max_length",
        },
        common_kwargs={"return_tensors": "pt"},
    )

    required_keys = ["input_ids", "attention_mask", "labels", "input_values", "input_values_cutoffs"]
    return {key: model_inputs[key][0] for key in required_keys}

# Process the dataset
processed_ds = raw_ds.map(
    preprocess_example,
    remove_columns=raw_ds.column_names,
)
print(f"Processed {len(processed_ds)} samples")
```
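Two of the padding settings above are worth unpacking: `pad_to_multiple_of: 8` rounds each token sequence length up to the next multiple of 8 (which helps tensor-core efficiency), and the audio `max_length` of 240001 samples at 24kHz corresponds to a cap of roughly 10 seconds per clip. A small sketch of that arithmetic (the helper name `pad_to_multiple` is illustrative, not part of any library):

```python
def pad_to_multiple(length: int, multiple: int = 8) -> int:
    # Round a sequence length up to the next multiple,
    # mirroring what pad_to_multiple_of=8 does to padded lengths.
    return ((length + multiple - 1) // multiple) * multiple

print(pad_to_multiple(61))   # 64: 61 tokens are padded up to 64
print(pad_to_multiple(64))   # 64: already a multiple of 8, unchanged

# 240001 samples at 24kHz is just over 10 seconds of audio
print(240001 / 24_000)
```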
## Citation

If you use this dataset, please cite the Mimic Studio project:

```bibtex
@misc{mimicstudio,
  title        = {Mimic Recording Studio},
  author       = {Mycroft AI},
  howpublished = {\url{https://github.com/MycroftAI/mimic-recording-studio}},
  year         = {2019}
}
```
## License
MIT License
## Dataset Creation
This dataset was created by:
- Recording audio using Mimic Studio
- Extracting data from the SQLite database
- Converting to relative paths
- Organizing into HuggingFace standard structure (data/ folder)
- Configuring proper audio feature loading
The audio files are stored in `data/audio_files/` and are automatically downloaded when loading the dataset.