
⚠️ IMPORTANT: Work in Progress

This dataset is not final. Releases and updates will continue until the end of November 2025 (last updated 28 Nov 2025).

  • Users must regularly check this repository for new versions, corrections, and updates.
  • For attribution, benchmarking, and publications, always reference the latest available information (as of November 2025 or later).
  • Do not cite or benchmark against old/incomplete versions. Use only the most recent release and documentation.

We appreciate your patience while we expand and improve this dataset. Stay updated by watching this repo.

Swivuriso: ZA-African Next Voices

Zenodo: https://doi.org/10.5281/zenodo.17776289

Swivuriso is a large-scale multilingual speech dataset targeting over 3,000 hours of audio across seven South African languages. It is designed to support automatic speech recognition (ASR) and inclusive speech technologies for low-resource African languages, and combines scripted and unscripted speech collected through ethical, community-centered processes.

Dataset Paper: ArXiv - Work in Progress

Language Coverage

Language     Target Hours   Released
isiZulu      500            ▇▇▇▇▇▇▇▇▇▇ 100%
isiXhosa     500            ▇▇▇▇▇▇▇▇▇▇ 100%
Sesotho      500            ▇▇▇▇▇▇▇▇▇▇ 100%
Setswana     500            ▇▇▇▇▇▇▇▇▇▇ 100%
Xitsonga     500            ▇▇▇▇▇▇▇▇▇▇ 100%
isiNdebele   250            ▇▇▇▇▇▇▇▇▇▇ 100%
Tshivenda    250            ▇▇▇▇▇▇▇▇▇▇ 100%

Use Restriction:

The persons whose voices are included in this dataset, and the creators and owners of this dataset* do not give consent in any manner or form to, and strictly prohibit any use of this dataset for any form of text-to-speech (TTS), voice cloning, voice synthesis, or any technology or activity intended to replicate, mimic or generate human voices or any technology or activity resulting in the replication, mimicry or generation of human voices.

This dataset includes scripted and unscripted speech across various domains such as agriculture, health, finance, sports, transport, culture, society, and general topics. It is primarily designed for use in automatic speech recognition (ASR) tasks.

Use of this dataset for any form of text-to-speech (TTS), voice cloning, voice synthesis, or any technology intended to replicate or generate human voices is strictly prohibited.

These restrictions are in place until further notice.


Loading the Dataset

You can load a language configuration (e.g., zul for isiZulu) with the 🤗 datasets library:

from datasets import load_dataset

# Load isiZulu configuration
ds = load_dataset("dsfsi-anv/za-african-next-voices", "zul")

You can also load the dataset in streaming mode, which is useful when working with large datasets:

ds = load_dataset("dsfsi-anv/za-african-next-voices", "zul", split="dev_test", streaming=True)
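
In streaming mode, samples can be iterated without downloading the full archive first. A minimal sketch (field names such as transcript, audio, and duration follow the Available Fields table below):

from datasets import load_dataset

# Stream the isiZulu dev_test split and inspect the first few utterances
ds = load_dataset("dsfsi-anv/za-african-next-voices", "zul", split="dev_test", streaming=True)

for i, sample in enumerate(ds):
    print(sample["transcript"])
    print(sample["audio"]["sampling_rate"], "Hz,", round(sample["duration"], 2), "s")
    if i == 2:  # only look at the first three samples
        break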

Downloading CSV Files Only - Colab Example

from huggingface_hub import hf_hub_download, login
from google.colab import userdata

# Define the repository ID
repo_id = "dsfsi-anv/za-african-next-voices"

# Define variables for split and lang_code
split = "train"
lang_code = "sot"

# Authenticate with Hugging Face using the token from Colab's user data
try:
    hf_token = userdata.get('HF_TOKEN')
    login(token=hf_token)
    print("Successfully authenticated with Hugging Face.")
except Exception as e:
    print(f"Error authenticating with Hugging Face: {e}")

# Define the paths of the files to download using the variables
file_paths = [
    f"{split}/{lang_code}/meta.csv",
    f"{split}/{lang_code}/transcripts.csv",
]

# Download the files
for file_path in file_paths:
    try:
        downloaded_file_path = hf_hub_download(repo_id=repo_id, filename=file_path, repo_type="dataset", local_dir="./")
        print(f"Downloaded {file_path} to {downloaded_file_path}")
    except Exception as e:
        print(f"Error downloading {file_path}: {e}")

🔧 Troubleshooting

If you encounter audio decoding errors (e.g., frames with 0 channels, missing decoder attributes), make sure you have compatible versions of the required libraries installed. The dataset has been tested and works with the following:

pip install --upgrade "datasets[audio]==3.6.0"
pip install --upgrade ffmpeg
pip install --upgrade ffmpeg-python
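
To confirm what is actually installed in your environment, a quick check from Python (a small sanity-check sketch, not part of the official instructions):

import subprocess
import datasets

print(datasets.__version__)  # expected to report 3.6.0
subprocess.run(["ffmpeg", "-version"], check=False)  # confirms an ffmpeg binary is on the PATH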

Available Fields

Field           Description
audio           48 kHz mono .wav audio file (automatically decoded)
transcript      Transcribed utterance
recorder_uuid   Unique speaker ID
type            Scripted or unscripted
domain          Thematic domain (e.g., Health, Agriculture)
duration        Duration of the clip in seconds
gender          Speaker gender (if available)
age_range       Speaker's age group (e.g., 18–29, 30–39)
...
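
As an illustration, these fields can be read per sample once a configuration is loaded (a minimal sketch reusing the isiZulu config; the dev split name follows the Splits section below):

from datasets import load_dataset

ds = load_dataset("dsfsi-anv/za-african-next-voices", "zul", split="dev")

sample = ds[0]
print(sample["transcript"])                      # transcribed utterance
print(sample["audio"]["sampling_rate"])          # 48000 for 48 kHz audio
print(sample["type"], sample["domain"])          # scripted/unscripted, thematic domain
print(sample["duration"], sample["age_range"])   # clip length in seconds, speaker age group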

Splits

All data is split into:

  • train (85%)
  • dev (5%)
  • dev_test (5%)
  • test (5%) 🔒 – reserved for future shared tasks/public leaderboards

All splits are speaker-disjoint to ensure reproducibility and avoid data leakage.
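
For example, the public splits can be loaded by name, and the speaker-disjoint property can be checked via recorder_uuid (a minimal sketch; the test split is reserved and may not be downloadable):

from datasets import load_dataset

train = load_dataset("dsfsi-anv/za-african-next-voices", "zul", split="train")
dev = load_dataset("dsfsi-anv/za-african-next-voices", "zul", split="dev")

# Speaker-disjoint splits: no recorder_uuid should appear in both train and dev
train_speakers = set(train["recorder_uuid"])
dev_speakers = set(dev["recorder_uuid"])
print(train_speakers.isdisjoint(dev_speakers))   # expected: True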


License

Creative Commons Attribution 4.0 International (CC BY 4.0)
You are free to use, share, and adapt the data—with proper attribution to the South Africa NextVoices team.

Intended Use

  • Training/fine-tuning ASR models
  • Developing inclusive speech technology for African users
  • Cross-lingual learning and low-resource transfer
  • Model evaluation and benchmarking
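
As one example of the evaluation and benchmarking use case above, an off-the-shelf multilingual ASR model can be scored on a held-out split with the evaluate library (a hedged sketch: the openai/whisper-small checkpoint and the word error rate metric are illustrative choices, not an official evaluation protocol for this dataset):

from datasets import Audio, load_dataset
from transformers import pipeline
import evaluate

# Score a pretrained ASR model on a small sample of the isiZulu dev_test split
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
wer = evaluate.load("wer")

ds = load_dataset("dsfsi-anv/za-african-next-voices", "zul", split="dev_test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz input

references, predictions = [], []
for i, sample in enumerate(ds):
    audio = {"raw": sample["audio"]["array"], "sampling_rate": sample["audio"]["sampling_rate"]}
    predictions.append(asr(audio)["text"])
    references.append(sample["transcript"])
    if i == 9:  # first 10 utterances only, as a quick smoke test
        break

print("WER:", wer.compute(predictions=predictions, references=references))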

⚠️ Limitations and Ethical Use

  • Dialectal diversity and regional accents may not be fully covered
  • The dataset is not intended for surveillance or other harmful use
  • All use must comply with ethical AI principles

Citations

If you use Swivuriso in your work, please cite both of the following:

Dataset

@dataset{za-african-next-voices-2025,
  title     = {The South African Next Voices Multilingual Speech Dataset},
  author    = {Marivate, Vukosi and
               Olaleye, Kayode and
               Mundia, Sitwala and
               Bakainga, Andinda and
               Netshifhefhe, Unarine Leo and
               Milanzie, Mahmooda and
               Mogale, Hope and
               Sindane, Thapelo and
               Abdulrasaq, Zainab and
               Mokgosi, Kesego and
               Okorie, Chijioke and
               van Wyk, Nia Zion and
               Morrissey, Graham and
               Dunbar, Dale and
               Smit, Francois and
               Chidi, Tsosheletso and
               Mabuya, Rooweither and
               Bukula, Andiswa and
               Mlambo, Respect and
               Macucwa, Solomon Tebogo and
               Abdulmumin, Idris and
               Rananga, Seani},
  year      = {2025},
  type      = {dataset},
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.17776289},
  url       = {https://doi.org/10.5281/zenodo.17776289},
  url2      = {https://github.com/dsfsi/za-african-next-voices},
  url3      = {https://www.dsfsi.co.za/za-african-next-voices/},
}

Research Paper

Will be available soon.

@article{swivuriso2025,
  title  = {Swivuriso: Creating the South African Next Voices Multilingual Speech Dataset},
  author = {Vukosi Marivate and Kayode Olaleye and Sitwala Mundia and Nia Zion Van Wyk and Andinda Bakainga and Unarine Netshifhefhe and Mahmooda Milanzie and Tsholofelo Hope Mogale and Thapelo Sindane and Zainab Abdulrasaq and Kesego Mokgosi and Chijioke Okorie and Graham Morrissey and Dale Dunbar and Francois Smit and Tsosheletso Chidi and Rooweither Mabuya and Andiswa Bukula and Respect Mlambo and Tebogo Macucwa and Idris Abdulmumin and Seani Rananga},
  url    = {TBD},
  year   = {2025},
}