QuartzNet 15x5 CTC Series


stt-bm-quartznet15x5-v1 is a fine-tuned version of RobotsMali/stt-bm-quartznet15x5-v0 trained for Automatic Speech Recognition of Bambara speech. The model does not produce punctuation or capitalization; it uses a character encoding scheme and transcribes text in the standard character set provided in its training data.

The model was fine-tuned using NVIDIA NeMo and is trained with CTC (Connectionist Temporal Classification) Loss.

🚨 Important Note

This model, along with its associated resources, is part of an ongoing research effort; improvements and refinements are expected in future versions. Users should be aware that:

  • The model may not generalize well across all speaking conditions and dialects.
  • Community feedback is welcome, and contributions are encouraged to refine the model further.

NVIDIA NeMo: Training

To fine-tune or use the model, install NVIDIA NeMo. We recommend installing it after setting up the latest PyTorch version.

pip install "nemo-toolkit[asr]"

How to Use This Model

Load Model with NeMo

import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="RobotsMali/stt-bm-quartznet15x5-v1")

Transcribe Audio

# Assuming you have a 16 kHz mono test audio file named sample_audio.wav
predictions = asr_model.transcribe(['sample_audio.wav'])
print(predictions[0].text)  # transcription string of the first (and only) sample

Input

This model accepts 16 kHz mono-channel audio (WAV files) as input. However, it is equipped with its own preprocessor that performs resampling, so you may feed it audio at higher sampling rates.
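The idea behind that resampling step can be illustrated with a minimal pure-Python sketch. This is a naive linear-interpolation resampler for illustration only; NeMo's preprocessor uses proper signal-processing filters, and the function name and rates below are not part of its API:

```python
def resample(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler (illustration only)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # fractional position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Downsample one second of 44.1 kHz audio to the model's 16 kHz input rate
signal_44k = [0.0] * 44100
signal_16k = resample(signal_44k, 44100, 16000)
print(len(signal_16k))  # 16000
```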

Output

This model returns transcribed speech as a Hypothesis object with a text attribute containing the transcription string for a given speech sample.

Model Architecture

QuartzNet is a convolutional architecture, which consists of 1D time-channel separable convolutions optimized for speech recognition. More information on QuartzNet can be found here: QuartzNet Model.
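To see why time-channel separable convolutions matter, compare parameter counts for a standard 1D convolution against its separable factorization (a depthwise convolution over time followed by a pointwise 1x1 convolution). The layer sizes below are illustrative, not QuartzNet 15x5's actual shapes:

```python
def conv1d_params(c_in, c_out, k):
    """Weights of a standard 1D convolution (bias omitted)."""
    return c_in * c_out * k

def separable_conv1d_params(c_in, c_out, k):
    """Depthwise (one k-tap filter per input channel) + pointwise 1x1 convolution."""
    return c_in * k + c_in * c_out

# Illustrative layer: 256 channels in and out, kernel width 33
standard = conv1d_params(256, 256, 33)             # 2,162,688 weights
separable = separable_conv1d_params(256, 256, 33)  # 73,984 weights
print(standard, separable, round(standard / separable, 1))  # ~29x fewer parameters
```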

Training

The NeMo toolkit was used to fine-tune this model for 64,300 steps starting from the RobotsMali/stt-bm-quartznet15x5-v0 checkpoint. The fine-tuning code and configurations can be found at RobotsMali-AI/bambara-asr.

Dataset

This model was fine-tuned on the human-reviewed subset of the kunkado dataset, which consists of ~40 hours of transcribed Bambara speech. The text was normalized with the bambara-normalizer prior to training: numbers were normalized, and punctuation and tags were removed.

Performance

The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER %) and Character Error Rate (CER %), two edit-distance metrics.

| Benchmark   | Decoding | WER (%) ↓ | CER (%) ↓ |
|-------------|----------|-----------|-----------|
| Kunkado     | CTC      | 55.67     | 26.68     |
| Nyana Eval  | CTC      | 57.20     | 25.71     |

These are greedy-decoding WER numbers without an external language model.
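WER itself is word-level edit distance divided by the number of reference words. A minimal sketch using standard dynamic-programming Levenshtein distance (not the exact scorer used to produce the numbers above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("i bena taa so", "i bena ta so"))  # 0.25: one substitution out of four words
```

CER is computed the same way over characters instead of words.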

License

This model is released under the CC-BY-4.0 license. By using this model, you agree to the terms of the license.


Feel free to open a discussion on Hugging Face or file an issue on GitHub for help or contributions.
