whisper-small-malagasy

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8951
  • WER: 0.4742

Model description

More information needed

Intended uses & limitations

More information needed
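
In the meantime, the checkpoint can be exercised like any Whisper fine-tune. Below is a minimal transcription sketch: the model id matches this repository, audio.wav is a placeholder for a local recording, and the pipeline needs ffmpeg available to decode it.

```python
# Minimal sketch: transcribe a Malagasy recording with this checkpoint.
# "audio.wav" is a placeholder path; ffmpeg must be installed for decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="misterkissi/whisper-small-malagasy",
)

print(asr("audio.wav")["text"])
```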

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
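
As a rough reconstruction, these settings map onto Seq2SeqTrainingArguments as sketched below. The output_dir and the evaluation/save cadence are assumptions (the 500-step cadence is inferred from the results table), not values recorded in this card.

```python
# Sketch of the training configuration implied by the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-malagasy",  # assumed
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    optim="adamw_torch",             # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    eval_strategy="steps",           # assumed
    eval_steps=500,                  # assumed from the results table
    save_steps=500,                  # assumed
)
```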

Training results

Training Loss | Epoch   | Step | Validation Loss | WER
------------- | ------- | ---- | --------------- | ------
1.3904        | 8.1983  |  500 | 0.6457          | 0.4847
0.0285        | 16.3967 | 1000 | 0.7599          | 0.4859
0.0038        | 24.5950 | 1500 | 0.7853          | 0.4771
0.0008        | 32.7934 | 2000 | 0.8225          | 0.4738
0.0003        | 40.9917 | 2500 | 0.8441          | 0.4738
0.0002        | 49.1818 | 3000 | 0.8617          | 0.4754
0.0002        | 57.3802 | 3500 | 0.8751          | 0.4758
0.0002        | 65.5785 | 4000 | 0.8857          | 0.4746
0.0001        | 73.7769 | 4500 | 0.8922          | 0.4734
0.0001        | 81.9752 | 5000 | 0.8951          | 0.4742
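
For context, the final WER of 0.4742 means roughly 47 word-level edits (substitutions, insertions, deletions) per 100 reference words. Below is a sketch of how such a score is computed with the Hugging Face evaluate library; the sentences are illustrative placeholders, not samples from the evaluation set.

```python
# Sketch: word error rate (WER), the metric reported above.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["misaotra betsaka tompoko"]    # hypothesis transcript (placeholder)
references = ["misaotra betsaka tompoko e"]   # reference transcript (placeholder)

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 0.2500 here: one deletion over four reference words
```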

Framework versions

  • Transformers 4.53.2
  • Pytorch 2.6.0+cu124
  • Datasets 2.18.0
  • Tokenizers 0.21.2
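
When reproducing results, it can help to confirm the runtime matches the versions above. A small check, assuming the usual import names for these packages:

```python
# Sketch: print installed versions next to those this model was trained with.
import datasets, tokenizers, torch, transformers

expected = {
    "transformers": (transformers.__version__, "4.53.2"),
    "torch": (torch.__version__, "2.6.0+cu124"),
    "datasets": (datasets.__version__, "2.18.0"),
    "tokenizers": (tokenizers.__version__, "0.21.2"),
}
for name, (installed, trained_with) in expected.items():
    print(f"{name}: installed {installed}, trained with {trained_with}")
```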
