wav2vec2-large-xlsr-khmer-compressed

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4521
  • WER: 0.4699
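For reference, the WER (word error rate) reported above is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch of the metric (not the evaluation code used for this card, which is not documented):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions only
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions only
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("a b c d", "a x c"))  # 0.5: one substitution + one deletion over 4 words
```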

Model description

More information needed

Intended uses & limitations

More information needed
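Since the card does not document usage, here is a minimal inference sketch using the standard Wav2Vec2 CTC API from transformers; XLSR-style models expect 16 kHz mono audio. The placeholder audio and greedy decoding are illustrative assumptions, not the author's documented pipeline:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "S-Sethisak/wav2vec2-large-xlsr-khmer-compressed"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Replace with real 16 kHz mono audio (e.g. loaded via soundfile/librosa);
# a one-second silent clip is used here only as a placeholder.
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab)
pred_ids = torch.argmax(logits, dim=-1)         # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])      # predicted Khmer transcript
```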

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: adamw_torch_fused (AdamW, PyTorch fused implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 410
  • num_epochs: 100
  • mixed_precision_training: Native AMP
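The arithmetic behind two of the values above can be sketched briefly: the total train batch size is the per-device batch size times the gradient-accumulation steps, and the "linear" scheduler ramps the learning rate up over the 410 warmup steps and then decays it linearly to zero. The ~7300 total optimizer steps used below are inferred from the results table (about 73 steps per epoch over 100 epochs), an assumption rather than a logged value:

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=410, total_steps=7300):
    """Linear warmup followed by linear decay to zero (the 'linear' scheduler)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Total train batch size = per-device batch * gradient-accumulation steps
total_train_batch_size = 32 * 2  # = 64, matching the value listed above

print(linear_schedule_lr(0))     # 0.0  (start of warmup)
print(linear_schedule_lr(410))   # 1e-4 (peak, end of warmup)
print(linear_schedule_lr(7300))  # 0.0  (end of training)
```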

Training results

Training Loss | Epoch   | Step | Validation Loss | WER
14.3656       | 2.7397  | 200  | 4.4510          | 1.0
3.8382        | 5.4795  | 400  | 3.5467          | 1.0
3.5549        | 8.2192  | 600  | 3.5465          | 1.0
3.5274        | 10.9589 | 800  | 3.5229          | 1.0
3.5195        | 13.6986 | 1000 | 3.5110          | 1.0
3.5069        | 16.4384 | 1200 | 3.5079          | 1.0
3.497         | 19.1781 | 1400 | 3.4951          | 1.0
3.4445        | 21.9178 | 1600 | 3.1051          | 1.0
2.6018        | 24.6575 | 1800 | 1.6434          | 0.9640
1.6356        | 27.3973 | 2000 | 1.1678          | 0.8733
1.2884        | 30.1370 | 2200 | 0.9586          | 0.8061
1.129         | 32.8767 | 2400 | 0.8261          | 0.7539
0.9722        | 35.6164 | 2600 | 0.7473          | 0.7066
0.91          | 38.3562 | 2800 | 0.6862          | 0.6566
0.8078        | 41.0959 | 3000 | 0.6475          | 0.6374
0.7463        | 43.8356 | 3200 | 0.6087          | 0.6088
0.6935        | 46.5753 | 3400 | 0.5811          | 0.5921
0.6615        | 49.3151 | 3600 | 0.5571          | 0.5790
0.6561        | 52.0548 | 3800 | 0.5429          | 0.5595
0.589         | 54.7945 | 4000 | 0.5249          | 0.5487
0.5497        | 57.5342 | 4200 | 0.5140          | 0.5364
0.5325        | 60.2740 | 4400 | 0.4973          | 0.5314
0.518         | 63.0137 | 4600 | 0.4871          | 0.5209
0.5026        | 65.7534 | 4800 | 0.4842          | 0.5205
0.4896        | 68.4932 | 5000 | 0.4770          | 0.5111
0.4445        | 71.2329 | 5200 | 0.4726          | 0.5028
0.4378        | 73.9726 | 5400 | 0.4621          | 0.4943
0.4222        | 76.7123 | 5600 | 0.4603          | 0.4886
0.4207        | 79.4521 | 5800 | 0.4611          | 0.4849
0.4209        | 82.1918 | 6000 | 0.4613          | 0.4915
0.4128        | 84.9315 | 6200 | 0.4553          | 0.4808
0.3836        | 87.6712 | 6400 | 0.4557          | 0.4742
0.3811        | 90.4110 | 6600 | 0.4582          | 0.4777
0.3757        | 93.1507 | 6800 | 0.4545          | 0.4730
0.3627        | 95.8904 | 7000 | 0.4531          | 0.4697
0.3676        | 98.6301 | 7200 | 0.4521          | 0.4699

Framework versions

  • Transformers 4.56.1
  • Pytorch 2.8.0+cu126
  • Datasets 4.1.1
  • Tokenizers 0.22.0

Model details

  • Format: Safetensors
  • Model size: 0.3B params
  • Tensor type: F32