efc2f0f3ad38fba941d2cf7e3b89b04d

This model is a fine-tuned version of facebook/mbart-large-50 on the Helsinki-NLP/opus_books [fr-sv] dataset. It achieves the following results on the evaluation set:

  • Loss: 3.5978
  • Data Size: 1.0
  • Epoch Runtime: 25.0390
  • Bleu: 4.9265
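As a usage sketch only: the snippet below shows the standard mBART-50 translation pattern for a fr→sv checkpoint. The repo id is taken from this card's title, and the language codes `fr_XX`/`sv_SE` are the standard mBART-50 codes, assumed (not stated) to be the ones used here.

```python
from transformers import AutoModelForSeq2SeqLM, MBart50TokenizerFast

# Checkpoint id from this model card's title.
checkpoint = "contemmcm/efc2f0f3ad38fba941d2cf7e3b89b04d"

tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint, src_lang="fr_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
# mBART-50 requires forcing the target-language token as the first generated token.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["sv_SE"],
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```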

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 32
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant
  • num_epochs: 50
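The total batch sizes above follow directly from the per-device sizes and the device count:

```python
# Per-device batch sizes and GPU count from the hyperparameter list above.
train_batch_size = 8
eval_batch_size = 8
num_devices = 4

total_train_batch_size = train_batch_size * num_devices
total_eval_batch_size = eval_batch_size * num_devices
print(total_train_batch_size, total_eval_batch_size)  # → 32 32
```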

Training results

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Bleu   |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------------:|:------:|
| No log        | 0     | 0    | 8.6600          | 0         | 2.2778        | 0.0794 |
| No log        | 1     | 75   | 6.5269          | 0.0078    | 2.8663        | 0.7332 |
| No log        | 2     | 150  | 5.3531          | 0.0156    | 4.6225        | 1.7839 |
| No log        | 3     | 225  | 5.0769          | 0.0312    | 5.8577        | 2.4407 |
| No log        | 4     | 300  | 4.8408          | 0.0625    | 6.9479        | 2.8221 |
| No log        | 5     | 375  | 4.5226          | 0.125     | 9.9208        | 3.0897 |
| No log        | 6     | 450  | 3.8524          | 0.25      | 11.6724       | 3.3505 |
| No log        | 7     | 525  | 3.3828          | 0.5       | 14.8770       | 2.4880 |
| 2.5376        | 8.0   | 600  | 2.6955          | 1.0       | 24.8158       | 4.6150 |
| 1.9259        | 9.0   | 675  | 2.6926          | 1.0       | 24.3749       | 4.4653 |
| 1.4294        | 10.0  | 750  | 2.8201          | 1.0       | 23.1688       | 4.3863 |
| 0.9685        | 11.0  | 825  | 3.0722          | 1.0       | 22.9417       | 4.7071 |
| 0.6825        | 12.0  | 900  | 3.3397          | 1.0       | 24.1269       | 4.4333 |
| 0.6853        | 13.0  | 975  | 3.5978          | 1.0       | 25.0390       | 4.9265 |

Framework versions

  • Transformers 4.57.0
  • Pytorch 2.8.0+cu128
  • Datasets 4.2.0
  • Tokenizers 0.22.1