# arthd24/pegasus_informative_xtreme_tuned_tpuv4-16
This model is a fine-tuned version of thonyyy/pegasus_indonesian_base-finetune on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.0464
- Validation Loss: 1.6654
- Epoch: 16
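The card does not include a usage example; the snippet below is a minimal inference sketch, assuming the repository ships TensorFlow PEGASUS weights and a compatible tokenizer and that the model is used for abstractive summarization of Indonesian text. The generation settings are illustrative, not the author's.

```python
# Minimal inference sketch (assumption: TF weights + tokenizer are available
# under this repo id; generation settings are illustrative).
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "arthd24/pegasus_informative_xtreme_tuned_tpuv4-16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # replace with the Indonesian document to summarize
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```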
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction sketch follows this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 4.35288e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.09475626145662722}
- training_precision: float32
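As a rough reconstruction of the configuration above (not the author's training script), the optimizer could be instantiated with the transformers TensorFlow utilities as sketched below; any learning-rate schedule used during training is not documented here, so a constant rate is shown.

```python
# Sketch: rebuild the AdamWeightDecay optimizer from the values listed above.
# Assumes a TensorFlow environment where transformers' Keras optimizer
# utilities import cleanly; not the author's actual training script.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=4.35288e-05,
    weight_decay_rate=0.09475626145662722,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```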
### Training results
| Train Loss | Validation Loss | Epoch |
|---|---|---|
| 1.5945 | 1.8309 | 0 |
| 1.4635 | 1.7809 | 1 |
| 1.4002 | 1.7559 | 2 |
| 1.3529 | 1.7335 | 3 |
| 1.3137 | 1.7176 | 4 |
| 1.2783 | 1.7031 | 5 |
| 1.2453 | 1.6915 | 6 |
| 1.2169 | 1.6844 | 7 |
| 1.1921 | 1.6749 | 8 |
| 1.1704 | 1.6652 | 9 |
| 1.1508 | 1.6620 | 10 |
| 1.1308 | 1.6589 | 11 |
| 1.1130 | 1.6575 | 12 |
| 1.0957 | 1.6598 | 13 |
| 1.0784 | 1.6592 | 14 |
| 1.0621 | 1.6599 | 15 |
| 1.0464 | 1.6654 | 16 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.16.1
- Datasets 3.5.0
- Tokenizers 0.21.1