# Version3ASAP_FineTuningBERT_AugV12_k10_task1_organization_k10_k10_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set (a short inference sketch follows the list):
- Loss: 0.9770
- Qwk: 0.5597
- Mse: 0.9758
- Rmse: 0.9878
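
The following is a minimal inference sketch, not a confirmed usage example: it assumes the model exposes a single regression-style output (`num_labels=1`, consistent with the MSE/RMSE metrics above), the repo id is taken from this card's title, and the input text is illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "genki10/Version3ASAP_FineTuningBERT_AugV12_k10_task1_organization_k10_k10_fold3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Score a single (illustrative) essay; truncation guards against inputs
# longer than BERT's 512-token limit.
inputs = tokenizer(
    "An example essay to score.",
    return_tensors="pt",
    truncation=True,
    max_length=512,
)
with torch.no_grad():
    # Assumes a single regression logit; adjust if the head is categorical.
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted score: {score:.3f}")
```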
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
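
As a hedged sketch, these values map onto `transformers.TrainingArguments` roughly as follows; the dataset, model, and `Trainer` wiring are omitted, and `output_dir` is an illustrative name, not the one actually used:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-asap-fold3",      # illustrative; not the original path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",               # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",             # assumption: matches the per-epoch rows in the results table
)
```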
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|---|---|---|---|---|---|---|
| No log | 1.0 | 3 | 7.0239 | 0.0 | 7.0223 | 2.6500 |
| No log | 2.0 | 6 | 4.5327 | 0.0076 | 4.5315 | 2.1287 |
| No log | 3.0 | 9 | 3.3000 | 0.0 | 3.2991 | 1.8163 |
| No log | 4.0 | 12 | 2.4226 | 0.0543 | 2.4218 | 1.5562 |
| No log | 5.0 | 15 | 1.6966 | 0.0202 | 1.6959 | 1.3023 |
| No log | 6.0 | 18 | 1.3410 | 0.0 | 1.3404 | 1.1578 |
| No log | 7.0 | 21 | 0.9905 | 0.0209 | 0.9900 | 0.9950 |
| No log | 8.0 | 24 | 0.8677 | 0.0852 | 0.8673 | 0.9313 |
| No log | 9.0 | 27 | 0.9047 | 0.0314 | 0.9044 | 0.9510 |
| No log | 10.0 | 30 | 0.9799 | 0.0414 | 0.9797 | 0.9898 |
| No log | 11.0 | 33 | 1.2914 | 0.0235 | 1.2913 | 1.1364 |
| No log | 12.0 | 36 | 1.5902 | 0.1282 | 1.5902 | 1.2610 |
| No log | 13.0 | 39 | 1.2143 | 0.2541 | 1.2142 | 1.1019 |
| No log | 14.0 | 42 | 1.8820 | 0.1786 | 1.8819 | 1.3718 |
| No log | 15.0 | 45 | 1.4104 | 0.3576 | 1.4102 | 1.1875 |
| No log | 16.0 | 48 | 0.7954 | 0.5388 | 0.7951 | 0.8917 |
| No log | 17.0 | 51 | 1.1935 | 0.5304 | 1.1925 | 1.0920 |
| No log | 18.0 | 54 | 0.9172 | 0.5904 | 0.9163 | 0.9572 |
| No log | 19.0 | 57 | 0.6818 | 0.5687 | 0.6812 | 0.8254 |
| No log | 20.0 | 60 | 1.1476 | 0.4754 | 1.1468 | 1.0709 |
| No log | 21.0 | 63 | 1.2699 | 0.4705 | 1.2691 | 1.1265 |
| No log | 22.0 | 66 | 0.6683 | 0.6098 | 0.6676 | 0.8171 |
| No log | 23.0 | 69 | 0.8433 | 0.6027 | 0.8424 | 0.9178 |
| No log | 24.0 | 72 | 0.8357 | 0.6110 | 0.8348 | 0.9137 |
| No log | 25.0 | 75 | 0.6726 | 0.6489 | 0.6719 | 0.8197 |
| No log | 26.0 | 78 | 1.3829 | 0.4613 | 1.3819 | 1.1755 |
| No log | 27.0 | 81 | 1.3860 | 0.4639 | 1.3849 | 1.1768 |
| No log | 28.0 | 84 | 0.7712 | 0.6222 | 0.7703 | 0.8777 |
| No log | 29.0 | 87 | 0.9150 | 0.5990 | 0.9140 | 0.9560 |
| No log | 30.0 | 90 | 0.8455 | 0.6044 | 0.8446 | 0.9190 |
| No log | 31.0 | 93 | 1.0813 | 0.5216 | 1.0805 | 1.0395 |
| No log | 32.0 | 96 | 0.7851 | 0.5846 | 0.7844 | 0.8856 |
| No log | 33.0 | 99 | 0.6808 | 0.5846 | 0.6800 | 0.8246 |
| No log | 34.0 | 102 | 0.7450 | 0.6033 | 0.7442 | 0.8627 |
| No log | 35.0 | 105 | 0.7109 | 0.6092 | 0.7101 | 0.8427 |
| No log | 36.0 | 108 | 0.9390 | 0.5465 | 0.9380 | 0.9685 |
| No log | 37.0 | 111 | 0.9528 | 0.5586 | 0.9517 | 0.9756 |
| No log | 38.0 | 114 | 0.7407 | 0.6232 | 0.7399 | 0.8602 |
| No log | 39.0 | 117 | 0.8105 | 0.5959 | 0.8097 | 0.8998 |
| No log | 40.0 | 120 | 1.2730 | 0.4817 | 1.2718 | 1.1277 |
| No log | 41.0 | 123 | 1.3054 | 0.4707 | 1.3041 | 1.1420 |
| No log | 42.0 | 126 | 0.8738 | 0.5780 | 0.8727 | 0.9342 |
| No log | 43.0 | 129 | 0.6802 | 0.5863 | 0.6794 | 0.8242 |
| No log | 44.0 | 132 | 0.7616 | 0.5976 | 0.7606 | 0.8721 |
| No log | 45.0 | 135 | 0.9770 | 0.5597 | 0.9758 | 0.9878 |
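
The card does not show how Qwk (quadratic weighted kappa) is computed; a common recipe for essay scoring uses scikit-learn's `cohen_kappa_score`, rounding continuous predictions to integer labels first. This is an assumption, since the actual evaluation code is not included, and the arrays below are illustrative:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Illustrative gold scores and model outputs, not real evaluation data.
y_true = np.array([2, 3, 1, 4, 3])
y_pred = np.array([2.2, 2.8, 1.4, 3.9, 2.6])

# Quadratic weighted kappa expects discrete labels, so round predictions.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"Qwk: {qwk:.4f}  Rmse: {rmse:.4f}")
```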
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0