Version_weird_ASAP_FineTuningBERT_AugV12_k15_task1_organization_k15_k15_fold2

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6945
  • Qwk: 0.5566
  • Mse: 0.6937
  • Rmse: 0.8329
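Qwk above is the quadratic weighted kappa, a standard agreement metric for ordinal scoring tasks such as ASAP essay grading. The metrics can be reproduced from model predictions with a short NumPy sketch; the 4-class label range and the example arrays below are illustrative assumptions, not values from the actual evaluation set:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic disagreement weights (the Qwk metric)."""
    # observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # quadratic weights: penalty grows with squared distance between labels
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # expected confusion matrix under chance agreement, same total count
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

# hypothetical labels for illustration only
y_true = [0, 1, 2, 2, 3]
y_pred = [0, 1, 1, 2, 3]

qwk = quadratic_weighted_kappa(y_true, y_pred, n_classes=4)
mse = np.mean((np.array(y_true) - np.array(y_pred)) ** 2)
rmse = np.sqrt(mse)
```

Perfect agreement yields a kappa of 1.0, chance-level agreement yields 0.0, so the reported 0.5566 indicates moderate agreement between predicted and gold scores.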

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
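If training was driven by the Hugging Face `Trainer`, the hyperparameters above would map onto a `TrainingArguments` configuration roughly like the sketch below. This is a hedged reconstruction, not the actual training script; the output directory name is hypothetical.

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; "./output" is a
# hypothetical directory, not taken from the original run.
training_args = TrainingArguments(
    output_dir="./output",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```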

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 1.0   | 3    | 7.7993          | 0.0     | 7.7994 | 2.7927 |
| No log        | 2.0   | 6    | 6.0672          | -0.0105 | 6.0675 | 2.4632 |
| No log        | 3.0   | 9    | 4.7294          | 0.0039  | 4.7298 | 2.1748 |
| No log        | 4.0   | 12   | 3.6024          | 0.0     | 3.6028 | 1.8981 |
| No log        | 5.0   | 15   | 2.9609          | 0.0     | 2.9613 | 1.7209 |
| No log        | 6.0   | 18   | 1.8841          | 0.0372  | 1.8845 | 1.3728 |
| No log        | 7.0   | 21   | 1.3707          | 0.0213  | 1.3713 | 1.1710 |
| No log        | 8.0   | 24   | 1.0405          | 0.0     | 1.0410 | 1.0203 |
| No log        | 9.0   | 27   | 0.8036          | 0.1935  | 0.8041 | 0.8967 |
| No log        | 10.0  | 30   | 0.8876          | 0.0430  | 0.8883 | 0.9425 |
| No log        | 11.0  | 33   | 0.9652          | 0.0430  | 0.9655 | 0.9826 |
| No log        | 12.0  | 36   | 1.2576          | 0.1787  | 1.2576 | 1.1214 |
| No log        | 13.0  | 39   | 1.1862          | 0.2748  | 1.1856 | 1.0888 |
| No log        | 14.0  | 42   | 0.7449          | 0.5012  | 0.7436 | 0.8623 |
| No log        | 15.0  | 45   | 1.0230          | 0.4927  | 1.0215 | 1.0107 |
| No log        | 16.0  | 48   | 1.1415          | 0.5073  | 1.1397 | 1.0676 |
| No log        | 17.0  | 51   | 1.3904          | 0.4152  | 1.3892 | 1.1786 |
| No log        | 18.0  | 54   | 0.8644          | 0.4930  | 0.8634 | 0.9292 |
| No log        | 19.0  | 57   | 0.7616          | 0.5034  | 0.7610 | 0.8724 |
| No log        | 20.0  | 60   | 0.7166          | 0.5264  | 0.7161 | 0.8462 |
| No log        | 21.0  | 63   | 0.7784          | 0.5723  | 0.7776 | 0.8818 |
| No log        | 22.0  | 66   | 0.6645          | 0.5860  | 0.6638 | 0.8147 |
| No log        | 23.0  | 69   | 1.0177          | 0.5035  | 1.0165 | 1.0082 |
| No log        | 24.0  | 72   | 1.0645          | 0.5064  | 1.0638 | 1.0314 |
| No log        | 25.0  | 75   | 0.8246          | 0.5639  | 0.8238 | 0.9076 |
| No log        | 26.0  | 78   | 0.6766          | 0.5803  | 0.6761 | 0.8223 |
| No log        | 27.0  | 81   | 0.9101          | 0.4838  | 0.9093 | 0.9536 |
| No log        | 28.0  | 84   | 0.8182          | 0.5247  | 0.8178 | 0.9043 |
| No log        | 29.0  | 87   | 0.9458          | 0.5206  | 0.9448 | 0.9720 |
| No log        | 30.0  | 90   | 0.8581          | 0.5327  | 0.8571 | 0.9258 |
| No log        | 31.0  | 93   | 0.8886          | 0.5316  | 0.8875 | 0.9421 |
| No log        | 32.0  | 96   | 0.7417          | 0.5457  | 0.7409 | 0.8608 |
| No log        | 33.0  | 99   | 0.6724          | 0.5385  | 0.6718 | 0.8196 |
| No log        | 34.0  | 102  | 0.6776          | 0.5511  | 0.6771 | 0.8229 |
| No log        | 35.0  | 105  | 0.7152          | 0.5464  | 0.7145 | 0.8453 |
| No log        | 36.0  | 108  | 0.6471          | 0.5677  | 0.6467 | 0.8042 |
| No log        | 37.0  | 111  | 0.6907          | 0.5328  | 0.6900 | 0.8307 |
| No log        | 38.0  | 114  | 0.8380          | 0.5105  | 0.8371 | 0.9149 |
| No log        | 39.0  | 117  | 0.7164          | 0.5775  | 0.7159 | 0.8461 |
| No log        | 40.0  | 120  | 0.7338          | 0.5472  | 0.7329 | 0.8561 |
| No log        | 41.0  | 123  | 0.7218          | 0.5643  | 0.7209 | 0.8491 |
| No log        | 42.0  | 126  | 0.6945          | 0.5566  | 0.6937 | 0.8329 |
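As a quick sanity check on the table, the Rmse column is the square root of the Mse column; for the final (epoch 42) checkpoint:

```python
import math

# Rmse reported in the final row is the square root of the reported Mse.
mse = 0.6937
rmse = math.sqrt(mse)
print(round(rmse, 4))  # 0.8329
```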

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0

Model tree for genki10/Version_weird_ASAP_FineTuningBERT_AugV12_k15_task1_organization_k15_k15_fold2

Fine-tuned from bert-base-uncased.