# CoLA-Fisher-GLoRA-p50-seed30
This model is a fine-tuned version of roberta-base on an unknown dataset; the model name and the use of Matthews correlation suggest the GLUE CoLA (Corpus of Linguistic Acceptability) task, but this is not confirmed by the card. It achieves the following results on the evaluation set (matching the step-750 checkpoint in the training results below):
- Loss: 0.4243
- Matthews Correlation: 0.5797
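
The sketch below shows one way the adapter could be loaded on top of roberta-base for inference. It is a hedged example, not the author's script: it assumes the repository stores a standard PEFT adapter for a two-label sequence-classification head (unacceptable / acceptable), which the card does not state.

```python
# Minimal loading/inference sketch. Assumes a standard PEFT adapter over a
# two-label roberta-base classification head; adjust if the saved adapter
# or classification head differs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/roberta-base", num_labels=2
)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-Fisher-GLoRA-p50-seed30").eval()
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # in CoLA-style label maps, 1 usually means "acceptable"
```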
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
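
For reference, this is a hedged sketch of a `TrainingArguments` configuration matching the hyperparameters above. The actual training script, dataset preprocessing, and GLoRA adapter setup are not documented in this card; the 50-step evaluation/logging interval is inferred from the results table.

```python
# Hedged TrainingArguments sketch matching the listed hyperparameters; not the
# author's actual configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CoLA-Fisher-GLoRA-p50-seed30",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",           # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="steps",         # results table logs every 50 steps
    eval_steps=50,
    logging_steps=50,
)
```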
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|---|---|---|---|---|
| 0.6295 | 0.1866 | 50 | 0.6086 | 0.0 |
| 0.5795 | 0.3731 | 100 | 0.5476 | 0.2558 |
| 0.4842 | 0.5597 | 150 | 0.4588 | 0.4708 |
| 0.4652 | 0.7463 | 200 | 0.4810 | 0.4524 |
| 0.4512 | 0.9328 | 250 | 0.5073 | 0.4704 |
| 0.4314 | 1.1194 | 300 | 0.4661 | 0.4967 |
| 0.4177 | 1.3060 | 350 | 0.4602 | 0.5109 |
| 0.4389 | 1.4925 | 400 | 0.4677 | 0.4719 |
| 0.4367 | 1.6791 | 450 | 0.4342 | 0.5366 |
| 0.4031 | 1.8657 | 500 | 0.4769 | 0.5135 |
| 0.4039 | 2.0522 | 550 | 0.4409 | 0.5458 |
| 0.3734 | 2.2388 | 600 | 0.4447 | 0.5478 |
| 0.3692 | 2.4254 | 650 | 0.4506 | 0.5395 |
| 0.3865 | 2.6119 | 700 | 0.4322 | 0.5582 |
| 0.3499 | 2.7985 | 750 | 0.4243 | 0.5797 |
| 0.3609 | 2.9851 | 800 | 0.4507 | 0.5701 |
| 0.3590 | 3.1716 | 850 | 0.4179 | 0.5725 |
| 0.3359 | 3.3582 | 900 | 0.4540 | 0.5452 |
| 0.3471 | 3.5448 | 950 | 0.5040 | 0.5339 |
| 0.3478 | 3.7313 | 1000 | 0.4622 | 0.5443 |
| 0.3474 | 3.9179 | 1050 | 0.4322 | 0.5580 |
| 0.3559 | 4.1045 | 1100 | 0.4496 | 0.5523 |
| 0.3146 | 4.2910 | 1150 | 0.4501 | 0.5549 |
| 0.3271 | 4.4776 | 1200 | 0.4527 | 0.5603 |
| 0.3083 | 4.6642 | 1250 | 0.4557 | 0.5606 |
| 0.3384 | 4.8507 | 1300 | 0.4639 | 0.5626 |
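
If the evaluation set is indeed the GLUE CoLA validation split (an assumption; the card lists the dataset as unknown), the best checkpoint's Matthews correlation could be recomputed roughly as in the sketch below.

```python
# Hedged sketch for recomputing Matthews correlation on the GLUE CoLA
# validation split; the dataset choice is an assumption based on the model name.
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
base = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/roberta-base", num_labels=2
)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-Fisher-GLoRA-p50-seed30").eval()

cola_val = load_dataset("nyu-mll/glue", "cola", split="validation")
mcc = evaluate.load("matthews_correlation")

predictions = []
for start in range(0, len(cola_val), 32):
    batch = cola_val[start:start + 32]  # slicing a Dataset yields a dict of columns
    inputs = tokenizer(batch["sentence"], padding=True, return_tensors="pt")
    with torch.no_grad():
        predictions.extend(model(**inputs).logits.argmax(dim=-1).tolist())

print(mcc.compute(predictions=predictions, references=cola_val["label"]))
```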
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
### Base model
- FacebookAI/roberta-base