# SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10
This model is a LoRA fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset (presumably the GLUE SST-2 sentiment task, given the model name). It achieves the following results on the evaluation set:
- Loss: 0.1948
- Accuracy: 0.9438
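
Since the checkpoint ships as a PEFT adapter (see the framework versions below), a minimal inference sketch might look like the following. The repository id is taken from this card; the label mapping (0 = negative, 1 = positive) is an assumption typical of SST-2.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

# Load roberta-base with the LoRA adapter from this repository applied.
repo_id = "ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed10"
model = AutoPeftModelForSequenceClassification.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.eval()

inputs = tokenizer("A remarkably moving film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred)  # assumed mapping: 0 = negative, 1 = positive
```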
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
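
For context, the sketch below shows a training configuration consistent with the hyperparameters above and with the "Standard_LoRA-Q_V" naming (LoRA applied to the query and value projections). The LoRA rank, alpha, and dropout values are assumptions, as they are not reported in this card.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    target_modules=["query", "value"],  # "Q_V" in the model name suggests query/value projections
    r=8,                                # assumed rank (not reported in the card)
    lora_alpha=16,                      # assumed scaling (not reported in the card)
    lora_dropout=0.1,                   # assumed dropout (not reported in the card)
)
model = get_peft_model(base, lora_config)

# Only the values listed above (learning rate, batch sizes, seed, optimizer,
# scheduler, epochs) are taken from the card.
training_args = TrainingArguments(
    output_dir="sst2-lora",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
)
```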
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.3836 | 0.0950 | 200 | 0.2142 | 0.9186 |
| 0.2937 | 0.1900 | 400 | 0.2044 | 0.9151 |
| 0.2704 | 0.2850 | 600 | 0.2178 | 0.9163 |
| 0.2516 | 0.3800 | 800 | 0.2107 | 0.9335 |
| 0.2471 | 0.4751 | 1000 | 0.2356 | 0.9255 |
| 0.2373 | 0.5701 | 1200 | 0.2058 | 0.9232 |
| 0.2332 | 0.6651 | 1400 | 0.1986 | 0.9243 |
| 0.2282 | 0.7601 | 1600 | 0.2068 | 0.9335 |
| 0.225 | 0.8551 | 1800 | 0.2028 | 0.9266 |
| 0.2128 | 0.9501 | 2000 | 0.2077 | 0.9335 |
| 0.2254 | 1.0451 | 2200 | 0.1908 | 0.9312 |
| 0.1968 | 1.1401 | 2400 | 0.1942 | 0.9312 |
| 0.2026 | 1.2352 | 2600 | 0.2113 | 0.9346 |
| 0.194 | 1.3302 | 2800 | 0.2169 | 0.9312 |
| 0.1915 | 1.4252 | 3000 | 0.1912 | 0.9358 |
| 0.1891 | 1.5202 | 3200 | 0.2046 | 0.9358 |
| 0.1973 | 1.6152 | 3400 | 0.1945 | 0.9312 |
| 0.1865 | 1.7102 | 3600 | 0.2448 | 0.9289 |
| 0.1911 | 1.8052 | 3800 | 0.2149 | 0.9346 |
| 0.2001 | 1.9002 | 4000 | 0.1906 | 0.9335 |
| 0.1854 | 1.9952 | 4200 | 0.2196 | 0.9346 |
| 0.1818 | 2.0903 | 4400 | 0.1935 | 0.9369 |
| 0.1749 | 2.1853 | 4600 | 0.2139 | 0.9335 |
| 0.1755 | 2.2803 | 4800 | 0.2274 | 0.9358 |
| 0.1728 | 2.3753 | 5000 | 0.2105 | 0.9392 |
| 0.1709 | 2.4703 | 5200 | 0.2080 | 0.9404 |
| 0.1732 | 2.5653 | 5400 | 0.2141 | 0.9312 |
| 0.1832 | 2.6603 | 5600 | 0.2029 | 0.9381 |
| 0.1666 | 2.7553 | 5800 | 0.1969 | 0.9358 |
| 0.1594 | 2.8504 | 6000 | 0.1955 | 0.9381 |
| 0.1718 | 2.9454 | 6200 | 0.1975 | 0.9300 |
| 0.1565 | 3.0404 | 6400 | 0.2119 | 0.9300 |
| 0.1497 | 3.1354 | 6600 | 0.2099 | 0.9392 |
| 0.1642 | 3.2304 | 6800 | 0.2015 | 0.9358 |
| 0.1623 | 3.3254 | 7000 | 0.1971 | 0.9404 |
| 0.1544 | 3.4204 | 7200 | 0.1960 | 0.9415 |
| 0.1539 | 3.5154 | 7400 | 0.2116 | 0.9369 |
| 0.158 | 3.6105 | 7600 | 0.1984 | 0.9392 |
| 0.1652 | 3.7055 | 7800 | 0.1859 | 0.9415 |
| 0.153 | 3.8005 | 8000 | 0.1948 | 0.9438 |
| 0.1591 | 3.8955 | 8200 | 0.1991 | 0.9438 |
| 0.1533 | 3.9905 | 8400 | 0.2124 | 0.9404 |
| 0.1482 | 4.0855 | 8600 | 0.2123 | 0.9415 |
| 0.1468 | 4.1805 | 8800 | 0.2126 | 0.9415 |
| 0.1467 | 4.2755 | 9000 | 0.2129 | 0.9392 |
| 0.1448 | 4.3705 | 9200 | 0.2095 | 0.9438 |
| 0.142 | 4.4656 | 9400 | 0.2119 | 0.9381 |
| 0.1361 | 4.5606 | 9600 | 0.2172 | 0.9427 |
| 0.1491 | 4.6556 | 9800 | 0.2070 | 0.9427 |
| 0.1413 | 4.7506 | 10000 | 0.2060 | 0.9415 |
| 0.1575 | 4.8456 | 10200 | 0.2056 | 0.9438 |
| 0.1521 | 4.9406 | 10400 | 0.2066 | 0.9427 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
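
A quick way to check that a local environment matches these pins is to print the installed versions (a minimal sketch; each of these packages exposes `__version__`):

```python
# Print installed versions to compare against the pins listed above.
import peft, transformers, torch, datasets, tokenizers

for name, module in [("PEFT", peft), ("Transformers", transformers), ("PyTorch", torch),
                     ("Datasets", datasets), ("Tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```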
### Base model
- FacebookAI/roberta-base