Update README.md
README.md CHANGED

@@ -43,7 +43,7 @@ This is a Pretrain Language Model using XLM-RoBERTa Architecture for Khmer & English

 - **Total Optimization Steps**: 14,509,200
 - **Learning Rate**: ~2e-5 (with scheduler)
 - **Hardware**: Training on single server with 4GPUs
-- **Training time**: I trained this model for
+- **Training time**: I trained this model for 12 Days

 ## Training Metrics
 - **Final Training Loss**: 2.3435
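The README lists a peak learning rate of "~2e-5 (with scheduler)" over 14,509,200 optimization steps but does not say which schedule was used. Below is a minimal sketch of the linear warmup-then-decay schedule commonly used for XLM-RoBERTa-style pretraining; the warmup step count is hypothetical, and only the peak rate and total steps come from the README.

```python
# Sketch of a linear warmup + linear decay LR schedule.
# PEAK_LR and TOTAL_STEPS are from the README; WARMUP_STEPS and the
# choice of a linear schedule are assumptions, not stated in the commit.

PEAK_LR = 2e-5
TOTAL_STEPS = 14_509_200
WARMUP_STEPS = 10_000  # hypothetical; not given in the README


def lr_at(step: int) -> float:
    """Learning rate at a given optimization step."""
    if step < WARMUP_STEPS:
        # ramp linearly from 0 up to the peak rate
        return PEAK_LR * step / WARMUP_STEPS
    # then decay linearly from the peak back down to 0
    remaining = TOTAL_STEPS - step
    return max(0.0, PEAK_LR * remaining / (TOTAL_STEPS - WARMUP_STEPS))
```

With a real training stack this would typically be handled by the framework's built-in scheduler (e.g. a linear schedule with warmup) rather than hand-rolled, but the shape of the curve is the same.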