MLM
This model is a fine-tuned version of google-bert/bert-base-uncased for masked language modeling (MLM) on the IMDB dataset. It achieves the following results on the evaluation set:
• Eval loss: 4.18
• Perplexity (PPL): 50.58
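For masked language models, perplexity is conventionally reported as the exponential of the mean cross-entropy eval loss (in nats). As a sanity check, exp(4.18) is about 65.4 rather than 50.58, so the two figures above may come from different evaluation passes. A minimal sketch of the relationship:

```python
import math

def perplexity(eval_loss: float) -> float:
    """Perplexity is exp of the mean cross-entropy loss (in nats)."""
    return math.exp(eval_loss)

# e.g. an eval loss of 2.0 corresponds to a perplexity of about 7.39
print(round(perplexity(2.0), 2))
```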
Training hyperparameters
The following hyperparameters were used during training:
• learning_rate: 3e-5
• train_batch_size: 4
• gradient_accumulation_steps: 2
• seed: 42
• weight_decay: 0.01
• lr_scheduler_type: linear
• warmup_ratio: 0.1
• num_epochs: 2
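The hyperparameters above can be sketched as keyword arguments in the style of the Hugging Face `TrainingArguments` API. This is a hypothetical reconstruction, since the model card does not include the actual training script; the argument names are assumed from the Trainer conventions.

```python
# Hypothetical sketch: the listed hyperparameters expressed as
# transformers.TrainingArguments-style keyword arguments.
training_kwargs = dict(
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    seed=42,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
)

# Effective batch size = per-device batch size x accumulation steps.
effective_batch = (training_kwargs["per_device_train_batch_size"]
                   * training_kwargs["gradient_accumulation_steps"])
print(effective_batch)  # 8
```

With gradient accumulation, each optimizer step sees 4 × 2 = 8 examples even though only 4 fit on the device at once.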
Model tree for Keyurjotaniya007/bert-base-cased-imdb-mlm
Base model
google-bert/bert-base-uncased