# deepseek-coder-6.7b-instruct_En__CMP_TR_size_304_epochs_10_2024-06-23_06-24-19_3558634
This model is a fine-tuned version of deepseek-ai/deepseek-coder-6.7b-instruct on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such metrics can be computed follows the list):
- Loss: 3.6398
- Accuracy: 0.529
- Chrf: 0.029
- Bleu: 0.0
- Sacrebleu: 0.0
- Rouge1: 0.03
- Rouge2: 0.014
- Rougel: 0.03
- Rougelsum: 0.03
- Meteor: 0.113
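
A minimal sketch of how scores like these can be reproduced with the Hugging Face `evaluate` library. The prediction/reference pairs below are hypothetical placeholders, since the card does not specify the evaluation data:

```python
# Hedged sketch: computing the card's metrics with `evaluate`.
import evaluate

predictions = ["def add(a, b):\n    return a + b"]  # placeholder model output
references = ["def add(x, y):\n    return x + y"]   # placeholder ground truth

chrf = evaluate.load("chrf")
sacrebleu = evaluate.load("sacrebleu")
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# chrF, SacreBLEU, and BLEU expect a list of reference lists per prediction.
print(chrf.compute(predictions=predictions, references=[[r] for r in references]))
print(sacrebleu.compute(predictions=predictions, references=[[r] for r in references]))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))

# ROUGE and METEOR take flat reference lists and return per-metric scores.
print(rouge.compute(predictions=predictions, references=references))
print(meteor.compute(predictions=predictions, references=references))
```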
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent setup is sketched after the list):
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 304
- training_steps: 3040
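
A minimal sketch of a `Trainer` configuration matching these hyperparameters. The adapter settings are an assumption (the card confirms PEFT 0.7.1 was used but not the adapter type), and the dataset is left as a placeholder since it is not documented here:

```python
# Hedged sketch: reconstructing the listed training arguments.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

base = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Assumed adapter config; the card only records that PEFT was used.
model = get_peft_model(model, LoraConfig(task_type=TaskType.CAUSAL_LM))

args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-3,              # learning_rate: 0.001
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    seed=3407,
    lr_scheduler_type="linear",
    warmup_steps=304,                # lr_scheduler_warmup_steps: 304
    max_steps=3040,                  # training_steps: 3040
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
)

# The training/evaluation datasets are not documented in this card:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```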
 
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3.4459 | 1.0 | 304 | 2.0249 | 0.49 | 0.496 | 0.295 | 0.3 | 0.412 | 0.219 | 0.368 | 0.403 | 0.427 | 
| 0.0499 | 2.0 | 608 | 3.4209 | 0.537 | 0.035 | 0.0 | 0.0 | 0.083 | 0.058 | 0.083 | 0.083 | 0.148 | 
| 0.0696 | 3.0 | 912 | 3.7781 | 0.501 | 0.016 | 0.0 | 0.0 | 0.004 | 0.0 | 0.004 | 0.004 | 0.069 | 
| 0.0257 | 4.0 | 1216 | 3.9175 | 0.513 | 0.015 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.02 | 
| 0.9308 | 5.0 | 1520 | 4.0826 | 0.51 | 0.017 | 0.0 | 0.0 | 0.002 | 0.0 | 0.002 | 0.002 | 0.041 | 
| 0.0585 | 6.0 | 1824 | 3.8462 | 0.506 | 0.017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.069 | 
| 0.0503 | 7.0 | 2128 | 4.0015 | 0.474 | 0.016 | 0.0 | 0.0 | 0.002 | 0.0 | 0.002 | 0.002 | 0.085 | 
| 0.0852 | 8.0 | 2432 | 3.6237 | 0.513 | 0.024 | 0.0 | 0.0 | 0.063 | 0.035 | 0.063 | 0.063 | 0.039 | 
| 0.026 | 9.0 | 2736 | 3.6328 | 0.52 | 0.026 | 0.0 | 0.0 | 0.032 | 0.017 | 0.032 | 0.032 | 0.087 | 
| 0.0563 | 10.0 | 3040 | 3.6398 | 0.529 | 0.029 | 0.0 | 0.0 | 0.03 | 0.014 | 0.03 | 0.03 | 0.113 | 
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
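
A minimal sketch of loading this fine-tune for inference, assuming the adapter weights are published under the repo id in this card's title (`vdavidr/...`):

```python
# Hedged sketch: attach the PEFT adapter to the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "deepseek-ai/deepseek-coder-6.7b-instruct"
adapter = ("vdavidr/deepseek-coder-6.7b-instruct_En__CMP_TR_"
           "size_304_epochs_10_2024-06-23_06-24-19_3558634")

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, adapter)  # load fine-tuned adapter
model.eval()
```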
 