CodeLlama-7b-Instruct-hf_En__components_size_252_epochs_10_2024-06-21_16-51-46_3556559
This model is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.9251
- Accuracy: 0.496
- Chrf: 0.315
- Bleu: 0.248
- Sacrebleu: 0.2
- Rouge1: 0.458
- Rouge2: 0.26
- Rougel: 0.435
- Rougelsum: 0.456
- Meteor: 0.521
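A minimal inference sketch, assuming the checkpoint is published on the Hub under vdavidr/CodeLlama-7b-Instruct-hf_En__components_size_252_epochs_10_2024-06-21_16-51-46_3556559 and follows the standard CodeLlama-Instruct `[INST] ... [/INST]` prompt format; the prompt text is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vdavidr/CodeLlama-7b-Instruct-hf_En__components_size_252_epochs_10_2024-06-21_16-51-46_3556559"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```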
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 252
- training_steps: 2520
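A sketch of how these hyperparameters could map onto a transformers TrainingArguments object (argument names per Transformers 4.37; output_dir and the launch command are assumptions, not taken from the original setup):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above.
# The 4-GPU distributed launch would be handled externally, e.g.:
#   torchrun --nproc_per_node=4 train.py
training_args = TrainingArguments(
    output_dir="out",               # placeholder path
    learning_rate=1e-3,
    per_device_train_batch_size=1,  # total train batch size 4 across 4 GPUs
    per_device_eval_batch_size=1,   # total eval batch size 4 across 4 GPUs
    seed=3407,
    lr_scheduler_type="linear",
    warmup_steps=252,
    max_steps=2520,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
)
```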
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0421 | 4.0 | 252 | 3.2663 | 0.495 | 0.084 | 0.035 | 0.0 | 0.203 | 0.043 | 0.18 | 0.192 | 0.201 |
| 0.0562 | 8.0 | 504 | 2.4211 | 0.495 | 0.177 | 0.133 | 0.1 | 0.352 | 0.155 | 0.332 | 0.352 | 0.457 |
| 0.08 | 12.0 | 756 | 2.7082 | 0.494 | 0.184 | 0.08 | 0.1 | 0.242 | 0.082 | 0.23 | 0.241 | 0.33 |
| 0.9994 | 16.0 | 1008 | 2.4576 | 0.496 | 0.216 | 0.116 | 0.1 | 0.4 | 0.217 | 0.381 | 0.399 | 0.398 |
| 0.1476 | 20.0 | 1260 | 2.7555 | 0.497 | 0.159 | 0.051 | 0.1 | 0.274 | 0.064 | 0.259 | 0.262 | 0.196 |
| 0.3371 | 24.0 | 1512 | 2.2083 | 0.491 | 0.196 | 0.128 | 0.1 | 0.422 | 0.217 | 0.396 | 0.414 | 0.384 |
| 0.0187 | 28.0 | 1764 | 2.0562 | 0.476 | 0.268 | 0.19 | 0.2 | 0.435 | 0.238 | 0.413 | 0.431 | 0.462 |
| 0.1243 | 32.0 | 2016 | 2.0119 | 0.497 | 0.291 | 0.238 | 0.2 | 0.454 | 0.248 | 0.424 | 0.452 | 0.504 |
| 0.0295 | 36.0 | 2268 | 1.9499 | 0.479 | 0.309 | 0.244 | 0.2 | 0.443 | 0.253 | 0.43 | 0.441 | 0.527 |
| 0.027 | 40.0 | 2520 | 1.9251 | 0.496 | 0.315 | 0.248 | 0.2 | 0.458 | 0.26 | 0.435 | 0.456 | 0.521 |
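For reference, a sketch of how such metrics are commonly computed with the Hugging Face evaluate library; the predictions and references below are illustrative, and whether this matches the exact evaluation pipeline used here is an assumption:

```python
import evaluate

# Illustrative prediction/reference pair (not from the actual eval set).
predictions = ["def add(a, b):\n    return a + b"]
references = [["def add(a, b):\n    return a + b"]]  # list of reference lists

bleu = evaluate.load("bleu").compute(predictions=predictions, references=references)
chrf = evaluate.load("chrf").compute(predictions=predictions, references=references)
sacrebleu = evaluate.load("sacrebleu").compute(predictions=predictions, references=references)
# rouge and meteor take one reference string per prediction
flat_refs = [r[0] for r in references]
rouge = evaluate.load("rouge").compute(predictions=predictions, references=flat_refs)
meteor = evaluate.load("meteor").compute(predictions=predictions, references=flat_refs)

# Note: chrf and sacrebleu report scores on a 0-100 scale; the table above
# appears to use a 0-1 scale, so the values may have been normalized.
print(bleu["bleu"], chrf["score"], sacrebleu["score"], rouge["rouge1"], meteor["meteor"])
```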
Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2