# Llama-3.1-8B
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-3, GaetanMichelet/chat-120_ft_task-3, and GaetanMichelet/chat-180_ft_task-3 datasets. Its validation-loss results over the course of training are reported in the Training results table below.
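Because the model is fine-tuned from an instruction-tuned chat checkpoint on chat-formatted data, it is meant to be prompted through the chat template. The sketch below shows one way to load and query such a fine-tune with `transformers`; the repository id is a placeholder (the actual repo name for this fine-tune is not given here), and if the fine-tune is distributed as a PEFT/LoRA adapter rather than merged weights, it would need to be loaded with `peft` on top of the base model instead.

```python
# Minimal usage sketch. "your-username/llama-3.1-8b-chat-task-3" is a placeholder
# repository id, not the real repo of this fine-tune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/llama-3.1-8b-chat-task-3"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The training data is chat-formatted, so apply the chat template before generating.
messages = [{"role": "user", "content": "Hello, what can you help me with?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```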
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
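The specific values are not listed here. Purely as an illustration of the kind of configuration such a LoRA chat fine-tune typically involves, the following is a hedged sketch using `peft` and `transformers`; every value is an assumption chosen for the example (only the roughly 17-epoch duration is suggested by the results table), not the configuration actually used for this model.

```python
# Illustrative only: all values below are assumptions, not this model's settings.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                       # assumed adapter rank
    lora_alpha=32,              # assumed scaling factor
    lora_dropout=0.05,          # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common attention projections
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama-3.1-8b-chat-task-3",  # hypothetical output directory
    num_train_epochs=17,                    # the results table logs just under 17 epochs
    per_device_train_batch_size=1,          # assumed
    gradient_accumulation_steps=8,          # assumed
    learning_rate=1e-4,                     # assumed; typical for LoRA fine-tuning
    lr_scheduler_type="cosine",             # assumed
    warmup_ratio=0.03,                      # assumed
    logging_strategy="epoch",
    save_strategy="epoch",
)
```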
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.695         | 0.9412  | 8    | 1.6759          |
| 1.5868        | 2.0     | 17   | 1.5454          |
| 1.473         | 2.9412  | 25   | 1.4198          |
| 1.2035        | 4.0     | 34   | 1.2535          |
| 1.1293        | 4.9412  | 42   | 1.1991          |
| 1.1361        | 6.0     | 51   | 1.1733          |
| 1.1333        | 6.9412  | 59   | 1.1571          |
| 1.0612        | 8.0     | 68   | 1.1462          |
| 0.9895        | 8.9412  | 76   | 1.1392          |
| 0.9858        | 10.0    | 85   | 1.1381          |
| 0.939         | 10.9412 | 93   | 1.1420          |
| 0.8747        | 12.0    | 102  | 1.1664          |
| 0.8694        | 12.9412 | 110  | 1.1780          |
| 0.8188        | 14.0    | 119  | 1.2246          |
| 0.697         | 14.9412 | 127  | 1.2348          |
| 0.6048        | 16.0    | 136  | 1.3102          |
| 0.5898        | 16.9412 | 144  | 1.3190          |
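In this log the validation loss reaches its minimum of 1.1381 at epoch 10.0 (step 85) and rises afterwards while the training loss keeps falling, the usual sign of overfitting on a small dataset. The snippet below is not part of the original card; it simply re-derives that best checkpoint from the logged (epoch, validation loss) pairs.

```python
# Sketch: pick the epoch with the lowest validation loss from the table above.
log = [
    (0.9412, 1.6759), (2.0, 1.5454), (2.9412, 1.4198), (4.0, 1.2535),
    (4.9412, 1.1991), (6.0, 1.1733), (6.9412, 1.1571), (8.0, 1.1462),
    (8.9412, 1.1392), (10.0, 1.1381), (10.9412, 1.1420), (12.0, 1.1664),
    (12.9412, 1.1780), (14.0, 1.2246), (14.9412, 1.2348), (16.0, 1.3102),
    (16.9412, 1.3190),
]

best_epoch, best_loss = min(log, key=lambda pair: pair[1])
print(f"Best checkpoint: epoch {best_epoch} (validation loss {best_loss})")
# -> Best checkpoint: epoch 10.0 (validation loss 1.1381)
```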
Base model: meta-llama/Llama-3.1-8B