# gemma2b-summarize-gpt4o
This model is a fine-tuned version of google/gemma-2b on the llama-duo/synth_summarize_dataset_dedup dataset. It achieves the following result on the evaluation set:

- Loss: 2.5878
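For quick experimentation, a minimal inference sketch is shown below. It assumes the checkpoint is published on the Hugging Face Hub under the repo id `llama-duo/gemma2b-summarize-gpt4o` (inferred from this card's title) and that it loads as a standard causal LM; if the repo actually holds a PEFT/LoRA adapter, it would need to be loaded on top of google/gemma-2b with the `peft` library instead.

```python
# Hedged usage sketch: the repo id is inferred from the card title, and loading
# as a plain causal LM is an assumption (a PEFT adapter repo would need the
# peft library and the google/gemma-2b base model instead).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llama-duo/gemma2b-summarize-gpt4o"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the following text:\n<your document here>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```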
## Model description

More information needed

## Intended uses and limitations

More information needed

## Training and evaluation data

More information needed
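The training set named in the introduction can be inspected directly. This is a minimal sketch, assuming the dataset exposes a standard `train` split; check the dataset card for the actual split names.

```python
# Sketch for browsing the training data; the "train" split name is an
# assumption, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("llama-duo/synth_summarize_dataset_dedup", split="train")
print(ds)      # schema and row count
print(ds[0])   # a single example record
```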
## Training procedure

### Training hyperparameters

More information needed

### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9978        | 1.0   | 5    | 3.1071          |
| 2.5123        | 2.0   | 10   | 2.8503          |
| 2.2077        | 3.0   | 15   | 2.7154          |
| 1.9749        | 4.0   | 20   | 2.6507          |
| 1.8015        | 5.0   | 25   | 2.6242          |
| 1.6817        | 6.0   | 30   | 2.6105          |
| 1.6095        | 7.0   | 35   | 2.6003          |
| 1.5701        | 8.0   | 40   | 2.5917          |
| 1.5524        | 9.0   | 45   | 2.5882          |
| 1.5443        | 10.0  | 50   | 2.5878          |
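One way to read the validation losses above is as per-token cross-entropy in nats, so `exp(loss)` gives a perplexity. This conversion is a general identity, not a metric reported by this card:

```python
# Convert the final validation loss from the table to perplexity
# (a general identity for cross-entropy losses, not a reported metric).
import math

final_val_loss = 2.5878  # epoch 10, from the training results table
print(f"perplexity ≈ {math.exp(final_val_loss):.2f}")  # ≈ 13.30
```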