Context-awareness in instruction finetuning
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the yihanwang617/ultrachat_200k_processed_indicator_0.6_4k dataset. It achieves the following results on the evaluation set:

- Loss: 0.8975
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8864        | 0.9997 | 3247 | 0.8975          |
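Since the base checkpoint is Llama-2-7b, prompts at inference time are typically wrapped in the Llama-2 instruction template before generation. The sketch below shows that wrapping only; the exact template this checkpoint expects depends on how the UltraChat data was processed for fine-tuning, so the `[INST]`/`<<SYS>>` format here is an assumption, not something documented in this card.

```python
# Hypothetical sketch: build a single-turn prompt in the Llama-2 instruction
# style. Whether this fine-tune expects exactly this template is an
# assumption -- verify against the tokenizer's chat template before use.

def build_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message (and optional system prompt) in Llama-2 [INST] tags."""
    if system_prompt:
        system_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    else:
        system_block = ""
    return f"<s>[INST] {system_block}{user_message} [/INST]"

prompt = build_prompt("Summarize the UltraChat dataset in one sentence.")
print(prompt)
```

The resulting string would then be tokenized and passed to the model (for example via `transformers`' `AutoModelForCausalLM.generate`); the model's reply is everything generated after the closing `[/INST]` tag.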