namespace-Pt committed
Commit d9b60fd
1 Parent(s): 0a717d4

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ We extend the context length of Llama-3-8B-Instruct to 80K using QLoRA and 3.5K
 All the following evaluation results can be reproduced following instructions [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon/new/docs/llama3-8b-instruct-qlora-80k.md).
 
 ## Needle in a Haystack
-We evaluate the model on the Needle-In-A-HayStack task using the official setting.
+We evaluate the model on the Needle-In-A-HayStack task using the official setting. The blue vertical line indicates the training context length, i.e. 80K.
 
 <img src="data/needle.png"></img>
 
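
For readers unfamiliar with the task referenced in the added line, the sketch below shows what a single Needle-In-A-Haystack probe looks like: hide one "needle" fact at some depth inside a long filler context and ask the model to retrieve it. This is only an illustration, not the official evaluation harness linked above; the model ID, filler text, and prompt wording are assumptions.

```python
# Minimal sketch of one Needle-In-A-Haystack probe (not the official harness).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo name for the merged 80K-context model; adjust to the actual ID.
model_id = "namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a long "haystack" of filler text and hide one "needle" fact inside it.
filler = "The grass is green. The sky is blue. The sun is bright.\n"
needle = "The secret passcode is 8427.\n"
haystack = filler * 2000
depth = int(len(haystack) * 0.5)  # insert the needle halfway into the context
context = haystack[:depth] + needle + haystack[depth:]

prompt = context + "\nQuestion: What is the secret passcode? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=16, do_sample=False)

# Print only the newly generated tokens (the model's answer).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In the full evaluation, this probe is repeated over a grid of context lengths and needle depths, and the retrieval accuracy at each point is what the heatmap in `data/needle.png` visualizes.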