bhenrym14 committed
Commit
0a96177
1 Parent(s): 1206595

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pn
 
 This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There are two training phases:
 1. It is first trained on a long-context (7000-8192 tokens) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset (GPT4 split only). This amounts to roughly 110mm tokens. Airoboros-like training prompt was used instead of the dolphin system prompt. Training was done with partial NTK scaling applied (scale factor of 4). This took ~20 hours.
-2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1), with same scaling approach, for 3 epochs. This took ~17 hours.
+2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1), with same scaling approach, for 2 epochs. This took ~15 hours.
 
 **This is a QLoRA fine-tune (rank 64)**.
 
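
For readers who want a concrete picture of the setup described in the README, below is a minimal sketch of a QLoRA (rank 64) configuration with a factor-4 RoPE scale targeting a 16384-token context. Only the rank, the scale factor, and the context length come from the README; the target modules, LoRA alpha/dropout, 4-bit settings, and the use of stock linear `rope_scaling` (as a stand-in for the partial NTK scaling the author describes) are assumptions, not the author's actual training code.

```python
# Illustrative sketch only: r=64, scale factor 4, and 16384-token context come from
# the README above; everything else (alpha, dropout, target modules, 4-bit settings)
# is assumed, and linear rope_scaling stands in for the partial NTK scaling described.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    rope_scaling={"type": "linear", "factor": 4.0},  # stand-in for partial NTK, factor 4
    max_position_embeddings=16384,           # extended context window
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                                    # rank 64, as stated in the README
    lora_alpha=16,                           # assumed
    lora_dropout=0.05,                       # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # confirm only LoRA params are trainable
```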