chargoddard committed
Commit e4a68c6
1 Parent(s): eddfbc0

Create README.md

Files changed (1):
  1. README.md +16 -0
README.md ADDED
@@ -0,0 +1,16 @@
+ ---
+ datasets:
+ - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
+ - jondurbin/airoboros-gpt4-1.4.1
+ - openai/summarize_from_feedback
+ - ehartford/wizard_vicuna_70k_unfiltered
+ language:
+ - en
+ tags:
+ - llama
+ ---
+
+ Trained on a flavorful melange of the WizardLM, Airoboros, and Wizard Vicuna datasets.
+ This model was trained with linear and NTK-aware RoPE scaling applied in tandem. When loading, make sure `compress_pos_emb` (or `scale`) is set to 2 and `alpha_value` is set to 4. *Both* values must be set.
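+
+ As a minimal sketch, here is how those two settings map onto the exllama Python API (the same `compress_pos_emb` / `alpha_value` knobs that text-generation-webui exposes for its exllama loader); the model paths are placeholders:
+
+ ```python
+ # Assumes this script runs from a checkout of the exllama repo,
+ # which provides ExLlama and ExLlamaConfig in model.py.
+ from model import ExLlama, ExLlamaConfig
+
+ config = ExLlamaConfig("/path/to/model/config.json")  # placeholder path
+ config.model_path = "/path/to/model.safetensors"      # placeholder path
+ config.compress_pos_emb = 2.0  # linear RoPE scaling (a.k.a. "scale"); must be 2
+ config.alpha_value = 4.0       # NTK-aware RoPE scaling; must be 4
+ config.max_seq_len = 8192      # the extended context these settings target
+
+ model = ExLlama(config)
+ ```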
+
+ Context lengths up to 8192 should work reliably. The model will probably maintain coherence into the ~12k range, but I have not tested that.