---
datasets:
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- jondurbin/airoboros-gpt4-1.4.1
- openai/summarize_from_feedback
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
tags:
- llama
---

Trained on a flavorful melange of the WizardLM, Airoboros, and Wizard Vicuna datasets.
This model was trained with linear and NTK-aware RoPE scaling applied in tandem. When loading, make sure `compress_pos_emb` (or `scale`) is set to 2 and `alpha_value` is set to 4. *Both* values must be set.
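
For reference, here is a minimal PyTorch sketch of how linear scaling (`compress_pos_emb` = 2) and NTK-aware scaling (`alpha_value` = 4) are commonly combined when computing rotary embeddings. This is illustrative only; it is not the exact code used to train or serve this model, and the function names are made up for the example.

```python
import torch

def ntk_scaled_inv_freq(dim, base=10000.0, alpha=4.0):
    """Inverse rotary frequencies with NTK-aware scaling (alpha_value)."""
    # NTK-aware scaling stretches the rotary base so low-frequency
    # components are interpolated more than high-frequency ones.
    base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

def rope_angles(seq_len, dim, compress_pos_emb=2.0, alpha=4.0):
    """Rotation angles for positions 0..seq_len-1 with both scalings applied."""
    inv_freq = ntk_scaled_inv_freq(dim, alpha=alpha)
    # Linear scaling (compress_pos_emb / scale): divide the position index
    # so a 2x-longer context maps back onto the range seen in pre-training.
    positions = torch.arange(seq_len).float() / compress_pos_emb
    return torch.outer(positions, inv_freq)  # shape: (seq_len, dim // 2)

# Example: angles for an 8192-token context with a 128-dim rotary head
angles = rope_angles(seq_len=8192, dim=128)
cos, sin = angles.cos(), angles.sin()
```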

Context lengths up to 8192 should work reliably. The model will probably maintain coherence into the ~12k range, but I have not tested that.

Prompt format is Vicuna 1.1:
```
<whatever nonsense system prompt you want>
USER: ...
ASSISTANT: ...
```
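
As an illustration, a small helper like the one below (hypothetical, not shipped with the model) can assemble prompts in that format, leaving the final `ASSISTANT:` turn open for generation:

```python
def build_vicuna_prompt(system_prompt, turns):
    """Assemble a Vicuna-1.1-style prompt from a system prompt and chat turns.

    turns: list of (user_message, assistant_reply) pairs; pass None as the
    assistant reply of the last turn to leave it open for generation.
    """
    prompt = system_prompt.strip() + "\n\n"
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg}\nASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}\n"
    return prompt

print(build_vicuna_prompt(
    "A chat between a curious user and a helpful assistant.",
    [("Summarize RoPE scaling in one sentence.", None)],
))
```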