mlinmg committed on
Commit 34b585d
1 Parent(s): 799e908

Update README.md

Files changed (1)
  1. README.md +14 -20
README.md CHANGED
@@ -14,8 +14,6 @@ An auto-regressive causal LM created by combining 2x finetuned [Yi 34b](https://

  # Prompting Format

- chat format:
-
  single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>

  multi-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>target2<|endoftext|>
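For reference, below is a minimal Python sketch of how this template could be assembled, assuming `<|startoftext|>` and `<|endoftext|>` are literal special-token strings; the helper names are illustrative and not taken from this repo.

```python
# Minimal sketch of the prompting format described above.
# Assumes <|startoftext|> / <|endoftext|> are literal special tokens;
# function names are illustrative, not part of this repository.

def format_turn(user_message: str) -> str:
    """Format one Human turn, ending where the Assistant reply should begin."""
    return f"Human: {user_message}\n\nAssistant: "

def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Build a prompt from prior (human, assistant) pairs plus a new user message."""
    prompt = "<|startoftext|>"
    for human, assistant in history:
        prompt += format_turn(human) + "<|endoftext|>" + assistant + "<|endoftext|>"
    prompt += format_turn(user_message) + "<|endoftext|>"
    return prompt

# Single-turn: "<|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>"
print(build_prompt([], "Hello!"))

# Multi-turn (the model's reply, "target2" in the example above, would follow):
# "<|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>"
print(build_prompt([("Hello!", "Hi!")], "How are you?"))
```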
@@ -27,24 +25,20 @@ The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-7
  The layer ranges used are as follows:

  ```yaml
- - range 0, 16
- Xwin
- - range 8, 24
- Euryale
- - range 17, 32
- Xwin
- - range 25, 40
- Euryale
- - range 33, 48
- Xwin
- - range 41, 56
- Euryale
- - range 49, 64
- Xwin
- - range 57, 72
- Euryale
- - range 65, 80
- Xwin
+ - model: OrionStar-Yi-34B-Chat-Llama
+   layer_range: [0, 14]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [7, 21]
+ - model: OrionStar-Yi-34B-Chat-Llama
+   layer_range: [15, 29]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [22, 36]
+ - model: OrionStar-Yi-34B-Chat-Llama
+   layer_range: [30, 44]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [37, 51]
+ - model: OrionStar-Yi-34B-Chat-Llama
+   layer_range: [45, 59]
  ```

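For context, the interleaved layer ranges added above could be written out as a mergekit-style passthrough config. The sketch below is only illustrative: the model names and layer ranges come from the README, while the `slices`/`sources` nesting, `merge_method: passthrough`, and `dtype` are assumptions not shown in the diff.

```python
# Sketch of a mergekit-style config built from the layer ranges in the README.
# Only the model names and ranges come from the diff; the schema, merge_method,
# and dtype below are assumptions.
import yaml

slices = [
    ("OrionStar-Yi-34B-Chat-Llama", [0, 14]),
    ("dolphin-2_2-yi-34b", [7, 21]),
    ("OrionStar-Yi-34B-Chat-Llama", [15, 29]),
    ("dolphin-2_2-yi-34b", [22, 36]),
    ("OrionStar-Yi-34B-Chat-Llama", [30, 44]),
    ("dolphin-2_2-yi-34b", [37, 51]),
    ("OrionStar-Yi-34B-Chat-Llama", [45, 59]),
]

config = {
    "slices": [
        {"sources": [{"model": model, "layer_range": layer_range}]}
        for model, layer_range in slices
    ],
    "merge_method": "passthrough",  # assumed; common for interleaved layer-range merges
    "dtype": "bfloat16",            # assumed
}

# Dump the config; the resulting YAML could then be fed to mergekit, e.g.
#   mergekit-yaml merge-config.yml ./merged-model
print(yaml.safe_dump(config, sort_keys=False))
```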