mlinmg committed
Commit
3d5bbdf
1 Parent(s): 6f626eb

Update README.md

Files changed (1)
  1. README.md +24 -16
README.md CHANGED
@@ -14,31 +14,37 @@ An auto-regressive causal LM created by combining 2x finetuned [Yi 34b](https://

 # Prompting Format

+ chat format:
+
 single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>

 multi-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>target2<|endoftext|>

 # Merge process

- The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).
+ The models used in the merge are [dolphin-2_2-yi-34b](https://huggingface.co/ehartford/dolphin-2_2-yi-34b) and [OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama).

 The layer ranges used are as follows:

 ```yaml
- - model: OrionStar-Yi-34B-Chat-Llama
-   layer_range: [0, 14]
- - model: dolphin-2_2-yi-34b
-   layer_range: [7, 21]
- - model: OrionStar-Yi-34B-Chat-Llama
-   layer_range: [15, 29]
- - model: dolphin-2_2-yi-34b
-   layer_range: [22, 36]
- - model: OrionStar-Yi-34B-Chat-Llama
-   layer_range: [30, 44]
- - model: dolphin-2_2-yi-34b
-   layer_range: [37, 51]
- - model: OrionStar-Yi-34B-Chat-Llama
-   layer_range: [45, 59]
+ - model: OrionStar-Yi-34B-Chat
+   layer_range: [0, 16]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [8, 24]
+ - model: OrionStar-Yi-34B-Chat
+   layer_range: [17, 32]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [25, 40]
+ - model: OrionStar-Yi-34B-Chat
+   layer_range: [33, 48]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [41, 56]
+ - model: OrionStar-Yi-34B-Chat
+   layer_range: [49, 64]
+ - model: dolphin-2_2-yi-34b
+   layer_range: [57, 72]
+ - model: OrionStar-Yi-34B-Chat
+   layer_range: [65, 80]
 ```

@@ -50,4 +56,6 @@ Credits goes to [@chargoddard](https://huggingface.co/chargoddard) for developin

 Special thanks to [@Undi95](https://huggingface.co/Undi95).

- Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing model
+ Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing model
+
+ This model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
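
The chat template shown in the updated README can be reproduced mechanically. Below is a minimal sketch, assuming the literal `<|startoftext|>`/`<|endoftext|>` strings and the `\n\n` separator quoted above; the `build_prompt` helper is purely illustrative and not part of the model card.

```python
# Minimal sketch of the prompting format described above.
# build_prompt is a hypothetical helper, not part of the released model.

def build_prompt(turns, bos="<|startoftext|>", eos="<|endoftext|>"):
    """Render (user, assistant) turns into the chat format; pass
    assistant=None for the final turn to request a new completion."""
    prompt = bos
    for user, assistant in turns:
        prompt += f"Human: {user}\n\nAssistant: {eos}"
        if assistant is not None:
            prompt += f"{assistant}{eos}"
    return prompt

# single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>
print(build_prompt([("Hello!", None)]))

# multi-turn, matching the README example ending in "target2"
print(build_prompt([("Hello!", "Hi!"), ("How are you?", "target2")]))
```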
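
The layer ranges in the new merge config can be read as a passthrough-style stack of alternating slices, in the spirit of the Goliath 120B recipe the card cites. The following is an illustrative sketch only (not the actual merge script): it lists the slices from the README and, assuming end-exclusive ranges, counts how many decoder layers the merged stack would contain.

```python
# Illustrative only: the layer slices listed in the README's merge config,
# expressed as (source model, (start, end)) pairs. Model names are copied
# verbatim from the card; range bounds are assumed to be end-exclusive.
slices = [
    ("OrionStar-Yi-34B-Chat", (0, 16)),
    ("dolphin-2_2-yi-34b", (8, 24)),
    ("OrionStar-Yi-34B-Chat", (17, 32)),
    ("dolphin-2_2-yi-34b", (25, 40)),
    ("OrionStar-Yi-34B-Chat", (33, 48)),
    ("dolphin-2_2-yi-34b", (41, 56)),
    ("OrionStar-Yi-34B-Chat", (49, 64)),
    ("dolphin-2_2-yi-34b", (57, 72)),
    ("OrionStar-Yi-34B-Chat", (65, 80)),
]

# Each slice contributes (end - start) layers to the merged decoder stack.
total_layers = sum(end - start for _, (start, end) in slices)
print(f"{len(slices)} slices -> {total_layers} layers in the merged model")
```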