---
language:
- en
pipeline_tag: conversational
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/>
</p>

--- 

# SG Raccoon Yi 55B

The first 55B autoregressive causal language model, created by merging two finetuned [Yi 34B](https://huggingface.co/01-ai/Yi-34B) models into one.


# Prompting Format

```
single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>

multi-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>target2<|endoftext|>
```
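The template above can be assembled programmatically. Below is a minimal sketch; the `build_prompt` helper is illustrative, not part of the model's tooling, and simply reproduces the single- and multi-turn strings shown above:

```python
def build_prompt(turns):
    """Assemble a prompt in the <|startoftext|>Human/Assistant format.

    `turns` is a list of (human, assistant) pairs; pass None as the
    assistant value of the final pair to leave the prompt open for
    the model to generate a reply.
    """
    parts = ["<|startoftext|>"]
    for human, assistant in turns:
        # Each human turn ends with the assistant header and <|endoftext|>.
        parts.append(f"Human: {human}\n\nAssistant: <|endoftext|>")
        if assistant is not None:
            # Completed assistant replies are terminated by <|endoftext|>.
            parts.append(f"{assistant}<|endoftext|>")
    return "".join(parts)


# Single turn, left open for generation:
prompt = build_prompt([("Hello!", None)])
# prompt == "<|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>"
```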

# Merge process

The models used in the merge are [dolphin-2_2-yi-34b](https://huggingface.co/ehartford/dolphin-2_2-yi-34b) and [OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama).

The layer ranges used are as follows:

```yaml
- model: OrionStar-Yi-34B-Chat-Llama
  range: [0, 16]
- model: dolphin-2_2-yi-34b
  range: [8, 24]
- model: OrionStar-Yi-34B-Chat-Llama
  range: [17, 32]
- model: dolphin-2_2-yi-34b
  range: [25, 40]
- model: OrionStar-Yi-34B-Chat-Llama
  range: [33, 48]
- model: dolphin-2_2-yi-34b
  range: [41, 56]
- model: OrionStar-Yi-34B-Chat-Llama
  range: [49, 64]
- model: dolphin-2_2-yi-34b
  range: [57, 72]
- model: OrionStar-Yi-34B-Chat-Llama
  range: [65, 80]
```
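Interleaved slicing like this stacks overlapping layer blocks from the two parents into one deeper model. As a minimal sketch, assuming the ranges are half-open (`[start, end)`, as in mergekit configs; the card does not state the convention), the merged layer order can be enumerated like this:

```python
# Slices as listed above: (source model, start layer, end layer).
# Half-open endpoints are an assumption, not stated in the card.
slices = [
    ("OrionStar-Yi-34B-Chat-Llama", 0, 16),
    ("dolphin-2_2-yi-34b", 8, 24),
    ("OrionStar-Yi-34B-Chat-Llama", 17, 32),
    ("dolphin-2_2-yi-34b", 25, 40),
    ("OrionStar-Yi-34B-Chat-Llama", 33, 48),
    ("dolphin-2_2-yi-34b", 41, 56),
    ("OrionStar-Yi-34B-Chat-Llama", 49, 64),
    ("dolphin-2_2-yi-34b", 57, 72),
    ("OrionStar-Yi-34B-Chat-Llama", 65, 80),
]

# Flatten the slices into the merged model's layer order: each entry
# names the parent model and the parent layer index it was copied from.
stack = [(model, layer)
         for model, start, end in slices
         for layer in range(start, end)]

print(len(stack))   # merged depth under the half-open assumption
print(stack[:2])    # the first layers come from OrionStar-Yi-34B-Chat-Llama
```

Note how consecutive slices from the same parent overlap with the slice between them (e.g. layers 8–15 appear in both the first OrionStar slice and the first dolphin slice), the same pattern used by Goliath 120B.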


# Benchmarks
Coming soon.

# Acknowledgements
- Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project

- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).

- Many thanks to [@Undi95](https://huggingface.co/Undi95) for helping figure out the model merge options

- Credits also to the [01-ai](https://huggingface.co/01-ai) team for their amazing models

- This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)