---
license: other
license_name: yi
license_link: https://huggingface.co/01-ai/Yi-34B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: conversational
library_name: adapter-transformers
---
# SG Raccoon Yi 55B
An auto-regressive causal language model created by interleaving layer ranges from two finetuned [Yi 34B](https://huggingface.co/01-ai/Yi-34B) models into a single, larger model.
# Prompting Format
Chat format:
```
single-turn: <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>
multi-turn:  <|startoftext|>Human: Hello!\n\nAssistant: <|endoftext|>Hi!<|endoftext|>Human: How are you?\n\nAssistant: <|endoftext|>target2<|endoftext|>
```
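Below is a minimal sketch of applying the single-turn template with `transformers`. The repo id is assumed from this card's location, and `<|endoftext|>` is treated as the end-of-turn marker where generation stops:
```python
# Hedged sketch: the repo id is an assumption; adjust to the published checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlinmg/SG-Raccoon-Yi-55B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Single-turn prompt following the template above; the model is expected to
# generate the assistant reply and finish with <|endoftext|>.
prompt = "<|startoftext|>Human: Hello!\n\nAssistant: "
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,  # <|endoftext|>
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```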
# Merge Process
The models used in the merge are [dolphin-2_2-yi-34b](https://huggingface.co/ehartford/dolphin-2_2-yi-34b) and [OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama).
The layer ranges used, expressed as a mergekit-style passthrough config, are as follows:
```yaml
slices:
  - sources:
      - model: OrionStarAI/OrionStar-Yi-34B-Chat-Llama
        layer_range: [0, 16]
  - sources:
      - model: ehartford/dolphin-2_2-yi-34b
        layer_range: [8, 24]
  - sources:
      - model: OrionStarAI/OrionStar-Yi-34B-Chat-Llama
        layer_range: [17, 32]
  - sources:
      - model: ehartford/dolphin-2_2-yi-34b
        layer_range: [25, 40]
  - sources:
      - model: OrionStarAI/OrionStar-Yi-34B-Chat-Llama
        layer_range: [33, 48]
  - sources:
      - model: ehartford/dolphin-2_2-yi-34b
        layer_range: [41, 56]
  - sources:
      - model: OrionStarAI/OrionStar-Yi-34B-Chat-Llama
        layer_range: [49, 64]
  - sources:
      - model: ehartford/dolphin-2_2-yi-34b
        layer_range: [57, 72]
  - sources:
      - model: OrionStarAI/OrionStar-Yi-34B-Chat-Llama
        layer_range: [65, 80]
merge_method: passthrough  # assumed; the standard method for this kind of layer stacking
```
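A config in this form can be run with mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yml ./merged` (the config filename and output path are illustrative). The overlapping, alternating slices follow the Goliath-style recipe: adjacent ranges from the two donor models duplicate layers, which is what grows the merged model beyond a single 34B.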
# Benchmarks
Coming soon.
# Acknowledgements
Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing [mergekit](https://github.com/cg123/mergekit), the framework used to merge the models.
Special thanks to [@Undi95](https://huggingface.co/Undi95).
Credit also to the [01-ai](https://huggingface.co/01-ai) team for their amazing base model.
This model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b).