---
license: cc-by-nc-4.0
---
ExllamaV2 version of the model created by IkariDev + Undi95.


Original card: https://huggingface.co/IkariDev/Athena-v2

Requires ExllamaV2, which is being developed by turboderp (https://github.com/turboderp/exllamav2) under an MIT license.
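
Below is a minimal Python sketch of loading this EXL2 quant with the exllamav2 library, adapted from the upstream examples. The class and method names follow those examples and may change between library versions; the model path is a placeholder.

```
# Minimal sketch based on the exllamav2 examples; API names may vary
# between library versions, and the model path is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/Athena-v2-exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                   # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "Once upon a time,"                  # use the Alpaca format shown below
print(generator.generate_simple(prompt, settings, 200))
```

Format prompts with the Alpaca template shown further down the card.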

---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/y9gdW2923RkORUxejcLVL.png)

Experimental Athena v2 model. Use Alpaca format.

<!-- description start -->
## Description

This repo contains fp16 files of Athena-V2.

<!-- description end -->
<!-- description start -->
## Models and LoRAs used

- Xwin-LM/Xwin-LM-13B-V0.1
- Undi95/ReMM-v2.2-L2-13B
- Undi95/MLewd-L2-13B-v2-3
- Brouz/Slerpeno
- boomerchan/Magpie-13b
```
[Xwin (0.30) + ReMM v2.2 (0.70)](0.45) x [[Xwin (0.40) + MLewd v2-3 (0.60)](0.80) + [Slerpeno(0.50) + Magpie-13b(0.50)](0.20)](0.55)
```
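
Read literally as nested linear weight averages (an assumption; the Slerpeno component suggests parts of the pipeline used SLERP), the recipe expands to the effective per-model weights computed below. This is a hypothetical illustration of the arithmetic, not the actual merge script.

```
# Hypothetical illustration only: expand the nested recipe above into
# effective per-model weights, assuming plain linear weight averaging.
def scale(part, w):
    return {name: frac * w for name, frac in part.items()}

def combine(a, b):
    out = dict(a)
    for name, frac in b.items():
        out[name] = out.get(name, 0.0) + frac
    return out

left    = {"Xwin-LM-13B-V0.1": 0.30, "ReMM-v2.2-L2-13B": 0.70}   # outer weight 0.45
right_a = {"Xwin-LM-13B-V0.1": 0.40, "MLewd-L2-13B-v2-3": 0.60}  # inner weight 0.80
right_b = {"Slerpeno": 0.50, "Magpie-13b": 0.50}                 # inner weight 0.20

right = combine(scale(right_a, 0.80), scale(right_b, 0.20))      # outer weight 0.55
final = combine(scale(left, 0.45), scale(right, 0.55))

print(final)
# Xwin ~ 0.311, ReMM ~ 0.315, MLewd ~ 0.264, Slerpeno ~ 0.055, Magpie ~ 0.055
# (weights sum to 1.0)
```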
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
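
For scripted use, the template can be filled with a small helper; this is just an illustration of the format above, with nothing assumed beyond the template itself.

```
# Simple helper that fills the Alpaca template shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```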

HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (the recipe was my idea; he did the merge).