---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---

## ExLlamaV2 Quantizations of Mistral-22B-v0.2

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model weights (see below).</b>

Each branch contains a different bits-per-weight quantization; the main branch holds only the measurement.json needed for further conversions.

Original model: https://huggingface.co/Vezora/Mistral-22B-v0.2
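
As a rough sketch, that measurement.json can be fed back into ExLlamaV2's `convert.py` to quantize the original model at other bit rates; the paths and the 4.65 bpw target below are purely illustrative, not branches of this repo:

```shell
# Sketch only: reuse measurement.json (from this repo's main branch) to quantize
# the original fp16 model at a new bitrate. Paths and the 4.65 bpw / 6-bit head
# values are illustrative.
python convert.py \
  -i /path/to/Mistral-22B-v0.2 \
  -o /tmp/exl2-workdir \
  -cf /path/to/Mistral-22B-v0.2-exl2-4_65 \
  -m measurement.json \
  -b 4.65 -hb 6
```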

## Prompt Format 

```
### System: {system_prompt}
### Human: {prompt}
### Assistant:
```
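
For example, a filled-in single-turn prompt (placeholder system message and question, not from the original card) looks like:

```
### System: You are a helpful assistant.
### Human: What is the capital of France?
### Assistant:
```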

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k ctx) | VRAM (16k ctx) | VRAM (32k ctx) | Description |
| ------ | ---- | ------------ | ---- | ---- | ---- | ----------- |
| [8_0](https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/8_0)   | 8.0  | 8.0 | 23.5 GB | 26.0 GB | 29.5 GB | Near unquantized performance, max quality ExLlamaV2 can create.     |
| [6_5](https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/6_5)   | 6.5  | 8.0 | 19.4 GB | 21.9 GB | 25.4 GB | Near unquantized performance at vastly reduced size, **recommended**.       |
| [5_0](https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/5_0)   | 5.0  | 6.0 | 15.5 GB | 18.0 GB | 21.5 GB | Smaller size, lower quality, still very high performance, **recommended**.       |
| [4_25](https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/4_25) | 4.25 | 6.0 | 13.3 GB | 15.8 GB | 19.3 GB | GPTQ equivalent bits per weight, slightly higher quality.                   |
| [3_5](https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/3_5)   | 3.5  | 6.0 | 11.6 GB | 14.1 GB | 17.6 GB | Lower quality, only use if you have to.                                     |
| [3_0](https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2/tree/3_0)   | 3.0  | 6.0 | 9.8 GB | 12.3 GB | 15.8 GB | Very low quality. Usable on 12 GB cards with low context or 16 GB cards with 32k context. |
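
To sanity-check these numbers against your own card before picking a branch, you can query total and free VRAM (assuming an NVIDIA GPU with driver tools installed):

```shell
# Assumes an NVIDIA GPU; reports total and currently free VRAM per device.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```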


## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Mistral-22B-v0.2-exl2
```
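
Note that the weights are stored with Git LFS, so make sure LFS is set up before cloning (a one-time step, assuming `git-lfs` is already installed):

```shell
git lfs install
```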

With the huggingface-hub CLI (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `Mistral-22B-v0.2-exl2`:

```shell
mkdir Mistral-22B-v0.2-exl2
huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --local-dir Mistral-22B-v0.2-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir Mistral-22B-v0.2-exl2-6_5
huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --revision 6_5 --local-dir Mistral-22B-v0.2-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently sometimes dislikes `_` in folder names):

```shell
mkdir Mistral-22B-v0.2-exl2-6.5
huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --revision 6_5 --local-dir Mistral-22B-v0.2-exl2-6.5 --local-dir-use-symlinks False
```
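
Optionally, on fast connections (roughly 1 Gbit/s and up), downloads can be accelerated with `hf_transfer`, an optional huggingface-hub backend; for example, for the 6_5 branch:

```shell
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/Mistral-22B-v0.2-exl2 --revision 6_5 --local-dir Mistral-22B-v0.2-exl2-6_5 --local-dir-use-symlinks False
```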