---
license: other
inference: false
---
# StableVicuna-13B-GPTQ
This repo contains 4bit GPTQ format quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
It is the result of first merging the deltas from the above repository with the original Llama 13B weights, then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
## PROMPT TEMPLATE
This model works best with the following prompt template:
```
### Human: your prompt here
### Assistant:
```
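Outside the UI, the template is easy to apply in code. Below is a minimal Python sketch; the `build_prompt` helper is hypothetical, not something shipped with this repo:
```python
# Hypothetical helper (not part of this repo): wrap a user message in the
# "### Human:" / "### Assistant:" template this model expects.
def build_prompt(user_message: str) -> str:
    return f"### Human: {user_message}\n### Assistant:"

print(build_prompt("Write a haiku about llamas."))
# Output:
# ### Human: Write a haiku about llamas.
# ### Assistant:
```
The model's reply is then generated as a continuation of the `### Assistant:` line.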
## How to easily download and use this model in text-generation-webui
Load text-generation-webui as you normally do.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/stable-vicuna-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
6. Now click the **Refresh** icon next to **Model** in the top left.
7. In the **Model drop-down**: choose this model: `stable-vicuna-13B-GPTQ`.
8. Click **Reload the Model** in the top right.
9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
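If you'd rather script the download than use the UI, here is a sketch using `huggingface_hub`; it assumes a recent version of the library (one that supports the `local_dir` argument) and that text-generation-webui lives in the current directory:
```python
# Sketch: fetch this repo straight into text-generation-webui's models folder.
# Assumes a recent huggingface_hub release with local_dir support.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/stable-vicuna-13B-GPTQ",
    local_dir="text-generation-webui/models/stable-vicuna-13B-GPTQ",
)
```
After the download finishes, continue from step 5 above to set the GPTQ parameters and load the model.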
## GIBBERISH OUTPUT IN `text-generation-webui`?
If you're installing the model files manually, please read the Provided Files section below. You should use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.
If you're using a text-generation-webui one click installer, you MUST use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`.
## Provided files
Two files are provided. **The 'latest' file will not work unless you use a recent version of GPTQ-for-LLaMa.**
If you do an automatic download with `text-generation-webui` as described above, it will pick the 'compat' file, which should work for everyone.
The 'latest' file uses `--act-order` for maximum quantisation quality and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time it will also not work with `text-generation-webui` one-click installers.
Unless you are able to use the latest GPTQ-for-LLaMa code, please use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`.
* `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Works on Windows
  * Parameters: Groupsize = 128g. No act-order.
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
    ```
* `stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors`
  * Only works with recent GPTQ-for-LLaMa code
  * **Does not** work with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128g. act-order.
  * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
    ```
## Manual instructions for `text-generation-webui`
File `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.
If you want to use the act-order `safetensors` file and need to update GPTQ-for-LLaMa to its latest Triton branch, here are the commands I used to clone text-generation-webui and install the latest GPTQ-for-LLaMa code inside it:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
# Original StableVicuna-13B model card
## Model Description
StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
## Model Details
* **Trained by**: [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type:** **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **Library**: [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights**: [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
  * *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact**: For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.
| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B |
| \\(d_\text{model}\\) | 5120 |
| \\(n_\text{layers}\\) | 40 |
| \\(n_\text{heads}\\) | 40 |
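As a quick sanity check (an illustration, not from the original card), these dimensions are consistent with the stated 13B parameter count: a LLaMA-style decoder block holds roughly \\(12 \cdot d_\text{model}^2\\) weights (attention plus SwiGLU MLP), and LLaMA 13B uses a 32,000-token vocabulary:
```python
# Back-of-the-envelope estimate only; exact per-layer shapes differ slightly.
d_model, n_layers, vocab_size = 5120, 40, 32_000
block_params = 12 * n_layers * d_model**2   # attention + MLP: ~12.6e9
embed_params = 2 * vocab_size * d_model     # input + output embeddings: ~0.33e9
print(f"~{(block_params + embed_params) / 1e9:.1f}B parameters")  # ~12.9B
```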
## Training
### Training Dataset
StableVicuna-13B is fine-tuned on a mix of three datasets: [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages;
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by OpenAI's GPT-3.5-Turbo; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.
The reward model used during RLHF was also trained on the [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and the [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP), a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
### Training Procedure
`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:
| Hyperparameter | Value |
|-------------------|---------|
| num_rollouts | 128 |
| chunk_size | 16 |
| ppo_epochs | 4 |
| init_kl_coef | 0.1 |
| target | 6 |
| horizon | 10000 |
| gamma | 1 |
| lam | 0.95 |
| cliprange | 0.2 |
| cliprange_value | 0.2 |
| vf_coef | 1.0 |
| scale_reward | None |
| cliprange_reward | 10 |
| generation_kwargs | |
| max_length | 512 |
| min_length | 48 |
| top_k | 0.0 |
| top_p | 1.0 |
| do_sample | True |
| temperature | 1.0 |
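To make the nesting in the table explicit (`generation_kwargs` groups the sampling settings beneath it), here is the same configuration written out as a Python dict in the shape of a trlX-style PPO method config. The field names follow the table, but the exact `TRLConfig` schema varies between trlX versions, so treat this as an illustrative sketch rather than a drop-in config:
```python
# Illustrative only: the PPO hyperparameters above, grouped as in trlX-style
# configs. Verify field names against the trlX version actually used.
ppo_method_config = {
    "num_rollouts": 128,
    "chunk_size": 16,
    "ppo_epochs": 4,
    "init_kl_coef": 0.1,     # initial coefficient of the KL penalty
    "target": 6,             # target KL for the adaptive controller
    "horizon": 10000,
    "gamma": 1,              # no reward discounting
    "lam": 0.95,             # GAE lambda
    "cliprange": 0.2,
    "cliprange_value": 0.2,
    "vf_coef": 1.0,
    "scale_reward": None,
    "cliprange_reward": 10,
    "generation_kwargs": {
        "max_length": 512,
        "min_length": 48,
        "top_k": 0.0,
        "top_p": 1.0,
        "do_sample": True,
        "temperature": 1.0,
    },
}
```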
## Use and Limitations
### Intended Use
This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the support of [Stability AI](https://stability.ai/).
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@software{leandro_von_werra_2023_7790115,
author = {Leandro von Werra and
Alex Havrilla and
Max reciprocated and
Jonathan Tow and
Aman cat-state and
Duy V. Phung and
Louis Castricato and
Shahbuland Matiana and
Alan and
Ayush Thakur and
Alexey Bukhtiyarov and
aaronrmm and
Fabrizio Milo and
Daniel and
Daniel King and
Dong Shin and
Ethan Kim and
Justin Wei and
Manuel Romero and
Nicky Pochinkov and
Omar Sanseviero and
Reshinth Adithyan and
Sherman Siu and
Thomas Simonini and
Vladimir Blagojevic and
Xu Song and
Zack Witten and
alexandremuzio and
crumb},
title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
Util, T5 ILQL, Tests}},
month = mar,
year = 2023,
publisher = {Zenodo},
version = {v0.6.0},
doi = {10.5281/zenodo.7790115},
url = {https://doi.org/10.5281/zenodo.7790115}
}
```