---
license: other
inference: false
---

# StableVicuna-13B

This repo contains 4bit GPTQ format quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).

It is the result of first merging the deltas from the above repository with the original Llama 13B weights, then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

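If you want to reproduce the merge step, the upstream delta repository provides an `apply_delta.py` script. A minimal sketch, assuming that script and a local copy of the original Llama 13B weights in HF format (the paths below are placeholders):

```
# Sketch only: merge the StableVicuna deltas onto the base Llama 13B weights.
# apply_delta.py comes from the CarperAI/stable-vicuna-13b-delta repo;
# /path/to/llama-13b is a placeholder for your local Llama 13B HF weights.
python3 apply_delta.py --base /path/to/llama-13b --target stable-vicuna-13b --delta CarperAI/stable-vicuna-13b-delta
```
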
## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ)
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML)
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF)

## GIBBERISH OUTPUT IN `text-generation-webui`?

Please read the Provided files section below. You should use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.

If you're using a text-generation-webui one-click installer, you MUST use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors`.

## Provided files

Two files are provided. **The second file will not work unless you use a recent version of GPTQ-for-LLaMa.**

Specifically, the second file uses `--act-order` for maximum quantisation quality, and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time it will also not work with `text-generation-webui` one-click installers.

Unless you are able to use the latest GPTQ-for-LLaMa code, please use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors`.

* `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Works on Windows
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
    ```
* `stable-vicuna-13B-GPTQ-4bit.act-order.safetensors`
  * Only works with recent GPTQ-for-LLaMa code
  * **Does not** work with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. Act-order.
  * Offers the highest quality quantisation, but requires recent GPTQ-for-LLaMa code
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
    ```
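
If you only need one of the two `safetensors` files, you can fetch it directly instead of cloning the whole repo. A sketch, assuming the standard Hugging Face `resolve/main` download URL pattern:

```
# Sketch only: download just the no-act-order file, which works everywhere.
wget https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ/resolve/main/stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
```

Note that text-generation-webui still needs the repo's config and tokenizer files alongside the model file in the same model directory.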

## How to run in `text-generation-webui`

The file `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).

[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).

The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means the latest GPTQ-for-LLaMa code must be used inside the UI.

If you want to use the act-order `safetensors` file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone text-generation-webui and install the latest GPTQ-for-LLaMa code inside it:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

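As a rough sketch of what the dependency install might look like (this assumes both repositories ship a `requirements.txt`; check each repo's own README for the current instructions):

```
# Sketch only: install the two projects' Python dependencies.
pip install -r text-generation-webui/requirements.txt
pip install -r text-generation-webui/repositories/GPTQ-for-LLaMa/requirements.txt
```
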
If you can't update GPTQ-for-LLaMa, or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.

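Once the model is loaded, prompting follows the usual Vicuna v0 style. The template below is drawn from the upstream CarperAI model card's conventions, not from anything specific to these quantised files:

```
### Human: your prompt here
### Assistant:
```
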
# Original StableVicuna-13B model card

## Model Description

StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.

## Model Details

* **Trained by**: [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type**: **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **Library**: [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights**: [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
  * *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact**: For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.

| Hyperparameter            | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B   |
| \\(d_\text{model}\\)      | 5120  |
| \\(n_\text{layers}\\)     | 40    |
| \\(n_\text{heads}\\)      | 40    |

## Training

### Training Dataset

StableVicuna-13B is fine-tuned on a mix of three datasets: [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages; [GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-3.5-Turbo; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.

The reward model used during RLHF was also trained on [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and the [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP), a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.

### Training Procedure

`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:

| Hyperparameter    | Value |
|-------------------|-------|
| num_rollouts      | 128   |
| chunk_size        | 16    |
| ppo_epochs        | 4     |
| init_kl_coef      | 0.1   |
| target            | 6     |
| horizon           | 10000 |
| gamma             | 1     |
| lam               | 0.95  |
| cliprange         | 0.2   |
| cliprange_value   | 0.2   |
| vf_coef           | 1.0   |
| scale_reward      | None  |
| cliprange_reward  | 10    |
| generation_kwargs |       |
| max_length        | 512   |
| min_length        | 48    |
| top_k             | 0.0   |
| top_p             | 1.0   |
| do_sample         | True  |
| temperature       | 1.0   |

## Use and Limitations

### Intended Use

This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks, in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc-sa/4.0/).

### Limitations and bias

The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.

## Acknowledgements

This work would not have been possible without the support of [Stability AI](https://stability.ai/).

## Citations

```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

```bibtex
@misc{vicuna2023,
  title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
  url = {https://vicuna.lmsys.org},
  author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
  month = {March},
  year = {2023}
}
```

```bibtex
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```

```bibtex
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```

```bibtex
@software{leandro_von_werra_2023_7790115,
  author = {Leandro von Werra and
            Alex Havrilla and
            Max reciprocated and
            Jonathan Tow and
            Aman cat-state and
            Duy V. Phung and
            Louis Castricato and
            Shahbuland Matiana and
            Alan and
            Ayush Thakur and
            Alexey Bukhtiyarov and
            aaronrmm and
            Fabrizio Milo and
            Daniel and
            Daniel King and
            Dong Shin and
            Ethan Kim and
            Justin Wei and
            Manuel Romero and
            Nicky Pochinkov and
            Omar Sanseviero and
            Reshinth Adithyan and
            Sherman Siu and
            Thomas Simonini and
            Vladimir Blagojevic and
            Xu Song and
            Zack Witten and
            alexandremuzio and
            crumb},
  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark Util, T5 ILQL, Tests}},
  month = mar,
  year = 2023,
  publisher = {Zenodo},
  version = {v0.6.0},
  doi = {10.5281/zenodo.7790115},
  url = {https://doi.org/10.5281/zenodo.7790115}
}
```