TheBloke committed on
Commit d316dc9
1 Parent(s): 7f31b54

Initial GPTQ model commit

Files changed (1): README.md (+43, −17)
README.md CHANGED
@@ -23,12 +23,14 @@ tags:
 </div>
 <!-- header end -->
 
-# NousResearch's Redmond Puffin 13B GPTQ
+# NousResearch's Redmond Puffin 13B V1.3 GPTQ
 
-These files are GPTQ model files for [NousResearch's Redmond Puffin 13B](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
+These files are GPTQ model files for [NousResearch's Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
 
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
+Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!
+
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
@@ -50,10 +52,14 @@ Each separate quant is in a different branch. See below for instructions on fet
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
-| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+| gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches
 
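Each quant in the table above sits on its own branch of the GPTQ repo, so a specific variant can be fetched by passing the branch name as a revision. As a minimal sketch using `huggingface_hub` (an editorial illustration, not text from the committed README):

```python
# Download one specific quant branch instead of `main`.
# Assumes `pip install huggingface_hub`; any branch name from the table works.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/Redmond-Puffin-13B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # value from the "Branch" column above
)
print(f"Model files downloaded to: {local_dir}")
```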
@@ -103,7 +109,7 @@ use_triton = False
 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-        model_basename=model_basename
+        model_basename=model_basename,
         use_safetensors=True,
         trust_remote_code=False,
         device="cuda:0",
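The `+` line above restores the trailing comma after `model_basename=model_basename`, without which the README's Python example is a `SyntaxError`. For context, a self-contained sketch of the call being patched; the `model_basename` value is an assumption here, so check the actual `.safetensors` file name in the branch you downloaded:

```python
# Minimal sketch of loading one of these GPTQ quants with AutoGPTQ.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Redmond-Puffin-13B-GPTQ"
model_basename = "gptq_model-4bit-128g"  # assumed name; match your branch's file

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,  # the comma here is what this commit fixes
    use_safetensors=True,
    trust_remote_code=False,
    device="cuda:0",
    use_triton=False,
)
```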
@@ -180,17 +186,19 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
 
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
+**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
+
 
 Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: NousResearch's Redmond Puffin 13B
+# Original model card: NousResearch's Redmond Puffin 13B V1.3
+
 
 ![puffin](https://i.imgur.com/R2xTHMb.png)
 
-## **Redmond-Puffin-13b (Currently available as a Preview edition)**
+## **Redmond-Puffin-13b-V1.3**
 
 **The first commercially available language model released by Nous Research!**
@@ -204,25 +212,39 @@ Notable mentions for assisting in some of the training issues goes to: Caseus an
 
 ## Model Training
 
-Redmond-Puffin-13B is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
+Redmond-Puffin-13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
 
 Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
 
 ## Prompt Format
 
-The model follows the Vicuna ShareGPT prompt format:
+The recommended prompt format is:
 
 ```
 ### human:
 
-### gpt:
+### response:
+
+```
+Optional recommended pre-prompt / system prompt:
+
 ```
+### human: Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.
+
+### response: Sure! Sounds good.
+```
+
+## Improvements over previous version:
+
+The original Puffin model was loved by many, but it was quickly discovered to have dataset errors in a significant number of the conversations.
+The Puffin-V1.3 dataset solves this issue, and the fixed model has now fully finished training!
+
 
 ## Notable Features:
 
 - The first Llama-2 based fine-tuned model released by Nous Research.
 
-- Ability to recall information from upto late 2022 without internet. (ChatGPT cut off date is in 2021)
+- Ability to recall information up to 2023 without internet. (ChatGPT's cut-off date is in 2021.)
 
 - Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs.)
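Since the recommended format above is just plain-text turns separated by blank lines, it is easy to assemble programmatically. A short sketch; the helper is a hypothetical convenience, not part of the model card:

```python
# Hypothetical helper that renders a user message into the
# "### human:" / "### response:" format, optionally prepending
# the suggested pre-prompt exchange.
def build_puffin_prompt(user_message: str, use_preprompt: bool = True) -> str:
    turns = []
    if use_preprompt:
        turns += [
            "### human: Interact in conversation to the best of your ability, "
            "please be concise, logical, intelligent and coherent.",
            "### response: Sure! Sounds good.",
        ]
    turns += [f"### human: {user_message}", "### response:"]  # left open for the model
    return "\n\n".join(turns)

print(build_puffin_prompt("Explain GPTQ group size in one sentence."))
```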
@@ -240,9 +262,13 @@ We plan to have these solved in an updated Puffin model in the very near future,
 
 This is a relatively early build amongst the grand plans for the future of Puffin!
 
-Current limitations: Some token mismatch problems and formatting issues have been idenitifed, these may very possibly effect the current output quality, we plan to have these solved in an updated Puffin model in the near future.
+Current limitations: Some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2, along with other improvements.
+
+## How you can help!
+
+In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
 
-In the near future we plan on releasing an improved version of the model with the help of domain specific expert volunteers, which will help eliminate any wrong data from this curation and improve the further ones.
+If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise, please contact ldj on Discord!
 
 ## Benchmarks coming soon