Commit af978bd (1 parent: bd22049), committed by TheBloke

Update for Transformers GPTQ support

README.md CHANGED
@@ -4,17 +4,20 @@ license: other
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # H2O's H2OGPT Research OASST1 LLaMa 65B GPTQ
@@ -124,6 +127,7 @@ It was created without group_size to lower VRAM requirements, and with --act-ord
 * Parameters: Groupsize = -1. Act Order / desc_act = True.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -143,12 +147,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
-**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: H2O's H2OGPT Research OASST1 LLaMa 65B
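The second hunk's context records the quantisation parameters for this model (4-bit, Groupsize = -1, Act Order / desc_act = True). As a loose illustration only, not part of this commit, these map onto `transformers.GPTQConfig` roughly as below, with the remaining values taken from the `quantization_config` block added to `config.json` further down; assumes transformers >= 4.32, which introduced `GPTQConfig`.

```python
# Sketch only: the repo's GPTQ parameters expressed as a transformers GPTQConfig.
# Assumes transformers >= 4.32; not part of this commit.
from transformers import GPTQConfig

gptq_config = GPTQConfig(
    bits=4,              # 4-bit quantisation
    group_size=-1,       # no grouping, chosen to lower VRAM use
    desc_act=True,       # act-order
    damp_percent=0.01,
    sym=True,
    true_sequential=True,
)
print(gptq_config.to_dict())
```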
config.json CHANGED
@@ -1,30 +1,40 @@
 {
-"_name_or_path": "decapoda-research/llama-65b-hf",
-"architectures": [
-"LlamaForCausalLM"
-],
-"bos_token_id": 0,
-"custom_pipelines": {
-"text-generation": {
-"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
-"pt": "AutoModelForCausalLM"
-}
-},
-"eos_token_id": 1,
-"hidden_act": "silu",
-"hidden_size": 8192,
-"initializer_range": 0.02,
-"intermediate_size": 22016,
-"max_position_embeddings": 2048,
-"max_sequence_length": 2048,
-"model_type": "llama",
-"num_attention_heads": 64,
-"num_hidden_layers": 80,
-"pad_token_id": -1,
-"rms_norm_eps": 1e-05,
-"tie_word_embeddings": false,
-"torch_dtype": "float16",
-"transformers_version": "4.30.1",
-"use_cache": true,
-"vocab_size": 32000
+"_name_or_path": "decapoda-research/llama-65b-hf",
+"architectures": [
+"LlamaForCausalLM"
+],
+"bos_token_id": 0,
+"custom_pipelines": {
+"text-generation": {
+"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
+"pt": "AutoModelForCausalLM"
+}
+},
+"eos_token_id": 1,
+"hidden_act": "silu",
+"hidden_size": 8192,
+"initializer_range": 0.02,
+"intermediate_size": 22016,
+"max_position_embeddings": 2048,
+"max_sequence_length": 2048,
+"model_type": "llama",
+"num_attention_heads": 64,
+"num_hidden_layers": 80,
+"pad_token_id": -1,
+"rms_norm_eps": 1e-05,
+"tie_word_embeddings": false,
+"torch_dtype": "float16",
+"transformers_version": "4.30.1",
+"use_cache": true,
+"vocab_size": 32000,
+"quantization_config": {
+"bits": 4,
+"group_size": -1,
+"damp_percent": 0.01,
+"desc_act": true,
+"sym": true,
+"true_sequential": true,
+"model_file_base_name": "model",
+"quant_method": "gptq"
+}
 }
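Embedding `quantization_config` in `config.json` is what lets recent Transformers releases (roughly 4.32 and later, with `optimum` and `auto-gptq` installed) load these GPTQ weights through the standard auto classes. A minimal sketch, assuming the Hub repo id below, which the diff itself does not state:

```python
# Minimal sketch: loading via plain Transformers now that config.json carries
# quantization_config. Assumes transformers >= 4.32 plus optimum and auto-gptq,
# and that the repo id below is correct (not stated in this diff).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/h2ogpt-research-oasst1-llama-65b-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What is H2O.ai?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```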
h2ogpt-research-oasst1-llama-65b-GPTQ-4bit--1g.act.order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:80dc65aa1a11490962921ca6ad8f859b359012c4d227a0ce794cc012217d0518
-size 33489332352
+oid sha256:b91ca0282321bc785dac47163d52ba8e8bd06b6be70eed3b959e8351d91a2218
+size 33489332408
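The LFS pointer changes because the shard was renamed to `model.safetensors`, matching the new `model_file_base_name`. A quick hedged check with `huggingface_hub`, repo id assumed as above:

```python
# Sketch: confirm the weight file exists under its new name after this commit.
# Assumes the huggingface_hub package and the repo id below (not stated in this diff).
from huggingface_hub import list_repo_files

repo_id = "TheBloke/h2ogpt-research-oasst1-llama-65b-GPTQ"  # assumed repo id
print("model.safetensors" in list_repo_files(repo_id))  # expected: True
```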
quantize_config.json CHANGED
@@ -1,8 +1,9 @@
 {
-"bits": 4,
-"group_size": -1,
-"damp_percent": 0.01,
-"desc_act": true,
-"sym": true,
-"true_sequential": true
+"bits": 4,
+"group_size": -1,
+"damp_percent": 0.01,
+"desc_act": true,
+"sym": true,
+"true_sequential": true,
+"model_file_base_name": "model"
 }
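For AutoGPTQ users, the added `model_file_base_name` key lets the loader locate `model.safetensors` without an explicit `model_basename` argument. A hedged before/after sketch with `auto_gptq`, repo id assumed as above:

```python
# Sketch: loading with AutoGPTQ before and after this commit.
# Assumes the auto-gptq package and the repo id below (not stated in this diff).
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/h2ogpt-research-oasst1-llama-65b-GPTQ"  # assumed repo id

# Before this commit, the basename had to match the old filename explicitly:
# model = AutoGPTQForCausalLM.from_quantized(
#     repo_id,
#     model_basename="h2ogpt-research-oasst1-llama-65b-GPTQ-4bit--1g.act.order",
#     use_safetensors=True,
#     device="cuda:0",
# )

# After: quantize_config.json carries model_file_base_name, so no basename is needed.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    use_safetensors=True,
    device="cuda:0",
)
```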