TheBloke committed on
Commit
1aacb5d
1 Parent(s): 8d776b4

Update README.md

Files changed (1)
  1. README.md +42 -14
README.md CHANGED
@@ -23,13 +23,28 @@ These files are GPTQ 4bit model files for [Panchovix's merge of Guanaco 33B and
 
 It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
+ **This is an experimental new GPTQ which offers up to 8K context size**
+
+ The increased context is tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ It has also been tested from Python code using AutoGPTQ, and `trust_remote_code=True`.
+
+ Code credits:
+ - Original concept and code for increasing context length: [kaiokendev](https://huggingface.co/kaiokendev)
+ - Updated Llama modelling code that includes this automatically via trust_remote_code: [emozilla](https://huggingface.co/emozilla).
+
+ Please read carefully below to see how to use it.
+
+ **NOTE**: Using the full 8K context on a 30B model will exceed 24GB VRAM.
+
+ GGML versions are not yet provided, as there is not yet support for SuperHOT in llama.cpp. This is being investigated and will hopefully come soon.
+
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Panchovix/Guanaco-33B-SuperHOT-8k)
 
- ## How to easily download and use this model in text-generation-webui
+ ## How to easily download and use this model in text-generation-webui with ExLlama
 
 Please make sure you're using the latest version of text-generation-webui
 
@@ -37,20 +52,25 @@ Please make sure you're using the latest version of text-generation-webui
  2. Under **Download custom model or LoRA**, enter `TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ`.
 3. Click **Download**.
 4. The model will start downloading. Once it's finished it will say "Done"
- 5. In the top left, click the refresh icon next to **Model**.
- 6. In the **Model** dropdown, choose the model you just downloaded: `Guanaco-33B-SuperHOT-8K-GPTQ`
- 7. The model will automatically load, and is now ready for use!
- 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
- 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
+ 5. Untick **Autoload the model**
+ 6. In the top left, click the refresh icon next to **Model**.
+ 7. In the **Model** dropdown, choose the model you just downloaded: `Guanaco-33B-SuperHOT-8K-GPTQ`
+ 8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context, or to **2** for 4096 context.
+ 9. Now click **Save Settings** followed by **Reload**
+ 10. The model will automatically load, and is now ready for use!
+ 11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
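As a quick check of the numbers in step 8, **compress_pos_emb** appears to be the target context divided by LLaMA's native 2,048-token window; that relationship is an assumption, not something this README states:

```python
# Assumed rule of thumb: compress_pos_emb = max_seq_len / 2048 (LLaMA's native context).
NATIVE_CTX = 2048

for max_seq_len in (8192, 4096):
    print(f"max_seq_len={max_seq_len} -> compress_pos_emb={max_seq_len // NATIVE_CTX}")
# prints 4 for 8192 and 2 for 4096, matching step 8 above
```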
 
- ## How to use this GPTQ model from Python code
+ ## How to use this GPTQ model from Python code with AutoGPTQ
 
- First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
+ First make sure you have AutoGPTQ and Einops installed:
 
- `pip install auto-gptq`
+ ```
+ pip3 install einops auto-gptq
+ ```
 
- Then try the following example code:
+ Then run the following code. Note that in order to get this to work, `config.json` has been hardcoded to a sequence length of 8192.
+
+ If you want to try 4096 instead to reduce VRAM usage, please manually edit `config.json` to set `max_position_embeddings` to the value you want.
 
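For instance, a minimal sketch of that manual edit (the local folder name is an assumption; point it at wherever you downloaded the model):

```python
import json
from pathlib import Path

# Assumed local download location; adjust to your own path.
config_path = Path("TheBloke_Guanaco-33B-SuperHOT-8K-GPTQ") / "config.json"

config = json.loads(config_path.read_text())
config["max_position_embeddings"] = 4096  # down from the hardcoded 8192, to reduce VRAM usage
config_path.write_text(json.dumps(config, indent=2))
```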
  ```python
 from transformers import AutoTokenizer, pipeline, logging
@@ -67,11 +87,13 @@ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 model_basename=model_basename,
 use_safetensors=True,
- trust_remote_code=False,
- device="cuda:0",
+ trust_remote_code=True,
+ device_map='auto',
 use_triton=use_triton,
 quantize_config=None)
 
+ model.seqlen = 8192
+
 # Note: check the prompt template is correct for this model.
 prompt = "Tell me about AI"
 prompt_template=f'''USER: {prompt}
 
@@ -102,6 +124,12 @@ pipe = pipeline(
  print(pipe(prompt_template)[0]['generated_text'])
  ```
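The hunks above show only the lines that changed in the README's example. For reference, here is a self-contained sketch of the updated flow; the generation settings and the `ASSISTANT:` half of the prompt template are assumptions (check the prompt template, as the note says), while the rest is taken from the fragments shown in this diff and the file name listed under Provided files:

```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ"
# Basename of the .safetensors file listed under "Provided files" below.
model_basename = "guanaco-33b-superhot-8k-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,   # pulls in the updated Llama modelling code for 8K context
        device_map='auto',
        use_triton=use_triton,
        quantize_config=None)

model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''  # the ASSISTANT: half is an assumption

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,       # generation settings are assumptions
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```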
 
+ ## Using other UIs: monkey patch
+
+ Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
+
+ It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
+
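The patch itself is not shown in this diff, so the entry point below is an assumption (check `llama_rope_scaled_monkey_patch.py` for the exact function name), but the general wiring would look something like this, applied before the model is loaded:

```python
# Hypothetical usage of the monkey patch as an alternative to trust_remote_code=True.
# The function name is an assumption; check llama_rope_scaled_monkey_patch.py for the real one.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope
from auto_gptq import AutoGPTQForCausalLM

# Patch transformers' Llama rotary position embeddings before building the model.
replace_llama_rope_with_scaled_rope()

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ",
    model_basename="guanaco-33b-superhot-8k-GPTQ-4bit--1g.act.order",
    use_safetensors=True,
    device_map='auto',
    quantize_config=None,
)
model.seqlen = 8192
```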
  ## Provided files
  **guanaco-33b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors**