TheBloke committed
Commit 0fb3ade
Parent: e9d3bbb

Upload README.md

Files changed (1):
  README.md +5 -12
README.md CHANGED
@@ -88,15 +88,8 @@ Below is an instruction that describes a task. Write a response that appropriate
  ```
 
  <!-- prompt-template end -->
- <!-- licensing start -->
- ## Licensing
 
- The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
 
- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Ausboss' LLaMa 13B Supercot](https://huggingface.co/ausboss/llama-13b-supercot).
- <!-- licensing end -->
  <!-- compatibility_gguf start -->
  ## Compatibility
 
@@ -155,7 +148,7 @@ The following clients/libraries will automatically download models for you, prov
 
  ### In `text-generation-webui`
 
- Under Download Model, you can enter the model repo: TheBloke/llama-13b-supercot-GGUF and below it, a specific filename to download, such as: llama-13b-supercot.q4_K_M.gguf.
+ Under Download Model, you can enter the model repo: TheBloke/llama-13b-supercot-GGUF and below it, a specific filename to download, such as: llama-13b-supercot.Q4_K_M.gguf.
 
  Then click Download.
 
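The only change in the hunk above is filename casing (`q4_K_M` to `Q4_K_M`); Hub filenames are case-sensitive, so a download request with the old spelling fails. To confirm the exact filenames before downloading, here is a minimal sketch using the `huggingface_hub` Python package (assumed installed; not part of this commit):

```python
# List the exact GGUF filenames in the repo to confirm casing
# (requires: pip3 install huggingface-hub).
from huggingface_hub import list_repo_files

for name in list_repo_files("TheBloke/llama-13b-supercot-GGUF"):
    if name.endswith(".gguf"):
        print(name)  # e.g. llama-13b-supercot.Q4_K_M.gguf
```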
@@ -170,7 +163,7 @@ pip3 install huggingface-hub>=0.17.1
  Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
  ```shell
- huggingface-cli download TheBloke/llama-13b-supercot-GGUF llama-13b-supercot.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download TheBloke/llama-13b-supercot-GGUF llama-13b-supercot.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
  <details>
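The same download can also be scripted. A sketch of a Python equivalent of the command above, using `huggingface_hub.hf_hub_download` (the `local_dir_use_symlinks` argument matches the CLI flag in huggingface-hub 0.17-era releases; the snippet is illustrative, not part of the commit):

```python
# Programmatic equivalent of the huggingface-cli command above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/llama-13b-supercot-GGUF",
    filename="llama-13b-supercot.Q4_K_M.gguf",  # corrected casing
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(path)  # local path of the downloaded file
```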
@@ -193,7 +186,7 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
  ```shell
- HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama-13b-supercot-GGUF llama-13b-supercot.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama-13b-supercot-GGUF llama-13b-supercot.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
  Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
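The `hf_transfer` acceleration works from Python as well; a sketch assuming the package is installed. The environment variable must be set before `huggingface_hub` is imported, since the library reads it at import time:

```python
import os

# Must be set before huggingface_hub is imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/llama-13b-supercot-GGUF",
    filename="llama-13b-supercot.Q4_K_M.gguf",
    local_dir=".",
)
```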
@@ -206,7 +199,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
  Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
  ```shell
- ./main -ngl 32 -m llama-13b-supercot.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
+ ./main -ngl 32 -m llama-13b-supercot.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
  ```
 
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
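To load the same file from Python rather than the `./main` binary, a sketch using the `llama-cpp-python` bindings (an assumption; that package is not referenced in the hunk). Parameters mirror the command line above:

```python
# Equivalent of the ./main invocation, via llama-cpp-python
# (pip3 install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-13b-supercot.Q4_K_M.gguf",
    n_gpu_layers=32,  # like -ngl 32; set to 0 without GPU acceleration
    n_ctx=4096,       # like -c 4096
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about llamas.\n\n### Response:"
)
output = llm(prompt, max_tokens=128, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```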
@@ -246,7 +239,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
  from ctransformers import AutoModelForCausalLM
 
  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama-13b-supercot-GGUF", model_file="llama-13b-supercot.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama-13b-supercot-GGUF", model_file="llama-13b-supercot.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
  print(llm("AI is going to"))
  ```
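As a usage note on the ctransformers block above: the returned `llm` object can also stream tokens instead of returning one string; a minimal sketch, assuming ctransformers' `stream=True` generator interface:

```python
# Streaming variant of the print(llm(...)) call above.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
```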