<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`

Under Download Model, you can enter the model repo, `TheBloke/CodeFuse-CodeLlama-34B-GGUF`, and below it a specific filename to download, such as `codefuse-codellama-34b.Q4_K_M.gguf`.

Then click Download.
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/CodeFuse-CodeLlama-34B-GGUF codefuse-codellama-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
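
If you would rather script the download, the same single-file fetch can be done from Python with the library's `hf_hub_download` function; a minimal sketch:

```python
from huggingface_hub import hf_hub_download

# Downloads the single GGUF file into the current directory and
# returns the local path to the downloaded file.
model_path = hf_hub_download(
    repo_id="TheBloke/CodeFuse-CodeLlama-34B-GGUF",
    filename="codefuse-codellama-34b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```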

<details>
<summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/CodeFuse-CodeLlama-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
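
The Python equivalent of this pattern download is `snapshot_download` with `allow_patterns`; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Downloads every file in the repo whose name matches the pattern,
# mirroring the --include filter used above.
snapshot_download(
    repo_id="TheBloke/CodeFuse-CodeLlama-34B-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
    local_dir_use_symlinks=False,
)
```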

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeFuse-CodeLlama-34B-GGUF codefuse-codellama-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
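
From Python, the same switch can be flipped by setting the environment variable before `huggingface_hub` is imported, since the flag is read at import time; a minimal sketch, assuming `hf_transfer` is installed:

```python
import os

# Must be set before huggingface_hub is imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/CodeFuse-CodeLlama-34B-GGUF",
    filename="codefuse-codellama-34b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```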

</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command