TheBloke committed
Commit
78ad800
1 Parent(s): 7a5ceb8

Update README.md

Files changed (1)
  1. README.md +0 -8
README.md CHANGED
@@ -41,14 +41,6 @@ I have the following Vicuna 1.1 repositories available:
  * [GPTQ quantized 4bit 7B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/vicuna-7B-1.1-GGML)
 
- **GGMLs for CPU inference**
-
- I removed the GGMLs I originally made for Vicuna 1.1 because they were directly converted GPTQ -> GGML and this seemed to give poor results
-
- Instead I recommend you use eachadea's GGMLs:
- * [eachadea's Vicuna 13B 1.1 GGML format for `llama.cpp`](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1)
- * [eachadea's Vicuna 7B 1.1 GGML format for `llama.cpp`](https://huggingface.co/eachadea/ggml-vicuna-7b-1.1)
-
  ## How to easily download and use this model in text-generation-webui
 
  Open the text-generation-webui UI as normal.