FPHam committed on
Commit
6d872e4
1 Parent(s): 5055c01

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ Based on LLAMA 13b and Wizard-Vucna-uncensored finetune, then finetuned with abo
 
  ## Quantized version (Quantized by TheBloke)
 
- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ)
+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/FPHam/Karen_theEditor-13B-4bit-128g-GPTQ)
  * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
 
  Karen gets triggered by this prompt (pun intended):