mradermacher committed
Commit 975f290
1 Parent(s): 784ce34

auto-patch README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -32,6 +32,7 @@ more details, including on how to concatenate multi-part files.
 
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Vakeel-8B-v2-mini-GGUF/resolve/main/Vakeel-8B-v2-mini.IQ3_S.gguf) | IQ3_S | 0.9 | beats Q3_K* |
 | [GGUF](https://huggingface.co/mradermacher/Vakeel-8B-v2-mini-GGUF/resolve/main/Vakeel-8B-v2-mini.f16.gguf) | f16 | 3.8 | 16 bpw, overkill |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
@@ -51,6 +52,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->
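
For anyone who wants to try the IQ3_S quant added in this commit, here is a minimal download sketch. It assumes the huggingface_hub Python package is installed; this is not part of the repo's own instructions, which defer usage details (such as concatenating multi-part files) to TheBloke's READMEs.

```python
# Minimal sketch: fetch the IQ3_S quant listed in the patched table.
# Assumes `pip install huggingface_hub`; repo id and filename are
# copied from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Vakeel-8B-v2-mini-GGUF",
    filename="Vakeel-8B-v2-mini.IQ3_S.gguf",
)
print(path)  # local path to the cached GGUF file
```

The downloaded file can then be loaded by any GGUF-capable runtime, such as llama.cpp.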