Update README.md
README.md CHANGED
@@ -23,6 +23,8 @@ These files are GPTQ 4bit model files for [LmSys' Vicuna 13B v1.3](https://huggi
 
 It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
+**NOTE**: This model was recently updated by the LmSys Team. If you already downloaded Vicuna 13B v1.3 GPTQ or GGML, you may want to re-download it from this repo, as the weights were updated.
+
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GPTQ)
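
For users who already have a local copy of the previous weights, a minimal re-download sketch is shown below. This is an assumption-based example, not part of the README change itself: it assumes a recent `huggingface_hub` release and uses the `TheBloke/vicuna-13b-v1.3.0-GPTQ` repo ID from the "Repositories available" list; the local directory name is hypothetical.

```python
# Sketch: re-fetch the updated GPTQ weights, bypassing any stale cached copy.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/vicuna-13b-v1.3.0-GPTQ",
    local_dir="vicuna-13b-v1.3.0-GPTQ",  # hypothetical local path
    force_download=True,                 # ignore previously cached files and download fresh copies
)
```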