|
--- |
|
inference: false |
|
license: other |
|
--- |
|
|
|
<!-- header start --> |
|
<div style="width: 100%;"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<!-- header end --> |
|
|
|
# LmSys' Vicuna 13B v1.3 GPTQ |
|
|
|
These files are GPTQ 4bit model files for [LmSys' Vicuna 13B v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3). |
|
|
|
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). |
|
|
|
**NOTE**: This model was recently updated by the LmSys Team. If you already downloaded Vicuna 13B v1.3 GPTQ or GGML, you may want to re-download it from this repo, as the weights were updated. The original model I uploaded has been renamed to v1.3-preview. |
|
|
|
## Repositories available |
|
|
|
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GPTQ) |
|
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML) |
|
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.3) |
|
|
|
## How to easily download and use this model in text-generation-webui |
|
|
|
Please make sure you're using the latest version of text-generation-webui.
|
|
|
1. Click the **Model tab**. |
|
2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-13b-v1.3.0-GPTQ`. |
|
3. Click **Download**. |
|
4. The model will start downloading. Once it's finished, it will say "Done".
|
5. In the top left, click the refresh icon next to **Model**. |
|
6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-13b-v1.3.0-GPTQ` |
|
7. The model will automatically load, and is now ready for use! |
|
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. |
|
* Note that you no longer need to, and should not, set GPTQ parameters manually. They are set automatically from the file `quantize_config.json`.
|
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! |
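
If you prefer to download the files outside the web UI (for example on a headless server), the sketch below uses `huggingface_hub` as an alternative to steps 2-4 above; the `local_dir` arguments assume a reasonably recent version of the library.

```python
# Sketch: download this GPTQ repo without the web UI.
# Assumes `pip install huggingface_hub`; local_dir / local_dir_use_symlinks
# require a reasonably recent huggingface_hub release.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/vicuna-13b-v1.3.0-GPTQ",
    local_dir="models/vicuna-13b-v1.3.0-GPTQ",
    local_dir_use_symlinks=False,  # copy real files rather than symlinks
)
```

Point `local_dir` at text-generation-webui's `models` directory and the model will appear in the **Model** dropdown after a refresh.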
|
|
|
## How to use this GPTQ model from Python code |
|
|
|
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: |
|
|
|
`pip install auto-gptq` |
|
|
|
Then try the following example code: |
|
|
|
```python |
|
from transformers import AutoTokenizer, pipeline, logging |
|
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig |
|
|
|
|
model_name_or_path = "TheBloke/vicuna-13b-v1.3.0-GPTQ" |
|
model_basename = "vicuna-13b-v1.3.0-GPTQ-4bit-128g.no-act.order" |
|
|
|
use_triton = False |
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) |
|
|
|
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, |
|
model_basename=model_basename, |
|
use_safetensors=True, |
|
trust_remote_code=False, |
|
device="cuda:0", |
|
use_triton=use_triton, |
|
quantize_config=None) |
|
|
|
# Note: check the prompt template is correct for this model. |
|
prompt = "Tell me about AI" |
|
prompt_template=f'''USER: {prompt} |
|
ASSISTANT:''' |
|
|
|
print("\n\n*** Generate:") |
|
|
|
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() |
|
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) |
|
print(tokenizer.decode(output[0])) |
|
|
|
# Inference can also be done using transformers' pipeline |
|
|
|
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ |
|
logging.set_verbosity(logging.CRITICAL) |
|
|
|
print("*** Pipeline:") |
|
pipe = pipeline( |
|
"text-generation", |
|
model=model, |
|
tokenizer=tokenizer, |
|
max_new_tokens=512, |
|
temperature=0.7, |
|
top_p=0.95, |
|
repetition_penalty=1.15 |
|
) |
|
|
|
print(pipe(prompt_template)[0]['generated_text']) |
|
``` |
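
The template in the example is the minimal `USER:` / `ASSISTANT:` form. Vicuna v1.3 was fine-tuned with a system message in front of the conversation; the sketch below shows the fuller template as used in FastChat's Vicuna conversation format (wording should be verified against the FastChat repository):

```python
# Sketch of the fuller Vicuna prompt, including the system message used during
# fine-tuning. The exact wording is taken from FastChat's Vicuna template and
# should be verified against https://github.com/lm-sys/FastChat.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = "Tell me about AI"
prompt_template = f"{system} USER: {prompt} ASSISTANT:"
```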
|
|
|
## Provided files |
|
|
|
**vicuna-13b-v1.3.0-GPTQ-4bit-128g.no-act.order.safetensors** |
|
|
|
This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead. |
|
|
|
It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed. |
|
|
|
* `vicuna-13b-v1.3.0-GPTQ-4bit-128g.no-act.order.safetensors` |
|
* Works with AutoGPTQ in CUDA or Triton modes. |
|
* LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama), which usually provides much higher performance and uses less VRAM than AutoGPTQ.
|
* Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode. |
|
* Works with text-generation-webui, including one-click-installers. |
|
* Parameters: Groupsize = 128. Act Order / desc_act = False. |
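
For reference, the parameters listed above correspond to an AutoGPTQ quantisation config roughly like the sketch below; the authoritative values are in the `quantize_config.json` shipped in this repo, which AutoGPTQ reads automatically.

```python
# Sketch of the quantisation settings described above, expressed as an
# AutoGPTQ BaseQuantizeConfig. The values actually used are stored in
# quantize_config.json inside the repo.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantisation
    group_size=128,  # groupsize 128 for better inference accuracy
    desc_act=False,  # --act-order disabled for compatibility and speed
)
```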
|
|
|
<!-- footer start --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/theblokeai) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. |
|
|
|
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. |
|
|
|
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. |
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. |
|
|
|
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. |
|
|
|
Thank you to all my generous patrons and donaters! |
|
|
|
<!-- footer end --> |
|
|
|
# Original model card: LmSys' Vicuna 13B v1.3 |
|
|
|
|
|
# Vicuna Model Card |
|
|
|
## Model Details |
|
|
|
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. |
|
|
|
- **Developed by:** [LMSYS](https://lmsys.org/) |
|
- **Model type:** An auto-regressive language model based on the transformer architecture. |
|
- **License:** Non-commercial license |
|
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). |
|
|
|
### Model Sources |
|
|
|
- **Repository:** https://github.com/lm-sys/FastChat |
|
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ |
|
- **Paper:** https://arxiv.org/abs/2306.05685 |
|
- **Demo:** https://chat.lmsys.org/ |
|
|
|
## Uses |
|
|
|
The primary use of Vicuna is research on large language models and chatbots. |
|
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. |
|
|
|
## How to Get Started with the Model |
|
|
|
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. |
|
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. |
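
Outside of the FastChat routes above, the unquantised fp16 weights can also be loaded directly with `transformers`; the sketch below is an illustrative minimum (it assumes a GPU with enough VRAM for 13B in fp16, roughly 26 GB, and that `accelerate` is installed so `device_map="auto"` works):

```python
# Minimal sketch: load the unquantised fp16 weights with plain transformers.
# Assumes sufficient GPU VRAM (~26 GB for 13B in fp16) and that `accelerate`
# is installed so device_map="auto" can place the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("USER: Tell me about AI ASSISTANT:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=True, temperature=0.7, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```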
|
|
|
## Training Details |
|
|
|
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. |
|
The training data is around 140K conversations collected from ShareGPT.com. |
|
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). |
|
|
|
## Evaluation |
|
|
|
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf). |
|
|
|
## Difference between different versions of Vicuna |
|
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) |
|
|