---
license: apache-2.0
---
# TigerBot

A cutting-edge foundation for your very own LLM.

🌐 TigerBot • 🤗 Hugging Face

This is a 4-bit GPTQ version of [TigerBot 70B chat v2](https://huggingface.co/TigerResearch/tigerbot-70b-chat), quantized to 4-bit with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

## How to download and use this model

The code lives on GitHub: https://github.com/TigerResearch/TigerBot

Clone the TigerBot repository and install its dependencies:

```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```

### Inference with the command-line interface

Infer with exllama:

```
# Install exllama_lib
pip install exllama_lib@git+https://github.com/taprosoft/exllama.git

# Start inference
CUDA_VISIBLE_DEVICES=0 python other_infer/exllama_infer.py --model_path TigerResearch/tigerbot-70b-chat-4bit
```

Infer with auto-gptq:

```
# Install auto-gptq
pip install auto-gptq

# Start inference
CUDA_VISIBLE_DEVICES=0 python other_infer/gptq_infer.py --model_path TigerResearch/tigerbot-70b-chat-4bit
```
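To give a rough intuition for what 4-bit quantization does to the weights, here is a toy sketch (not TigerBot's or AutoGPTQ's actual pipeline, which additionally minimizes layer output error using second-order information): it rounds a float weight matrix onto a 16-level signed integer grid with a per-row scale, then dequantizes it back.

```python
import numpy as np

def quantize_4bit(w, axis=1):
    """Toy symmetric 4-bit quantization with a per-row scale.

    Real GPTQ chooses rounding to minimize the layer's output error;
    this sketch only shows the round-to-grid step.
    """
    # Signed 4-bit grid: integers in [-8, 7]; scale maps max |w| per row to 7
    scale = np.abs(w).max(axis=axis, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Recover approximate float weights from the 4-bit integers
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
max_err = float(np.abs(w - w_hat).max())
```

Stored as 4-bit integers plus one scale per row, the 70B-parameter weights shrink to roughly a quarter of their fp16 size, which is what makes single-GPU inference with the commands above feasible.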