TigerBot

A cutting-edge foundation for your very own LLM.

🌐 TigerBot • 🤗 Hugging Face

This is an 8-bit GPTQ-quantized version of the TigerBot-13B-chat model.

It was quantized to 8-bit using AutoGPTQ: https://github.com/PanQiWei/AutoGPTQ
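The quantization script itself is not included in this card. As a rough illustration of what an 8-bit AutoGPTQ run looks like, here is a minimal sketch; the base checkpoint name, output directory, calibration text, and group size below are assumptions, not the settings actually used for this release.

```python
# Minimal AutoGPTQ 8-bit quantization sketch (illustrative only; the actual
# calibration data and settings for this checkpoint are not documented here).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "TigerResearch/tigerbot-13b-chat"   # assumed base checkpoint
quantized_dir = "tigerbot-13b-chat-8bit"         # example output directory

tokenizer = AutoTokenizer.from_pretrained(base_model)

# A handful of tokenized prompts serves as calibration data in this sketch.
examples = [tokenizer("TigerBot is a large language model developed by Tiger Research.")]

quantize_config = BaseQuantizeConfig(
    bits=8,          # 8-bit weights, matching this model card
    group_size=128,  # assumed group size
    desc_act=False,
)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized(quantized_dir)
tokenizer.save_pretrained(quantized_dir)
```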

Instructions for downloading and using this model are available on GitHub: https://github.com/TigerResearch/TigerBot

Here are the commands to clone the TigerBot repository and install its dependencies.

```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```

Inference with command line interface

```
# Install auto-gptq
pip install auto-gptq

# Run inference
CUDA_VISIBLE_DEVICES=0 python other_infer/gptq_infer.py --model_path TigerResearch/tigerbot-13b-chat-8bit
```
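Alternatively, the quantized checkpoint can be loaded directly from Python with AutoGPTQ. The snippet below is a minimal sketch; the prompt and generation parameters are illustrative assumptions, not the behavior of the official other_infer/gptq_infer.py script.

```python
# Minimal sketch: load the 8-bit GPTQ checkpoint with AutoGPTQ and generate.
# Prompt wording and generation settings here are assumptions.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_path = "TigerResearch/tigerbot-13b-chat-8bit"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoGPTQForCausalLM.from_quantized(model_path, device="cuda:0")

prompt = "How do I keep a houseplant alive?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```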