A strict copy of https://huggingface.co/tiiuae/falcon-40b, quantized with GPTQ (calibrated on wikitext-2, 4 bits, group size 128).

Intended to be used with https://github.com/huggingface/text-generation-inference:
```
model=huggingface/falcon-40b-gptq
num_shard=2
volume=$PWD/data # share a volume with the Docker container to avoid re-downloading weights on every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:0.8 \
    --model-id $model --num-shard $num_shard --quantize gptq
```
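Once the container is up, the server listens on port 8080. As a quick sanity check, you can query text-generation-inference's `/generate` endpoint (the prompt and parameters below are illustrative):

```
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is quantization?","parameters":{"max_new_tokens":50}}' \
    -H 'Content-Type: application/json'
```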
For the full set of configuration options and for usage outside of Docker, please refer to https://github.com/huggingface/text-generation-inference
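As a sketch, with text-generation-inference installed locally, the same server can be started via its launcher, with flags mirroring the Docker invocation above:

```
text-generation-launcher --model-id huggingface/falcon-40b-gptq \
    --num-shard 2 --quantize gptq --port 8080
```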