GPTQ 4-bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca

Command used to produce the quantized checkpoint:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca \
    --wbits 4 \
    --true-sequential \
    --act-order \
    --groupsize 128 \
    --save gpt-x-alpaca-13b-native-4bit-128g.pt
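For intuition about what `--wbits 4 --groupsize 128` means, here is a minimal sketch of group-wise 4-bit round-to-nearest quantization: each run of 128 weights shares one scale and zero-point. This is a simplified illustration only; GPTQ itself additionally does second-order error compensation (and, with `--act-order`, quantizes columns in order of activation magnitude), which is omitted here.

```python
import numpy as np

def quantize_groupwise(weights, wbits=4, groupsize=128):
    """Per-group asymmetric round-to-nearest quantization.

    Illustrative sketch only -- not the GPTQ algorithm itself, which
    also compensates quantization error using Hessian information.
    """
    qmax = 2 ** wbits - 1                          # 15 for 4-bit
    flat = weights.reshape(-1, groupsize)          # one scale/zero per group of 128
    wmin = flat.min(axis=1, keepdims=True)
    wmax = flat.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax
    scale = np.where(scale == 0, 1.0, scale)       # guard against constant groups
    zero = np.round(-wmin / scale)
    q = np.clip(np.round(flat / scale) + zero, 0, qmax).astype(np.uint8)
    dequant = (q.astype(np.float32) - zero) * scale
    return q, dequant.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, w_hat = quantize_groupwise(w)
print("max 4-bit code:", q.max(), "mean abs error:", np.abs(w - w_hat).mean())
```

With group size 128, each group stores its own scale and zero-point alongside the 4-bit codes, which is why smaller group sizes trade a slightly larger file for lower quantization error.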