Update: There are now two different models: one generated with the Triton branch of GPTQ-for-LLaMa and one generated with the CUDA branch. Use the CUDA one for now unless the Triton branch becomes widely used.
CUDA info (use this one):
Command:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca \
  --wbits 4 \
  --true-sequential \
  --groupsize 128 \
  --save gpt-x-alpaca-13b-native-4bit-128g-cuda.pt
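
To sanity-check the resulting checkpoint directly from the GPTQ-for-LLaMa repo (rather than through a front end), an invocation along these lines should work on the CUDA branch. The llama_inference.py script name and its flags are assumptions based on that repo and may differ between revisions, so treat this as a sketch:

CUDA_VISIBLE_DEVICES=0 python llama_inference.py ./models/chavinlo-gpt4-x-alpaca \
  --wbits 4 \
  --groupsize 128 \
  --load gpt-x-alpaca-13b-native-4bit-128g-cuda.pt \
  --text "Tell me about alpacas."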


Previous info:
GPTQ 4-bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca
Note: This was quantized with this branch of GPTQ-for-LLaMa: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
Because of this, it appears to be incompatible with Oobabooga at the moment. Stay tuned.

Command:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca \
  --wbits 4 \
  --true-sequential \
  --act-order \
  --groupsize 128 \
  --save gpt-x-alpaca-13b-native-4bit-128g.pt
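
If you are unsure what --wbits 4 and --groupsize 128 control, the Python sketch below shows plain round-to-nearest group-wise quantization: each group of 128 weights along a row gets its own scale and zero-point, and every weight is rounded to one of 16 levels. This is a simplified illustration only, not the actual GPTQ algorithm (GPTQ picks the quantized values to minimize layer output error, and --act-order / --true-sequential change the order in which columns and layers are processed), but the meaning of the bit width and group size is the same.

# Conceptual sketch, NOT the GPTQ algorithm: round-to-nearest quantization
# of a weight matrix to `wbits` bits with per-group scales/zero-points.
import numpy as np

def quantize_rtn(weights: np.ndarray, wbits: int = 4, groupsize: int = 128) -> np.ndarray:
    """Quantize each row of `weights` in groups of `groupsize` columns and
    return the dequantized result for comparison against the original."""
    qmax = 2 ** wbits - 1  # 4-bit -> 16 levels (0..15)
    out = np.empty_like(weights, dtype=np.float32)
    for start in range(0, weights.shape[1], groupsize):
        group = weights[:, start:start + groupsize]
        wmin = group.min(axis=1, keepdims=True)
        wmax = group.max(axis=1, keepdims=True)
        scale = np.maximum(wmax - wmin, 1e-8) / qmax        # per-row, per-group scale
        zero = np.round(-wmin / scale)                      # integer zero-point
        q = np.clip(np.round(group / scale) + zero, 0, qmax)  # 4-bit integer codes
        out[:, start:start + groupsize] = (q - zero) * scale  # dequantize
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4096, 4096)).astype(np.float32)
    w_q = quantize_rtn(w, wbits=4, groupsize=128)
    print("mean abs quantization error:", np.abs(w - w_q).mean())

Smaller group sizes track the local weight range more closely (better accuracy) at the cost of storing more scales and zero-points, which is the trade-off the 128g suffix in the checkpoint name refers to.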