anon8231489123
committed on
Commit 7fc3159 • 1 Parent(s): dd5cae8
Update README.md
README.md CHANGED
@@ -1,5 +1,7 @@
(untested)
GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca
+ Note: This was quantized with this branch of GPTQ-for-LLaMA: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
+ Because of this, it appears to be incompatible with Oobabooga at the moment. Stay tuned?

Command:
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca
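For reference, llama.py in the GPTQ-for-LLaMa repository normally also takes a calibration dataset and quantization flags, so the command in the README appears to be abbreviated. A minimal sketch of a fuller invocation, assuming the c4 calibration set, 4-bit weights, and a hypothetical output filename (none of these are confirmed by this commit):

# Sketch under the assumptions above: dataset, flags, and output name are illustrative only
CUDA_VISIBLE_DEVICES=0 python llama.py ./models/chavinlo-gpt4-x-alpaca c4 \
  --wbits 4 --true-sequential --act-order \
  --save gpt4-x-alpaca-4bit.pt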