Update README.md
README.md
CHANGED
@@ -1 +1,12 @@
-
+---
+license: other
+---
+
+## This model is a merge of LLAMA-13b and SuperCOT LoRA
+
+[huggyllama/llama-13b](https://huggingface.co/huggyllama/llama-13b) + [kaiokendev/SuperCOT-LoRA/13b/gpu/cutoff-2048](https://huggingface.co/kaiokendev/SuperCOT-LoRA)
+
+CUDA_VISIBLE_DEVICES=0 python llama.py c4 --wbits 4 --true-sequential --act-order --groupsize 128
+
+
+In ooba (oobabooga's text-generation-webui), make sure to load with --groupsize 128 --wbits 4
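
For context, the merge described above could be reproduced roughly as in the sketch below, assuming the standard transformers + peft flow. The output directory name is illustrative, and the SuperCOT adapter may need to be taken from its 13b/gpu/cutoff-2048 path (or a local copy) rather than the repo root.

```python
# Minimal sketch of merging the SuperCOT LoRA into LLaMA-13b.
# Not the author's exact script; paths/names below are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b", torch_dtype="auto")
# Point this at the adapter files, e.g. a local copy of 13b/gpu/cutoff-2048.
lora = PeftModel.from_pretrained(base, "kaiokendev/SuperCOT-LoRA")
merged = lora.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("llama-13b-supercot")

tok = AutoTokenizer.from_pretrained("huggyllama/llama-13b")
tok.save_pretrained("llama-13b-supercot")
```

The llama.py invocation above appears to be the GPTQ-for-LLaMa quantization script; the --wbits 4 and --groupsize 128 values used there must match the flags passed when loading the quantized model, which is why the README repeats them for ooba.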