Updating model files (README.md)
tags:
- gpt4
inference: false
---

<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
    </div>
</div>

# GPT4 Alpaca LoRA 30B - GPTQ 4bit 128g

This is a 4-bit GPTQ version of the [Chansung GPT4 Alpaca 30B LoRA model](https://huggingface.co/chansung/gpt4-alpaca-lora-30b).
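As background on the "4bit 128g" naming: the weights are stored group-wise, with every 128 consecutive weights sharing one scale and offset. A minimal NumPy sketch of that storage scheme follows; it illustrates plain group-wise 4-bit rounding only, not the full GPTQ error-compensation algorithm, and all names and sizes are illustrative:

```python
import numpy as np

def quantize_groups(w, group_size=128, bits=4):
    """Group-wise asymmetric quantization: each group shares one scale/offset."""
    levels = 2 ** bits - 1                           # 4-bit -> 15 quantization steps
    w = w.reshape(-1, group_size)                    # one row per group
    lo = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - lo) / levels
    q = np.round((w - lo) / scale).astype(np.uint8)  # integer codes 0..15
    return q, scale, lo

def dequantize(q, scale, lo):
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)     # stand-in for one weight row
q, scale, lo = quantize_groups(w)
max_err = np.abs(dequantize(q, scale, lo).ravel() - w).max()
```

Smaller group sizes mean more scales to store but lower reconstruction error; 128 is the common middle ground.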
```
python setup_cuda.py install --force
```
Then link that into `text-generation-webui/repositories` as described above.
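For example, the link can be made with a symlink (the paths below assume both repos are cloned side by side; adjust to your layout):

```shell
# Assumes GPTQ-for-LLaMa and text-generation-webui sit in the current directory.
mkdir -p text-generation-webui/repositories
ln -sfn "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```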

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.

Donators will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf

# Original GPT4 Alpaca Lora model card

This repository comes with a LoRA checkpoint to make LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings on an 8xA100 (40G) DGX system.
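As a reminder of what a LoRA checkpoint contains: instead of updating the full weight matrix W, LoRA trains a low-rank pair A, B so the effective weight becomes W + (alpha/r)·AB while W stays frozen. A minimal NumPy sketch of applying such an update (shapes, names, and sizes are illustrative, not the actual checkpoint layout):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16        # illustrative sizes; r << d

W = rng.standard_normal((d_in, d_out))       # frozen base weight
A = rng.standard_normal((d_in, r)) * 0.01    # trained low-rank factor
B = np.zeros((r, d_out))                     # B starts at zero: the update begins as a no-op

def forward(x, W, A, B, scaling=alpha / r):
    # Base projection plus scaled low-rank update; only A and B are trained.
    return x @ W + (x @ A @ B) * scaling

x = rng.standard_normal((2, d_in))
y = forward(x, W, A, B)
```

Because only A and B (2·d·r parameters per layer instead of d²) are saved, the checkpoint stays small enough to distribute separately from the 30B base model.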