Update README.md
README.md CHANGED

@@ -9,7 +9,7 @@ tags:
 - nsfw
 ---
 ## What is PetrolLoRA?
-PetrolLoRA is
+PetrolLoRA is the LoRA equivalent of [PetrolLM](https://huggingface.co/Norquinal/PetrolLM), without any of the instruction-tuning of the prior.
 
 The dataset consists of 2800 samples, with the composition as follows:
 * AICG Logs (~34%)
@@ -20,7 +20,7 @@ The dataset consists of 2800 samples, with the composition as follows:
 These samples were then back-filled using gpt-4/gpt-3.5-turbo-16k or otherwise converted to fit the prompt format.
 
 ## Prompt Format
-The
+The LoRA was finetuned with a prompt format similar to the original SuperHOT prototype:
 ```
 ---
 style: roleplay
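The visible fragment of the prompt format shows only the start of a YAML-style header (`---` / `style: roleplay`). As a minimal sketch, assuming the header is closed by a second `---` and followed by the scene text (only the `style` field is confirmed by the card; everything else here is an assumption), a prompt could be assembled like this:

```python
# Hypothetical sketch of assembling a SuperHOT-style prompt, based on the
# card's truncated example. Only the `style: roleplay` header field is
# confirmed; the closing `---` and placing the scene text after the header
# are assumptions, not the card's documented format.
def build_prompt(style: str, scene: str) -> str:
    """Wrap the scene text in a YAML-like header block."""
    header = f"---\nstyle: {style}\n---\n"
    return header + scene

prompt = build_prompt("roleplay", "A conversation between two characters.")
print(prompt)
```

This only illustrates the header shape; the fields the LoRA actually expects beyond `style` would need to be taken from the full prompt-format block on the card.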