tags:
- not-for-all-audiences
- nsfw
---

## What is PetrolLM?

PetrolLM is a fine-tune of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), trained with QLoRA (4-bit precision) for creative writing and roleplay.
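
As a rough sketch, the model can be loaded in 4-bit for inference with `transformers` and `bitsandbytes`, mirroring the QLoRA training precision. The repository ID and prompt below are placeholders, not part of this card; loading in full precision with a plain `from_pretrained` call also works if VRAM allows.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-username/PetrolLM"  # placeholder repo ID

# 4-bit NF4 quantization, matching the precision used for QLoRA training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Placeholder prompt; use the prompt format described below for best results
prompt = "Write the opening scene of a noir detective story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```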

The dataset consists of 5800 samples, with the composition as follows:

* AICG Logs (~17%)
* PygmalionAI/PIPPA (~17%)
* Squish42/bluemoon-fandom-1-1-rp-cleaned (~13%)
* OpenLeecher/Teatime (~2%)
* Norquinal/claude_multiround_chat_1k (~17%)
* jondurbin/airoboros-gpt4-1.4 (~17%)
* totally-not-an-llm/EverythingLM-data-V2-sharegpt (~17%)

## Prompt Format
The model was finetuned with a prompt format similar to the original SuperHOT prototype:
```