|
--- |
|
library_name: peft |
|
datasets: |
|
- Squish42/bluemoon-fandom-1-1-rp-cleaned |
|
- OpenLeecher/Teatime |
|
- PygmalionAI/PIPPA |
|
tags: |
|
- not-for-all-audiences |
|
- nsfw |
|
license: cc-by-nc-4.0 |
|
--- |
|
## What is PetrolLoRA? |
|
PetrolLoRA is the LoRA counterpart of [PetrolLM](https://huggingface.co/Norquinal/PetrolLM), without any of the latter's instruction tuning.
|
|
|
The training dataset consists of 2,800 samples, composed as follows:
|
* AICG Logs (~34%) |
|
* PygmalionAI/PIPPA (~33%) |
|
* Squish42/bluemoon-fandom-1-1-rp-cleaned (~29%) |
|
* OpenLeecher/Teatime (~4%) |
|
|
|
These samples were then back-filled using gpt-4/gpt-3.5-turbo-16k or otherwise converted to fit the prompt format described below.
|
|
|
## Prompt Format |
|
The LoRA was fine-tuned with a prompt format similar to the original SuperHOT prototype:
|
```
---
style: roleplay
characters:
  [char]: [description]
summary: [scenario]
---
<chat_history>
Format:
[char]: [message]
Human: [message]
```
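At inference time, a prompt in this format can be assembled programmatically. Below is a minimal sketch; `build_prompt` and its arguments are hypothetical helpers for illustration, not part of this repository:

```python
# Hypothetical helper: assembles a SuperHOT-style prompt from the template above.
def build_prompt(characters: dict, scenario: str, history: list) -> str:
    char_block = "\n".join(f"  {name}: {desc}" for name, desc in characters.items())
    header = (
        "---\n"
        "style: roleplay\n"
        "characters:\n"
        f"{char_block}\n"
        f"summary: {scenario}\n"
        "---\n"
    )
    # history is a list of (speaker, message) pairs, e.g. ("Human", "Hello.")
    chat = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    return header + chat

prompt = build_prompt(
    characters={"Aria": "a sarcastic ship AI"},
    scenario="Aria and Human explore a derelict station.",
    history=[("Aria", "The airlock is cycling. Stay close."),
             ("Human", "Right behind you.")],
)
```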
|
|
|
## Training procedure |
|
|
|
|
|
The following `bitsandbytes` quantization config was used during training: |
|
- quant_method: bitsandbytes |
|
- load_in_8bit: False |
|
- load_in_4bit: True |
|
- llm_int8_threshold: 6.0 |
|
- llm_int8_skip_modules: None |
|
- llm_int8_enable_fp32_cpu_offload: False |
|
- llm_int8_has_fp16_weight: False |
|
- bnb_4bit_quant_type: nf4 |
|
- bnb_4bit_use_double_quant: False |
|
- bnb_4bit_compute_dtype: float16 |
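For reference, the settings listed above correspond to the following `transformers` `BitsAndBytesConfig` (a sketch, assuming a `transformers` release recent enough to expose these fields):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings above: 4-bit NF4 weights with fp16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```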
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.4.0 |
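
The adapter can be applied to its base model with `peft` in the usual way. A minimal sketch, assuming this repository's id is `Norquinal/PetrolLoRA`; the base model id is a placeholder, not confirmed by this card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "<base-model-id>"  # placeholder: the model this LoRA was trained on

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
model = PeftModel.from_pretrained(base, "Norquinal/PetrolLoRA")  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
```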
|
|
|
|
|