
This is an experimental LIMA-style model trained on a small subset of freedom-rp and erotica-analysis-16k. Because the dataset is much smaller (about 1,000 samples from each original dataset), the data was much easier to edit and clean thoroughly. I also used a slightly lower learning rate of 0.00015.

The prompt format is ChatML.
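
For reference, here is a minimal Python sketch of how a ChatML prompt is laid out; the helper function and the example messages are purely illustrative and are not taken from this model's training setup.

```python
# Minimal sketch of the ChatML prompt layout (illustrative only).
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system and user message in ChatML tags, leaving the
    assistant turn open so the model completes it."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a creative writing assistant.",
    "Write the opening paragraph of a noir short story.",
)
```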

I have not tested the model yet, but I am hoping I can use this to help me create more training data for specific genres.

Please consider subscribing to my Patreon or buying a giant candle dick on my Etsy to show your support.

https://www.patreon.com/openerotica

http://openerotica.etsy.com/

Format: GGUF
Model size: 7.24B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
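
To run one of these GGUF quantizations locally, a sketch using llama-cpp-python could look like the following; the file name, context size, and generation settings below are placeholder assumptions, not values from this card.

```python
# Sketch of loading a GGUF quantization with llama-cpp-python (assumed setup).
from llama_cpp import Llama

llm = Llama(
    model_path="gorgon-7b-v0.1.Q4_K_M.gguf",  # placeholder file name; use whichever quant you downloaded
    n_ctx=4096,            # context window; adjust to your hardware
    chat_format="chatml",  # matches the prompt format noted above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short, atmospheric scene set in a lighthouse."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```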

