
Model Card for Psyfighter2-13B-vore-GGUF

This is a quantized version of the SnakyMcSnekFace/Psyfighter2-13B-vore model. You can find the F16-precision model weights, as well as the details of how the model was trained, in that repository.

This model is a version of KoboldAI/LLaMA2-13B-Psyfighter2 finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, a conversational model for chat, and the narrator of an interactive choose-your-own-adventure text game.

The model has been specifically trained to perform in Kobold AI Adventure Mode, the second-person choose-your-own-adventure story format.

Model Details

The model behaves similarly to KoboldAI/LLaMA2-13B-Psyfighter2, which it was derived from. Please see the README.md of that model to learn more.

This model was fine-tuned on ~55 MiB of stories focused around the vore theme, followed by further alignment on ~3 MiB of personal Kobold AI Adventure Mode playthroughs. During the alignment, the model was encouraged to respect the player's actions and agency, construct a coherent narrative, and use evocative language to describe the world and the outcome of the player's actions.

How to Get Started with the Model

The model can be used with any AI chatbot or front-end designed to work with .gguf models. The model fits fully into 8 GB of VRAM, but can also run with degraded performance on smaller graphics cards.
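For example, it can be loaded with the llama-cpp-python bindings. The sketch below is one possible setup, not the only supported front-end; the file path, context size, and sampler values are illustrative:

```python
from llama_cpp import Llama

# Load the Q4_K_M quantization; lower n_gpu_layers (e.g. 20-30) if the model
# does not fit into your GPU's VRAM, at the cost of slower generation.
llm = Llama(
    model_path="Psyfighter2-13B-vore.Q4_K_M.gguf",
    n_ctx=4096,          # context window
    n_gpu_layers=-1,     # offload all layers to the GPU
)

output = llm(
    "You wake up in a dark forest.",
    max_tokens=512,
    temperature=0.8,
    top_p=0.9,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```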

As with the base model, the less prompt text the model receives, the more creative the output becomes. For example, the writing assistant will generate an entire story when prompted with only 2-3 words.

In chat mode, if the conversation is not going where you would like it to, edit the model's output and let it continue generating from there. The model will also match the style of the conversation.

There are two versions of the model: Q4_K_M (smaller and faster) and Q8_0 (slower, but with better prose quality).
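Either file can be fetched from this repository, for example with the huggingface_hub library. This is a minimal sketch; pick the filename matching the quantization you want:

```python
from huggingface_hub import hf_hub_download

# Download the smaller Q4_K_M file; use "Psyfighter2-13B-vore.Q8_0.gguf"
# instead for the higher-quality quantization.
model_path = hf_hub_download(
    repo_id="SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF",
    filename="Psyfighter2-13B-vore.Q4_K_M.gguf",
)
print(model_path)
```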

Koboldcpp Colab Notebook

The easiest way to try out the model is the Koboldcpp Colab Notebook. This method doesn't require you to have a powerful graphics card.

  • Open the notebook
  • Paste the model URL into the Model field: https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf
  • Start the notebook, wait for the Cloudflare tunnel URL to appear at the bottom, and click it
  • Select "Settings" and configure them as follows:
    • In "Basic" tab:
      • Temperature = 0.8
      • Amount to Gen. = 512
      • Top p Sampling = 0.9
      • Repetition Penalty = 1.1
    • In "Advanced" tab:
      • Min-P = 0.1
      • EOS Token Ban = Unban
      • Placeholder Tags = Checked
  • Select "Scenarios" -> "New Story" to use the model as a writing assistant

To run the Q8_0 model in the Colab notebook

  • Paste the model URL into the Model field: https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q8_0.gguf
  • Set the Layers field to 30 (reduce this number if the model fails to start); a local equivalent of this setting is sketched below
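If you are running the model locally instead, the same idea applies: offload only part of the model to the GPU. A minimal sketch with llama-cpp-python, where the layer count is illustrative:

```python
from llama_cpp import Llama

# The Q8_0 file is too large to fit entirely on a small GPU, so only part of
# the model is offloaded; this mirrors the "Layers" field in the notebook.
llm = Llama(
    model_path="Psyfighter2-13B-vore.Q8_0.gguf",
    n_ctx=4096,
    n_gpu_layers=30,   # lower this value if you run out of VRAM
)
```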

Adventure mode

Select "Story" in the bottom left corner to generate premise of the story, and "Action" to take actions with your character. In the adventure mode, the model expects all player actions to be written in second person. For example:

As you venture deeper into the damp cave, you come across a lone goblin. The vile creature mumbles something to itself as it stares at the glowing text on a cave wall. It doesn't notice your approach.

> You sneak behind the goblin and hit it with the sword.
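For programmatic use, the same convention applies: the story so far is followed by the player's action on a new line prefixed with "> You". The helper below is a hypothetical sketch, not part of the model card:

```python
# Minimal sketch of building an Adventure Mode style prompt.
# The format follows the example above: narration, then the player's
# action in second person on a line starting with "> You".
def build_adventure_prompt(story_so_far: str, player_action: str) -> str:
    return f"{story_so_far.rstrip()}\n\n> You {player_action.rstrip('.')}.\n\n"

prompt = build_adventure_prompt(
    "As you venture deeper into the damp cave, you come across a lone goblin.",
    "sneak behind the goblin and hit it with the sword",
)
# Feed `prompt` to any of the generation methods shown earlier.
```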

Check "Allow Editing" to make edits to the story to overwrite and re-generate parts of the model's response. This is useful if the model makes a mistake or the story doesn't go in the direction that you like.

Backyard AI

Another convenient way to use the model is the Backyard AI application, which lets you run the model locally on your computer. You'll need a graphics card with at least 8 GB of VRAM to use the model comfortably.

If you don't have a powerful GPU, Backyard AI also offers a paid option to run the model on their servers.

Download directly from HuggingFace (beta)

In the left panel, click Manage Models, then select Hugging Face models. Paste https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF into the text field and press Fetch Models. Click the Download button next to the model format you want. Once the model is downloaded, you can select it in your character card or set it as the default model.

Download manually

Download the Psyfighter2-13B-vore.Q4_K_M.gguf or Psyfighter2-13B-vore.Q8_0.gguf file into the %appdata%\faraday\models folder on your computer. The model should then appear in the Manage Models menu under Downloaded Models. You can then select it in your character card or set it as the default model.
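Alternatively, the file can be fetched straight into that folder with the huggingface_hub library. This is a sketch assuming a default Windows installation; adjust the path if yours differs:

```python
import os
from huggingface_hub import hf_hub_download

# Default Backyard AI (formerly Faraday) models folder on Windows,
# as described above; adjust if your installation uses a different path.
models_dir = os.path.expandvars(r"%APPDATA%\faraday\models")

hf_hub_download(
    repo_id="SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF",
    filename="Psyfighter2-13B-vore.Q4_K_M.gguf",
    local_dir=models_dir,
)
```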

Model updates

  • 14/09/2024 - aligned the model for better Adventure Mode flow and improved narrative quality
  • 09/06/2024 - fine-tuned the model to follow Kobold AI Adventure Mode format
  • 02/06/2024 - fixed errors in training and merging, significantly improving the overall prose quality
  • 25/05/2024 - updated training process, making the model more coherent and improving the writing quality
  • 13/04/2024 - uploaded the first version of the model