SnakyMcSnekFace committed on
Commit c1d5c40
1 Parent(s): 00ea061

Update README.md

Files changed (1):
  1. README.md (+7 -8)
README.md CHANGED
@@ -9,7 +9,6 @@ tags:
 - storywriting
 - finetuned
 - not-for-all-audiences
- - gguf
 base_model: KoboldAI/LLaMA2-13B-Psyfighter2
 model_type: llama
 prompt_template: >
@@ -27,15 +26,17 @@ prompt_template: >
 
 # Model Card for Psyfighter2-13B-vore
 
- This is a version of [LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, as well as a conversational model in a chat.
+ This model is a version of [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, as well as a conversational model in a chat.
 
 The Adventure Mode is still a work in progress and will be added later.
 
+ Download the quantized version of this model here: [SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF)
+
 ## Model Details
 
 ### Model Description
 
- The model behaves similarly to `LLaMA2-13B-Psyfighter2`, which it was derived from. Please [see the README.md here](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2/blob/main/README.md) to learn more.
+ The model behaves similarly to `KoboldAI/LLaMA2-13B-Psyfighter2`, which it was derived from. Please [see the README.md here](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2/blob/main/README.md) to learn more.
 
 This model was fine-tuned on ~55 MiB of free-form text, containing stories focused around the vore theme. As a result, it has a strong vorny bias.
 
@@ -52,16 +53,16 @@ In the chat mode, if the conversation is not going where you would like it to go
 The easiest way to try out the model is the [Koboldcpp Colab Notebook](https://colab.research.google.com/github/lostruins/koboldcpp/blob/concedo/colab.ipynb). This method doesn't require you to have a powerful graphics card.
 
 - Open the notebook
- - Paste the model URL into the field: `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore/resolve/main/Psyfighter2-13B-vore_q4_k_m.gguf`
+ - Paste the model URL into the field: `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf`
 - Start the notebook, wait for the CloudFlare tunnel URL to appear at the bottom, and click it
 - Use the model as a writing assistant
 - You can try an adventure from [https://aetherroom.club/](https://aetherroom.club/), but keep in mind that the model will not let you take a turn unless you stop it. Adventure Mode is a work in progress.
 
 ### Faraday
 
- Another convenient way to use the model is the [Faraday.dev](https://faraday.dev/) application, which allows you to run the model locally on your computer. You'll need a graphics card with at least 8GB VRAM to use this method comfortably.
+ Another convenient way to use the model is the [Faraday.dev](https://faraday.dev/) application, which allows you to run the model locally on your computer. You'll need a graphics card with at least 8GB VRAM to use the `Q4_K_M` version comfortably, and 16GB VRAM for `Q8_0`. (The `Q4_K_M` version is smaller and faster; `Q8_0` is slower but more coherent.)
 
- Download the [Psyfighter2-13B-vore_q4_k_m.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore/resolve/main/Psyfighter2-13B-vore_q4_k_m.gguf) file into the `%appdata%\faraday\models` folder on your computer. The model should appear in the `Manage Models` menu under `Downloaded Models`. You can then select it in your character card or set it as the default model.
+ Download the [Psyfighter2-13B-vore.Q4_K_M.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf) or [Psyfighter2-13B-vore.Q8_0.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q8_0.gguf) file into the `%appdata%\faraday\models` folder on your computer. The model should appear in the `Manage Models` menu under `Downloaded Models`. You can then select it in your character card or set it as the default model.
 
 ### Others
 
@@ -92,8 +93,6 @@ Training parameters:
 - Sample size: 768 tokens
 - Samples per epoch: 47420
 - Number of epochs: 2
- - Batch size: 1
- - Gradient accumulation steps: 16
 - First epoch: Learning rate = 3e-4, 1000 steps warmup, cosine schedule
 - Second epoch: Learning rate = 1e-4, 256 steps warmup, inverse sqrt schedule
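The two learning-rate schedules named in the training parameters above (linear warmup then cosine decay for the first epoch, linear warmup then inverse square root decay for the second) can be sketched in plain Python. This is a minimal illustration, not the author's training code; the `total_steps` value in the example call is a placeholder, as the model card does not state the per-epoch step count.

```python
import math

def warmup_cosine_lr(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    # Linear warmup from 0 to peak_lr, then cosine decay toward 0.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

def warmup_inv_sqrt_lr(step: int, peak_lr: float, warmup_steps: int) -> float:
    # Linear warmup from 0 to peak_lr, then decay proportional to 1/sqrt(step),
    # normalized so the schedule is continuous at the end of warmup.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * math.sqrt(warmup_steps / step)

# First epoch: peak 3e-4, 1000 warmup steps (total_steps is a placeholder)
lr_epoch1 = warmup_cosine_lr(step=1000, peak_lr=3e-4, warmup_steps=1000, total_steps=5000)
# Second epoch: peak 1e-4, 256 warmup steps
lr_epoch2 = warmup_inv_sqrt_lr(step=1024, peak_lr=1e-4, warmup_steps=256)
```

Both schedules reach their peak exactly at the end of warmup; the inverse-sqrt decay falls to half the peak after four times the warmup step count.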
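The GGUF download links in the diff above all follow the Hugging Face Hub's `resolve` URL pattern, so they can be assembled or fetched programmatically. The sketch below is illustrative: `hf_resolve_url` is a hypothetical helper (not part of any library), and the real `huggingface_hub.hf_hub_download` call is left commented out because it downloads a multi-gigabyte file.

```python
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # Build the direct-download URL for a file in a Hugging Face repo,
    # matching the https://huggingface.co/<repo>/resolve/<rev>/<file> pattern.
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_resolve_url(
    "SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF",
    "Psyfighter2-13B-vore.Q4_K_M.gguf",
)

# Equivalent cached download via the huggingface_hub library
# (uncomment to fetch the file, several GiB):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(
#     repo_id="SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF",
#     filename="Psyfighter2-13B-vore.Q4_K_M.gguf",
# )
```

The resulting `url` string matches the link given for the Koboldcpp Colab notebook field.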