SnakyMcSnekFace committed
Commit 9ef4b89
Parent(s): 1211f0a

Preference alignment
- README.md +61 -7
- config.json +2 -2
- generation_config.json +1 -1
- model-00001-of-00006.safetensors +1 -1
- model-00002-of-00006.safetensors +1 -1
- model-00003-of-00006.safetensors +1 -1
- model-00004-of-00006.safetensors +1 -1
- model-00005-of-00006.safetensors +1 -1
- model-00006-of-00006.safetensors +1 -1
- tokenizer_config.json +2 -0
README.md
CHANGED
@@ -28,7 +28,7 @@ prompt_template: >
@@ -38,10 +38,11 @@ The model behaves similarly to `KoboldAI/LLaMA2-13B-Psyfighter2`, which it was d
@@ -160,6 +161,59 @@ Setting or removing the instructions allows the model to generate accepted/rejec
This model is a version of [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, a conversational model in a chat, and an interactive choose-your-own-adventure text game.

The model has been specifically trained to perform in Kobold AI Adventure Mode, the second-person choose-your-own-adventure story format.
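In Adventure Mode, the story is narrated in the second person and player actions are prefixed with `>`; a short invented exchange for illustration:

```
> Look around the cellar

You raise your lantern. Shelves of dusty jars line the walls, and a cold
draft seeps from a crack behind the far shelf.
```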
This is the FP16-precision version of the model for merging and fine-tuning. **For using the model, please see the quantized version and the instructions here: [SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF)**
### Updates

- 14/09/2024 - aligned the model for better Adventure Mode flow and improved narrative quality
- 09/06/2024 - fine-tuned the model to follow Kobold AI Adventure Mode format
- 02/06/2024 - fixed errors in training and merging, significantly improving the overall prose quality
- 25/05/2024 - updated training process, making the model more coherent and improving the writing quality
- 13/04/2024 - uploaded the first version of the model
## Bias, Risks, and Limitations

![Gradient Norm](img/sft_grad_norm.png)
![Learning rate](img/sft_learning_rate.png)
### Preference alignment for Adventure Mode

Although the fine-tuned model understands the Adventure format, its output leaves much to be desired: the responses are often limited to a few sentences, and the narrative is lacking. To address this, the model is given prompts from the SFT dataset and asked to generate ~8 responses per prompt, which are manually categorized as `accepted/rejected`. The dataset contains `3696` samples: `2231` accepted and `1465` rejected.
#### Dataset

Half of the samples were generated by this model, with prompts containing the adventure transcripts and player turns in raw format. The other half were generated by a [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B) model fine-tuned on the same domain adaptation and SFT datasets, with the context using the instruct format described above; these accounted for the majority of `accepted` samples. In addition, the `accepted` samples were punched up and errors corrected as necessary with the aid of the fine-tuned [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B) model.
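The accepted/rejected samples map naturally onto TRL's unpaired-preference layout (a prompt, one completion, and a boolean label). A minimal sketch with plain Python dicts — the strings here are invented examples, not real dataset entries:

```python
# Each sample pairs a prompt with one generated continuation and a boolean
# label (True = accepted, False = rejected) -- the unpaired-preference
# format the TRL KTO trainer consumes. Example data is invented.
samples = [
    {"prompt": "> Enter the cave\n",
     "completion": "You step into the damp darkness beneath the hill...",
     "label": True},
    {"prompt": "> Enter the cave\n",
     "completion": "You enter.",
     "label": False},
]

# Count accepted vs. rejected continuations.
accepted = sum(s["label"] for s in samples)
rejected = len(samples) - accepted
print(accepted, rejected)  # 1 1
```

A `datasets.Dataset` built from such records (e.g. via `Dataset.from_list`) can be handed to the trainer directly.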
#### Training procedure

The [KTO](https://arxiv.org/abs/2402.01306) trainer from the [Hugging Face TRL library](https://huggingface.co/docs/trl/en/kto_trainer) was employed for preference alignment. The LoRA adapter from the previous training stages was merged into the model, and a new LoRA adapter was created for the KTO training. The quantized base model serves as the reference model.
#### QLoRA adapter configuration

- Rank: 16
- Alpha: 16
- Dropout rate: 0.0
- Target weights: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
- `use_rslora=True`
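These settings correspond to `peft.LoraConfig` arguments; a sketch using a plain dict (the field names follow the peft API as I understand it — verify against your installed version):

```python
# QLoRA adapter hyperparameters from the list above, keyed by (assumed)
# peft.LoraConfig argument names. use_rslora switches the adapter scaling
# from alpha/r to alpha/sqrt(r), per the rank-stabilized LoRA paper.
lora_kwargs = dict(
    r=16,                 # rank
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,
)

# Conventional vs. rank-stabilized scaling factor at r=16, alpha=16:
scaling = lora_kwargs["lora_alpha"] / lora_kwargs["r"]            # 1.0
rs_scaling = lora_kwargs["lora_alpha"] / lora_kwargs["r"] ** 0.5  # 4.0
print(scaling, rs_scaling)  # 1.0 4.0
```

With rank and alpha both 16, rsLoRA effectively quadruples the adapter's contribution relative to conventional scaling.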
#### Training parameters

- Max. sequence length: 4096 tokens
- Max. prompt length: 3072 tokens
- Samples per epoch: 924
- Number of epochs: 2
- Learning rate: 1e-4
- Beta: 0.1
- Desirable weight: 1.0
- Undesirable weight: 1.52
- Warmup: 32 steps
- LR schedule: cosine
- Batch size: 4
- Gradient accumulation steps: 1
- Gradient checkpointing: yes

The training takes ~20 hours on an NVIDIA GeForce RTX 4060 Ti.
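These hyperparameters map onto TRL's `KTOConfig` fields roughly as below; the sketch uses a plain dict and derives the resulting optimizer step count (treat the exact argument names as assumptions to check against the TRL docs):

```python
# KTO hyperparameters from the list above, keyed by (assumed) KTOConfig
# argument names.
kto_kwargs = dict(
    max_length=4096,
    max_prompt_length=3072,
    num_train_epochs=2,
    learning_rate=1e-4,
    beta=0.1,
    desirable_weight=1.0,
    undesirable_weight=1.52,
    warmup_steps=32,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
)

# With 924 samples per epoch and an effective batch of 4 * 1, each epoch
# is 231 optimizer steps, i.e. 462 steps over 2 epochs.
steps_per_epoch = 924 // (4 * 1)
total_steps = steps_per_epoch * kto_kwargs["num_train_epochs"]
print(steps_per_epoch, total_steps)  # 231 462
```

The 1.52 undesirable weight roughly compensates for the 2231:1465 accepted-to-rejected imbalance in the dataset.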
#### Results

The model's performance in Adventure Mode has improved substantially. The writing has become more creative and engaging; the model now advances the story more consistently, and NPCs are more likely to act on their own instead of remaining passive. As a side effect, the model's "positivity bias" has diminished, making NPCs more willing to take actions against the player.
#### Plots

![Loss](img/kto_loss.png)
![Gradient Norm](img/kto_grad_norm.png)
![Learning rate](img/kto_learning_rate.png)
![Rewards](img/kto_train_rewards.png)
![Log probabilities](img/train_logps.png)
![KL divergence](img/kto_train_kl_divergence.png)
### Future plans

To further improve the model's performance, I am planning to expand the domain adaptation dataset by an order of magnitude, improving and automating the data gathering and management. I will also prepare additional preference alignment datasets to strengthen the model's ability to act as a quality Game Master in Adventure Mode.
config.json
CHANGED
@@ -1,5 +1,5 @@
{
  "_name_or_path": "/media/lena/3A9CD4D09CD487B1/AI/models/dequantized_Psyfighter2-13B/",
  "architectures": [
    "LlamaForCausalLM"
  ],
@@ -24,7 +24,7 @@
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.44.2",
  "use_cache": true,
  "vocab_size": 32000,
  "welcome": "# Welcome to Psyfighter2 by Jeb Carter and Twistedshadows \nPsyfighter2 is a creative writing focused model built on Henk717's Tiefighter. The addition of medical and psychological data to the model directs its attention toward psychological and spatial details, which improves the writing output by encouraging the model to focus on more relevant details.\n\nThe key to working with PsyfighterV2 is to the understand that Less Is More.\nThis model is meant to be creative, If you let it improvise you will get better results than if you drown it in details, which can scatter and shatter the model's focus. If your back end supports it, we recommend setting a min-p of 0.05. \n\n## Story Writing\nStory co-writing is supported in the traditional way - simply start your story and invoke the model's completions as needed. To guide the model at a higher level we recommend using this format to generate stories on demand or help shape the outputs the model will use in its story continuations.\n\n\n``` \nURL: https://www.gutenberg.org/$AuthorName/Stories \n\nTitle:\nTags:\nSynopsis:\nNotes:\nFirst Publication: $MagazineName, $YEAR\n\n$Title\n\nA $Genre [Tale|Story|Novel]\n\nby $AuthorName\n```\nnThe author name has the heaviest influence on the writing style, but you can shape the output through tags, setting a year of imaginary first publication, and proving commentary in Notes can tell the model how the story is expected to go.## Chatbots and personas\nThis model has been tested with various forms of chatting, testers have found that typically less is more and the model is good at improvising. Don't drown the model in paragraphs of detailed information, instead keep it simple first and see how far you can lean on the models own ability to figure out your character. Copy pasting paragraphs of background information is not suitable for a 13B model such as this one, code formatted characters or an instruction prompt describing who you wish to talk to goes much further.\n\nFor example, you can put this in memory in regular chat mode:\n``` \n### Instruction: \nGenerate a conversation between Alice and Jeb where they discuss language models.\nIn this conversation Jeb is excited to teach Alice about Psyfighter. \n### Response: \n```\n\nBecause the model is a merge of a variety of models, it should support a broad range of instruct formats, or plain chat mode. If you have a particular favourite try it, otherwise we recommend to either use the regular chat mode or Alpaca's format.\n\n## Instruct Prompting\nThis model features various instruct models on a variety of instruction styles, when testing the model we have used Alpaca for our own tests. If you prefer a different format chances are it can work.\n\nDuring instructions we have observed that in some cases the adventure data can leak, it may also be worth experimenting using > as the prefix for a user command to remedy this. But this may result in a stronger fiction bias. If using Instruct style directions during chat or storywriting, you can enclose your direction in formatting like this to keep it from contaminating the rest of the context: \n```\n***\n> [Instructions/Direction here]\n***\n```\n\nKeep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.\n\n## Adventuring and Adventure Games\nThis model contains a lora that was trained on the same adventure dataset as the KoboldAI Skein model. Adventuring is best done using an small introduction to the world and your objective while using the > prefix for a user command (KoboldAI's adventure mode). \n\nIt is possible that the model does not immediately pick up on what you wish to do and does not engage in its Adventure mode behaviour right away. Simply manually correct the output to trim excess dialogue or other undesirable behaviour and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.\n\n## Discovered something cool and want to engage with us? \nJoin our community at https://koboldai.org/discord !\n\n### This model would not be possible without the KoboldAI MergeBox program and the awesome work from: \nDoctor Shotgun, Undi95, PocketDoc, Blackroot, Brouz, The Face of Goonery, zattio770, PygmalionAI, TokenBender, nRuaif, lemonilia, Xwin-LM, elinas, jondurbin, NousResearch, CalderaAI, MrSeeker, OpenAssistant, ehartford, Henk717, AI Dungeon, StabilityAI and zattio770."
generation_config.json
CHANGED
@@ -3,5 +3,5 @@
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.44.2"
}
model-00001-of-00006.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f24707a7c88b39e79b26900405e6358921b1160d12d89662ab385f8026bcbec1
size 4978265728
model-00002-of-00006.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7936dba04eee32ef48e02ae47924c7240bc67f420c6fcb071a027a65f154f917
size 4970422160
model-00003-of-00006.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d94fd6e871f230f0398fb90ec4bfc4d31e473f93214c55d079eedd75b7d0e3f
size 4970422184
model-00004-of-00006.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:601d9c9b133f7b7867c2fbe54180e43685f6039a9c339964d1c5d1d22e61972f
size 4933701432
model-00005-of-00006.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42b5a4e5ed3f4511d873a652520ae50bf2a2da84d28d873e9ffae4c9a9daa3eb
size 4933722144
model-00006-of-00006.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:efe4ebbeab49ed75d8b7d67163ac2493c34ffe67ff6b94786408daa4b4f74ea8
size 1245236904
tokenizer_config.json
CHANGED
@@ -1,6 +1,7 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": null,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
@@ -31,6 +32,7 @@
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},