UsernameJustAnother committed on
Commit
fce97d0
1 Parent(s): 5c2a0ac

Update README.md

Files changed (1)
  1. README.md +21 -12
README.md CHANGED
@@ -10,14 +10,22 @@ tags:
  - mistral
  - trl
  - rp
- - writing
  - gguf
  - experimental
  - long-context
  ---

  <img src="https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/ULeHz0KITPcS0znTN7gDl.png" width="500" height="500" />


  # Uploaded model

@@ -25,28 +33,29 @@ tags:
  - **License:** apache-2.0
  - **Finetuned from model :** unsloth/Mistral-Nemo-Base-2407

- **Standard disclaimer:** This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9

- This is a Q_8.0 gguf of [UsernameJustAnother/Nemo-12B-Marlin-v8](https://huggingface.co/UsernameJustAnother/Nemo-12B-Marlin-v8).

  # New for v8:
  - Fine-tuned on Nemo Base instead of Instruct, because why not?
- - **FULL BORE MODE: ACTIVATE!** 10K-ish records of mostly-human convos and stories, curated by me, trained in ChatML, up from 8K in v6. Specifically:
  - 4K records from Reddit Writing Prompts (equal split of highest-rated sfw & nfsw)
- - 2K of Claude instruct, lightly curated & de-clauded.
- - 2K of curated Fallen Skies
  - 2K of curated/lightly de-ministrated C2 chat
- - Trained on a single 80GB A100 from runpod.io, with batch size of 8 (up from 2 on A100 40G), so far less steps involved.

- I pulled v7 because I honestly don't think it's as good as v6, and don't want folks to get the wrong idea that it's better just because the version number is higher.

- Props again to Unsloth.ai for letting me train this on a single A100 with variable (wildly variable) context length.

  Here's what the train/eval loss looked like:

- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/hUKuy7ht_qObuFNDTVEe9.png)

- I still don't know what makes training loss drop at the end of epoch 1, or why eval loss doesn't drop down to match (it continues to decrease, but slowly).

  It was trained with the following settings:

@@ -95,4 +104,4 @@ lr_scheduler_kwargs = {

  This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  - mistral
  - trl
  - rp
  - gguf
+ - Q8_0
+ - writing
  - experimental
  - long-context
+ datasets:
+ - kalomaze/Opus_Instruct_25k
+ - Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered
+ - Sao10K/c2-Logs-Filtered
+ - nothingiisreal/Reddit-Dirty-And-WritingPrompts
  ---

+ # Marlin v8: The Big Kahuna Update
  <img src="https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/ULeHz0KITPcS0znTN7gDl.png" width="500" height="500" />

+ This is a simple Q8_0 gguf of [UsernameJustAnother/Nemo-12B-Marlin-v8](https://huggingface.co/UsernameJustAnother/Nemo-12B-Marlin-v8).
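
If you haven't loaded a GGUF before, here's a rough sketch using llama-cpp-python. The filename below is a placeholder, so swap in the actual .gguf file name from this repo.

```python
# Rough sketch using llama-cpp-python; the .gguf filename is a placeholder,
# so use whatever the actual file in this repo is called.
from llama_cpp import Llama

llm = Llama(
    model_path="Nemo-12B-Marlin-v8-Q8_0.gguf",  # placeholder filename
    n_ctx=16384,       # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU if built with CUDA
)

out = llm("Write the opening paragraph of a story about a marlin.", max_tokens=256)
print(out["choices"][0]["text"])
```

The same file should also work in anything else that reads GGUFs (llama.cpp, KoboldCpp, LM Studio and so on).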

  # Uploaded model

 
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/Mistral-Nemo-Base-2407

+ **Standard disclaimer:** This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from [MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9). Huge props to [nothingiisreal](https://huggingface.co/nothingiisreal) for posting their process and making me think this was even possible for a little fish like me.

+ The aim here is for a solid RP/storywriting model that will fit in 16GB of VRAM with a decent amount of context (> 16K).
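
Back-of-envelope numbers for that 16GB target (rough only, and assuming Mistral-Nemo's published config of 40 layers, 8 KV heads and 128-dim heads, with an fp16 KV cache):

```python
# Rough VRAM estimate only; real usage adds activations and runtime overhead.
params = 12.2e9                           # approx. Mistral-Nemo parameter count
weights_gb = params * 8.5 / 8 / 1e9       # Q8_0 is ~8.5 bits per weight -> ~13 GB
ctx = 16_384
kv_gb = 2 * 40 * 8 * 128 * ctx * 2 / 1e9  # K+V * layers * kv_heads * head_dim * tokens * fp16 -> ~2.7 GB
print(f"~{weights_gb:.1f} GB weights + ~{kv_gb:.1f} GB KV cache = ~{weights_gb + kv_gb:.1f} GB")
```

So the Q8_0 itself is snug at 16 GB; smaller quants of the same model leave more room for longer context.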

  # New for v8:
  - Fine-tuned on Nemo Base instead of Instruct, because why not?
+ - **BIG KAHUNA POWERS: ACTIVATE!** 10K-ish records of mostly-human convos and stories, trained in ChatML (there's a format sketch just after this list), up from 8K in v6. For all of these records I did additional filtering/editing/selection beyond what I think happened in Celeste v1.9, mostly to teach myself some dataset skillz, plus I added more stories. Specifically:
  - 4K records from Reddit Writing Prompts (equal split of highest-rated sfw & nsfw)
+ - 2K of Claude instruct, lightly curated & de-clauded
+ - 2K of curated Falling through the Skies
  - 2K of curated/lightly de-ministrated C2 chat
+ - Trained on a single 80GB A100 from runpod.io, with batch size of 8 (up from 2 on an A100 40G), so far fewer steps involved. Took about 7.5 hrs to run.
+ - And remember kids, water is wet and fish are moist.
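
For reference, the ChatML markup mentioned above looks like this (standard ChatML; the system prompt is just an example):

```python
# Standard ChatML turn layout; what goes in the system/user turns is up to you.
prompt = (
    "<|im_start|>system\n"
    "You are a co-writer for long-form RP and stories.<|im_end|>\n"
    "<|im_start|>user\n"
    "Continue the scene where the storm finally hits the harbour.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```

Most GGUF front-ends can apply this template for you; in llama-cpp-python, passing `chat_format="chatml"` to `Llama()` and calling `create_chat_completion()` does the same thing.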

+ I pulled v7 because I honestly don't think it's as good as v6, and don't want folks to get the wrong idea that it's better just because the version number is higher. Besides, nothing good ever fires on all _seven_ cylinders.

+ Props again to [Daniel](https://huggingface.co/danielhanchen) and [Unsloth](https://huggingface.co/unsloth) for writing magic that lets me train this on a single A100 with variable (wildly variable) context length. [The docker image I used to run Unsloth on runpod is here](https://hub.docker.com/r/usernamejustanother/runpod_unsloth).
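
If you want to see roughly what that looks like in code, here's a minimal Unsloth LoRA setup sketch. The numbers are placeholders, not the settings actually used; those are listed further down.

```python
# Minimal Unsloth LoRA sketch; hyperparameters here are placeholders only,
# the actual training settings are listed below in this card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Mistral-Nemo-Base-2407",
    max_seq_length=16384,   # placeholder; the data has wildly variable lengths
    load_in_4bit=False,     # placeholder; pick what fits your GPU
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                   # placeholder LoRA rank
    lora_alpha=16,          # placeholder
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```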

  Here's what the train/eval loss looked like:

+ <img src="https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/hUKuy7ht_qObuFNDTVEe9.png" width="800"/>

+ I still don't know what makes training loss drop at the end of epoch 1, or why eval loss doesn't drop down to match (it continues to decrease, but slowly). I did say this was experimental, right? If I want to throw more money at this I might try a 3-epoch run just to see what happens.

  It was trained with the following settings:

 
  This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)