  ---
base_model:
- Locutusque/Apollo-0.4-Llama-3.1-8B
- maldv/badger-writer-llama-3-8b
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- arcee-ai/Llama-3.1-SuperNova-Lite
- v000000/L3-8B-Poppy-Moonfall-C
- Casual-Autopsy/Jamet-L3-Stheno-BlackOasis-8B
- SicariusSicariiStuff/Dusk_Rainbow
- ResplendentAI/Rawr_Llama3_8B
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- maximalists/BRAG-Llama-3.1-8b-v0.1
- tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- RP
- storytelling
license: cc-by-nc-4.0
  ---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/667eea5cdebd46a5ec4dcc3d/5zwye-BvUG51XaSHcW-vG.png)

...I'm tired.

Behold: Aglow Vulca. I had a Ship of Theseus paradox over whether or not to keep the original name, since everything save three models was replaced, but it still has the spirit of Ablaze Vulca, so I just changed the preceding adjective.

This took over a month, way too much money, and half of what was remaining of my sanity. If I could verbalize what the hell I went through trying to get this model to work, this repo would be 32k tokens long kekw. After figuring out how fast v0.1 could crack, I'd gotten to work on a v0.2 to at least smooth out the problems. Simple, right?

No. It was not. But, much pain and suffering later, I've come out with a beast of an 8B merge that can handle almost anything thrown at it.

I'd like to give special thanks to those in the BackyardAI Discord for helping me test (especially one person, you know who you are) and for watching me go down an insane spiral. They made the image above and helped troubleshoot versions until the final model was created. This merge would have taken much longer, and the final version would be poorer, without them. I'm most active in that server, so if you have questions, please join and say hi.

For quants, look to your right at the model tree next to 'Quantizations'.

### Model Details & Recommended Settings

This is a storytelling-first model that is proficient in narrative-driven RP. It does best with straightforward instructions; any wishy-washy language will confuse it. As usual with any of my models that include Formax, it's pretty sensitive to instructs, so choose your words wisely.

Once it gets going, it's able to generate detailed and human-ish outputs with lots of personality, depending on the information given. It has a habit of matching the style and format of the input: style, spacing, grammar, etc. It can interweave details from the character persona, chat history, and user persona (if there is one) to create unique interactions and plot points. It leans more or less positive naturally, but can be flipped if prompted correctly.

Being a Llama 3.1 model, it's still subject to the normal pros and cons of L3/L3.1, but I'd like to think I tamed some of it. Keep the temp on the lower end, since there is a low chance it might freak out. If it does, swipe/regen the chat or delete the afflicted output and try again.

Rec. Settings:
```
Template: Llama 3
Token Count: 128k Max
Temperature: 1.3
Min P: 0.1
Repeat Penalty: 1.05
Repeat Penalty Tokens: 256
```
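
If you run the model outside a chat frontend, the settings above map roughly onto common sampler parameters. This is a sketch using transformers-style `generate` kwarg names; the exact names (and `min_p` support) vary by backend, and `max_new_tokens` is my own placeholder, not part of the recommendation:

```python
# Rough mapping of the recommended settings onto common sampler kwargs.
# Names follow Hugging Face transformers' `generate`; check your backend's
# docs, since parameter names differ between frontends.
sampler_kwargs = {
    "do_sample": True,
    "temperature": 1.3,          # keep on the lower end if it freaks out
    "min_p": 0.1,
    "repetition_penalty": 1.05,  # some backends apply this over a 256-token window
    "max_new_tokens": 512,       # placeholder; context itself goes up to 128k
}
```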

### Merge Theory

Where to begin. The general thought process was still roughly the same as Ablaze's: make one very smart model and one more creativity-focused model. This time, I merged Formax and RPMax in separately instead of doing one merge, since they have different focuses.

'Apollobulk' is the smarts, taking the storytelling capabilities from badger writer, the instruct following from Formax (duh), and the smarts of SuperNova. Apollo 0.4 was used as an RP temper to keep the overall model aligned with RP. Apollo 2.0 wasn't used, as it skewed the merge too far towards inconsistent narratives.

'Reshape' is the creative end, taking some inspo from Ablaze's creative center. I first created 'Darkened' as the main influence over the final writing style of Aglow. Poppy Moonfall C had the personality I was looking for but not the smarts (which, though not as important, were still necessary), so the other three models were added to round out its overall capabilities while staying very creative. Plop that atop RPMax (for excellent, unique RP interactions), BRAG (serious recall), and Natsumura (a great storytelling/RP base), model stock it, and you get a really solid model on its own.

Slap the two components together in a simple gradient dare_linear merge and boom: this unit of an 8B model. As of writing and releasing this model, mergekit is fucked for me (one of its dependencies has broken L3 merging), so I can't test any other methods atm. If there is a better final merge method, I'll upload a v0.2 once the bug is fixed.
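
For the curious, the `weight: [0.1, 0.9]` gradients in that final merge are per-layer ramps: mergekit interpolates the listed values across the layer stack, so one component dominates the early layers and the other the late layers. A rough pure-Python illustration of how such a gradient resolves to per-layer weights (my own sketch, not mergekit's actual code):

```python
def gradient_weights(endpoints, num_layers):
    """Linearly interpolate endpoint weights across layers.

    A two-value gradient like [0.1, 0.9] becomes a per-layer weight that
    ramps from 0.1 at the first layer to 0.9 at the last.
    """
    start, end = endpoints
    if num_layers == 1:
        return [start]
    step = (end - start) / (num_layers - 1)
    return [start + step * i for i in range(num_layers)]

# One model ramps up while the other ramps down, so every layer's
# weights sum to 1.0 even without normalization.
reshape_w = gradient_weights([0.1, 0.9], 32)
apollobulk_w = gradient_weights([0.9, 0.1], 32)
```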

This time around, everything was done with [DavidAU]()'s High Quality method, merging in float32 at every step. It made a significant difference in nuanced understanding of text.

### Config

```yaml
models:
- model: Locutusque/Apollo-0.4-Llama-3.1-8B
- model: maldv/badger-writer-llama-3-8b
- model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: apollobulk
---
models:
- model: v000000/L3-8B-Poppy-Moonfall-C
- model: Casual-Autopsy/Jamet-L3-Stheno-BlackOasis-8B
- model: SicariusSicariiStuff/Dusk_Rainbow
base_model: ResplendentAI/Rawr_Llama3_8B
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: darkened
---
models:
- model: darkened
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- model: maximalists/BRAG-Llama-3.1-8b-v0.1
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: reshape
---
models:
- model: reshape
  parameters:
    weight: [0.1, 0.9]
- model: apollobulk
  parameters:
    weight: [0.9, 0.1]
base_model: reshape
tokenizer_source: base
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
name: vulca
```
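
If you're wondering what the dare_linear step actually does: DARE randomly drops a fraction of each component's delta from the base model and rescales the surviving entries so their expected value is preserved, then combines the sparsified deltas linearly by weight. A toy pure-Python sketch of that idea (my own illustration, not mergekit's implementation; `density` and `seed` are hypothetical names):

```python
import random

def dare_sparsify(delta, density, rng):
    """Keep roughly a `density` fraction of delta entries at random,
    rescaling survivors by 1/density so the expected value is preserved."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_linear_merge(base, deltas, weights, density=0.5, seed=0):
    """Linearly combine DARE-sparsified task vectors on top of a base."""
    rng = random.Random(seed)
    merged = list(base)
    for delta, w in zip(deltas, weights):
        sparse = dare_sparsify(delta, density, rng)
        merged = [m + w * s for m, s in zip(merged, sparse)]
    return merged
```

With `density=1.0` nothing is dropped and this reduces to a plain weighted sum of deltas, which is a handy sanity check.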