lucyknada committed
Commit 794d52f
1 Parent(s): 3d65124

Update README.md

Files changed (1): README.md (+99 -51)

README.md CHANGED
@@ -1,25 +1,100 @@
  ---
  license: gemma
  base_model: IntervitensInc/gemma-2-9b-chatml
- tags:
- - generated_from_trainer
  model-index:
  - name: magnum-v3-9b-chatml
  results: []
  ---
- ### exl2 quant (measurement.json in main branch)
- ---
- ### check revisions for quants
- ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
  <details><summary>See axolotl config</summary>

- axolotl version: `0.4.1`
  ```yaml
  base_model: IntervitensInc/gemma-2-9b-chatml
  model_type: AutoModelForCausalLM
@@ -106,52 +181,25 @@ weight_decay: 0.05
  fsdp:
  fsdp_config:
  special_tokens:
- ```

  </details><br>

- # magnum-v3-9b-chatml
-
- This model is a fine-tuned version of [IntervitensInc/gemma-2-9b-chatml](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml) on the None dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 6e-06
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 64
- - total_eval_batch_size: 8
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 50
- - num_epochs: 2

- ### Training results

- ### Framework versions

- - Transformers 4.44.0
- - Pytorch 2.4.0+cu121
- - Datasets 2.20.0
- - Tokenizers 0.19.1
 
  ---
  license: gemma
  base_model: IntervitensInc/gemma-2-9b-chatml
  model-index:
  - name: magnum-v3-9b-chatml
  results: []
  ---

+ ## This repo contains EXL2 quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml).
+ ## The base repo only contains the measurement file; see the revisions for your quant of choice.
+
+ - [measurement.json](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml-exl2/tree/main)
+ - [3.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml-exl2/tree/3.0bpw)
+ - [4.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml-exl2/tree/4.0bpw)
+ - [5.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml-exl2/tree/5.0bpw)
+ - [6.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml-exl2/tree/6.0bpw)
+ - [8.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml-exl2/tree/8.0bpw)
+
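+ Each quant lives on its own branch of this repo, so you only need to download the revision you want. Below is a minimal sketch using `huggingface_hub` (the repo id and branch names are taken from the list above; the local directory name is arbitrary):
+
+ ```py
+ # Sketch: fetch a single EXL2 quant branch instead of the whole repo.
+ # revision must match one of the branch names listed above.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="anthracite-org/magnum-v3-9b-chatml-exl2",
+     revision="5.0bpw",                       # pick your quant
+     local_dir="magnum-v3-9b-chatml-exl2-5.0bpw",
+ )
+ ```
+
+ With a recent `huggingface_hub`, the CLI equivalent `huggingface-cli download anthracite-org/magnum-v3-9b-chatml-exl2 --revision 5.0bpw` should work as well.
+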
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9ZBUlmzDCnNmQEdUUbyEL.png)
+
+ This is the 11th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
+
+ This model is fine-tuned on top of [IntervitensInc/gemma-2-9b-chatml](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml), a ChatML-ified gemma-2-9b.
+
+ ## Prompting
+ The model has been instruct-tuned with ChatML formatting. A typical input would look like this:
+
+ ```py
+ """<|im_start|>system
+ system prompt<|im_end|>
+ <|im_start|>user
+ Hi there!<|im_end|>
+ <|im_start|>assistant
+ Nice to meet you!<|im_end|>
+ <|im_start|>user
+ Can I ask a question?<|im_end|>
+ <|im_start|>assistant
+ """
+ ```
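+
+ If you build the prompt in code instead of by hand, the tokenizer's chat template should produce the same layout. A minimal sketch, assuming the tokenizer shipped with the original model defines the ChatML `chat_template`:
+
+ ```py
+ # Sketch: render the ChatML prompt above from a list of messages.
+ # Assumes the bundled tokenizer defines a ChatML chat_template.
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v3-9b-chatml")
+
+ messages = [
+     {"role": "system", "content": "system prompt"},
+     {"role": "user", "content": "Hi there!"},
+     {"role": "assistant", "content": "Nice to meet you!"},
+     {"role": "user", "content": "Can I ask a question?"},
+ ]
+
+ prompt = tokenizer.apply_chat_template(
+     messages, tokenize=False, add_generation_prompt=True
+ )
+ print(prompt)  # should match the hand-written example above
+ ```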
+
+ ## SillyTavern templates
+
+ Below are Instruct and Context templates for use within SillyTavern.
+
+ <details><summary>context template</summary>
+
+ ```json
+ {
+ "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
+ "example_separator": "",
+ "chat_start": "",
+ "use_stop_strings": false,
+ "allow_jailbreak": false,
+ "always_force_name2": true,
+ "trim_sentences": false,
+ "include_newline": false,
+ "single_line": false,
+ "name": "Magnum ChatML"
+ }
+ ```
+
+ </details><br>
+ <details><summary>instruct template</summary>
+
+ ```json
+ {
+ "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
+ "input_sequence": "<|im_start|>user\n",
+ "output_sequence": "<|im_start|>assistant\n",
+ "last_output_sequence": "",
+ "system_sequence": "<|im_start|>system\n",
+ "stop_sequence": "<|im_end|>",
+ "wrap": false,
+ "macro": true,
+ "names": true,
+ "names_force_groups": true,
+ "activation_regex": "",
+ "system_sequence_prefix": "",
+ "system_sequence_suffix": "",
+ "first_output_sequence": "",
+ "skip_examples": false,
+ "output_suffix": "<|im_end|>\n",
+ "input_suffix": "<|im_end|>\n",
+ "system_suffix": "<|im_end|>\n",
+ "user_alignment_message": "",
+ "system_same_as_user": false,
+ "last_system_sequence": "",
+ "name": "Magnum ChatML"
+ }
+ ```
+
+ </details><br>

+ ## Axolotl config

  <details><summary>See axolotl config</summary>

  ```yaml
  base_model: IntervitensInc/gemma-2-9b-chatml
  model_type: AutoModelForCausalLM
 
  fsdp:
  fsdp_config:
  special_tokens:

+ ```
  </details><br>

+ ## Credits
+ We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B model and has given thousands of people access to our models, helping us grow.

+ We would also like to thank all members of Anthracite who made this finetune possible.

+ ## Datasets
+ - [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1)
+ - [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
+ - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
+ - [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
+ - [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
 
+ ## Training
+ The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.

+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

+ ## Safety
+ ...