Update README.md
README.md
CHANGED
@@ -1,33 +1,67 @@
---
tags:
- name: magnum-v4-12b-r2
  results: []
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

hub_model_id: anthracite-
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
@@ -44,17 +78,17 @@ load_in_4bit: false
strict: false

datasets:
  - path: anthracite-
    type: custommistralv3tekken
  - path: anthracite-
    type: custommistralv3tekken
  - path: anthracite-
    type: custommistralv3tekken
  - path: anthracite-org/nopm_claude_writing_fixed
    type: custommistralv3tekken
  - path: anthracite-
    type: custommistralv3tekken
  - path: anthracite-
    type: custommistralv3tekken
#chat_template: chatml
shuffle_merged_datasets: true
@@ -115,51 +149,25 @@ fsdp_config:
special_tokens:
  pad_token: <pad>
```
</details><br>

This model is a fine-tuned version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2

- Datasets 2.21.0
- Tokenizers 0.19.1

---
license: other
license_name: mrl
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---

## This repo contains EXL2 quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v4-12b).
## The base repo only contains the measurement file; see the revisions for your quant of choice.

- [measurement.json](https://huggingface.co/anthracite-org/magnum-v4-12b-exl2/tree/main)
- [3.0bpw](https://huggingface.co/anthracite-org/magnum-v4-12b-exl2/tree/3.0bpw)
- [4.0bpw](https://huggingface.co/anthracite-org/magnum-v4-12b-exl2/tree/4.0bpw)
- [5.0bpw](https://huggingface.co/anthracite-org/magnum-v4-12b-exl2/tree/5.0bpw)
- [6.0bpw](https://huggingface.co/anthracite-org/magnum-v4-12b-exl2/tree/6.0bpw)
- [8.0bpw](https://huggingface.co/anthracite-org/magnum-v4-12b-exl2/tree/8.0bpw)
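
Each quant lives in its own branch of this repo, so the branch name doubles as the `revision` when downloading. A minimal sketch (not part of the original card) using `huggingface_hub`; the chosen bpw and the printed path are just examples:

```py
# Hedged sketch: fetch a single quant revision of this repo.
# Repo id and branch names are taken from the list above; everything else is illustrative.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="anthracite-org/magnum-v4-12b-exl2",
    revision="6.0bpw",  # any branch from the list above, e.g. "4.0bpw"
)
print(local_dir)  # local folder containing the EXL2 weights for that bpw
```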

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/-UC6YN1Gt3e1FDh8EqyaB.png)

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).

## Prompting
A typical input would look like this:

```py
<s>[INST] SYSTEM MESSAGE
USER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST]
```
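
For reference, a minimal sketch (not from the card) that renders the same layout through the tokenizer's bundled chat template; the repo id comes from the original-weights link above, and the exact whitespace depends on the template shipped with the tokenizer:

```py
# Hedged sketch: build the Mistral-style prompt above with apply_chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v4-12b")

# In the example above the system text simply leads the first [INST] block,
# so it is prepended to the first user turn here.
messages = [
    {"role": "user", "content": "SYSTEM MESSAGE\nUSER MESSAGE"},
    {"role": "assistant", "content": "ASSISTANT MESSAGE"},
    {"role": "user", "content": "USER MESSAGE"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should end after the final [/INST], ready for the model's reply
```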

## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

<details><summary>context template</summary>

```yaml
default SillyTavern template works fine
```

</details><br>
<details><summary>instruct template</summary>

```yaml
default SillyTavern template works fine
```

</details><br>

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

hub_model_id: anthracite-org/magnum-v4-12b-r2
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
strict: false

datasets:
  - path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system
    type: custommistralv3tekken
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system
    type: custommistralv3tekken
  - path: anthracite-org/kalo-opus-instruct-3k-filtered-no-system
    type: custommistralv3tekken
  - path: anthracite-org/nopm_claude_writing_fixed
    type: custommistralv3tekken
  - path: anthracite-org/kalo_opus_misc_240827_no_system
    type: custommistralv3tekken
  - path: anthracite-org/kalo_misc_part2_no_system
    type: custommistralv3tekken
#chat_template: chatml
shuffle_merged_datasets: true
special_tokens:
  pad_token: <pad>
```
</details><br>

## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B and has given thousands of people access to our models and helped us grow.

We would also like to thank all members of Anthracite who made this finetune possible.

## Datasets
- [anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system)
- [anthracite-org/kalo-opus-instruct-3k-filtered-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-3k-filtered-no-system)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827_no_system](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827_no_system)
- [anthracite-org/kalo_misc_part2_no_system](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2_no_system)
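
A small hedged sketch (not part of the card): any of the datasets above can be pulled with the `datasets` library for inspection. The split name is an assumption; most Hub datasets expose `train`:

```py
# Hedged sketch: load one of the listed datasets and peek at it.
from datasets import load_dataset

ds = load_dataset("anthracite-org/nopm_claude_writing_fixed", split="train")  # split name assumed
print(ds)     # features and row count
print(ds[0])  # first example
```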

## Training
The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...