coryg89 committed
Commit 0cc268f (parent: 6662d04)

Initial commit of model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.model.bak filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
language:
- en
license: other
library_name: transformers
tags:
- mergekit
- merge
- Yi
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
base_model: []
model-index:
- name: Yi-34B-200K-DARE-merge-v7
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 58.9
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
---
# Possibly made obsolete by: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8

# Yi 34B 200K DARE Merge v7

A merge of several Yi 34B 200K models using the new DARE TIES method via mergekit. The goal is a merged model that excels at 32K+ context performance.

## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It may also recognize ChatML, and possibly Alpaca-like formats. Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
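For programmatic use, the Orca-Vicuna template above can be assembled with a small helper. This is just a sketch: the tag strings come from the template above, while the function name and example strings are ours.

```python
def format_orca_vicuna(system_message: str, prompt: str) -> str:
    """Assemble an Orca-Vicuna prompt; the model's reply continues after 'ASSISTANT:'."""
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

text = format_orca_vicuna("You are a helpful assistant.", "Summarize DARE merging.")
```

Whatever backend you use, make sure it does not append its own chat template on top of this string.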
## Running
Being a Yi model, try running a lower temperature with 0.02-0.06 MinP, a little repetition penalty, perhaps mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature plus MinP to cull its huge vocabulary.

24GB GPUs can efficiently run Yi-34B-200K models at **45K-90K context** with exllamav2 and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). 16GB GPUs can still run high context with aggressive quantization.

To load or train this model in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a value lower than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends like exllamav2 or unsloth.
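One way to make that config.json edit is with the stdlib `json` module. A minimal sketch: the 32768 ceiling is only an example (pick what fits your VRAM), and the temporary file here stands in for the model's real config.json.

```python
import json
import os
import tempfile

# Stand-in for the model's real config.json, which ships with 200000.
cfg_path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(cfg_path, "w") as f:
    json.dump({"max_position_embeddings": 200000}, f)

# Lower the advertised context window so full-context backends
# (e.g. plain transformers) do not try to allocate 200K positions.
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["max_position_embeddings"] = 32768  # example value; was 200000
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```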

## Testing Notes

See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5#testing-notes

A "4k" merge model was created to try to extend the context of SUS Chat and DPO-bagel before adding them to the merge: https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test

In addition, the weight gradients are biased toward Vicuna-format models in the first few layers to try to "emphasize" the Orca-Vicuna prompt template. How successful this is remains to be seen.

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama as the base.

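As a rough intuition for the DARE half of the method (per the linked paper): each fine-tune's delta from the base model is randomly dropped with rate p, and the surviving deltas are rescaled by 1/(1 - p) before the TIES sign-consensus merge. A toy sketch on a flat list, not mergekit's actual implementation; the `density` values in the config below correspond roughly to 1 - p.

```python
import random

def dare_prune(delta, p, rng=random.Random(0)):
    """Drop each delta element with probability p; rescale survivors by 1/(1-p)."""
    return [0.0 if rng.random() < p else d / (1.0 - p) for d in delta]

# Hypothetical per-parameter deltas (fine-tune minus base).
deltas = [0.1, -0.2, 0.05, 0.3, -0.15, 0.0, 0.25, -0.05]
pruned = dare_prune(deltas, p=0.4)  # density ~0.6: ~60% of deltas survive, rescaled
```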
### Models Merged

The following models were included in the merge:
* https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
* https://huggingface.co/jondurbin/bagel-34b-v0.2
* https://huggingface.co/NousResearch/Nous-Capybara-34B
* https://huggingface.co/migtissera/Tess-M-Creative-v1.0
* https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test
* https://huggingface.co/Mihaiii/Pallas-0.5
* https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
* https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
* https://huggingface.co/migtissera/Tess-34B-v1.4
* https://huggingface.co/SUSTech/SUS-Chat-34B
* https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2
* https://huggingface.co/chargoddard/Yi-34B-200K-Llama
* https://huggingface.co/chargoddard/Yi-34B-Llama


### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    parameters:
      weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125]
      density: 0.59
  - model: /home/alpha/Models/Raw/Mihaiii_Pallas-0.5
    parameters:
      weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125]
      density: 0.59
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: [0.02, 0.106, 0.106, 0.106, 0.106, 0.106]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/jondurbin_bagel-34b-v0.2
    # Only the SFT version in the main merge, since the DPO version seems to have no long-context ability at all
    parameters:
      weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100]
      density: 0.4
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
    parameters:
      weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100]
      density: 0.59
  # - model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
  #   # Dolphin 200K seems to be funky according to multiple leaderboards and perplexity tests?
  #   parameters:
  #     weight: 0.15
  #     density: 0.6
  - model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
    parameters:
      weight: [0.02, 0.110, 0.110, 0.110, 0.110, 0.110]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: [0.22, 0.126, 0.126, 0.126, 0.126, 0.126]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/4kmerge
    parameters:
      weight: [0.02, 0.108, 0.108, 0.108, 0.108, 0.108]
      density: 0.5
  - model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
    parameters:
      weight: [0.22, 0.100, 0.100, 0.100, 0.100, 0.10]
      density: 0.59
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
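The bracketed `weight` lists above are mergekit "gradients": the anchor values are interpolated across the model's layers, which is how the early-layer bias toward Vicuna-format models is expressed. A rough illustration of linearly interpolating a 6-anchor gradient over this model's 60 layers; this approximates the behavior and is not mergekit's actual code.

```python
def gradient_weights(anchors, num_layers):
    """Linearly interpolate a short anchor list across num_layers layers."""
    n = len(anchors)
    out = []
    for layer in range(num_layers):
        pos = layer / (num_layers - 1) * (n - 1)  # position in anchor space
        i = min(int(pos), n - 2)
        t = pos - i
        out.append(anchors[i] * (1 - t) + anchors[i + 1] * t)
    return out

# Tess-34B-v1.4's gradient from the config: heavy in layer 0, flat after.
w = gradient_weights([0.23, 0.125, 0.125, 0.125, 0.125, 0.125], 60)
```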

The following config was used for the "4kmerge" model:

```yaml
models:
  - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    parameters:
      weight: 0.5
      density: 1
  - model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B
    parameters:
      weight: 0.2
      density: 0.12
  - model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
    parameters:
      weight: 0.2
      density: 0.15
  - model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
    parameters:
      weight: 0.1
      density: 0.12
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-merge-v7).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 73.12 |
| AI2 Reasoning Challenge (25-Shot) | 68.09 |
| HellaSwag (10-Shot)               | 85.99 |
| MMLU (5-Shot)                     | 77.30 |
| TruthfulQA (0-shot)               | 58.90 |
| Winogrande (5-shot)               | 83.11 |
| GSM8k (5-shot)                    | 65.35 |
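The "Avg." row is simply the arithmetic mean of the six benchmark scores, which is easy to verify:

```python
# Scores from the leaderboard table above.
scores = {
    "ARC (25-Shot)": 68.09,
    "HellaSwag (10-Shot)": 85.99,
    "MMLU (5-Shot)": 77.30,
    "TruthfulQA (0-shot)": 58.90,
    "Winogrande (5-shot)": 83.11,
    "GSM8k (5-shot)": 65.35,
}
avg = round(sum(scores.values()) / len(scores), 2)  # arithmetic mean, 2 decimals
```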
config.json ADDED
```json
{
  "_name_or_path": "/home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 7168,
  "initializer_range": 0.02,
  "intermediate_size": 20480,
  "max_position_embeddings": 200000,
  "model_type": "llama",
  "num_attention_heads": 56,
  "num_hidden_layers": 60,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 5000000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.36.2",
  "use_cache": true,
  "vocab_size": 64002,
  "quantization_config": {
    "quant_method": "exl2",
    "version": "0.0.15",
    "bits": 4.0,
    "head_bits": 6,
    "calibration": {
      "rows": 100,
      "length": 2048,
      "dataset": "(default)"
    }
  }
}
```
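A couple of quantities follow directly from the config above via standard Llama-architecture arithmetic; this is illustration only, not part of the shipped files.

```python
# Values from config.json above.
hidden_size = 7168
num_attention_heads = 56
num_key_value_heads = 8

head_dim = hidden_size // num_attention_heads           # per-head dimension
gqa_group = num_attention_heads // num_key_value_heads  # query heads per shared KV head (GQA)
```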
model.safetensors.index.json ADDED
+ {"metadata": {"mergekit_version": "0.0.3.2"}, "weight_map": {"model.embed_tokens.weight": "model-00001-of-00007.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", 
"model.layers.2.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.3.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.5.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.post_attention_layernorm.weight": 
"model-00001-of-00007.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.7.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00001-of-00007.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00007.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00001-of-00007.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00007.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00007.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00007.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00007.safetensors", "model.layers.8.input_layernorm.weight": "model-00001-of-00007.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", 
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.10.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.post_attention_layernorm.weight": 
"model-00002-of-00007.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.9.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.13.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.13.post_attention_layernorm.weight": 
"model-00002-of-00007.safetensors", "model.layers.14.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.input_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00007.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00007.safetensors", "model.layers.16.input_layernorm.weight": "model-00002-of-00007.safetensors", 
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.17.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.19.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", 
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.21.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.21.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.22.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.22.mlp.up_proj.weight": 
"model-00003-of-00007.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.24.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.24.mlp.down_proj.weight": "model-00003-of-00007.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00003-of-00007.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00007.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00007.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00007.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00007.safetensors", "model.layers.25.input_layernorm.weight": "model-00003-of-00007.safetensors", "model.layers.25.mlp.down_proj.weight": 
"model-00003-of-00007.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00007.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.27.post_attention_layernorm.weight": 
"model-00004-of-00007.safetensors", "model.layers.28.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.input_layernorm.weight": "model-00004-of-00007.safetensors", 
"model.layers.30.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.31.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00004-of-00007.safetensors", 
"model.layers.33.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00004-of-00007.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00004-of-00007.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00004-of-00007.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00004-of-00007.safetensors", "model.layers.34.input_layernorm.weight": "model-00004-of-00007.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00004-of-00007.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.34.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.35.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.35.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.36.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.36.mlp.up_proj.weight": 
"model-00005-of-00007.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.36.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.38.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.38.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.39.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.39.mlp.down_proj.weight": 
"model-00005-of-00007.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.39.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.40.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.40.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.40.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.41.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.41.post_attention_layernorm.weight": 
"model-00005-of-00007.safetensors", "model.layers.42.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.42.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.42.mlp.gate_proj.weight": "model-00005-of-00007.safetensors", "model.layers.42.mlp.up_proj.weight": "model-00005-of-00007.safetensors", "model.layers.42.post_attention_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.42.self_attn.k_proj.weight": "model-00005-of-00007.safetensors", "model.layers.42.self_attn.o_proj.weight": "model-00005-of-00007.safetensors", "model.layers.42.self_attn.q_proj.weight": "model-00005-of-00007.safetensors", "model.layers.42.self_attn.v_proj.weight": "model-00005-of-00007.safetensors", "model.layers.43.input_layernorm.weight": "model-00005-of-00007.safetensors", "model.layers.43.mlp.down_proj.weight": "model-00005-of-00007.safetensors", "model.layers.43.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.43.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.43.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.43.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.43.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.43.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.43.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.input_layernorm.weight": "model-00006-of-00007.safetensors", 
"model.layers.44.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.44.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.45.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.45.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.45.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.45.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.45.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.45.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.45.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.45.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.45.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.46.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.46.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.46.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.47.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", 
"model.layers.47.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.47.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.48.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.48.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.49.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.49.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.49.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.49.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.49.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.49.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.49.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.49.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.49.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.50.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.50.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.50.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.50.mlp.up_proj.weight": 
"model-00006-of-00007.safetensors", "model.layers.50.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.50.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.50.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.50.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.50.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.mlp.gate_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.mlp.up_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.self_attn.k_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.self_attn.o_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.self_attn.q_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.self_attn.v_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.51.mlp.down_proj.weight": "model-00006-of-00007.safetensors", "model.layers.51.post_attention_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.52.input_layernorm.weight": "model-00006-of-00007.safetensors", "model.layers.52.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.52.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.52.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.52.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.52.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.52.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.52.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.52.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.layers.53.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.53.mlp.down_proj.weight": 
"model-00007-of-00007.safetensors", "model.layers.53.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.53.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.53.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.53.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.53.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.53.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.53.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.54.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.54.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.54.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.55.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.55.post_attention_layernorm.weight": 
"model-00007-of-00007.safetensors", "model.layers.56.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.56.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.56.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.56.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.56.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.56.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.56.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.56.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.56.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.57.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.57.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.57.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "lm_head.weight": "model-00007-of-00007.safetensors", 
"model.layers.58.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.58.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.58.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.59.input_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.59.mlp.down_proj.weight": "model-00007-of-00007.safetensors", "model.layers.59.mlp.gate_proj.weight": "model-00007-of-00007.safetensors", "model.layers.59.mlp.up_proj.weight": "model-00007-of-00007.safetensors", "model.layers.59.post_attention_layernorm.weight": "model-00007-of-00007.safetensors", "model.layers.59.self_attn.k_proj.weight": "model-00007-of-00007.safetensors", "model.layers.59.self_attn.o_proj.weight": "model-00007-of-00007.safetensors", "model.layers.59.self_attn.q_proj.weight": "model-00007-of-00007.safetensors", "model.layers.59.self_attn.v_proj.weight": "model-00007-of-00007.safetensors", "model.norm.weight": "model-00007-of-00007.safetensors"}}
output-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e8fc8e056d9bb9aa19267396b8d499b71d0aad909dd5158b3bfee033c1c3e4d
+ size 4293963912
output-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c142395c4ccc2f03d5b7162200b5313291cd379ebd2ad0448a284144ab9938a3
+ size 4276666352
output-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4126541324beca23c33d6c501effac9dd07bbf08a9cb409968678856fc6ca45
+ size 4255601368
output-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c453bfd3cbc7f4c6003c3daf6a42a8e0d0fa479529fde3b51dfc37eb163dbb8
+ size 4250708424
output-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab3f1b7c8810ace33389a1c27f160853f373ad5c993d6685cff880a838cd19e8
+ size 936279160
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "add_bos_token": false,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "1": {
+       "content": "<|startoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "2": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "64000": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "64001": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "bos_token": "<|startoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "legacy": false,
+   "model_max_length": 200000,
+   "pad_token": "<unk>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "truncation_side": "right",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
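The tokenizer config above uses Yi's `<|startoftext|>`/`<|endoftext|>` special tokens (not Llama's `<s>`/`</s>`), sets a 200K `model_max_length`, and disables automatic BOS/EOS insertion. A hedged sketch of what that implies for prompt construction; the values are copied from the file above, and `wrap_prompt` is an illustrative helper, not part of `transformers`:

```python
import json

# Special-token settings copied from tokenizer_config.json in this commit.
tokenizer_config = json.loads("""
{
  "add_bos_token": false,
  "add_eos_token": false,
  "bos_token": "<|startoftext|>",
  "eos_token": "<|endoftext|>",
  "pad_token": "<unk>",
  "model_max_length": 200000
}
""")

def wrap_prompt(text: str, cfg: dict) -> str:
    """Add BOS/EOS manually, since add_bos_token/add_eos_token are false."""
    return f"{cfg['bos_token']}{text}{cfg['eos_token']}"

print(wrap_prompt("Hello", tokenizer_config))
# prints <|startoftext|>Hello<|endoftext|>
```

Because `add_bos_token` is false, callers that expect a Llama-style automatic `<s>` prefix must add the BOS token themselves (or in their prompt template).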