Joseph717171 committed
Commit: f1e2cad
1 Parent(s): f38676f

Update README.md

Files changed (1): README.md (+13 -6)
README.md CHANGED
@@ -1,19 +1,26 @@
---
- base_model: []
library_name: transformers
tags:
- mergekit
- merge
-
+ license: llama3.1
---
# Llama-3.1-SuperNova-Lite_TIES_with_Base

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method
+ ## Merge Details/Method

- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using /Users/jsarnecki/opt/Workspace/meta-llama/Llama-3.1-8B as a base.
+ This is a merge of [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) with its base, [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). Here, "base" means the model the instruct model was fine-tuned from, even though in this case [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) was fine-tuned on top of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), not directly on [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with meta-llama/Llama-3.1-8B as the base.
+
+ The merge was inspired by RomboDawg's ([Replete-AI](https://huggingface.co/Replete-AI)) TIES merge of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) with its base [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B), which topped the OpenLLM Leaderboard with the highest average score for a 7B-parameter model.
+
+ After experimenting and discussing the merge with RomboDawg, I looked further into mergekit's TIES merge method and found a pertinent parameter we hadn't been utilizing: density. I decided to use density along with the weight parameter to see if we could restore some of the instruction following that our merges seemed to lack compared to the original Instruct model. The resulting merges turned out great: using density together with weight restored instruction following that was diminished or missing when relying on weight alone. (A hypothetical config illustrating this is sketched after the diff.)
+
+ The way this works is: the Instruct model is TIES-merged with the base model with weight = 1 and density = 1. After the merge completes, the merge's .json config files (excluding 'model.safetensors.index.json') are replaced with the original Instruct model's .json config files (see the sketch after the diff).
+

### Models Merged

@@ -44,4 +51,4 @@ parameters:
  int8_mask: true
  dtype: bfloat16

- ```
+ ```
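
For illustration, a merge along the lines described above can be expressed as a mergekit config like the sketch below. This is a hypothetical reconstruction, not the exact file used for this model (the diff above only confirms the `int8_mask: true` and `dtype: bfloat16` settings); the weight and density values follow the description in the commit.

```yaml
# Hypothetical mergekit TIES config sketching the setup described above;
# not necessarily the exact config used for this merge.
models:
  - model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      weight: 1    # full contribution of the Instruct model's task vector
      density: 1   # keep all parameters; no trimming before sign election
merge_method: ties
base_model: meta-llama/Llama-3.1-8B
parameters:
  int8_mask: true
dtype: bfloat16
```

With mergekit installed, a config like this is typically run with `mergekit-yaml config.yaml ./output_dir`.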
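
The post-merge config swap described in the last added paragraph can be sketched in Python as follows; both directory paths are hypothetical placeholders for the original Instruct checkpoint and the mergekit output.

```python
# Minimal sketch of the post-merge .json config swap described above.
# Both paths are hypothetical placeholders.
import shutil
from pathlib import Path

instruct_dir = Path("Llama-3.1-SuperNova-Lite")               # original Instruct checkpoint
merged_dir = Path("Llama-3.1-SuperNova-Lite_TIES_with_Base")  # mergekit output directory

for cfg in instruct_dir.glob("*.json"):
    # Keep the merge's own weight map; replace every other .json config
    # with the original Instruct model's version.
    if cfg.name == "model.safetensors.index.json":
        continue
    shutil.copy2(cfg, merged_dir / cfg.name)
```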