Update README.md
`Llama-3.1-8B-Fusion-5050` is a mixed model that combines the strengths of two powerful Llama-based models: [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) and [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated).

The weights are blended in a 5:5 ratio, with 50% of the weights coming from SuperNova-Lite and 50% from the abliterated Meta-Llama-3.1-8B-Instruct model.
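A 5:5 blend of this kind amounts to a per-parameter linear interpolation of the two checkpoints. The sketch below shows the idea with plain floats standing in for the tensors you would get from each model's `state_dict()`; the function and variable names are illustrative, not the repository's actual merge script.

```python
# Minimal sketch of a 50/50 linear weight merge. In practice each value
# would be a torch tensor loaded from the two checkpoints; plain floats
# keep the example dependency-free. Names here are hypothetical.

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Blend two state dicts with identical keys: alpha*A + (1 - alpha)*B."""
    assert sd_a.keys() == sd_b.keys(), "architectures must match"
    return {name: alpha * sd_a[name] + (1 - alpha) * sd_b[name]
            for name in sd_a}

# Toy "weights" standing in for SuperNova-Lite and the abliterated model.
supernova = {"layer.0.weight": 1.0, "layer.0.bias": 0.2}
abliterated = {"layer.0.weight": 0.0, "layer.0.bias": 0.6}

merged = merge_state_dicts(supernova, abliterated, alpha=0.5)
print(merged)  # each parameter is the midpoint of the two source values
```

Because both source models share the Llama-3.1-8B architecture, every parameter name lines up one-to-one, which is what makes this simple elementwise average possible.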
**Although it's a simple mix, the model is usable, and no gibberish has appeared.**
This is an experiment. I tested the [9:1](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-9010), [8:2](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-8020), [7:3](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-7030), [6:4](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-6040), and [5:5](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-5050) ratios separately to see how much impact each blend ratio has on the model.
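Each named ratio maps directly to the interpolation coefficient used for the merge (the fraction of SuperNova-Lite weights). A small sketch of that mapping, with an illustrative helper name:

```python
# Hypothetical helper: convert an "a:b" ratio label into the fraction of
# the first model's weights used in the linear merge.

def ratio_to_alpha(ratio):
    a, b = (int(part) for part in ratio.split(":"))
    return a / (a + b)

for ratio in ["9:1", "8:2", "7:3", "6:4", "5:5"]:
    print(ratio, "-> alpha =", ratio_to_alpha(ratio))
```

So the 9:1 variant keeps 90% SuperNova-Lite, down to this 5:5 variant's even 50/50 split.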
All model evaluation reports will be provided subsequently.
## Model Details