huihui-ai committed
Commit e16e93c
1 Parent(s): 2615679

Update README.md

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -15,21 +15,21 @@ language:
 # Llama-3.1-8B-Fusion-8020
 
 ## Overview
-`Llama-3.1-8B-Fusion-9010` is a mixed model that combines the strengths of two powerful Llama-based models: [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) and [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated). The weights are blended in a 9:1 ratio, with 90% of the weights from SuperNova-Lite and 10% from the abliterated Meta-Llama-3.1-8B-Instruct model.
+`Llama-3.1-8B-Fusion-8020` is a mixed model that combines the strengths of two powerful Llama-based models: [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) and [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated). The weights are blended in an 8:2 ratio, with 80% of the weights from SuperNova-Lite and 20% from the abliterated Meta-Llama-3.1-8B-Instruct model.
 **Although it's a simple mix, the model is usable and produces no gibberish.**
 This is an experiment. Later, I will test the [9:1](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-9010), 7:3, 6:4, and 5:5 ratios separately to see how much impact each has on the model.
 
 ## Model Details
 - **Base Models:**
-  - [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) (90%)
-  - [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) (10%)
+  - [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) (80%)
+  - [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) (20%)
 - **Model Size:** 8B parameters
 - **Architecture:** Llama 3.1
-- **Mixing Ratio:** 9:1 (SuperNova-Lite:Meta-Llama-3.1-8B-Instruct-abliterated)
+- **Mixing Ratio:** 8:2 (SuperNova-Lite:Meta-Llama-3.1-8B-Instruct-abliterated)
 
 ## Key Features
-- **SuperNova-Lite Contributions (90%):** Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture.
-- **Meta-Llama-3.1-8B-Instruct-abliterated Contributions (10%):** This is an uncensored version of Llama 3.1 8B Instruct created with abliteration.
+- **SuperNova-Lite Contributions (80%):** Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture.
+- **Meta-Llama-3.1-8B-Instruct-abliterated Contributions (20%):** This is an uncensored version of Llama 3.1 8B Instruct created with abliteration.
 
 ## Usage
 You can use this mixed model in your applications by loading it with Hugging Face's `transformers` library:
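
The overview in the hunk above describes a plain weighted blend of the two checkpoints. The merge script itself is not part of this commit, so the following is only a minimal sketch of what an 8:2 linear interpolation of the state dicts could look like with `transformers` and `torch`; the ratio constant, dtype, and output path are illustrative assumptions, not the author's actual procedure.

```python
# Minimal sketch of an 8:2 linear weight blend (assumed method; the author's
# actual merge script is not shown in this commit).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

RATIO = 0.8  # assumed: 80% SuperNova-Lite, 20% abliterated Instruct

base = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Llama-3.1-SuperNova-Lite", torch_dtype=torch.bfloat16
)
donor = AutoModelForCausalLM.from_pretrained(
    "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated", torch_dtype=torch.bfloat16
)

# Both models share the Llama 3.1 8B architecture, so their state dicts
# have identical keys and tensor shapes; blend each tensor pairwise.
merged_state = base.state_dict()
donor_state = donor.state_dict()
for name, tensor in merged_state.items():
    merged_state[name] = RATIO * tensor + (1.0 - RATIO) * donor_state[name]

base.load_state_dict(merged_state)
base.save_pretrained("Llama-3.1-8B-Fusion-8020")  # illustrative output path
AutoTokenizer.from_pretrained("arcee-ai/Llama-3.1-SuperNova-Lite").save_pretrained(
    "Llama-3.1-8B-Fusion-8020"
)
```

Because the blend is linear and uniform across all parameters, trying the other ratios mentioned in the overview only means changing `RATIO` and rerunning the loop.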
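The hunk ends at the `## Usage` line, so the README's actual code block is not shown in this diff. A typical `transformers` loading snippet for a Llama 3.1 chat model would look like the sketch below; the repo id, prompt, and generation settings are illustrative assumptions rather than the README's own example.

```python
# Hypothetical usage example; the README's real snippet lies outside this hunk.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Llama-3.1-8B-Fusion-8020"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain abliteration in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```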