---
tags:
- uncensored
---

# 🦙 Llama-3.2-3B-Instruct-abliterated

This is an uncensored version of Llama 3.2 3B Instruct created with abliteration.

Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
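Abliteration, in broad strokes, identifies a "refusal direction" in the model's residual stream (for example, the difference of mean activations on harmful versus harmless prompts) and removes that direction from selected weight matrices. The following is an illustrative numpy toy of that projection step only, not FailSpy's actual implementation; all names, shapes, and the random data are assumptions:

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Unit vector from the mean harmless activation toward the mean harmful one."""
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def orthogonalize(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of `weight`'s outputs along `direction`.

    Computes (I - d d^T) @ W as W minus the rank-1 projection onto d.
    """
    return weight - np.outer(direction, direction @ weight)

# Toy demonstration with synthetic activations (hypothetical 8-dim hidden state).
rng = np.random.default_rng(0)
harmless = rng.normal(size=(32, 8))
harmful = rng.normal(size=(32, 8)) + np.array([2.0] + [0.0] * 7)  # shifted along dim 0

d = refusal_direction(harmful, harmless)
W = rng.normal(size=(8, 8))          # stand-in for a hidden->hidden weight matrix
W_abl = orthogonalize(W, d)

# After ablation, the weight's outputs have (numerically) zero component along d.
print(np.max(np.abs(d @ W_abl)))
```

In a real model this projection would be applied to weights such as the attention output and MLP down-projection matrices, so the network can no longer write along the refusal direction.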
## Evaluations

Each benchmark below was re-evaluated; the reported score is the average across test runs.

| Benchmark  | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated |
|------------|-----------------------|-----------------------------------|
| IF_Eval    | 76.55                 | **76.76**                         |
| MMLU Pro   | 27.88                 | **28.00**                         |
| TruthfulQA | 50.55                 | **50.73**                         |
| BBH        | 41.81                 | **41.86**                         |
| GPQA       | 28.39                 | **28.41**                         |

The script used for evaluation can be found in this repository at `/eval.sh`, or [here](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated/blob/main/eval.sh).
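The `eval.sh` script linked above is the authoritative source for how these numbers were produced. As a rough illustration only, benchmarks like these are commonly run with EleutherAI's lm-evaluation-harness; the task name and flags below are assumptions, not the contents of `eval.sh`:

```shell
# Illustrative only -- see /eval.sh in this repository for the actual commands.
pip install lm-eval

# Task identifiers vary between harness versions; check `lm_eval --tasks list`.
lm_eval \
  --model hf \
  --model_args pretrained=huihui-ai/Llama-3.2-3B-Instruct-abliterated \
  --tasks ifeval \
  --batch_size auto
```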