updated screenshot
README.md CHANGED

@@ -41,7 +41,7 @@ The questions can be split half-half in 2 possible ways:
 
 # Results
 
-![benchmark-results.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/
+![benchmark-results.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/cduDXpf8Ur1nHDTV4jq-G.png)
 
 # Remarks about some of the models
 
@@ -71,7 +71,8 @@ Disappointing. Censored and difficult to bypass. Even when bypassed, the model t
 to escape it and return to its censored state. Lots of GPTism. My feeling is that even though it was trained
 on a huge amount of data, I seriously doubt the quality of that data. However, I realised the performance
 is actually very close to miqu-1, which means that finetuning and merges should be able to bring huge
-improvements.
+improvements. I benchmarked this model before the fixes added to llama.cpp, which means I will need to do it
+again, which I am not looking forward to.
 
 [Miqu-MS-70B](https://huggingface.co/Undi95/Miqu-MS-70B)\
 Terribly bad :-( Has lots of difficulties following instructions. Poor writing style. Switching to any of the 3 recommended prompt formats does not help.