# The LLM Creativity benchmark

_Last benchmark update: 15 May 2024_

The goal of this benchmark is to evaluate the ability of Large Language Models to be used
as an **uncensored creative writing assistant**. Human evaluation of the results is done manually.

The questions can be split half-half in 2 possible ways:
* **story**: 50% of questions are creative writing tasks, covering both the nsfw and sfw topics
* **smart**: 50% of questions are more about testing the capabilities of the model to work as an assistant, again covering both the nsfw and sfw topics

# My recommendations

- **Do not use a GGUF quantisation smaller than q4**. In my testing, anything below q4 suffers from too much degradation; it is better to use a smaller model at higher quants.
- **Importance matrix matters**. Be careful when using importance matrices. For example, if the matrix is based solely on English text, it will degrade the model's multilingual and coding capabilities. However, if English is all that matters for your use case, using an imatrix will definitely improve the model's performance. A short workflow sketch covering this and the previous point follows this list.
- Best large model: **[WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)**. And fast too! On my m2 max with 38 GPU cores, I get an inference speed of **11.81 tok/s** with iq4_xs.
- Second best large model: **[CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)**. Very close to the above choice, but 4 times slower! On my m2 max with 38 GPU cores, I get an inference speed of **3.88 tok/s** with q5_km. However, it gives different results from WizardLM, and it can definitely be worth using.
- Best medium model: **[sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5)**
- Best small model: **[CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)**
- Best tiny model: **[froggeric/WestLake-10.7b-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)**
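
To make the first two points concrete, here is a minimal sketch of that workflow, driving llama.cpp's command-line tools from Python. It is an illustration, not part of the benchmark: binary names and flags vary between llama.cpp versions, and every file path below is a placeholder.

```python
# Minimal sketch: build an importance matrix, then quantise no lower than q4.
# Assumes llama.cpp's `imatrix` and `quantize` tools are built and on PATH.
import subprocess

fp16_model = "model-f16.gguf"      # full-precision source model (placeholder)
calibration = "calibration.txt"    # calibration text for the importance matrix
imatrix_file = "model.imatrix"

# 1. Compute the importance matrix. If the calibration text is English-only,
#    expect degraded multilingual and coding capabilities, as noted above.
subprocess.run(
    ["imatrix", "-m", fp16_model, "-f", calibration, "-o", imatrix_file],
    check=True,
)

# 2. Quantise with the importance matrix, staying at q4 or above (Q4_K_M here).
subprocess.run(
    ["quantize", "--imatrix", imatrix_file, fp16_model, "model-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)
```
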
# Results
![benchmark-results.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/ITtCeVEQqYvhQW5a0duJA.png)
# Remarks about some of the models

[WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)\
I used the imatrix quantisation from [mradermacher](https://huggingface.co/mradermacher/WizardLM-2-8x22B-i1-GGUF)\
Fast inference! Great quality writing that feels very different from most other models.
Unrushed, with fewer repetitions. Good at following instructions.
Non-creative writing tasks are also handled better, with more detail and useful additional information.
This is a huge improvement over the original **Mixtral-8x22B**.
My new favourite model.\
Inference speed: **11.81 tok/s** (iq4_xs on m2 max with 38 gpu cores)
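
For reference, tok/s figures like the one above can be measured with a few lines of llama-cpp-python. This is only one possible method (the numbers in this README may have been obtained differently), and the model path is a placeholder.

```python
# Rough throughput measurement with llama-cpp-python. The timing includes
# prompt processing, so it slightly understates pure generation speed.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="WizardLM-2-8x22B.IQ4_XS.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple Silicon)
    n_ctx=4096,
)

start = time.time()
out = llm("Write the opening paragraph of a mystery novel.", max_tokens=256)
elapsed = time.time() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated / elapsed:.2f} tok/s")
```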

[llmixer/BigWeave-v16-103b](https://huggingface.co/llmixer/BigWeave-v16-103b)\
A miqu self-merge, which is the winner of the BigWeave experiments. I was hoping for an improvement over the
existing _traditional_ 103B and 120B self-merges, but although it comes close, it is still not as good.
It is a shame, as this was done in an intelligent way, by taking into account the relevance of each layer.

[mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)\
I used the imatrix quantisation from _mradermacher_, which seems to have temporarily disappeared,
probably due to the [imatrix PR](https://github.com/ggerganov/llama.cpp/pull/7099).\
Too brief and rushed, lacking details. Many GPTisms used over and over again.
Often finishes with some condescending morality.

[meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)\
Disappointing. Censored and difficult to bypass. Even when bypassed, the model tries to find any excuse
to escape and return to its censored state. Lots of GPTisms. My feeling is that even though it was trained
on a huge amount of data, I seriously doubt the quality of that data. However, I realised its performance
is actually very close to miqu-1, which means that finetuning and merges should be able to bring huge
improvements.

[Miqu-MS-70B](https://huggingface.co/Undi95/Miqu-MS-70B)\
Terribly bad :-( Has lots of difficulty following instructions. Poor writing style. Switching to any of the 3 recommended prompt formats does not help.

[froggeric/miqu]\
Experiments in trying to get a better self-merge of miqu-1, using @jukofyork's idea of
[Downscaling the K and/or Q matrices for repeated layers in franken-merges](https://github.com/arcee-ai/mergekit/issues/198).
More info about the _attenuation_ is available in this [discussion](https://huggingface.co/wolfram/miqu-1-120b/discussions/4).
So far, no better results.
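
For readers unfamiliar with the idea, here is a conceptual sketch of the attenuation (not the actual merge configuration: Llama-style tensor names are assumed, and real merges are done with mergekit rather than by editing a state dict like this):

```python
# Conceptual sketch of K/Q attenuation for repeated layers in a franken-merge.
import torch

def attenuate_repeated_layer(state_dict: dict, layer: int, scale: float) -> None:
    """Downscale the Q and K projection weights of one repeated layer in place.

    Attention logits are proportional to Q @ K^T, so scaling one projection by
    `scale` scales the logits by `scale`; scaling both (as here) gives
    `scale**2`, softening the duplicated layer's attention.
    """
    for proj in ("q_proj", "k_proj"):
        key = f"model.layers.{layer}.self_attn.{proj}.weight"
        state_dict[key] = state_dict[key] * scale

# Tiny self-contained demonstration with dummy tensors:
sd = {
    "model.layers.40.self_attn.q_proj.weight": torch.ones(4, 4),
    "model.layers.40.self_attn.k_proj.weight": torch.ones(4, 4),
}
attenuate_repeated_layer(sd, layer=40, scale=0.9)
print(sd["model.layers.40.self_attn.q_proj.weight"][0, 0])  # tensor(0.9000)
```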

**Previously:**

[CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)\
A big step up for open LLM models. It tends to work best when given the beginning of an answer
to complete. To get the best out of it, I recommend getting familiar with the
[prompting guide](https://docs.cohere.com/docs/prompting-command-r).\
Inference speed: **3.88 tok/s** (q5_km on m2 max with 38 gpu cores)
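
As an illustration of that answer-prefilling technique, here is a hedged sketch with llama-cpp-python: the chatbot turn is left open and seeded with the first words of the desired answer, so the model continues it rather than starting fresh. The turn tokens follow Cohere's published Command R chat format (check the prompting guide above for the authoritative version); the model path and prompt are placeholders.

```python
# Answer-prefilling sketch for Command R+ via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="c4ai-command-r-plus.Q5_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,
    n_ctx=8192,
)

# The chatbot turn is intentionally not closed: the final line is the
# beginning of the answer we want the model to continue.
prompt = (
    "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
    "Write a noir short story set in Lisbon."
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    "The rain had not stopped since Tuesday, and neither had the phone calls."
)

out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])
```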
[CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)\
Amazing at such a small size. Only one third the size of its big brother, but not so far behind, and ahead of most other large models. System prompts tend to create unexpected behaviour, like continuation or forum discussions! Better to avoid them.
[nsfwthrowitaway69/Venus-103b-v1.1](https://huggingface.co/nsfwthrowitaway69/Venus-103b-v1.1)\
An amazing level of detail, and unrushed storytelling. Can produce real gems, but can also fail miserably.
[wolfram/miqu-1-103b](https://huggingface.co/wolfram/miqu-1-103b)\
Has slightly more difficulty following instructions than the 120b merge. Also produces more annoying repetitions and re-use of expressions.
The q5_ks is a slight improvement over q4_km, but as it uses more memory, it reduces what is available for context. Still, with 96GB I can use a context larger than 16k.