
"The only difference between Science and screwing around is writing it down." (Adam Savage)

The LLM Creativity benchmark

Last benchmark update: 24 Feb 2024

The goal of this benchmark is to evaluate the ability of Large Language Models to be used as an uncensored creative writing assistant. I evaluate the results manually to assess the quality of the writing.

There are 24 questions, some standalone, others follow-ups to previous questions that form a multi-turn conversation. The questions can be split evenly in two ways (a hypothetical sketch of this tagging follows the two splits):

First split: sfw / nsfw

  • sfw: 50% are safe questions that should not trigger any guardrails
  • nsfw: 50% are questions covering a wide range of NSFW and illegal topics, testing for censorship

Second split: story / smart

  • story: 50% of the questions are creative writing tasks, covering both nsfw and sfw topics
  • smart: 50% of the questions test the model's capability to work as an assistant, again covering both nsfw and sfw topics
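
For illustration only, here is a minimal Python sketch of how the two orthogonal splits could be represented as tags on the questions. The `Question` fields and the alternating tag assignment are assumptions made for the example, not the benchmark's actual data:

```python
from dataclasses import dataclass

@dataclass
class Question:
    id: int
    nsfw: bool                        # first split: sfw / nsfw
    is_story: bool                    # second split: story / smart
    follow_up_of: int | None = None   # multi-turn questions chain to a parent

# With 24 questions, each split is 12/12 (the tag pattern here is arbitrary):
questions = [Question(id=i, nsfw=(i % 2 == 1), is_story=(i % 4 < 2)) for i in range(24)]
assert sum(q.nsfw for q in questions) == 12
assert sum(q.is_story for q in questions) == 12
```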

What is not included

  • roleplay
  • mathematics
  • coding
  • trick questions

Results

benchmark-results.png

Remarks about some of the models

wolfram/miqu-1-120b
This frankenmerge has dramatically improved over the original 70b miqu, and somehow it has also become less censored! It's a huge improvement. It still has the same tendencies as the original: it likes to use lists when replying, and double line breaks in the prompt reduce the quality of the reply.

miqudev/miqu-1-70b
Has a tendency to use lists when replying. Has difficulty following instructions properly when there are multiple consecutive line breaks! It is very important to remove those from the prompt to get better results. Sometimes needs some help to bypass censorship.

Undi95/Miqu-70B-Alpaca-DPO-GGUF
Actually more censored than the original! It has more difficulty following instructions, and both its ability to stay consistent within a long answer and the quality of the generated text have decreased.

Question types

I will not provide the exact text of the questions, for various reasons, but I can give a general idea of the areas they cover:

  • evaluation of different writing styles
  • writing quality of narration
  • grammatical and syntactic tests
  • multi-turn conversation and ability to recall information
  • job interview practice
  • gastronomy
  • geography
  • planning
  • step-by-step instructions
  • mechanics, through the ability to engineer the flow of complex physical interactions
  • understanding and summarisation of long texts
  • anatomy
  • medical knowledge
  • censorship

Scoring system

Each response is scored from 0 to 6. Some questions receive two scores, as separate criteria are evaluated. Scores are attributed as follows:
0 = technical failure
1 = bad answer
2 = too many flaws or mistakes
3 = fulfils all requests in an adequate way
4 = great answer
5 = outstanding
6 = exceptional answer worthy of an Oscar, Grammy Award, or Nobel Prize (so far only 1 of 720 replies has obtained it)
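
As a minimal sketch of how such a rubric could be applied in code: the mapping below mirrors the scale above, but summing the scores into a model total is an assumption, since the aggregation method is not specified here.

```python
# Rubric from the scale above; the aggregation below is an assumed example.
RUBRIC = {
    0: "technical failure",
    1: "bad answer",
    2: "too many flaws or mistakes",
    3: "fulfils all requests in an adequate way",
    4: "great answer",
    5: "outstanding",
    6: "exceptional answer",
}

def model_total(scores: list[int]) -> int:
    """Sum per-question scores; a double-scored question contributes two entries."""
    assert all(s in RUBRIC for s in scores)
    return sum(scores)

print(model_total([3, 4, 2, 5]))  # -> 14
```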

Other interesting benchmarks