---
license: other
---

Thank you to chargoddard for the original 22b model and merge script: https://huggingface.co/chargoddard/llama2-22b

This is llama 2 13b chat, with https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored as the donor model.

This is a highly experimental model; it has barely been tested and isn't necessarily much smarter than stock 13b, but it produces a different variety of responses.

Merging took around 2 hours with 32 GB of RAM and about 115 GB of swap used.

Note that while the donor model is uncensored, the merge will still exhibit behavior similar to the base model. I will probably attempt future merges using less censored base models.
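Once downloaded, the merged checkpoint should load like any llama-2-style model via the Hugging Face transformers library. A minimal sketch, assuming a standard causal-LM layout; the repo id below is a placeholder, since this card does not state the model's actual repository name:

```python
def load_model(repo_id: str = "your-username/llama2-22b-merge"):
    """Load the tokenizer and merged model from the Hub.

    repo_id is a hypothetical placeholder; substitute the real
    repository name of this merge. device_map="auto" lets
    accelerate shard the 22b weights across available devices.
    """
    # Import lazily so this module can be inspected without
    # transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model
```

Note that a 22b model in fp16 needs roughly 44 GB of memory, so multi-GPU sharding or quantization will usually be required for inference.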

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 46.83 |
| ARC (25-shot)       | 56.23 |
| HellaSwag (10-shot) | 80.39 |
| MMLU (5-shot)       | 53.62 |
| TruthfulQA (0-shot) | 45.76 |
| Winogrande (5-shot) | 70.24 |
| GSM8K (5-shot)      | 11.14 |
| DROP (3-shot)       | 10.4  |