---
license: other
---

This model is a merge of LLaMA-13B and the SuperCOT LoRA:

`huggyllama/llama-13b` + `kaiokendev/SuperCOT-LoRA/13b/gpu/cutoff-2048`
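
For reference, a minimal sketch of how a merge like this is typically produced with the PEFT library. This is not the author's documented procedure; the `subfolder` layout and the save path are assumptions based on the component paths listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (huggyllama/llama-13b).
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-13b",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the SuperCOT LoRA adapter; the subfolder mirrors the
# 13b/gpu/cutoff-2048 variant referenced above (assumed repo layout).
model = PeftModel.from_pretrained(
    base,
    "kaiokendev/SuperCOT-LoRA",
    subfolder="13b/gpu/cutoff-2048",
)

# Fold the LoRA weights into the base weights to get a standalone model.
merged = model.merge_and_unload()

# Save the merged model and tokenizer (output path is illustrative).
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-13b")
merged.save_pretrained("llama-13b-supercot")
tokenizer.save_pretrained("llama-13b-supercot")
```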

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 48.22 |
| ARC (25-shot)       | 56.06 |
| HellaSwag (10-shot) | 81.71 |
| MMLU (5-shot)       | 45.36 |
| TruthfulQA (0-shot) | 48.55 |
| Winogrande (5-shot) | 75.77 |
| GSM8K (5-shot)      | 7.2   |
| DROP (3-shot)       | 22.92 |