leaderboard-pr-bot committed
Commit 82bb329
1 Parent(s): 1edacb6

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -96,3 +96,17 @@ New Sota: Puffin - 69.9 (+1.1)
  Puffin 13B supplants Hermes-2 for the #1 spot in Arc-E, HellaSwag and Winogrande!
 
  Puffin also perfectly ties with Hermes in PIQA, however Hermes-2 still excels in much of Big Bench and AGIEval, so it's highly recommended you give it a try as well!
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Puffin-70B)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 56.58 |
+ | ARC (25-shot)        | 67.41 |
+ | HellaSwag (10-shot)  | 87.37 |
+ | MMLU (5-shot)        | 69.77 |
+ | TruthfulQA (0-shot)  | 46.77 |
+ | Winogrande (5-shot)  | 83.9  |
+ | GSM8K (5-shot)       | 34.27 |
+ | DROP (3-shot)        | 6.6   |
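
The aggregate scores added by this diff are backed by the per-task details dataset linked above. As a rough illustration (not part of this PR), the sketch below shows one way to inspect that dataset with the `datasets` library; the benchmark config and split names are discovered at runtime rather than assumed.

```python
# Minimal sketch (not part of this PR): inspecting the detailed results
# dataset referenced in the diff above. Assumes the `datasets` library is
# installed; config/split names are listed at runtime, not hard-coded.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_NousResearch__Nous-Puffin-70B"

# Each benchmark run is published as its own dataset config.
configs = get_dataset_config_names(repo)
print(configs)

# Load one config and look at its splits and first record.
details = load_dataset(repo, configs[0])
print(details)                      # shows the available splits
first_split = next(iter(details))
print(details[first_split][0])      # one per-example evaluation record
```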