Adding Evaluation Results #3
opened by leaderboard-pr-bot

README.md CHANGED
@@ -36,3 +36,17 @@ Input prompt example:
 The input ends with the `<|assistant|>` token to signal that the model should
 start generating the assistant reply.

+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jordiclive__Llama-2-70b-oasst-1-200)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 57.11                     |
+| ARC (25-shot)         | 67.66                     |
+| HellaSwag (10-shot)   | 87.24                     |
+| MMLU (5-shot)         | 69.95                     |
+| TruthfulQA (0-shot)   | 51.28                     |
+| Winogrande (5-shot)   | 84.14                     |
+| GSM8K (5-shot)        | 32.75                     |
+| DROP (3-shot)         | 6.73                      |
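The per-example records behind these summary numbers live in the linked details dataset and can be pulled with the `datasets` library. The sketch below is only illustrative: the config names and the `latest` split are assumptions about how the details repo is organized, so check the dataset page for the exact names before relying on them.

```python
# Minimal sketch: inspect the per-task details behind the summary table above.
# Assumption: the details repo exposes one config per benchmark/few-shot
# setting and a "latest" split; verify both on the dataset page.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_jordiclive__Llama-2-70b-oasst-1-200"

# One config per benchmark / few-shot setting (ARC, HellaSwag, MMLU, ...).
configs = get_dataset_config_names(repo)
print(configs)

# Load the per-example records for the first listed benchmark and peek at a row.
details = load_dataset(repo, configs[0], split="latest")
print(details[0])
```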