Adding Evaluation Results #4
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -108,3 +108,17 @@ The following are benchmarks we checked for contamination for:
  - MMLU

  - GPT4All
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Capybara-7B)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 49.94 |
+ | ARC (25-shot)        | 55.29 |
+ | HellaSwag (10-shot)  | 80.73 |
+ | MMLU (5-shot)        | 48.72 |
+ | TruthfulQA (0-shot)  | 51.13 |
+ | Winogrande (5-shot)  | 73.32 |
+ | GSM8K (5-shot)       | 6.97  |
+ | DROP (3-shot)        | 33.44 |
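As a quick consistency check on the table above: the Avg. value matches the unweighted mean of the seven per-benchmark scores. A minimal Python sketch (not part of this PR's diff, just a verification of the reported numbers):

```python
# Sanity check: the Avg. row equals the unweighted mean of the
# seven benchmark scores reported in the table above.
scores = {
    "ARC (25-shot)": 55.29,
    "HellaSwag (10-shot)": 80.73,
    "MMLU (5-shot)": 48.72,
    "TruthfulQA (0-shot)": 51.13,
    "Winogrande (5-shot)": 73.32,
    "GSM8K (5-shot)": 6.97,
    "DROP (3-shot)": 33.44,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # prints 49.94, matching the Avg. row
```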