Tags: Text Generation · Transformers · PyTorch · llama · text-generation-inference · Inference Endpoints
leaderboard-pr-bot committed
Commit 5d142dd
Parent: 2e868c4

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
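
For maintainers who want to verify the change programmatically, here is a minimal sketch (not part of the bot) that downloads the current model card and checks whether the leaderboard section this PR adds is already present. It assumes the `huggingface_hub` package is installed and infers the repo id `WizardLM/WizardLM-70B-V1.0` from the details dataset linked in the diff below.

```python
# Minimal sketch (assumptions noted above): fetch the model card from the Hub
# and look for the section header that this PR adds.
from huggingface_hub import ModelCard

REPO_ID = "WizardLM/WizardLM-70B-V1.0"  # assumed from the details dataset name
SECTION_HEADER = "# [Open LLM Leaderboard Evaluation Results]"

card = ModelCard.load(REPO_ID)  # downloads and parses README.md
if SECTION_HEADER in card.text:
    print("Evaluation results section already present; this PR may be redundant.")
else:
    print("Evaluation results section not found; this PR would add it.")
```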

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -107,3 +107,17 @@ Despite this, we have still worked hard to obtain opening the weights of the model
 Our researchers have no authority to publicly release them without authorization.
 
 Thank you for your understanding.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardLM-70B-V1.0)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 57.17 |
+| ARC (25-shot)        | 65.44 |
+| HellaSwag (10-shot)  | 84.41 |
+| MMLU (5-shot)        | 64.05 |
+| TruthfulQA (0-shot)  | 54.81 |
+| Winogrande (5-shot)  | 80.82 |
+| GSM8K (5-shot)       | 17.97 |
+| DROP (3-shot)        | 32.71 |
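
As a quick sanity check on the table above, the reported "Avg." is consistent with the unweighted mean of the seven benchmark scores (treating "Avg." as a simple mean is an assumption about the leaderboard's aggregation; the numbers are copied verbatim from the table):

```python
# Recompute the "Avg." row as the unweighted mean of the seven benchmark scores.
scores = [65.44, 84.41, 64.05, 54.81, 80.82, 17.97, 32.71]
print(f"{sum(scores) / len(scores):.2f}")  # -> 57.17, matching the table
```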