leaderboard-pr-bot committed on
Commit 137abb4 (1 parent: 14b0935)

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
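
Once merged, the results land in the model card's YAML front matter as `model-index` metadata. As a rough illustration (not part of this PR), the sketch below shows one way to read those entries back and recompute the reported average; the repo id is taken from the leaderboard query URL in the diff, and the parsing assumes the standard `---`-delimited front matter.

```python
# Sketch (not part of this PR): read the model-index metadata added by this PR
# back out of the card's YAML front matter and recompute the average score.
# The repo id is assumed from the leaderboard query URL in the diff below.
import yaml
from huggingface_hub import hf_hub_download

repo_id = "ryan0712/llama-3-8b-DUS-initialized"
readme_path = hf_hub_download(repo_id=repo_id, filename="README.md")

with open(readme_path, encoding="utf-8") as f:
    text = f.read()

# The YAML front matter sits between the first two "---" delimiters.
front_matter = text.split("---")[1]
card_data = yaml.safe_load(front_matter)

scores = []
for result in card_data["model-index"][0]["results"]:
    dataset = result["dataset"]["name"]
    metric = result["metrics"][0]
    scores.append(metric["value"])
    print(f"{dataset}: {metric['type']} = {metric['value']}")

print(f"Avg. = {sum(scores) / len(scores):.2f}")  # 51.00, matching the table appended by this PR
```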

Files changed (1)
  1. README.md +118 -2
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+license: llama3
 tags:
 - merge
 - mergekit
@@ -7,7 +8,109 @@ tags:
 base_model:
 - NousResearch/Meta-Llama-3-8B
 - NousResearch/Meta-Llama-3-8B
-license: llama3
+model-index:
+- name: llama-3-8b-DUS-initialized
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 49.57
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryan0712/llama-3-8b-DUS-initialized
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 75.78
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryan0712/llama-3-8b-DUS-initialized
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 53.68
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryan0712/llama-3-8b-DUS-initialized
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 41.72
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryan0712/llama-3-8b-DUS-initialized
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 75.93
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryan0712/llama-3-8b-DUS-initialized
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 9.33
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ryan0712/llama-3-8b-DUS-initialized
+      name: Open LLM Leaderboard
 ---
 
 # llama-3-8b-DUS-initialized
@@ -53,4 +156,17 @@ pipeline = transformers.pipeline(
 
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ryan0712__llama-3-8b-DUS-initialized)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |51.00|
+|AI2 Reasoning Challenge (25-Shot)|49.57|
+|HellaSwag (10-Shot) |75.78|
+|MMLU (5-Shot) |53.68|
+|TruthfulQA (0-shot) |41.72|
+|Winogrande (5-shot) |75.93|
+|GSM8k (5-shot) | 9.33|
+
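
For context, the diff shows only the tail of the card's usage snippet (the hunk header references `pipeline = transformers.pipeline(`). Below is a self-contained sketch of how that snippet is typically completed; the model id and the prompt are assumptions, while the generation call and its arguments are copied from the card.

```python
# Sketch of the usage snippet whose tail appears in the diff above.
# The model id and the prompt are assumptions; the pipeline construction follows
# the hunk context, and the generation arguments are taken from the card itself.
import torch
import transformers

model_id = "ryan0712/llama-3-8b-DUS-initialized"  # assumed from the leaderboard query URL

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

prompt = "The key to a good model merge is"  # placeholder; the card's own prompt is not shown in the diff

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```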