leafspark and leaderboard-pr-bot committed
Commit a355780
1 parent: 7e381ea

Adding Evaluation Results (#1)


- Adding Evaluation Results (851556518c302703c1b142bd4c482480b5089d36)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1): README.md (+113, -4)
README.md CHANGED
@@ -1,8 +1,4 @@
  ---
- license: llama3.1
- base_model:
- - meta-llama/Meta-Llama-3.1-8B-Instruct
- library_name: transformers
  language:
  - en
  - de
@@ -12,16 +8,115 @@ language:
  - hi
  - es
  - th
+ license: llama3.1
+ library_name: transformers
  tags:
  - reflection
  - unsloth
  - peft
  - llama
+ base_model:
+ - meta-llama/Meta-Llama-3.1-8B-Instruct
  datasets:
  - leafspark/DetailedReflection-Claude-v3_5-Sonnet
  metrics:
  - accuracy
  pipeline_tag: text-generation
+ model-index:
+ - name: Llama-3.1-8B-MultiReflection-Instruct
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 71.25
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 28.45
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 12.54
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 5.7
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 8.52
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 30.27
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
+       name: Open LLM Leaderboard
  ---

  # Llama-3.1-8B-MultiReflection-Instruct
@@ -292,3 +387,17 @@ Now, it's your turn again. Can you think of any real-world applications of this
  Remember, this is a proof, not a mathematical exercise. Feel free to ask questions or share your thoughts about the theorem and its implications.
  </output>
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_leafspark__Llama-3.1-8B-MultiReflection-Instruct)
+
+ | Metric |Value|
+ |-------------------|----:|
+ |Avg. |26.12|
+ |IFEval (0-Shot) |71.25|
+ |BBH (3-Shot) |28.45|
+ |MATH Lvl 5 (4-Shot)|12.54|
+ |GPQA (0-shot) | 5.70|
+ |MuSR (0-shot) | 8.52|
+ |MMLU-PRO (5-shot) |30.27|
+
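
The model-index block added by this commit follows the standard Hugging Face model card metadata schema, so the scores above can also be read programmatically. A minimal sketch, assuming the `huggingface_hub` package; `ModelCard.load` and the parsed `eval_results` attribute are part of its public API, though field details may vary by version:

```python
# Sketch: read the evaluation results declared in this model card's
# model-index block. Requires `pip install huggingface_hub`.
from huggingface_hub import ModelCard

card = ModelCard.load("leafspark/Llama-3.1-8B-MultiReflection-Instruct")

# huggingface_hub parses the YAML `model-index` section into a list of
# EvalResult objects, one per (dataset, metric) pair.
for result in card.data.eval_results:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```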