leaderboard-pr-bot committed
Commit 04d19da
1 parent: 36af7ec

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1):
  1. README.md (+110 −2)
README.md CHANGED

@@ -12,7 +12,6 @@ tags:
 - merge
 base_model:
 - mistralai/Mistral-Small-Instruct-2409
-pipeline_tag: text-generation
 datasets:
 - roleplay4fun/aesir-v1.1
 - kalomaze/Opus_Instruct_3k
@@ -20,6 +19,102 @@ datasets:
 - Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
 - Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
 - SkunkworksAI/reasoning-0.01
+pipeline_tag: text-generation
+model-index:
+- name: ChatWaifu_22B_v2.0_preview
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 67.45
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 45.49
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 16.31
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 8.72
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.53
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 33.2
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
+      name: Open LLM Leaderboard
 ---
 
 # Model Card for Model ID
@@ -160,4 +255,17 @@ By sharing this model, I hope to contribute to the research efforts of our community
     url = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0_preview },
     publisher = { Hugging Face }
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_spow12__ChatWaifu_22B_v2.0_preview)
+
+|Metric             |Value|
+|-------------------|----:|
+|Avg.               |29.12|
+|IFEval (0-Shot)    |67.45|
+|BBH (3-Shot)       |45.49|
+|MATH Lvl 5 (4-Shot)|16.31|
+|GPQA (0-shot)      | 8.72|
+|MuSR (0-shot)      | 3.53|
+|MMLU-PRO (5-shot)  |33.20|
+
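As a sanity check, the `Avg.` row in the added table is the unweighted arithmetic mean of the six benchmark scores. A minimal sketch (the `scores` dict simply restates the values from the diff):

```python
# Benchmark scores added to the model card by this PR.
scores = {
    "IFEval (0-Shot)": 67.45,
    "BBH (3-Shot)": 45.49,
    "MATH Lvl 5 (4-Shot)": 16.31,
    "GPQA (0-shot)": 8.72,
    "MuSR (0-shot)": 3.53,
    "MMLU-PRO (5-shot)": 33.20,
}

# The leaderboard's "Avg." column is the plain mean, rounded to two places.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 29.12, matching the Avg. row in the table
```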