Commit 7b66b68
Parent: b46c066

Adding Evaluation Results (#1)

- Adding Evaluation Results (91c9ac6c6533c0afebb4aa7a129dbf78fc0b6c09)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1): README.md (+114 −6)
@@ -1,4 +1,8 @@
 ---
+language:
+- en
+license: llama3
+library_name: transformers
 tags:
 - merge
 - mergekit
@@ -8,11 +12,6 @@ tags:
 - rp
 - roleplay
 - role-play
-license: llama3
-language:
-- en
-library_name: transformers
-pipeline_tag: text-generation
 base_model:
 - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
 - bluuwhale/L3-SthenoMaidBlackroot-8B-V1
@@ -26,6 +25,102 @@ base_model:
 - aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
 - Nitral-AI/Hathor_Stable-v0.2-L3-8B
 - Sao10K/L3-8B-Stheno-v3.1
+pipeline_tag: text-generation
+model-index:
+- name: L3-Umbral-Mind-RP-v2.0-8B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 71.23
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 32.49
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 10.12
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.92
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 5.55
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 30.26
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
+      name: Open LLM Leaderboard
 ---
 
 <img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;">
@@ -224,4 +319,17 @@ models:
 merge_method: task_arithmetic
 base_model: Casual-Autopsy/Umbral-Mind-3
 dtype: bfloat16
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Casual-Autopsy__L3-Umbral-Mind-RP-v2.0-8B)
+
+|Metric             |Value|
+|-------------------|----:|
+|Avg.               |25.76|
+|IFEval (0-Shot)    |71.23|
+|BBH (3-Shot)       |32.49|
+|MATH Lvl 5 (4-Shot)|10.12|
+|GPQA (0-shot)      | 4.92|
+|MuSR (0-shot)      | 5.55|
+|MMLU-PRO (5-shot)  |30.26|
+
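For reference, the `Avg.` row in the added summary table is the unweighted mean of the six per-benchmark scores. A minimal sketch checking that arithmetic (the scores are copied from the table; plain two-decimal rounding is an assumption about how the leaderboard displays the value):

```python
# Per-benchmark scores from the model-index added in this commit.
scores = {
    "IFEval (0-Shot)": 71.23,
    "BBH (3-Shot)": 32.49,
    "MATH Lvl 5 (4-Shot)": 10.12,
    "GPQA (0-shot)": 4.92,
    "MuSR (0-shot)": 5.55,
    "MMLU-PRO (5-shot)": 30.26,
}

# Unweighted mean, rounded to two decimals as in the summary table.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # → 25.76, matching the Avg. row
```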