Adding the Open Portuguese LLM Leaderboard Evaluation Results

#1
Files changed (1)
  1. README.md +165 -2
README.md CHANGED
@@ -1,7 +1,154 @@
  ---
- library_name: transformers
  license: apache-2.0
+ library_name: transformers
  pipeline_tag: text-generation
+ model-index:
+ - name: internlmbode-7b
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: ENEM Challenge (No Images)
+       type: eduagarcia/enem_challenge
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 60.18
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BLUEX (No Images)
+       type: eduagarcia-temp/BLUEX_without_images
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 50.07
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: OAB Exams
+       type: eduagarcia/oab_exams
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 40.27
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 RTE
+       type: assin2
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 90.74
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 STS
+       type: eduagarcia/portuguese_benchmark
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: pearson
+       value: 81.74
+       name: pearson
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: FaQuAD NLI
+       type: ruanchaves/faquad-nli
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 75.39
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HateBR Binary
+       type: ruanchaves/hatebr
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 87.93
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: PT Hate Speech Binary
+       type: hate_speech_portuguese
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 67.51
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: tweetSentBR
+       type: eduagarcia-temp/tweetsentbr
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 62.88
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlmbode-7b
+       name: Open Portuguese LLM Leaderboard
  ---

  # Model Card for Model ID
@@ -197,4 +344,20 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ## Model Card Contact

- [More Information Needed]
+ [More Information Needed]
+ # [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/internlmbode-7b)
+
+ |          Metric          |  Value  |
+ |--------------------------|---------|
+ |Average                   |**68.52**|
+ |ENEM Challenge (No Images)|    60.18|
+ |BLUEX (No Images)         |    50.07|
+ |OAB Exams                 |    40.27|
+ |Assin2 RTE                |    90.74|
+ |Assin2 STS                |    81.74|
+ |FaQuAD NLI                |    75.39|
+ |HateBR Binary             |    87.93|
+ |PT Hate Speech Binary     |    67.51|
+ |tweetSentBR               |    62.88|
+
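For reference, a minimal sketch (not part of the PR) of how the `model-index` block added above can be read back programmatically once the PR is merged. It assumes `huggingface_hub` is installed and uses its `ModelCard` / `EvalResult` helpers; the repo id `recogna-nlp/internlmbode-7b` is taken from the diff, and the mean of the nine per-task scores matches the reported Average of 68.52.

```python
# Minimal sketch, assuming the PR above is merged into recogna-nlp/internlmbode-7b
# and that huggingface_hub is installed.
from huggingface_hub import ModelCard

card = ModelCard.load("recogna-nlp/internlmbode-7b")

# ModelCardData exposes each model-index entry as an EvalResult
# (dataset_name, metric_type, metric_value, source_url, ...).
scores = {res.dataset_name: res.metric_value for res in card.data.eval_results}
for dataset, value in scores.items():
    print(f"{dataset:<30} {value:6.2f}")

# The reported leaderboard "Average" agrees with the plain mean of the nine scores.
print(f"{'Average':<30} {sum(scores.values()) / len(scores):6.2f}")  # -> 68.52
```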