Files changed (1)
  1. README.md +18 -5
README.md CHANGED
@@ -1,12 +1,9 @@
  ---
  license: other
- base_model: meta-llama/Meta-Llama-3-8B
  tags:
  - generated_from_trainer
  - axolotl
- model-index:
- - name: out
-   results: []
+ base_model: meta-llama/Meta-Llama-3-8B
  datasets:
  - cognitivecomputations/Dolphin-2.9
  - teknium/OpenHermes-2.5
@@ -18,6 +15,9 @@ datasets:
  - abacusai/SystemChat-1.1
  - Locutusque/function-calling-chatml
  - internlm/Agent-FLAN
+ model-index:
+ - name: out
+   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -235,4 +235,17 @@ The following hyperparameters were used during training:
  - Transformers 4.40.0
  - Pytorch 2.2.2+cu121
  - Datasets 2.18.0
- - Tokenizers 0.19.1
+ - Tokenizers 0.19.1
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__dolphin-2.9-llama3-8b)
+
+ | Metric            |Value|
+ |-------------------|----:|
+ |Avg.               |18.62|
+ |IFEval (0-Shot)    |38.50|
+ |BBH (3-Shot)       |27.86|
+ |MATH Lvl 5 (4-Shot)| 5.06|
+ |GPQA (0-shot)      | 4.92|
+ |Winogrande (5-shot)|13.79|
+ |MMLU-PRO (5-shot)  |19.68|
+
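As a quick way to check the front-matter changes above after they are merged, here is a minimal sketch that reads the updated card metadata; it assumes the `huggingface_hub` package and its `ModelCard`/`ModelCardData` attribute names, and infers the repo id from the leaderboard details link, so treat those details as assumptions rather than a definitive reference.

```python
# Minimal sketch: load the model card (README.md) and inspect the YAML
# front matter that this diff rearranges (base_model, model-index, datasets).
# Assumption: ModelCard.load and the card.data attributes below exist in the
# installed huggingface_hub version; repo id is inferred from the details link.
from huggingface_hub import ModelCard

card = ModelCard.load("cognitivecomputations/dolphin-2.9-llama3-8b")

print(card.data.base_model)  # expected: "meta-llama/Meta-Llama-3-8B"
print(card.data.model_name)  # expected: "out", from the added model-index block
print(card.data.datasets)    # training datasets listed in the front matter
```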