Commit c72e69b by Crystalcareai
1 Parent(s): 1e6a990
Update README.md
README.md CHANGED
@@ -20,7 +20,7 @@ The model was trained using a state-of-the-art distillation pipeline and an inst
 Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
 
 # Evaluations
-
+Here are our internal benchmarks using the main branch of lm evaluation harness:
 
 | Benchmark | SuperNova-Lite | Llama-3.1-8b-Instruct |
 |-------------|----------------|----------------------|
@@ -30,7 +30,4 @@ We will be submitting this model to the OpenLLM Leaderboard for a more conclusiv
 | BBH | 51.1 | 50.6 |
 | GPQA | 31.2 | 29.02 |
 
-The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite/blob/main/eval.
-
-# note
-This readme will be edited regularly on September 10, 2024 (the day of release). After the final readme is in place we will remove this note.
+The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite/blob/main/eval.
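The diff points readers to the repository's eval.sh for the exact evaluation setup. As rough orientation only, here is a minimal sketch of how such a run might look with the main branch of EleutherAI's lm-evaluation-harness; the task names, dtype, and batch settings below are assumptions and are not taken from eval.sh.

```bash
# Hypothetical sketch, NOT the repository's eval.sh: install the main branch of
# lm-evaluation-harness and evaluate the model on BBH and GPQA.
# Task names, dtype, and batch size are assumptions; check eval.sh for the real settings.
pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git

lm_eval \
  --model hf \
  --model_args pretrained=arcee-ai/Llama-3.1-SuperNova-Lite,dtype=bfloat16 \
  --tasks bbh,gpqa \
  --batch_size auto \
  --output_path results/supernova-lite
```

The harness writes a JSON summary of per-task scores under the given --output_path, which can then be compared against the table above.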