---
license: apache-2.0
tags:
- merge
- mergekit
- NousResearch/Meta-Llama-3-8B-Instruct
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
model-index:
- name: Aura-llama
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 58.02
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 77.82
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 51.94
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 52.01
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheSkullery/Aura-llama
      name: Open LLM Leaderboard
---

<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Aura-llama-3 Data Card</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; }
.container { width: 80%; max-width: 800px; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); }
.header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); }
.update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; }
.update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; }
.info img { width: 100%; border-radius: 10px; margin-bottom: 15px; }
a { color: #88C0D0; text-decoration: none; }
a:hover { color: #A3BE8C; }
pre { background-color: rgba(255, 255, 255, 0.05); padding: 10px; border-radius: 5px; overflow-x: auto; }
code { font-family: 'Courier New', monospace; color: #A3BE8C; } </style> </head> <body> <div class="container">
<div class="header">
<h1>Aura-llama-3</h1> </div> <div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/QYpWMEXTe0_X3A7HyeBm0.webp" alt="Aura-llama image">
<p>Now that the cute anime girl has your attention.</p>
<p>UPDATE: Model has been fixed</p>
<p>Aura-llama uses the depth up-scaling (DUS) methodology presented by SOLAR for scaling LLMs, which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers; in the future I plan to continue training the model.</p>
<p>Aura-llama is a passthrough self-merge of the following model, used to create a deeper base model to work from (a short sketch of the resulting layer stack follows this list):</p>
<ul>
<li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a> (merged with itself)</li>
</ul>
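<p>A minimal conceptual sketch of what the passthrough merge does to the layer stack. The layer ranges are taken directly from the 🧩 Configuration section further down; the variable names and printout are purely illustrative.</p>
<pre><code># Conceptual sketch of depth up-scaling (DUS) via a passthrough merge:
# overlapping slices of the same 32-layer model are concatenated into a deeper stack.
ranges = [(0, 12), (8, 20), (16, 28), (24, 32)]  # layer_range entries from the mergekit config below

# mergekit layer_range values are half-open [start, end), so each slice contributes end - start layers
upscaled_layers = [layer for start, end in ranges for layer in range(start, end)]

print(len(upscaled_layers))  # 44 layers in the merged model, up from 32 in the source
</code></pre>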
</div>
<div class="update-section">
<h2>Merged Evals (Has Not Been Finetuned):</h2>
<p>Aura-llama</p>
<ul>
<li>Avg: 63.13</li>
<li>ARC: 58.02</li>
<li>HellaSwag: 77.82</li>
<li>MMLU: 65.61</li>
<li>T-QA: 51.94</li>
<li>Winogrande: 73.40</li>
<li>GSM8K: 52.01</li>
</ul>
</div>
<div class="update-section">
<h2>🧩 Configuration</h2>
<pre><code>dtype: float16
merge_method: passthrough
slices:
  - sources:
      - layer_range: [0, 12]
        model: NousResearch/Meta-Llama-3-8B-Instruct
  - sources:
      - layer_range: [8, 20]
        model: NousResearch/Meta-Llama-3-8B-Instruct
  - sources:
      - layer_range: [16, 28]
        model: NousResearch/Meta-Llama-3-8B-Instruct
  - sources:
      - layer_range: [24, 32]
        model: NousResearch/Meta-Llama-3-8B-Instruct
</code></pre>
</div>
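<div class="update-section">
<h2>Example Usage (Illustrative Sketch)</h2>
<p>A minimal sketch of loading the merged model with the Hugging Face <code>transformers</code> chat-template API. The repo id <code>TheSkullery/Aura-llama</code> is assumed from the leaderboard link on this card, and the prompt and generation settings are placeholders rather than recommended values.</p>
<pre><code># Minimal sketch, assuming the repo id from the leaderboard link and a recent transformers + accelerate install
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheSkullery/Aura-llama"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Llama-3-Instruct chat formatting via the tokenizer's bundled chat template
messages = [{"role": "user", "content": "Explain depth up-scaling in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
</code></pre>
</div>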
</div>
</body>
</html>

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheSkullery__Aura-llama)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 63.13 |
| AI2 Reasoning Challenge (25-Shot) | 58.02 |
| HellaSwag (10-Shot)               | 77.82 |
| MMLU (5-Shot)                     | 65.61 |
| TruthfulQA (0-shot)               | 51.94 |
| Winogrande (5-shot)               | 73.40 |
| GSM8k (5-shot)                    | 52.01 |
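
The reported average is simply the unweighted mean of the six benchmark scores above; a quick arithmetic check:

```python
# Sanity check: Avg. is the unweighted mean of the six benchmark scores
scores = [58.02, 77.82, 65.61, 51.94, 73.40, 52.01]
print(round(sum(scores) / len(scores), 2))  # 63.13
```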