---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
- unsloth/Meta-Llama-3.1-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- THUDM/LongWriter-llama3.1-8b
- ResplendentAI/Smarts_Llama3
- djuna/L3.1-Suze-Vume-2-calc
- djuna/L3.1-ForStHS
- Blackroot/Llama-3-8B-Abomination-LORA
model-index:
- name: L3.1-Purosani-2-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 49.88
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 31.39
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 10.12
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.82
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.3
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 30.57
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B
      name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
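Because the card declares `library_name: transformers`, the result can be loaded like any other Llama-style checkpoint. The snippet below is a minimal usage sketch, assuming the weights are published under the repo id `djuna/L3.1-Purosani-2-8B` referenced in the metadata; the dtype and device settings are illustrative, not prescriptive.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "djuna/L3.1-Purosani-2-8B"  # repo id taken from the metadata above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge itself was performed in bfloat16
    device_map="auto",
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```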
## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) as the base.
### Models Merged
The following models were included in the merge:
- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
- arcee-ai/Llama-3.1-SuperNova-Lite + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- THUDM/LongWriter-llama3.1-8b + ResplendentAI/Smarts_Llama3
- djuna/L3.1-Suze-Vume-2-calc
- djuna/L3.1-ForStHS + Blackroot/Llama-3-8B-Abomination-LORA
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  epsilon: 0.1
  lambda: 1.0
  int8_mask: true
  normalize: true
base_model: unsloth/Meta-Llama-3.1-8B
models:
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1
      density: 0.5
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
    parameters:
      weight: 1
      density: 0.45
  - model: djuna/L3.1-Suze-Vume-2-calc
    parameters:
      weight: 1
      density: 0.45
  - model: THUDM/LongWriter-llama3.1-8b+ResplendentAI/Smarts_Llama3
    parameters:
      weight: 1
      density: 0.55
  - model: djuna/L3.1-ForStHS+Blackroot/Llama-3-8B-Abomination-LORA
    parameters:
      weight: 1
      density: 0.5
```
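To reproduce the merge locally, the configuration above can be saved to a file and fed to mergekit's `mergekit-yaml` command line tool. The sketch below drives that tool from Python; it assumes mergekit is installed (`pip install mergekit`), that the YAML has been saved as `purosani-2-config.yaml` (a placeholder name), and that the `--cuda` flag and output directory suit your hardware. Treat it as a starting point, not the exact invocation used for this model.

```python
import subprocess

# Assumes the YAML block above has been saved as "purosani-2-config.yaml".
subprocess.run(
    [
        "mergekit-yaml",            # mergekit's CLI entry point
        "purosani-2-config.yaml",   # path to the della_linear config above
        "./L3.1-Purosani-2-8B",     # output directory for the merged weights
        "--cuda",                   # optional: use a GPU for the merge math
    ],
    check=True,
)
```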
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/L3.1-Purosani-2-8B).
| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.85 |
| IFEval (0-Shot)     | 49.88 |
| BBH (3-Shot)        | 31.39 |
| MATH Lvl 5 (4-Shot) | 10.12 |
| GPQA (0-shot)       |  6.82 |
| MuSR (0-shot)       |  8.30 |
| MMLU-PRO (5-shot)   | 30.57 |
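The leaderboard computes these scores with EleutherAI's lm-evaluation-harness. The sketch below is one way to re-run a comparable evaluation locally; the `leaderboard` task-group name and harness version are assumptions on my part (check `lm-eval --tasks list` on your install), so numbers may not match the leaderboard's own pipeline exactly.

```python
# Requires: pip install lm-eval  (EleutherAI lm-evaluation-harness)
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=djuna/L3.1-Purosani-2-8B,dtype=bfloat16",
    tasks=["leaderboard"],  # assumed group covering IFEval, BBH, MATH, GPQA, MuSR, MMLU-Pro
    batch_size="auto",
)
print(results["results"])
```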