---
license: mit
library_name: transformers
model-index:
- name: Arcanum-12b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 29.07
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 31.88
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 10.27
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.4
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.53
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.74
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b
      name: Open LLM Leaderboard
---
# Arcanum-12b 🧙‍♂️

Arcanum-12b is a merged large language model created by combining TheDrummer/Rocinante-12B-v1.1 and MarinaraSpaghetti/NemoMix-Unleashed-12B using the TIES merging method.
## Model Details 📋
- Developed by: Xclbr7
- Model type: Causal Language Model
- Language(s): English (primarily), may support other languages
- License: MIT
- Repository: https://huggingface.co/Xclbr7/Arcanum-12b
## Model Architecture 🏗️
- Base model: MarinaraSpaghetti/NemoMix-Unleashed-12B
- Parameter count: ~12 billion
- Architecture specifics: Transformer-based language model
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Xclbr7/Arcanum-12b).
| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.48 |
| IFEval (0-Shot)     | 29.07 |
| BBH (3-Shot)        | 31.88 |
| MATH Lvl 5 (4-Shot) | 10.27 |
| GPQA (0-shot)       |  9.40 |
| MuSR (0-shot)       | 13.53 |
| MMLU-PRO (5-shot)   | 28.74 |
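The reported average is simply the unweighted arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Benchmark scores copied from the table above
scores = {
    "IFEval (0-Shot)": 29.07,
    "BBH (3-Shot)": 31.88,
    "MATH Lvl 5 (4-Shot)": 10.27,
    "GPQA (0-shot)": 9.40,
    "MuSR (0-shot)": 13.53,
    "MMLU-PRO (5-shot)": 28.74,
}

# The "Avg." row is the unweighted mean of the six scores
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 20.48
```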
## Training & Merging 🔄
Arcanum-12b was created by merging two existing 12B models:
**TheDrummer/Rocinante-12B-v1.1**
- Density parameters: [1, 0.8, 0.6]
- Weight: 0.7

**MarinaraSpaghetti/NemoMix-Unleashed-12B**
- Density parameters: [0.5, 0.7, 0.9]
- Weight: 0.8
Merging method: TIES

Additional parameters:
- Normalization: True
- Int8 mask: True
- Data type: float16
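The parameters above correspond to a TIES merge as configured in a tool such as mergekit. A hypothetical config reconstructing them might look like the following (this is a sketch inferred from the listed values, not the author's published config; the base model is taken from the Model Architecture section):

```yaml
# Hypothetical mergekit config inferred from the parameters above
models:
  - model: TheDrummer/Rocinante-12B-v1.1
    parameters:
      density: [1, 0.8, 0.6]
      weight: 0.7
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
    parameters:
      density: [0.5, 0.7, 0.9]
      weight: 0.8
merge_method: ties
base_model: MarinaraSpaghetti/NemoMix-Unleashed-12B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

In a TIES merge, each density value controls what fraction of each model's delta weights is kept (the rest are trimmed as redundant), and the weight scales each model's contribution before sign election and averaging.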
## Intended Use 🎯
Conversation with different personas.
## Ethical Considerations 🤔
As a merged model based on existing language models, Arcanum-12b may inherit biases and limitations from its parent models. Users should be aware of potential biases in generated content and use the model responsibly.
## Acknowledgments 🙏
We acknowledge the contributions of the original model creators:
- TheDrummer for Rocinante-12B-v1.1
- MarinaraSpaghetti for NemoMix-Unleashed-12B
Their work formed the foundation for Arcanum-12b.