L3.1-Moe
https://github.com/moeru-ai/L3.1-Moe
This model is a Mixture of Experts (MoE) made with mergekit-moe. It uses the following base models:

- argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
- sequelbox/Llama3.1-8B-PlumCode
- sequelbox/Llama3.1-8B-PlumMath
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2

Heavily inspired by mlabonne/Beyonder-4x7B-v3.
```yaml
base_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "I want"
  - source_model: sequelbox/Llama3.1-8B-PlumCode
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: sequelbox/Llama3.1-8B-PlumMath
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
  - source_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
```
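With `gate_mode: hidden`, mergekit-moe initializes each expert's router weights from hidden-state representations of that expert's `positive_prompts`; at inference, the router scores each token's hidden state against these vectors and dispatches it to the top-scoring experts. A minimal numpy sketch of that routing step, with toy dimensions and random vectors standing in for the real prompt-derived gate rows:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16   # toy hidden size (the real Llama 3.1 8B hidden size is 4096)
EXPERTS = 4   # matches the four source models in the config above

# Stand-ins for the per-expert gate vectors that mergekit-moe derives
# from each expert's positive_prompts (here: random toy vectors).
gate = rng.normal(size=(EXPERTS, HIDDEN))  # one gate row per expert

def route(h, top_k=2):
    """Return (expert indices, normalized weights) for a token hidden state h."""
    logits = gate @ h                        # similarity to each expert's gate row
    top = np.argsort(logits)[-top_k:][::-1]  # indices of the top_k experts, best first
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()                  # softmax over the selected experts only

h = rng.normal(size=HIDDEN)                  # a toy token hidden state
experts, weights = route(h)                  # the token's output mixes these experts
```

The config itself is consumed by the mergekit-moe CLI (e.g. `mergekit-moe config.yml ./output-dir`); `top_k=2` mirrors the usual Mixtral-style two-experts-per-token setting, though the exact value depends on the merge settings.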
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 19.15 |
| IFEval (0-shot) | 43.47 |
| BBH (3-shot) | 27.86 |
| MATH Lvl 5 (4-shot) | 11.10 |
| GPQA (0-shot) | 1.23 |
| MuSR (0-shot) | 3.98 |
| MMLU-PRO (5-shot) | 27.27 |
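The Avg. row checks out as the unweighted arithmetic mean of the six benchmark scores:

```python
# IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO (values from the table above)
scores = [43.47, 27.86, 11.10, 1.23, 3.98, 27.27]
avg = sum(scores) / len(scores)
print(round(avg, 2))  # → 19.15
```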