---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
base_model:
- argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
- sequelbox/Llama3.1-8B-PlumCode
- sequelbox/Llama3.1-8B-PlumMath
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
---
# L3.1-Moe-4x8B-v0.1
![cover](https://repository-images.githubusercontent.com/877091879/8e1b7595-1d75-4787-8e44-0a0218cdbb70)
This model is a Mixture of Experts (MoE) built with [mergekit](https://github.com/arcee-ai/mergekit)'s `mergekit-moe` script. It combines the following base models:
- [argilla-warehouse/Llama-3.1-8B-MagPie-Ultra](https://huggingface.co/argilla-warehouse/Llama-3.1-8B-MagPie-Ultra)
- [sequelbox/Llama3.1-8B-PlumCode](https://huggingface.co/sequelbox/Llama3.1-8B-PlumCode)
- [sequelbox/Llama3.1-8B-PlumMath](https://huggingface.co/sequelbox/Llama3.1-8B-PlumMath)
- [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2)
Heavily inspired by [mlabonne/Beyonder-4x7B-v3](https://huggingface.co/mlabonne/Beyonder-4x7B-v3).
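At inference time, a MoE model routes each token to a subset of experts based on its hidden state. The sketch below is a toy NumPy illustration of that top-k gating idea, not mergekit's actual implementation; the dimensions, gate vectors, and `top_k` value are all made up:

```python
import numpy as np

def route_tokens(hidden, gate_vectors, top_k=2):
    """Toy top-k MoE router: score each token's hidden state against
    per-expert gate vectors and keep the top_k experts per token."""
    # scores[t, e] = similarity between token t and expert e's gate vector
    scores = hidden @ gate_vectors.T
    # indices of the top_k highest-scoring experts for each token
    top = np.argsort(scores, axis=-1)[:, -top_k:]
    # softmax over only the selected scores gives the mixing weights
    sel = np.take_along_axis(scores, top, axis=-1)
    weights = np.exp(sel - sel.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return top, weights

rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 8))   # 4 tokens, hidden size 8
gates = rng.normal(size=(4, 8))    # 4 experts (one per source model)
experts, weights = route_tokens(hidden, gates)
print(experts.shape)               # (4, 2): two experts chosen per token
```

With `gate_mode: hidden` (used in the configuration below), mergekit derives each expert's gate vector from the hidden states of that expert's positive prompts, so prompts like "code" or "math" steer matching inputs to the corresponding expert.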
## Quantized models
> TODO
## Configuration
```yaml
base_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "I want"
  - source_model: sequelbox/Llama3.1-8B-PlumCode
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: sequelbox/Llama3.1-8B-PlumMath
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
  - source_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
```
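To reproduce the merge, the configuration above can be passed to the `mergekit-moe` CLI (a sketch, assuming mergekit is installed via `pip install mergekit`, the YAML is saved as `config.yaml`, and the output directory name is arbitrary):

```shell
# Build the MoE from the four source models listed in config.yaml
mergekit-moe config.yaml ./L3.1-Moe-4x8B-v0.1
```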
## License
[Apache-2.0](LICENSE)