kwaabot committed on
Commit c6e66ac
1 Parent(s): 3a7eb35

Update README.md

Files changed (1)
README.md +5 -2
README.md CHANGED
@@ -17,7 +17,7 @@ base_model:
 
  ![cover](https://repository-images.githubusercontent.com/877091879/8e1b7595-1d75-4787-8e44-0a0218cdbb70)
 
- This model is a Mixture of Experts (MoE) made with mergekit-moe. It uses the following base models:
+ This model is a Mixture of Experts (MoE) made with [mergekit-moe](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md). It uses the following base models:
 
  - [argilla-warehouse/Llama-3.1-8B-MagPie-Ultra](https://huggingface.co/argilla-warehouse/Llama-3.1-8B-MagPie-Ultra)
  - [sequelbox/Llama3.1-8B-PlumCode](https://huggingface.co/sequelbox/Llama3.1-8B-PlumCode)
@@ -28,7 +28,10 @@ Heavily inspired by [mlabonne/Beyonder-4x7B-v3](https://huggingface.co/mlabonne/
 
  ## Quantized models
 
- > TODO
+ ### GGUF by [mradermacher](https://huggingface.co/mradermacher)
+
+ - [mradermacher/L3.1-Moe-4x8B-v0.1-i1-GGUF](https://huggingface.co/mradermacher/L3.1-Moe-4x8B-v0.1-i1-GGUF)
+ - [mradermacher/L3.1-Moe-4x8B-v0.1-GGUF](https://huggingface.co/mradermacher/L3.1-Moe-4x8B-v0.1-GGUF)
 
  ## Configuration
 
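The changed paragraph above only names the merge tool and the expert models; the actual mergekit-moe config sits in the README's Configuration section below this hunk. As orientation, here is a minimal sketch of loading and prompting the merged MoE with transformers. The repo id, bf16 dtype, and generation settings are placeholder assumptions for illustration, not values taken from this card.

```python
# Minimal sketch: load the merged MoE and generate with a Llama 3.1 chat template.
# "your-namespace/L3.1-Moe-4x8B-v0.1" is a placeholder repo id, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/L3.1-Moe-4x8B-v0.1"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights, typical for Llama 3.1 merges
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```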
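For the GGUF quants linked in the second hunk, a minimal sketch of pulling one quant and chatting with it via llama-cpp-python. The filename glob and the Q4_K_M choice are assumptions about the usual naming in mradermacher's repos; check the repo's file list for the exact quant you want, and adjust `n_ctx` / `n_gpu_layers` to your hardware.

```python
# Minimal sketch: download a GGUF quant from the Hugging Face Hub and run a chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/L3.1-Moe-4x8B-v0.1-GGUF",
    filename="*Q4_K_M*",   # assumed naming; glob pattern selects the Q4_K_M quant file
    n_ctx=8192,            # context window; lower this if you run out of memory
    n_gpu_layers=-1,       # offload all layers to GPU when built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a Mixture of Experts model is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```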