---
license: apache-2.0
base_model: v2ray/Mixtral-8x22B-v0.1
inference: false
model_creator: MaziyarPanahi
model_name: Mixtral-8x22B-v0.1-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
  - quantized
  - 2-bit
  - 3-bit
  - 4-bit
  - 5-bit
  - 6-bit
  - 8-bit
  - 16-bit
  - GGUF
  - mixtral
  - moe
---

# Mixtral-8x22B-v0.1-GGUF

in progress ...

## Load sharded model

`llama_load_model_from_file` will detect the number of shards from the first file's name and load the additional tensors from the remaining files.

```sh
main --model Mixtral-8x22B-v0.1.fp16-00001-of-00005.gguf -ngl 64
```
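
The shards follow llama.cpp's `-NNNNN-of-NNNNN.gguf` naming convention, which is how the loader finds the rest of the files from the first one. As a rough sketch, the hypothetical helper below (not part of llama.cpp) expands the first shard's name into the full list you would need on disk:

```python
import re

def shard_names(first_shard: str) -> list[str]:
    """Expand the first shard of a split GGUF model into all shard
    filenames, assuming the -NNNNN-of-NNNNN.gguf naming convention."""
    m = re.match(r"^(.*)-(\d{5})-of-(\d{5})\.gguf$", first_shard)
    if not m:
        raise ValueError(f"not a sharded GGUF filename: {first_shard}")
    prefix, total = m.group(1), int(m.group(3))
    return [f"{prefix}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]

# All five shards must be in the same directory for loading to succeed.
print(shard_names("Mixtral-8x22B-v0.1.fp16-00001-of-00005.gguf"))
```
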