MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF
Text Generation · GGUF · 5 languages · quantized · 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit precision · mixtral · Mixture of Experts · conversational · License: apache-2.0
Files and versions
1 contributor · History: 8 commits
Latest commit 5758957 (verified, 7 months ago) by MaziyarPanahi: Delete Mixtral-8x22B-Instruct-v0.1.IQ3_XS-00004-of-00005.gguf
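The file listing below can also be reproduced programmatically. A minimal sketch (not taken from the model card), assuming the huggingface_hub package is installed and the repository remains public:

```python
# Enumerate the files in this repository with the huggingface_hub client.
from huggingface_hub import list_repo_files

repo_id = "MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF"

# Returns the repo's file paths, matching the table shown below.
for name in sorted(list_repo_files(repo_id)):
    print(name)
```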
All files are flagged Safe by Hugging Face's file scanner; the .gguf shards are stored with Git LFS.

| File | Size | LFS | Last commit | Updated |
|---|---|---|---|---|
| .gitattributes | 3.78 kB | — | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.IQ3_XS-00001-of-00005.gguf | 13.5 GB | LFS | Upload folder using huggingface_hub (#4) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.IQ3_XS-00002-of-00005.gguf | 13.2 GB | LFS | Upload folder using huggingface_hub (#4) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.IQ3_XS-00003-of-00005.gguf | 12.6 GB | LFS | Upload folder using huggingface_hub (#4) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.IQ3_XS-00005-of-00005.gguf | 5.61 GB | LFS | Upload folder using huggingface_hub (#4) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q2_K-00001-of-00005.gguf | 11.8 GB | LFS | Upload folder using huggingface_hub (#9) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q2_K-00002-of-00005.gguf | 12 GB | LFS | Upload folder using huggingface_hub (#1) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q2_K-00003-of-00005.gguf | 11.4 GB | LFS | Upload folder using huggingface_hub (#1) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q2_K-00004-of-00005.gguf | 12 GB | LFS | Upload folder using huggingface_hub (#1) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q2_K-00005-of-00005.gguf | 4.78 GB | LFS | Upload folder using huggingface_hub (#1) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_L-00001-of-00005.gguf | 16.5 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_L-00002-of-00005.gguf | 16.8 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_L-00003-of-00005.gguf | 15.9 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_L-00004-of-00005.gguf | 16.8 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_L-00005-of-00005.gguf | 6.61 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_M-00001-of-00005.gguf | 15.7 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_M-00002-of-00005.gguf | 15.6 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_M-00003-of-00005.gguf | 14.8 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_M-00004-of-00005.gguf | 15.6 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_M-00005-of-00005.gguf | 6.15 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_S-00001-of-00005.gguf | 14 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_S-00002-of-00005.gguf | 14.2 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_S-00003-of-00005.gguf | 13.5 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_S-00004-of-00005.gguf | 14.2 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| Mixtral-8x22B-Instruct-v0.1.Q3_K_S-00005-of-00005.gguf | 5.62 GB | LFS | Upload folder using huggingface_hub (#5) | 7 months ago |
| README.md | 5.61 kB | — | Update README.md (#3) | 7 months ago |

(Shard 00004-of-00005 of the IQ3_XS split was deleted in the latest commit and is therefore not listed.)
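To use one of these quantizations, a single quantization level's shards can be fetched without downloading the whole repository. A minimal sketch (not part of the model card), assuming huggingface_hub is installed, roughly 52 GB of free disk space for the Q2_K split, and a recent llama.cpp build that reassembles split GGUF files when pointed at the first shard:

```python
# Download only the Q2_K shards of this repo, then run them with a GGUF runtime.
from huggingface_hub import snapshot_download

repo_id = "MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF"

local_dir = snapshot_download(
    repo_id=repo_id,
    allow_patterns=["*Q2_K*"],  # fetch only the five Q2_K-0000x-of-00005.gguf shards
)
print("Downloaded to:", local_dir)

# Recent llama.cpp builds load the remaining shards automatically when given
# the first one, e.g. (shell command, paths are illustrative):
#   llama-cli -m <local_dir>/Mixtral-8x22B-Instruct-v0.1.Q2_K-00001-of-00005.gguf -p "Hello"
```

Swapping the pattern (e.g. "*Q3_K_M*" or "*IQ3_XS*") selects a different quantization level from the table above; note that the IQ3_XS split is currently missing its fourth shard.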