TheBloke committed on
Commit
89336ec
1 Parent(s): c2787dd

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -42,15 +42,16 @@ quantized_by: TheBloke
 
 This repo contains GGUF format model files for [Mistral AI's Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
 
-## EXPERIMENTAL - REQUIRES LLAMA.CPP PR
+**MIXTRAL GGUF SUPPORT**
 
-These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
+Known to work in:
+* llama.cpp as of December 13th
+* KoboldCpp 1.52 and later
+* LM Studio 0.2.9 and later
 
-THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
+Support for Mixtral was merged into llama.cpp on December 13th.
 
-To test these GGUFs, please build llama.cpp from the above PR.
-
-I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
+Other clients/libraries not listed above may not yet work.
 
 <!-- description end -->
 