Quantization of Mixtral-8x7B (https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
Quantized using llama.cpp with PR #4406 "add Mixtral support" (https://github.com/ggerganov/llama.cpp/pull/4406)
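
A rough sketch of how a q4_0 GGUF like the one used below can be produced with a llama.cpp checkout that includes that PR: convert the Hugging Face checkpoint to GGUF, then quantize it. Paths and output filenames here are illustrative, not the exact ones used for this release.

# Build the tools first (from the llama.cpp checkout), e.g. with `make`
# Convert the HF checkpoint to an f16 GGUF file
python3 convert.py ./models/Mixtral-8x7B-v0.1 --outtype f16 \
    --outfile ./models/mixtral-8x7b-32k-f16.gguf

# Quantize the f16 GGUF down to Q4_0
./quantize ./models/mixtral-8x7b-32k-f16.gguf \
    ./models/mixtral-8x7b-32k-q4_0.gguf q4_0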
Instructions to run:
# -ngl 999 offloads all layers to the GPU, -s 1 fixes the random seed,
# -n 128 generates 128 tokens, and -t 8 uses 8 CPU threads
./main -m ./models/mixtral-8x7b-32k-q4_0.gguf \
    -p "I believe the meaning of life is" \
    -ngl 999 -s 1 -n 128 -t 8