InferenceIllusionist committed 06bf66b (parent: 44dbc4a)

Update README.md

Improving clarity in supported front-ends note.
README.md CHANGED
@@ -23,8 +23,8 @@ license: apache-2.0
 # Mistral-Nemo-Instruct-12B-iMat-GGUF
 
 > [!WARNING]
-><b>Important Note:</b> Inferencing in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438).
->Other front-ends
+><b>Important Note:</b> Inferencing in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438) or newer.
+>Other front-ends such as the main branch of kobold.cpp and text-generation-webui may not work as intended
 
 Quantized from Mistral-Nemo-Instruct-2407 fp16
 * Weighted quantizations were created using fp16 GGUF and groups_merged.txt in 92 chunks and n_ctx=512
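For context on the updated warning: a minimal sketch of loading one of these quants through llama-cpp-python, which wraps llama.cpp and therefore also needs a build based on b3438 or newer. The filename, context size, and prompt below are illustrative assumptions, not part of the model card:

```python
# Minimal sketch, not from the model card. Assumes llama-cpp-python is
# installed and built against llama.cpp b3438 or newer (required for
# Mistral-Nemo support, per the note above).
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Nemo-Instruct-12B-Q4_K_M.gguf",  # hypothetical filename; use whichever quant you downloaded
    n_ctx=4096,       # context window for this session; adjust to taste
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}]
)
print(out["choices"][0]["message"]["content"])
```

The same caveat from the note applies here: a front-end or binding bundling an older llama.cpp build will fail to load the model.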