Update README.md
README.md CHANGED
@@ -10,4 +10,6 @@ These are GGUF quantized versions of [Midnight Rose 70B v1.0](https://huggingfac
 
 The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
 
-The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later.
+The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
+
+Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
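For context, an importance matrix like the one described above (200 batches of 512 tokens ≈ 100K tokens over `wiki.train.raw`) can in principle be produced with llama.cpp's `imatrix` tool and then fed to `quantize`. The sketch below is an assumption about how that could be done, not the author's exact command; the model file names are placeholders.

```sh
# Compute an importance matrix over wiki.train.raw:
# 200 chunks at a context of 512 tokens ≈ 100K tokens, matching the README.
./imatrix -m midnight-rose-70b-v1.0-f16.gguf \
          -f wiki.train.raw \
          -o imatrix.dat \
          -c 512 --chunks 200

# Use the matrix when producing the low-bit quants (IQ2_XXS shown here).
./quantize --imatrix imatrix.dat \
           midnight-rose-70b-v1.0-f16.gguf \
           midnight-rose-70b-v1.0-iq2_xxs.gguf IQ2_XXS
```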
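Likewise, the concatenation step for the split files, with an optional sanity check added; the part names are placeholders taken from the README's own example, and the `./main` invocation is just one way to confirm the merged file loads.

```sh
# Join the split parts back into a single GGUF file.
# The glob expands in lexical order, which is assumed to match the split order.
cat foo-Q6_K.gguf.* > foo-Q6_K.gguf

# On Windows the README suggests PowerShell; cmd's `copy /b part1 + part2 out`
# is another binary-safe way to concatenate, if preferred.

# Optional: confirm the merged file loads before deleting the parts.
./main -m foo-Q6_K.gguf -p "Hello" -n 16
```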