Update README.md
README.md
CHANGED
@@ -1,3 +1,13 @@
 ---
 license: llama2
+pipeline_tag: text-generation
 ---
+<!-- description start -->
+## Description
+Converted to f16 using the llama.cpp convert.py script, then quantized to q6_K using the quantize tool from the same llama.cpp repository.<br>
+The resulting file was split into 2 parts using split.<br><br>
+**Note**: HF does not support uploading files larger than 50 GB.<br>
+<!-- description end -->
+### Files require joining
+To join the files, run the following:<br>
+cat codellama-70b-python-q6_K.gguf-split-* > codellama-70b-python-q6_K.gguf && rm codellama-70b-python-q6_K.gguf-split-*
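
For reference, the conversion and quantization steps summarized in the added description could look roughly like the sketch below. This is a minimal sketch, not the exact commands used here: it assumes a local llama.cpp checkout, original HF weights in `./CodeLlama-70b-Python-hf`, and output filenames chosen to match the split parts above; directory names and flag spellings may differ between llama.cpp versions.

```sh
# Convert the original HF checkpoint to an f16 GGUF file.
# ./CodeLlama-70b-Python-hf and the output filenames are assumed paths.
python convert.py ./CodeLlama-70b-Python-hf \
  --outtype f16 \
  --outfile codellama-70b-python-f16.gguf

# Quantize the f16 GGUF down to q6_K with the quantize tool from the same repo.
./quantize codellama-70b-python-f16.gguf codellama-70b-python-q6_K.gguf Q6_K
```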
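The split step that produces the two uploadable parts can be done with GNU split, as described; the 48G chunk size below is an assumption, and any value under the 50 GB upload limit works. The `cat` command in the README simply reverses this step.

```sh
# Split the quantized file into chunks small enough for HF upload.
# The trailing prefix makes the parts match the *-split-* pattern used above.
split -b 48G codellama-70b-python-q6_K.gguf codellama-70b-python-q6_K.gguf-split-
```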