---
license: llama2
pipeline_tag: text-generation
---
<!-- description start -->
## Description
Converted to f16 using the `convert.py` script from llama.cpp, then quantized to q6_K using the `quantize` tool from the same llama.cpp repository.<br>
The resulting file was split into 2 parts.<br><br>
**Note**: HF does not support uploading files larger than 50 GB, hence the split.<br>
<!-- description end -->
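The conversion steps described above would look roughly like this. This is a sketch, not the exact commands used: the input/output paths are illustrative, and llama.cpp has since renamed its conversion scripts, so check the llama.cpp README for the current names.

```shell
# Sketch of the conversion pipeline (paths are illustrative).
# 1. Convert the original HF weights to an f16 GGUF file:
python convert.py /path/to/codellama-70b-python \
  --outtype f16 --outfile codellama-70b-python-f16.gguf

# 2. Quantize the f16 GGUF down to q6_K:
./quantize codellama-70b-python-f16.gguf codellama-70b-python-q6_K.gguf q6_K
```

Running these requires the full model weights, so they are shown only to document the process.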
### Files require joining
To join the files, run:

```shell
cat codellama-70b-python-q6_K.gguf-split-* > codellama-70b-python-q6_K.gguf && rm codellama-70b-python-q6_K.gguf-split-*
```
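The split/join round trip can be sanity-checked on a small dummy file before committing to the multi-gigabyte download. The file names below are illustrative, not the actual model shards:

```shell
# Create a 1 MiB dummy file standing in for the model.
head -c 1048576 /dev/urandom > model.gguf

# Split it into ~512 KiB parts named model.gguf-split-aa, -ab, ...
# (split's default alphabetic suffixes sort correctly for the glob below).
split -b 524288 model.gguf model.gguf-split-

# Join the parts back with cat, as in the command above.
cat model.gguf-split-* > joined.gguf

# Verify the joined file is byte-identical to the original.
cmp model.gguf joined.gguf && echo "join OK"
```

Verifying with `cmp` (or a published sha256 checksum, if one is provided) is a cheap guard against a truncated download of one of the parts.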