---
license: apache-2.0
---
## Introduction
Quantizations of [gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k) to f16, q2, q3, q4, q5, q6, and q8 with llama.cpp.
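A typical llama.cpp quantization workflow looks roughly like the following sketch. File names and the checkout path are illustrative; the exact script and binary names (`convert_hf_to_gguf.py`, `llama-quantize`) match recent llama.cpp releases but may differ in older checkouts.

```shell
# Convert the Hugging Face checkpoint to a GGUF file in f16
# (run from a llama.cpp checkout; model path is illustrative)
python convert_hf_to_gguf.py ./Llama-3-8B-Instruct-262k \
  --outtype f16 \
  --outfile Llama-3-8B-Instruct-262k-f16.gguf

# Quantize the f16 GGUF down to smaller formats, e.g. Q4_K_M
./llama-quantize Llama-3-8B-Instruct-262k-f16.gguf \
  Llama-3-8B-Instruct-262k-Q4_K_M.gguf Q4_K_M
```

The same `llama-quantize` step is repeated with other type arguments (e.g. `Q2_K`, `Q3_K_M`, `Q5_K_M`, `Q6_K`, `Q8_0`) to produce the remaining quantization levels.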