license: apache-2.0
## Introduction
Quantizations of [gradientai/Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) in f16, q2, q3, q4, q5, q6, and q8 formats, produced with llama.cpp.
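
For reference, the conversion and quantization workflow with llama.cpp typically looks like the sketch below. This is an illustrative outline only, not the exact commands used for this repository: the tool names (`convert_hf_to_gguf.py`, `llama-quantize`), the local paths, and the specific quant presets chosen for each bit width (e.g. Q4_K_M for the q4 file) are assumptions and can differ across llama.cpp versions.

```python
# Minimal sketch of a llama.cpp quantization run (assumed workflow, not the
# author's exact commands): convert the HF checkpoint to a GGUF file at f16,
# then derive each quantized variant from it. Tool names follow recent
# llama.cpp releases and may differ in older versions.
import subprocess
from pathlib import Path

MODEL_DIR = Path("Llama-3-8B-Instruct-Gradient-1048k")  # local HF snapshot (assumed path)
F16_GGUF = Path("llama-3-8b-instruct-gradient-1048k-f16.gguf")

# 1) HF checkpoint -> GGUF at f16
subprocess.run(
    ["python", "convert_hf_to_gguf.py", str(MODEL_DIR),
     "--outfile", str(F16_GGUF), "--outtype", "f16"],
    check=True,
)

# 2) f16 GGUF -> each quantized variant (presets assumed for illustration)
QUANT_TYPES = ["Q2_K", "Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]
for qtype in QUANT_TYPES:
    out = F16_GGUF.with_name(
        F16_GGUF.stem.replace("f16", qtype.lower()) + ".gguf"
    )
    subprocess.run(["./llama-quantize", str(F16_GGUF), str(out), qtype], check=True)
```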