ggufing it
#1 by KnutJaegersberg · opened
I tried gguf-my-repo to make an 8-bit GGUF of your two instruct models, but it didn't work. I don't understand the error message; I was expecting it to work, since it is a Llama model.
Hi @KnutJaegersberg, I don't know specifically why it isn't working as expected, but we will publish quantized versions of our models very soon, so you will be able to use those.
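In the meantime, a possible workaround is to run the standard llama.cpp conversion locally instead of going through gguf-my-repo. This is only a sketch: the model path is a placeholder, and it assumes the model uses an architecture that llama.cpp's converter already supports.

```shell
# Get llama.cpp and the Python dependencies for its converter
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint straight to an 8-bit GGUF.
# "path/to/instruct-model" is a placeholder for the local model directory
# (e.g. downloaded with `huggingface-cli download`).
python llama.cpp/convert_hf_to_gguf.py path/to/instruct-model \
    --outfile instruct-model-q8_0.gguf \
    --outtype q8_0
```

If the converter fails, its error output should at least point at which part of the model (tokenizer, architecture name, tensor layout) it cannot handle, which is more informative than a silent failure in the web tool.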