---
tags:
- gguf
- GGUF
---

# HELP WANTED

Does anyone know how to build and host wheels for llama.cpp, specifically for Colab, to avoid wasting time on rebuilds?

# Details

[Thanks to mlabonne for the initial code](https://huggingface.co/mlabonne)

The default imatrix dataset is from [kalomaze](https://github.com/kalomaze)

The RP imatrix dataset is from [Lewdiculous](https://huggingface.co/Lewdiculous)

This repository hosts the files for a Google Colab notebook, intended to make it easier to create GGUF quants of models with an importance matrix (imatrix). There are two imatrix datasets: one for general use and one for RP.

After some testing, making the actual quants in Colab is quite slow, so it is recommended to use the notebook only for the initial FP16 GGUF conversion and imatrix.dat generation (see the sketch below).
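
For reference, here is a minimal sketch of the steps the notebook automates, using the llama.cpp command-line tools. All paths, file names, and tool locations below are placeholder assumptions (they are not files shipped with this repo), and the exact script/binary names may differ depending on the llama.cpp version.

```python
# Minimal sketch (assumes llama.cpp is cloned and built, and that the model
# directory and calibration text file already exist; all paths are placeholders).
import subprocess

MODEL_DIR = "path/to/hf-model"       # hypothetical Hugging Face model directory
FP16_GGUF = "model-f16.gguf"
CALIB_TXT = "calibration_data.txt"   # general-use or RP imatrix dataset
IMATRIX = "imatrix.dat"

# 1) Convert the Hugging Face model to an FP16 GGUF.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", MODEL_DIR,
     "--outtype", "f16", "--outfile", FP16_GGUF],
    check=True,
)

# 2) Generate the importance matrix from the calibration data.
subprocess.run(
    ["llama.cpp/llama-imatrix", "-m", FP16_GGUF, "-f", CALIB_TXT, "-o", IMATRIX],
    check=True,
)

# 3) Quantize using the imatrix (slow on Colab; see the note above).
subprocess.run(
    ["llama.cpp/llama-quantize", "--imatrix", IMATRIX,
     FP16_GGUF, "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```

Since step 3 is the slow part in Colab, one option is to download the FP16 GGUF and imatrix.dat from the notebook and run the quantization step on local hardware instead.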