Can't run the latest GGMLs? Check branch previous_llama for GGMLs compatible with older llama.cpp and UIs.

#3
by TheBloke - opened

I just added GGMLs compatible with the old llama.cpp quantisation method. You can find them in branch previous_llama

So if you're unable to use the latest files - e.g. because you're using text-generation-webui or some other UI that hasn't updated yet - you can now use the files in that branch.

Then when your UI updates, choose a GGML from the main branch instead.
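If you fetch files with huggingface_hub, you can point it at the branch via the revision argument. A minimal sketch - the repo ID and GGML filename below are placeholders, substitute the actual model repo and file:

```python
# Fetch a GGML file from a specific branch of a Hugging Face repo.
# The "previous_llama" branch holds files in the old llama.cpp
# quantisation format; "main" holds the latest format.
PREV_BRANCH = "previous_llama"

def fetch_old_ggml(repo_id: str, filename: str) -> str:
    # Imported lazily so the sketch is readable even without the
    # huggingface_hub package installed.
    from huggingface_hub import hf_hub_download
    # revision selects the branch (or a tag/commit) to download from.
    return hf_hub_download(repo_id=repo_id,
                           filename=filename,
                           revision=PREV_BRANCH)

# Example call (placeholder names):
# path = fetch_old_ggml("TheBloke/some-model-GGML",
#                       "some-model.ggmlv1.q4_0.bin")
```

Once your UI supports the new format, drop the revision argument (or pass "main") to get the current files instead.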

It's now been added to mainline text-generation-webui, so I will close this discussion.

TheBloke changed discussion status to closed
