Unable to run gemma-2b.Q4_K_M.gguf
Hi, first of all, thank you very much. I was trying to run the GGUF from your quantized version, but I encountered the following error:
llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'gemma-2b.Q4_K_M.gguf'
main: error: unable to load model
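For reference, here is a quick way to confirm the tensor is actually absent from the file rather than this being a path or build problem; a minimal sketch using the gguf Python package from the llama.cpp repo (pip install gguf), with the file name matching the one above:

```python
# Minimal sketch: list the tensor names stored in the GGUF file and
# check whether 'output.weight' is present. Uses the gguf Python
# package that ships with llama.cpp (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("gemma-2b.Q4_K_M.gguf")  # the file from this repo
names = [t.name for t in reader.tensors]

print(f"{len(names)} tensors in file")
print("output.weight present:", "output.weight" in names)
```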
I think this "general.architecture str = llama" should be "gemma" instead of "llama", as in this PR for the Gemma model in llama.cpp: https://github.com/ggerganov/llama.cpp/pull/5631#issuecomment-1957223298 . Regards
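You can verify which architecture the file was written with yourself; a minimal sketch, again assuming the gguf Python package's GGUFReader API:

```python
# Minimal sketch: read the general.architecture metadata field from a
# GGUF file. field.data holds the index of the value bytes in field.parts.
from gguf import GGUFReader

reader = GGUFReader("gemma-2b.Q4_K_M.gguf")
field = reader.fields["general.architecture"]
arch = bytes(field.parts[field.data[0]]).decode("utf-8")
print("general.architecture =", arch)  # 'gemma' would be expected here
```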
Same issue.
Same issue: "llama.cpp error: 'create_tensor: tensor 'output.weight' not found'"
I am adding this edit at 9:47 PM Eastern.
I also had this problem with Gemma GGUFs from other people. I am using LM Studio. I missed this myself, but it clearly states:
"Google DeepMind's Gemma (2B, 7B) is supported in v0.2.15!"
When I'm in the app it says I have v0.2.14 and that it is the most current version; no updates available. I am now downloading a fresh installation of 0.2.15 and will then retry.
This doesn't work for me either; I'm also getting that tensor error.
None of the GGUF quant models work because of this issue: https://github.com/ggerganov/llama.cpp/issues/5635
I am waiting for this PR to be merged before re-running the quantization: https://github.com/ggerganov/llama.cpp/pull/5650
Yeah, that was my understanding too. I found one user's quantized GGUF that seems to work in my local environment, though with very poor output quality. Here is the link to that working quant: https://huggingface.co/rahuldshetty/gemma-7b-it-gguf-quantized
Yes, if we use the latest llama.cpp from the main branch, it works. It no longer fails with the error, but the quality of the quants is terrible! I am really hoping that PR fixes the issue.
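For anyone wanting to sanity-check output quality quickly, a minimal sketch using llama-cpp-python (assuming a recent build compiled against a llama.cpp revision with Gemma support; the prompt is just an example):

```python
# Minimal sketch: load the quantized model and generate a short
# completion to eyeball output quality. Requires llama-cpp-python
# built against a llama.cpp version that supports the gemma arch.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2b.Q4_K_M.gguf", n_ctx=2048)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```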
I worked on it today, and the quants can now be executed with llama.cpp. Will re-upload everything in a few hours; it should be working then.