Oobabooga: "AttributeError: 'LlamaCppModel' object has no attribute 'model'"

#5
by yumeshiro - opened

I, and others I've seen, get the following error when trying to load this model with Oobabooga. The error occurs regardless of which quantization of the GGUF I try to load.

ERROR:Failed to load the model.
Traceback (most recent call last):
File "D:\0\Oobabooga2\modules\ui_model_menu.py", line 206, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\0\Oobabooga2\modules\models.py", line 84, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\0\Oobabooga2\modules\models.py", line 235, in llamacpp_loader
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\0\Oobabooga2\modules\llamacpp_model.py", line 91, in from_pretrained
result.model = Llama(**params)
^^^^^^^^^^^^^^^
File "D:\0\Oobabooga2\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py", line 357, in __init__
self.model = llama_cpp.llama_load_model_from_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\0\Oobabooga2\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama_cpp.py", line 498, in llama_load_model_from_file
return _lib.llama_load_model_from_file(path_model, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: exception: access violation reading 0x0000000000000000

Exception ignored in: <function LlamaCppModel.__del__ at 0x000002DD709C6FC0>
Traceback (most recent call last):
File "D:\0\Oobabooga2\modules\llamacpp_model.py", line 49, in __del__
self.model.__del__()
^^^^^^^^^^
AttributeError: 'LlamaCppModel' object has no attribute 'model'

This happens on a new and up-to-date Oobabooga. I myself am running a 3080 10gb, am on Windows 10, and all of my other models work fine.

yumeshiro changed discussion title from AttributeError: 'LlamaCppModel' object has no attribute 'model' to Oobabooga: "AttributeError: 'LlamaCppModel' object has no attribute 'model'"

This is an Oobabooga issue. The latest llama-cpp-python can load the model with no errors, so Oobabooga is probably bundling an older version.
By the way, a new version called CausalLM-14B-DPO-alpha has been released.

How do I update llama_cpp_python if I'm having the same problem, absurdly, one year later with another model? Why does this keep happening?
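Before upgrading, it can help to check which llama-cpp-python build the Oobabooga environment actually has, since the CUDA wheel may be installed under a different distribution name. A minimal sketch (the package names checked here are guesses, not a definitive list; run it inside the env opened by cmd_windows.bat on a standard Windows one-click install):

```python
# Report which llama-cpp-python distribution (if any) is installed in the
# current environment, trying a few plausible package names.
from importlib.metadata import version, PackageNotFoundError

def installed_version(
    names=("llama_cpp_python", "llama-cpp-python", "llama_cpp_python_cuda"),
):
    """Return '<name> <version>' for the first installed match, else None."""
    for name in names:
        try:
            return f"{name} {version(name)}"
        except PackageNotFoundError:
            continue
    return None

print(installed_version())
```

Once you know the package name, upgrading inside that same environment is typically `pip install --upgrade <package>`, or you can rerun the installer's update script so it pulls the wheel versions pinned by the current Oobabooga requirements.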
