Error while loading Model
Always getting this error while loading the model in oobabooga:
WARNING:The safetensors archive passed at models\TheBloke_Wizard-Vicuna-30B-Uncensored-GPTQ\Wizard-Vicuna-30B-Uncensored-GPTQ-4bit.act.order.safetensors does not contain metadata. Make sure to save your model with the save_pretrained method. Defaulting to 'pt' metadata.
and after that it just shuts down.
I tried updating and reinstalling the launcher, and loaded the model several times, both manually and through the interface.
I also tried it with the default model loader and with the loader set to AutoGPTQ, but neither changed anything.
I'm having this issue too.
I tried everything in the options, but I only get errors :/
Does it say "Done"?
This is caused on Windows when you don't have a large enough pagefile. Increase your pagefile to 100GB, or if the pagefile is set to Auto, make sure you have 100+GB free on the pagefile drive (C: by default).
This is a very common problem, and I should really mention it in the README.
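If you want to sanity-check the pagefile from Python, here's a minimal sketch using psutil (my assumption; it just needs to be installed with `pip install psutil`):

```python
# Minimal sketch: Windows backs swap with the pagefile, so this shows
# roughly how much commit space is available. Assumes psutil is installed.
import psutil

swap = psutil.swap_memory()
print(f"pagefile total: {swap.total / 2**30:.1f} GiB")
print(f"pagefile free:  {swap.free / 2**30:.1f} GiB")
```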
Yeah, that worked, thanks!
Now I've got this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 12.00 GiB total capacity; 11.17 GiB already allocated; 0 bytes free; 11.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I tried it with "--gpu-memory 10000MiB" but it didn't change anything. Is 12GB of GPU memory too little? It worked well with the 13B version.
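I haven't tried the max_split_size_mb hint from the message yet; if I understand it right, it's set through the PYTORCH_CUDA_ALLOC_CONF environment variable before torch touches the GPU, something like this (a sketch only, and it only mitigates fragmentation rather than freeing VRAM):

```python
# Sketch: set the allocator option the OOM message suggests.
# Must be set before PyTorch initializes CUDA.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Report free vs. total VRAM on GPU 0 to see how tight things are.
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")
```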
Yes, 12GB is too little for 30B; 13B is the maximum. I thought --gpu-memory would work, but even if it does, it will be horribly slow.
I recommend using a GGML model instead, with GPU offload, so it runs partly on CPU and partly on GPU. That will give acceptable performance. Check the text-generation-webui docs for details on how to get llama-cpp-python compiled for GPU acceleration.
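As a rough illustration of what the offload looks like when using llama-cpp-python directly (the file name and layer count below are placeholders; text-generation-webui exposes the same n_gpu_layers setting in its llama.cpp loader):

```python
# Rough sketch: load a GGML model with some layers offloaded to VRAM.
# Requires llama-cpp-python compiled with GPU (e.g. cuBLAS) support.
from llama_cpp import Llama

llm = Llama(
    model_path="models/wizard-vicuna-30b.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=30,  # layers pushed to the GPU; tune down until it fits in 12GB
)

out = llm("Q: What does n_gpu_layers control? A:", max_tokens=64)
print(out["choices"][0]["text"])
```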
I made my pagefile 100,000 MB but I'm still getting an error:
Traceback (most recent call last):
File "E:\Ai\Oobabooga\installer_files\env\Lib\site-packages\transformers\configuration_utils.py", line 729, in _get_config_dict
config_dict = cls._dict_from_json_file(resolved_config_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Ai\Oobabooga\installer_files\env\Lib\site-packages\transformers\configuration_utils.py", line 827, in _dict_from_json_file
text = reader.read()
^^^^^^^^^^^^^
File "", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc8 in position 0: invalid continuation byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Ai\Oobabooga\modules\ui_model_menu.py", line 209, in load_model_wrapper
shared.model, shared.tokenizer = load_model(selected_model, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Ai\Oobabooga\modules\models.py", line 89, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Ai\Oobabooga\modules\models.py", line 147, in huggingface_loader
config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=params['trust_remote_code'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Ai\Oobabooga\installer_files\env\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 1082, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Ai\Oobabooga\installer_files\env\Lib\site-packages\transformers\configuration_utils.py", line 644, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Ai\Oobabooga\installer_files\env\Lib\site-packages\transformers\configuration_utils.py", line 732, in _get_config_dict
raise EnvironmentError(
OSError: It looks like the config file at 'models\Wizard-Vicuna-30B-Uncensored-GPTQ.safetensors' is not a valid JSON file.
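Note the path in that OSError: it points at the .safetensors file itself, but AutoConfig.from_pretrained expects the model directory containing a config.json. A minimal sketch to check what the loader will find (the directory name is a placeholder, adjust to your setup):

```python
# Minimal sketch: confirm the model *directory* (not the .safetensors file)
# contains a config.json that parses as valid JSON.
import json
from pathlib import Path

model_dir = Path("models/TheBloke_Wizard-Vicuna-30B-Uncensored-GPTQ")  # placeholder
with open(model_dir / "config.json", encoding="utf-8") as f:
    config = json.load(f)  # raises if the file is missing or corrupt

print(config.get("model_type"))
```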