ValueError: The checkpoint you are trying to load has model type zoedepth but Transformers does not recognize this architecture
#1 by justinmanley - opened
When I try to load this model, I get an error saying the `zoedepth` model type cannot be found. The error persists whether I use the latest published version of Transformers (4.41.2) or install directly from the git repo HEAD.
Details are in the GitHub issue.
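For reference, a minimal reproduction sketch covering both setups I tried (the task and model id come from the model card; the install commands are the standard PyPI and git-HEAD routes):

from transformers import pipeline

# Tried with either install:
#   pip install transformers==4.41.2
#   pip install git+https://github.com/huggingface/transformers
# Both fail at pipeline construction with the ValueError quoted below.
depth_estimator = pipeline(task="depth-estimation", model="Intel/zoedepth-nyu-kitti")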
Same issue here:
KeyError Traceback (most recent call last)
File ~/Desktop/AI_ENV/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:982, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
981 try:
--> 982 config_class = CONFIG_MAPPING[config_dict["model_type"]]
983 except KeyError:
File ~/Desktop/AI_ENV/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:684, in _LazyConfigMapping.__getitem__(self, key)
683 if key not in self._mapping:
--> 684 raise KeyError(key)
685 value = self._mapping[key]
KeyError: 'zoedepth'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[14], line 6
3 import requests
5 # load pipe
----> 6 depth_estimator = pipeline(task="depth-estimation", model="Intel/zoedepth-nyu-kitti")
8 # load image
9 url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
File ~/Desktop/AI_ENV/lib/python3.10/site-packages/transformers/pipelines/__init__.py:805, in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
802 adapter_config = json.load(f)
803 model = adapter_config["base_model_name_or_path"]
--> 805 config = AutoConfig.from_pretrained(
806 model, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
807 )
808 hub_kwargs["_commit_hash"] = config._commit_hash
810 custom_tasks = {}
File ~/Desktop/AI_ENV/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:984, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
982 config_class = CONFIG_MAPPING[config_dict["model_type"]]
983 except KeyError:
--> 984 raise ValueError(
985 f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
986 "but Transformers does not recognize this architecture. This could be because of an "
987 "issue with the checkpoint, or because your version of Transformers is out of date."
988 )
989 return config_class.from_dict(config_dict, **unused_kwargs)
990 else:
991 # Fallback: use pattern matching on the string.
992 # We go from longer names to shorter names to catch roberta before bert (for instance)
ValueError: The checkpoint you are trying to load has model type `zoedepth` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
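The `zoedepth` model type is simply not registered in Transformers 4.41.2, so the config lookup falls through to this ValueError. Below is a sketch of the workaround, assuming ZoeDepth support shipped in a later release (it appears in the v4.43.0 release notes, so the version pin is the part to verify for your setup):

pip install --upgrade "transformers>=4.43.0"

from PIL import Image
import requests
from transformers import pipeline

# This only resolves once the installed Transformers version
# registers the `zoedepth` model type.
depth_estimator = pipeline(task="depth-estimation", model="Intel/zoedepth-nyu-kitti")

# Same COCO test image as in the snippet above.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = depth_estimator(image)
depth_map = result["depth"]  # PIL image holding the predicted depth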