Problem in config.json file

#1
by ParthSoniVK - opened

I am trying to run this model on my Paperspace platform, and I get this error:

```
…from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1062     return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
   1063 elif "model_type" in config_dict:
-> 1064     config_class = CONFIG_MAPPING[config_dict["model_type"]]
   1065     return config_class.from_dict(config_dict, **unused_kwargs)
   1066 else:
   1067     # Fallback: use pattern matching on the string.
   1068     # We go from longer names to shorter names to catch roberta before bert (for instance)

File /usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py:761, in _LazyConfigMapping.__getitem__(self, key)
    759     return self._extra_content[key]
    760 if key not in self._mapping:
--> 761     raise KeyError(key)
    762 value = self._mapping[key]
    763 module_name = model_type_to_module_name(key)

KeyError: 'videollama2_mistral'
```

Language Technology Lab at Alibaba DAMO Academy org

Please change the `model_type` field in config.json from 'videollama2_mistral' to 'mistral'. Your installed transformers has no config class registered under 'videollama2_mistral', so the `CONFIG_MAPPING` lookup raises a `KeyError`; 'mistral' is a registered type.
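A minimal sketch of that edit as a script, assuming you have downloaded the model locally and can point at its config.json (the helper name `patch_model_type` is mine, not part of the repo):

```python
import json

def patch_model_type(config_path: str, new_type: str = "mistral") -> dict:
    """Rewrite the model_type field of a config.json in place.

    Works around KeyError: 'videollama2_mistral' -- transformers has no
    config class registered under that name, so we fall back to the
    base 'mistral' type as suggested above.
    """
    with open(config_path) as f:
        config = json.load(f)
    config["model_type"] = new_type
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

After patching, loading the model through `AutoConfig`/`AutoModel` should resolve the config class normally.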
