We have recently merged Video-LLaVA into transformers! 🤗🎥
What makes this model different?
Demo: llava-hf/video-llava
Model: LanguageBind/Video-LLaVA-7B-hf
Compared to other models that take image and video input and either project them separately or downsample the video and project selected frames, Video-LLaVA converts images and videos into a unified representation and projects them through a shared projection layer.
It uses Vicuna 1.5 as the language model and LanguageBind's own encoders, which are based on OpenCLIP; these encoders map both modalities into a unified representation before it is passed to the projection layer.
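To make the shared-projection idea concrete, here is a rough conceptual sketch (not the actual Video-LLaVA code; the layer sizes and MLP shape are placeholders): because LanguageBind aligns image and video features in the same space, a single projector can map either modality into the LLM's embedding space.

```python
import torch
import torch.nn as nn

class SharedProjector(nn.Module):
    """Conceptual sketch: one projector serves both image and video features."""

    def __init__(self, encoder_dim: int = 1024, llm_dim: int = 4096):
        # encoder_dim / llm_dim are illustrative placeholders, not the real config
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(encoder_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # visual_features: (batch, num_tokens, encoder_dim), coming from either
        # the image or the video encoder, already aligned by LanguageBind
        return self.proj(visual_features)
```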
I feel like one of the coolest features of this model is joint image-video understanding, something many recent models have only just started to introduce.
It's a relatively older model, but it was ahead of its time and works very well! For example, you can pass the model an image of a cat and a video of a cat and ask whether the cat in the image appears in the video 🤩 A minimal sketch of how you could try this with the transformers checkpoint is below (the exact prompt format and 8-frame sampling are my assumptions, so check the model card):
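```python
import numpy as np
from transformers import VideoLlavaProcessor, VideoLlavaForConditionalGeneration

model_id = "LanguageBind/Video-LLaVA-7B-hf"
processor = VideoLlavaProcessor.from_pretrained(model_id)
model = VideoLlavaForConditionalGeneration.from_pretrained(model_id)

# Dummy inputs just to show the expected shapes: in practice, load a real
# image (PIL.Image or numpy array) and decode frames from a real video
# (e.g. with PyAV or decord) into an array of shape (num_frames, H, W, 3).
image = np.zeros((336, 336, 3), dtype=np.uint8)
video = np.zeros((8, 336, 336, 3), dtype=np.uint8)

# One <image> and one <video> placeholder, matching the visuals we pass in.
prompt = "USER: <image> <video> Does the cat in the image also appear in the video? ASSISTANT:"
inputs = processor(text=prompt, images=image, videos=video, return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=80)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```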