---
license: apache-2.0
tags:
- video LLM
datasets:
- OpenGVLab/VideoChat2-IT
---
# PLLaVA Model Card
## Model details
**Model type:**
PLLaVA-34B is an open-source video-language chatbot trained by fine-tuning an image LLM on video instruction-following data. It is an auto-regressive language model based on the transformer architecture.
Base LLM: liuhaotian/llava-v1.6-34b
**Model date:**
PLLaVA-34B was trained in April 2024.
**Paper or resources for more information:**
- github repo: https://github.com/magic-research/PLLaVA
- project page: https://pllava.github.io/
- paper link: https://arxiv.org/abs/2404.16994
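**Fetching the weights:**
For local experimentation, the checkpoint can be pulled from the Hugging Face Hub. Below is a minimal sketch, assuming the repo id `ermu2001/pllava-34b`; the actual inference pipeline (video frame sampling, pooling, chat templating) lives in the GitHub repo linked above.
```python
# Minimal sketch: download the PLLaVA-34B checkpoint from the Hub.
# Assumes the repo id "ermu2001/pllava-34b"; run inference with the
# entry points provided in the GitHub repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ermu2001/pllava-34b")
print(f"Weights downloaded to: {local_dir}")
```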
## License
This model follows the NousResearch/Nous-Hermes-2-Yi-34B license.
**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
The video instruction-tuning data from OpenGVLab/VideoChat2-IT.
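The dataset can likewise be fetched from the Hub. A minimal sketch, assuming `huggingface_hub` is installed; the dataset's exact file layout may require the repo's own loading scripts rather than a generic loader.
```python
# Sketch: pull the VideoChat2-IT instruction-tuning data from the Hub.
# snapshot_download works regardless of the dataset's internal layout.
from huggingface_hub import snapshot_download

data_dir = snapshot_download(repo_id="OpenGVLab/VideoChat2-IT", repo_type="dataset")
print(f"Dataset downloaded to: {data_dir}")
```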
## Evaluation dataset
A collection of 6 benchmarks: 5 video QA benchmarks and 1 benchmark proposed specifically for video LMMs.