---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# LLaVA-Next Interleave Model Card

## Model Details

Model type: LLaVA-Next Interleave is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

Base LLM: Qwen/Qwen1.5-7B-Chat

### Model Description

**Repository:** https://github.com/LLaVA-VL/LLaVA-NeXT

**Primary intended uses:** The primary use of LLaVA-Next Interleave is research on large multimodal models and chatbots. It is intended for research exploration only; commercial usage is prohibited.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

### License Notices

This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses of the base language models for checkpoints trained using the dataset (e.g., the Llama-1/2 community license for LLaMA-2 and Vicuna-v1.5, the [Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT), and the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints complies with all applicable laws and regulations.

## How to Get Started with the Model

Use the code below to get started with the model.

```bash
git clone https://github.com/LLaVA-VL/LLaVA-NeXT
# install llava-next
...
# download the ckpt
...
python playground/demo/interleave_demo.py --model_path path/to/ckpt
```

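The install and download steps above are left elided. As a hedged sketch of what they typically look like (the editable install command and the checkpoint repo id below are assumptions, not confirmed by this card; follow the LLaVA-NeXT README and the Files tab of this model page for the authoritative steps):

```bash
# Hypothetical sketch: the install command and checkpoint repo id are assumptions.
git clone https://github.com/LLaVA-VL/LLaVA-NeXT
cd LLaVA-NeXT
pip install -e .   # install llava-next from source

# download the checkpoint from the Hugging Face Hub (repo id assumed for illustration)
huggingface-cli download lmms-lab/llava-next-interleave-qwen-7b \
  --local-dir ./checkpoints/llava-next-interleave-qwen-7b

# launch the interleave demo against the downloaded checkpoint
python playground/demo/interleave_demo.py \
  --model_path ./checkpoints/llava-next-interleave-qwen-7b
```
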
## Evaluation

Use the code below to evaluate the model.

Please first edit `/path/to/ckpt` to the path of your checkpoint and `/path/to/images` to the path of the "interleave_data" folder in `scripts/interleave/eval_all.sh`, then run:

```bash
bash scripts/interleave/eval_all.sh
```
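For example, assuming the placeholder strings appear verbatim in `scripts/interleave/eval_all.sh`, the substitution can be scripted rather than done by hand (the local paths below are hypothetical):

```bash
# Hypothetical local paths; replace with your actual checkpoint and interleave_data locations.
# Note: GNU sed syntax; on macOS use `sed -i ''` instead of `sed -i`.
sed -i 's|/path/to/ckpt|/data/checkpoints/llava-next-interleave-qwen-7b|g' scripts/interleave/eval_all.sh
sed -i 's|/path/to/images|/data/interleave_data|g' scripts/interleave/eval_all.sh
bash scripts/interleave/eval_all.sh
```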