---
datasets:
- SpursgoZmy/MMTab
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
language:
- en
metrics:
- accuracy
- bleu
- f1
pipeline_tag: image-text-to-text
---
# Table LLaVA Model Card

<!-- Provide a quick summary of what the model is/does. -->

Table LLaVA 7B is an open-source multimodal chatbot for understanding table images and fulfilling diverse table-related requests, e.g., question answering, table cell description, and structure understanding.

See the ACL 2024 paper for more details: [Multimodal Table Understanding](https://arxiv.org/abs/2406.08100)

## Model Details

<!-- Provide a longer summary of what this model is. -->

**Model Type:** Table LLaVA strictly follows the [LLaVA-v1.5](https://arxiv.org/abs/2310.03744) model architecture and training pipeline, with [CLIP-ViT-L-336px](https://huggingface.co/openai/clip-vit-large-patch14-336) as the visual encoder (336×336 image resolution), [Vicuna-v1.5-7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) as the base LLM, and a two-layer MLP as the vision-language connector.

It was trained with the same two-stage pipeline as LLaVA:

1. Pre-training: train the vision-language connector with image-caption data and table recognition data.
2. Instruction tuning: train the vision-language connector and the base LLM with multimodal instruction-following data covering both tabular and non-tabular tasks.
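
A conceptual sketch of this architecture is shown below. It is an illustration only, not the actual training code: it assumes the CLIP-ViT-L/14 encoder at 336×336 resolution yields 576 patch features of width 1024 and that the Vicuna-7B embedding width is 4096, and it shows how a two-layer MLP connector maps visual features into the LLM embedding space before they are concatenated with the text embeddings.

```python
import torch
import torch.nn as nn

class MLPConnector(nn.Module):
    """Two-layer MLP projector (conceptual sketch of the LLaVA-1.5-style
    vision-language connector used by Table LLaVA)."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: [batch, num_patches, vision_dim] patch features from the visual encoder
        return self.proj(vision_feats)

# Illustrative shapes: a 336x336 image gives 24x24 = 576 patch tokens from CLIP-ViT-L/14.
connector = MLPConnector()
patch_feats = torch.randn(1, 576, 1024)    # stand-in for CLIP visual features
visual_tokens = connector(patch_feats)     # [1, 576, 4096], matching the LLM embedding width
text_embeds = torch.randn(1, 32, 4096)     # stand-in for the embedded text prompt
llm_inputs = torch.cat([visual_tokens, text_embeds], dim=1)  # sequence passed to the base LLM
print(llm_inputs.shape)                    # torch.Size([1, 608, 4096])
```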

**Code Base:** We use the official code of [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) for model training and inference, and the saved model checkpoint is uploaded to this repository.
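
For quick testing, the sketch below follows the Python usage example from the LLaVA-v1.5 repository (`llava.eval.run_llava.eval_model`). It assumes that repository's `llava` package is installed and that this checkpoint has been downloaded locally; the model path, example image, and question are placeholders, and the exact argument set may differ across LLaVA versions, so treat it as a starting point rather than an official script.

```python
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Placeholder: local path to the checkpoint downloaded from this repository.
model_path = "path/to/table-llava-7b"

# Hypothetical inputs: a table screenshot and a question about it.
prompt = "Which year has the highest revenue in this table?"
image_file = "path/to/table_image.png"

# Argument object in the form expected by eval_model (mirrors the LLaVA README example).
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,          # let the loader choose the conversation template
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)  # prints the model's answer to stdout
```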

**Model Date:** Table LLaVA 7B was trained in January 2024.

**Where to send questions or comments about the model:** https://github.com/SpursGoZmy/Table-LLaVA/issues

## Training dataset

The training data combines the original LLaVA-1.5 data with specially constructed multimodal instruction-following data from the [MMTab dataset](https://huggingface.co/datasets/SpursgoZmy/MMTab), a large-scale dataset covering a wide range of table images and table-related tasks.

| Training Stage | Data Description | Data Size | Hugging Face Dataset |
| :---: | :---: | :---: | :---: |
| Pre-training | Original LLaVA-1.5 pre-training data | 558K | [blip_laion_cc_sbu_558k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) |
| | Table recognition data | 150K | [MMTab-pre_pretrain_data_llava_format_150K.json](https://huggingface.co/datasets/SpursgoZmy/MMTab) |
| Instruction Fine-tuning | Original LLaVA-1.5 fine-tuning data | 665K | [llava_v1_5_mix665k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) |
| | Multimodal instruction-tuning data for 14 tabular tasks | 232K | [MMTab-instruct_sft_data_llava_format_232K.json](https://huggingface.co/datasets/SpursgoZmy/MMTab) |

We also provide the merged pre-training and instruction fine-tuning data in the MMTab dataset, i.e., enhanced_llava_pretrain_data_708K.json and enhanced_llava_sft_data_898K.json.
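
For a quick look at these files, the sketch below assumes they follow the common LLaVA conversation JSON layout (a list of records with an `id`, an `image` file name, and a `conversations` list of human/gpt turns); verify the field names against the downloaded data.

```python
import json

# Placeholder path to one of the merged files listed above.
sft_path = "enhanced_llava_sft_data_898K.json"

with open(sft_path, "r", encoding="utf-8") as f:
    records = json.load(f)

print(f"{len(records)} training samples")

sample = records[0]
print("image:", sample.get("image"))               # table image referenced by this sample
for turn in sample.get("conversations", []):
    # Each turn is expected to look like {"from": "human" | "gpt", "value": "..."}
    print(f'{turn["from"]}: {turn["value"][:100]}')
```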

## Evaluation dataset

The model is evaluated on a collection of 17 held-in and 7 held-out tabular benchmarks covering 15 table-related tasks, e.g., table question answering and table-to-text generation. We also evaluate Table LLaVA on two non-tabular benchmarks: [TextVQA](https://textvqa.org/) and [llava-bench-in-the-wild](https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild).

## License

Table LLaVA is based on LLaVA-1.5 and thus follows its license. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use

**Primary intended uses:** The primary use of Table LLaVA is research on large multimodal models and chatbots, especially for multimodal table understanding.

**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Limitations

Though Table LLaVA demonstrates strong performance on a wide range of table-based tasks, the resolution of its input images (336×336) is relatively low and may limit the upper bound of its capacity. Fortunately, with the emergence of MLLMs that support higher input image resolutions (e.g., Monkey (Li et al., 2023d), LLaVA-Next (Liu et al., 2024)), MMTab can be used to develop more powerful tabular MLLMs in future research.