POINTS: Improving Your Vision-language Model with Affordable Strategies
Abstract
In recent years, vision-language models have made significant strides, excelling in tasks like optical character recognition and geometric problem-solving. However, several critical issues remain: 1) Proprietary models often lack transparency about their architectures, while open-source models need more detailed ablations of their training strategies. 2) Pre-training data in open-source works is under-explored, with datasets added empirically, making the process cumbersome. 3) Fine-tuning often focuses on adding datasets, leading to diminishing returns. To address these issues, we propose the following contributions: 1) We trained a robust baseline model using the latest advancements in vision-language models, introducing effective improvements and conducting comprehensive ablation and validation for each technique. 2) Inspired by recent work on large language models, we filtered pre-training data using perplexity, selecting the lowest perplexity data for training. This approach allowed us to train on a curated 1M dataset, achieving competitive performance. 3) During visual instruction tuning, we used model soup on different datasets when adding more datasets yielded marginal improvements. These innovations resulted in a 9B parameter model that performs competitively with state-of-the-art models. Our strategies are efficient and lightweight, making them easily adoptable by the community.
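The abstract's second contribution is perplexity-based filtering of pre-training data: score each sample with a language model and keep only the lowest-perplexity fraction. As a minimal sketch of that idea (not the authors' code), assuming per-token log-probabilities have already been produced by some scoring model upstream, the selection step could look like this; `filter_by_perplexity` and the toy corpus are hypothetical names for illustration:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities: exp of the mean NLL."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def filter_by_perplexity(samples, keep_fraction=0.2):
    """Keep the keep_fraction of samples with the lowest perplexity.

    `samples` is a list of (sample_id, token_logprobs) pairs; in practice
    the log-probabilities would come from scoring each pre-training
    caption with a language model.
    """
    scored = sorted(samples, key=lambda s: perplexity(s[1]))
    k = max(1, int(len(scored) * keep_fraction))
    return [sample_id for sample_id, _ in scored[:k]]

# Toy example: more negative log-probabilities mean higher perplexity.
corpus = [
    ("a", [-0.1, -0.2, -0.1]),   # fluent text, low perplexity
    ("b", [-2.0, -3.0, -2.5]),   # noisy text, high perplexity
    ("c", [-0.5, -0.4, -0.6]),
    ("d", [-1.5, -1.0, -1.2]),
]
print(filter_by_perplexity(corpus, keep_fraction=0.5))  # ['a', 'c']
```

Ranking by perplexity rather than thresholding on an absolute value keeps the selected set a fixed size (e.g. the curated 1M subset mentioned above) regardless of the scoring model's calibration.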
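The third contribution applies model soup during visual instruction tuning: instead of adding ever more datasets to one training run, fine-tune separate checkpoints on different dataset mixtures and average their weights. A minimal sketch of a uniform soup follows; it uses plain floats in place of tensors, and the names are illustrative, not from the paper:

```python
def model_soup(state_dicts):
    """Uniform model soup: average each parameter across checkpoints.

    Each state dict maps parameter names to values; here plain floats
    stand in for the tensors a real checkpoint would hold.
    """
    n = len(state_dicts)
    return {name: sum(sd[name] for sd in state_dicts) / n
            for name in state_dicts[0]}

# Three checkpoints fine-tuned on different instruction-data mixtures.
ckpts = [
    {"w": 1.0, "b": 0.0},
    {"w": 2.0, "b": 0.5},
    {"w": 3.0, "b": 1.0},
]
print(model_soup(ckpts))  # {'w': 2.0, 'b': 0.5}
```

The averaged model costs nothing extra at inference time, which is why souping is attractive once adding further datasets to a single run yields only marginal gains.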
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Building and better understanding vision-language models: insights and future directions (2024)
- SynthVLM: High-Efficiency and High-Quality Synthetic Data for Vision Language Models (2024)
- IAA: Inner-Adaptor Architecture Empowers Frozen Large Language Model with Multimodal Capabilities (2024)
- ParGo: Bridging Vision-Language with Partial and Global Views (2024)
- EVLM: An Efficient Vision-Language Model for Visual Understanding (2024)
It says there's an appendix in the paper, but there doesn't seem to be one. Where can I find the appendix?
I will update this paper as soon as possible.
Thank you for your interest in our paper; the arXiv paper has been updated.
Thanks for the fast update!