HenryCRM committed
Commit e98f092 • 1 Parent(s): 80e34f9

Update README.md

Files changed (1)
  1. README.md +361 -3
README.md CHANGED (previously the file contained only the `license: apache-2.0` front matter; the full updated README follows)

---
language:
- en
tags:
- dataset
- ocr
- multimodal
- vision
- image-text-to-text
datasets:
- custom
license: apache-2.0
---

# BLIP3-OCR-200M Dataset

## Overview

The **BLIP3-OCR-200M** dataset is designed to address the limitations of current Vision-Language Models (VLMs) in processing and interpreting text-rich images, such as documents and charts. Traditional image-text datasets often struggle to capture nuanced textual information, which is crucial for tasks requiring complex text comprehension and reasoning.

<img src="blip3_ocr_200m_examples/blip3_ocr_200m.png" alt="Art" width=600>
<!-- <img src="blip3_ocr_200m.png" alt="Art" width=500> -->

### Key Features
- **OCR Integration**: The dataset incorporates Optical Character Recognition (OCR) data during the pre-training phase of VLMs. This integration enhances vision-language alignment by providing detailed textual information alongside visual data.
- **Text-Rich Content**: The dataset focuses on text-rich images and includes OCR-specific annotations, enabling more accurate handling of content like documents, charts, and other text-heavy visuals.
- **Parquet Format**: The dataset is stored in Parquet format, facilitating efficient storage, processing, and retrieval of OCR metadata and images. This format is well-suited for handling large-scale datasets and can be easily integrated into data processing pipelines.

### Purpose
The primary goal of the BLIP3-OCR-200M dataset is to improve the cross-modality reasoning capabilities of VLMs by enriching the pre-training data with detailed textual information. This approach significantly enhances the performance of VLMs on tasks involving complex text-rich images, advancing the field of vision-language understanding.

## Key Statistics

- **Total Files**: 50 Parquet files.
- **Data Format**: Parquet (each file containing OCR-related metadata).
- **Sample Count**: Approximately 2 million.
- **Purpose**: This dataset is intended to enhance vision-language model performance on text-rich content.

## Data Structure

### File Structure

The dataset is organized into multiple Parquet files, each corresponding to a different folder from the original dataset. Each Parquet file contains flattened and cleaned data for efficient storage and retrieval.

```plaintext
./
├── 001.parquet
├── 002.parquet
├── 003.parquet
└── ... (up to 050.parquet)
```

### Data Schema

Each Parquet file contains a tabular structure with the following columns (a short decoding example follows the list):

- **`uid`**: Unique identifier for each image.
- **`url`**: URL where the image was originally obtained (if available).
- **`key`**: Unique key associated with each sample.
- **`face_bboxes`**: Flattened JSON string containing face bounding box coordinates (if available).
- **`captions`**: A flattened JSON string containing OCR-based captions. Each caption object includes the following key entries:
  - **text**: The OCR-extracted text from the image.
  - **granularity**: An integer indicating the level of detail in the OCR annotation. There are 12 annotation levels in total, described later.
  - **include_datacomp_raw_cap**: A boolean flag indicating whether the original raw caption from the DataComp dataset is included.
- **`metadata`**: A flattened JSON string holding intermediate metadata from the OCR process, with the following key entries:
  - **length**: The total number of OCR text entries detected in the image.
  - **entries**: An array of objects, each representing a detected token with its bounding box (`bbox`), OCR text (`text`), and confidence score (`confidence`).
  - **uid**: A unique identifier for the OCR data associated with each image.
  - **ocr_num_token_larger_than_confidence_threshold**: The number of OCR tokens whose confidence exceeds a specified threshold (0.9 in this case).

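Because `captions` and `metadata` are stored as flattened JSON strings, they need to be decoded before use. The snippet below is a minimal sketch (the file path is illustrative) of decoding these two columns for a single row:

```python
import json

import pandas as pd

# Illustrative path; point this at wherever you stored the Parquet shards.
df = pd.read_parquet("./blip3-ocr-200m/001.parquet")
row = df.iloc[0]

# Decode the flattened JSON columns into Python objects.
captions = json.loads(row["captions"])   # list of caption dicts
metadata = json.loads(row["metadata"])   # dict with OCR entries

print(row["uid"], row["url"])
print(f"{len(captions)} caption variants, {metadata['length']} OCR tokens")

# Each caption variant carries its granularity and raw-caption flag.
for cap in captions:
    print(cap["granularity"], cap["include_datacomp_raw_cap"], cap["text"][:80])
```
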
### Downloading the Original DataComp Images

If you want to download the original images for each sample based on the `url` entry in the dataset, you can use the [`img2dataset`](https://github.com/rom1504/img2dataset) tool, which efficiently downloads images from URLs and stores them in a specified format. You can also refer to the script provided by the DataComp project for downloading images, available [here](https://github.com/mlfoundations/datacomp/blob/main/download_upstream.py).

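For illustration only (this is not part of the dataset tooling, and the option names should be checked against your installed `img2dataset` version), a Parquet shard can be passed to `img2dataset` so that images are fetched from the `url` column:

```python
from img2dataset import download

# Minimal sketch: fetch the images referenced by the `url` column of one shard.
# All paths and option values below are illustrative, not prescriptive.
download(
    url_list="./blip3-ocr-200m/001.parquet",
    input_format="parquet",
    url_col="url",
    output_folder="./blip3-ocr-200m-images",
    output_format="files",
    image_size=512,
    processes_count=8,
    thread_count=32,
)
```
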
### Example of Loading and Processing the Data
You can load the data directly with the `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("Salesforce/blip3-ocr-200m")
```

Alternatively, you can download the Parquet files and load them with Pandas:

```python
# Download the Salesforce/blip3-ocr-200m dataset
import json
import os

# Enables faster downloads; requires the `hf_transfer` package (remove if not installed).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi

# Specify the custom download directory
custom_download_dir = "./blip3-ocr-200m"

hf = HfApi()
hf.snapshot_download("Salesforce/blip3-ocr-200m", repo_type="dataset", local_dir=custom_download_dir, local_dir_use_symlinks=False)

import pandas as pd

# Load a Parquet file into a DataFrame
df = pd.read_parquet("./blip3-ocr-200m/001.parquet")

# Display the first few rows
print(df.head())

# Count the number of rows in the DataFrame
print(f"Number of samples: {len(df)}")

# Parse the flattened JSON columns
df['captions'] = df['captions'].apply(json.loads)
df['metadata'] = df['metadata'].apply(json.loads)

# Access specific fields
print(df[['uid', 'url', 'captions']].head())
```

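If you prefer not to download full shards up front, the `datasets` library can also stream samples. This is a minimal sketch; the `train` split name is an assumption, and the JSON columns still need to be decoded per sample:

```python
import json

from datasets import load_dataset

# Stream samples instead of downloading every Parquet shard first.
ds = load_dataset("Salesforce/blip3-ocr-200m", split="train", streaming=True)

sample = next(iter(ds))
captions = json.loads(sample["captions"])
print(sample["uid"], captions[0]["text"])
```
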
### Sample JSON Metadata

Below is an example of what the JSON metadata might look like for each image:

```json
{
  "uid": "ff2dcbbdc58426e1dcebac49786b6660",
  "url": "https://img.ev01.to/xxrz/250x400/183/bc/57/bc5793c416cd6d457410be3a39ebb717/bc5793c416cd6d457410be3a39ebb717.jpg",
  "key": "000000001133",
  "face_bboxes": [],
  "captions": [
    {
      "text": "The image contains the text \"THE\", the text \"PATRiCK\", the text \"STAR\", the text \"SHOW\"",
      "granularity": 0,
      "include_datacomp_raw_cap": false
    },
    {
      "text": "The image contains the text \"THE\" located above the center, the text \"PATRiCK\" located above the center, the text \"STAR\" located above the center, the text \"SHOW\" located above the center",
      "granularity": 1,
      "include_datacomp_raw_cap": false
    },
    {
      "text": "The image contains the text \"THE\" located at 1/14 of the image from top to bottom and 1/2 of the image from left to right, the text \"PATRiCK\" located at 1/7 of the image from top to bottom and 10/19 of the image from left to right, the text \"STAR\" located at 1/4 of the image from top to bottom and 10/19 of the image from left to right, the text \"SHOW\" located at 6/19 of the image from top to bottom and 1/2 of the image from left to right",
      "granularity": 2,
      "include_datacomp_raw_cap": false
    },
    {
      "text": "The image contains the text \"THE\" approximately starts at [0.4, 0.0] and extends upto [0.6, 0.1], the text \"PATRiCK\" approximately starts at [0.2, 0.1] and extends upto [0.8, 0.2], the text \"STAR\" approximately starts at [0.3, 0.2] and extends upto [0.7, 0.3], the text \"SHOW\" approximately starts at [0.4, 0.3] and extends upto [0.6, 0.4]",
      "granularity": 3,
      "include_datacomp_raw_cap": false
    },
    {
      "text": "The image contains the text \"THE\" starts at [0.42, 0.045] and extends upto [0.585, 0.101], the text \"PATRiCK\" starts at [0.237, 0.089] and extends upto [0.799, 0.202], the text \"STAR\" starts at [0.308, 0.182] and extends upto [0.728, 0.31], the text \"SHOW\" starts at [0.371, 0.289] and extends upto [0.647, 0.357]",
      "granularity": 4,
      "include_datacomp_raw_cap": false
    },
    {
      "text": "The image contains the <ocr>\"THE</ocr><bbox>[0.42, 0.045][0.585, 0.101]</bbox>, the <ocr>\"PATRiCK</ocr><bbox>[0.237, 0.089][0.799, 0.202]</bbox>, the <ocr>\"STAR</ocr><bbox>[0.308, 0.182][0.728, 0.31]</bbox>, the <ocr>\"SHOW</ocr><bbox>[0.371, 0.289][0.647, 0.357]</bbox>",
      "granularity": 5,
      "include_datacomp_raw_cap": false
    },
    {
      "text": "The Patrick Star Show. The image contains the text \"THE\", the text \"PATRiCK\", the text \"STAR\", the text \"SHOW\"",
      "granularity": 0,
      "include_datacomp_raw_cap": true
    },
    {
      "text": "The Patrick Star Show. The image contains the text \"THE\" located above the center, the text \"PATRiCK\" located above the center, the text \"STAR\" located above the center, the text \"SHOW\" located above the center",
      "granularity": 1,
      "include_datacomp_raw_cap": true
    },
    {
      "text": "The Patrick Star Show. The image contains the text \"THE\" located at 1/14 of the image from top to bottom and 1/2 of the image from left to right, the text \"PATRiCK\" located at 1/7 of the image from top to bottom and 10/19 of the image from left to right, the text \"STAR\" located at 1/4 of the image from top to bottom and 10/19 of the image from left to right, the text \"SHOW\" located at 6/19 of the image from top to bottom and 1/2 of the image from left to right",
      "granularity": 2,
      "include_datacomp_raw_cap": true
    },
    {
      "text": "The Patrick Star Show. The image contains the text \"THE\" approximately starts at [0.4, 0.0] and extends upto [0.6, 0.1], the text \"PATRiCK\" approximately starts at [0.2, 0.1] and extends upto [0.8, 0.2], the text \"STAR\" approximately starts at [0.3, 0.2] and extends upto [0.7, 0.3], the text \"SHOW\" approximately starts at [0.4, 0.3] and extends upto [0.6, 0.4]",
      "granularity": 3,
      "include_datacomp_raw_cap": true
    },
    {
      "text": "The Patrick Star Show. The image contains the text \"THE\" starts at [0.42, 0.045] and extends upto [0.585, 0.101], the text \"PATRiCK\" starts at [0.237, 0.089] and extends upto [0.799, 0.202], the text \"STAR\" starts at [0.308, 0.182] and extends upto [0.728, 0.31], the text \"SHOW\" starts at [0.371, 0.289] and extends upto [0.647, 0.357]",
      "granularity": 4,
      "include_datacomp_raw_cap": true
    },
    {
      "text": "The Patrick Star Show. The image contains the <ocr>\"THE</ocr><bbox>[0.42, 0.045][0.585, 0.101]</bbox>, the <ocr>\"PATRiCK</ocr><bbox>[0.237, 0.089][0.799, 0.202]</bbox>, the <ocr>\"STAR</ocr><bbox>[0.308, 0.182][0.728, 0.31]</bbox>, the <ocr>\"SHOW</ocr><bbox>[0.371, 0.289][0.647, 0.357]</bbox>",
      "granularity": 5,
      "include_datacomp_raw_cap": true
    }
  ],
  "metadata": {
    "length": 4,
    "entries": [
      {
        "bbox": [
          [0.41964285714285715, 0.044642857142857144],
          [0.5848214285714286, 0.044642857142857144],
          [0.5848214285714286, 0.10119047619047619],
          [0.41964285714285715, 0.10119047619047619]
        ],
        "text": "THE",
        "confidence": 0.9961821436882019
      },
      {
        "bbox": [
          [0.23660714285714285, 0.08928571428571429],
          [0.7991071428571429, 0.08928571428571429],
          [0.7991071428571429, 0.20238095238095238],
          [0.23660714285714285, 0.20238095238095238]
        ],
        "text": "PATRiCK",
        "confidence": 0.9370164275169373
      },
      {
        "bbox": [
          [0.3080357142857143, 0.18154761904761904],
          [0.7366071428571429, 0.19940476190476192],
          [0.7276785714285714, 0.30952380952380953],
          [0.29910714285714285, 0.29464285714285715]
        ],
        "text": "STAR",
        "confidence": 0.9967861771583557
      },
      {
        "bbox": [
          [0.3705357142857143, 0.28869047619047616],
          [0.6473214285714286, 0.28869047619047616],
          [0.6473214285714286, 0.35714285714285715],
          [0.3705357142857143, 0.35714285714285715]
        ],
        "text": "SHOW",
        "confidence": 0.9861563444137573
      }
    ],
    "uid": "datacomp_ocr_ff2dcbbdc58426e1dcebac49786b6660",
    "ocr_num_token_larger_than_confidence_threshold": 4
  }
}
```

The corresponding image with its OCR-related caption is shown below:

<img src="blip3_ocr_200m_examples/blip3_OCR_000000001133.png" alt="Art" width=2000>

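To sanity-check the OCR annotations visually, the normalized corner coordinates in `metadata.entries` can be drawn onto the original image. The sketch below assumes you have already fetched the image for a sample to a local path (`image_path` is illustrative), e.g. via `img2dataset` as described above:

```python
import json

import pandas as pd
from PIL import Image, ImageDraw

df = pd.read_parquet("./blip3-ocr-200m/001.parquet")
metadata = json.loads(df.iloc[0]["metadata"])

# Illustrative path to the image downloaded from the sample's `url`.
image_path = "example.jpg"
image = Image.open(image_path).convert("RGB")
draw = ImageDraw.Draw(image)
width, height = image.size

# Each bbox is four (x, y) corners normalized to [0, 1]; scale to pixels and draw.
for entry in metadata["entries"]:
    polygon = [(x * width, y * height) for x, y in entry["bbox"]]
    draw.polygon(polygon, outline="red")
    draw.text(polygon[0], f'{entry["text"]} ({entry["confidence"]:.2f})', fill="red")

image.save("example_with_ocr.png")
```
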
## OCR Annotation Levels
The BLIP3-OCR-200M dataset provides OCR annotations at 12 different levels of granularity, leveraging the off-the-shelf OCR engine [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR). The bounding boxes for the detected text are represented by the four corners of the text box, as detailed in [PaddleOCR’s documentation](https://github.com/PaddlePaddle/PaddleOCR/blob/e9fca96d4f66f59d620470143d6d666d58ef5b81/docs/datasets/ocr_datasets.en.md). These levels are designed to capture varying degrees of detail in the text content extracted from images, allowing researchers to explore how different levels of textual granularity and the inclusion of the original raw caption affect the performance of Vision-Language Models (VLMs).

### Levels 1-6: Different OCR Granularities
The first six levels of annotation focus solely on the OCR text extracted from the images, providing varying levels of detail:

1. **Level 1 (Granularity 0)**:
   - **Basic Text Extraction**: This level provides a straightforward extraction of the main text present in the image. It focuses on the most prominent textual content without specifying its location or context within the image.
   - **Use Case**: Useful for tasks where only the presence of specific text is important, regardless of its position or structure.

2. **Level 2 (Granularity 1)**:
   - **Text with General Location**: This level adds a general indication of where the text is located within the image, such as "top-right corner" or "bottom-left corner."
   - **Use Case**: Beneficial for applications where understanding the approximate position of text within the image adds context to its meaning.

3. **Level 3 (Granularity 2)**:
   - **Text with Detailed Positioning**: At this level, the text is annotated with more precise positional information, describing specific areas within the image (e.g., "7/10 from top to bottom, 2/5 from left to right").
   - **Use Case**: Ideal for scenarios where the exact location of text within the image plays a crucial role, such as in document layout analysis.

4. **Level 4 (Granularity 3)**:
   - **Text with Bounding Box Coordinates**: This level provides bounding box coordinates that outline the area where the text is located, offering a more spatially precise annotation.
   - **Use Case**: Useful for tasks involving text region detection, where knowing the exact spatial boundaries of text is necessary.

5. **Level 5 (Granularity 4)**:
   - **Text with Precise Bounding Boxes**: Similar to Level 4 but with even more precise bounding box information, capturing subtle details of text placement within the image.
   - **Use Case**: Suitable for fine-grained text detection tasks, such as identifying specific words or characters within a tightly constrained area.

6. **Level 6 (Granularity 5)**:
   - **Text with OCR Tags and Bounding Boxes**: This level includes OCR tags combined with precise bounding box coordinates, marking both the text and its spatial characteristics within the image.
   - **Use Case**: Best for detailed image-text alignment tasks where both the text content and its exact spatial metadata are required for accurate interpretation.

### Levels 7-12: Different OCR Granularities with Raw Captions

The next six levels combine the OCR annotations described above with the original raw captions from the dataset. This allows researchers to explore how the inclusion of the raw caption, alongside the OCR data, impacts model performance. A small helper for selecting a caption at a given level is sketched after this list.

7. **Level 7 (Granularity 0 + Raw Caption)**:
   - **Basic Text Extraction with Raw Caption**: Combines the basic OCR text with the original raw caption paired with the image.
   - **Use Case**: Enables comparison between raw text content and the basic OCR output, providing insights into how models handle raw versus processed text.

8. **Level 8 (Granularity 1 + Raw Caption)**:
   - **Text with General Location and Raw Caption**: This level integrates the OCR text with general location information and the raw caption.
   - **Use Case**: Useful for understanding how location context and raw text influence model interpretations and performance.

9. **Level 9 (Granularity 2 + Raw Caption)**:
   - **Text with Detailed Positioning and Raw Caption**: Provides detailed positional information along with the raw caption, combining structured OCR data with the unprocessed text.
   - **Use Case**: Ideal for evaluating models that require both detailed text placement and raw text content for tasks like document processing.

10. **Level 10 (Granularity 3 + Raw Caption)**:
    - **Text with Bounding Box Coordinates and Raw Caption**: Merges the bounding box annotations with the raw caption, offering spatial context alongside the original text.
    - **Use Case**: Suitable for tasks where spatial data and raw text content are both critical, such as in multi-modal information retrieval.

11. **Level 11 (Granularity 4 + Raw Caption)**:
    - **Text with Precise Bounding Boxes and Raw Caption**: Enhances the precise bounding box annotations with the raw caption, providing detailed spatial-textual information.
    - **Use Case**: Best for high-precision tasks where exact text boundaries and raw text are both important, such as in OCR refinement.

12. **Level 12 (Granularity 5 + Raw Caption)**:
    - **Text with OCR Tags, Bounding Boxes, and Raw Caption**: The most detailed level, combining OCR tags, precise bounding boxes, and the raw caption.
    - **Use Case**: Ideal for comprehensive multi-modal tasks that require complete text and spatial context, integrating all available data for model training or evaluation.

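The 12 levels therefore correspond to the six granularity values (0-5) crossed with the `include_datacomp_raw_cap` flag. A small helper such as the hypothetical `get_caption` below (a sketch, not part of the dataset tooling) selects the caption for a particular level:

```python
import json

import pandas as pd


def get_caption(captions, granularity, include_raw_cap):
    """Return the caption text for one of the 12 annotation levels.

    `captions` is the decoded list from the `captions` column; `granularity` is 0-5,
    and `include_raw_cap` selects levels 7-12 (True) versus levels 1-6 (False).
    """
    for cap in captions:
        if cap["granularity"] == granularity and cap["include_datacomp_raw_cap"] == include_raw_cap:
            return cap["text"]
    return None


df = pd.read_parquet("./blip3-ocr-200m/001.parquet")
captions = json.loads(df.iloc[0]["captions"])

# Level 4 (granularity 3, OCR only) vs. Level 10 (granularity 3 plus the raw caption).
print(get_caption(captions, granularity=3, include_raw_cap=False))
print(get_caption(captions, granularity=3, include_raw_cap=True))
```
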

### OCR Caption Data Schema
Each sample's `captions` field includes up to 12 caption variants (six granularity levels, each with and without the raw caption), with the following structure:

```plaintext
captions:
├── 'caption_1'
│   ├── 'text': 'image contains text X, text Y and text Z'
│   ├── 'granularity': 0
│   ├── 'include_datacomp_raw_cap': False
├── 'caption_2'
│   ├── 'text': 'image contains text X located at the top-right corner, text Y ...'
│   ├── 'granularity': 1
│   ├── 'include_datacomp_raw_cap': False
├── 'caption_3'
│   ├── 'text': 'image contains text X located 7/10 from the top to bottom and 2/5 from left to right ...'
│   ├── 'granularity': 2
│   ├── 'include_datacomp_raw_cap': False
├── ...
├── 'caption_7'
│   ├── 'text': <datacomp_raw_caption> + 'image contains text X, text Y and text Z'
│   ├── 'granularity': 0
│   ├── 'include_datacomp_raw_cap': True
├── ...
├── 'caption_12'
│   ├── 'text': <datacomp_raw_caption> + 'image contains <ocr>X</ocr><bbox>[0.113,0.222],[0.112,0.411]</bbox>'
│   ├── 'granularity': 5
│   ├── 'include_datacomp_raw_cap': True
```

In the previous example, the `<datacomp_raw_caption>` is 'The Patrick Star Show.'

### Exploring the Annotations
Researchers are encouraged to explore the performance of Vision-Language Models (VLMs) across these different levels of granularity. By experimenting with various combinations of OCR detail and the inclusion of raw captions, it is possible to gain insights into how different types of textual information influence model learning and decision-making processes.

This structured approach allows for fine-grained analysis of VLMs' capabilities in handling text-rich images, providing a valuable resource for advancing research in the field of multimodal machine learning.

## License
We release BLIP3-OCR-200M under the Apache 2.0 license, designating it primarily as a research artifact; the dataset is being released for research purposes only. This repository includes the text extracted from the underlying images. It is the user's responsibility to check and/or obtain the proper rights to use any of the images from the original dataset.

## Citation

```bibtex
@article{blip3-xgenmm,
  author        = {Le Xue and Manli Shu and Anas Awadalla and Jun Wang and An Yan and Senthil Purushwalkam and Honglu Zhou and Viraj Prabhu and Yutong Dai and Michael S Ryoo and Shrikant Kendre and Jieyu Zhang and Can Qin and Shu Zhang and Chia-Chih Chen and Ning Yu and Juntao Tan and Tulika Manoj Awalgaonkar and Shelby Heinecke and Huan Wang and Yejin Choi and Ludwig Schmidt and Zeyuan Chen and Silvio Savarese and Juan Carlos Niebles and Caiming Xiong and Ran Xu},
  title         = {xGen-MM (BLIP-3): A Family of Open Large Multimodal Models},
  journal       = {arXiv preprint},
  month         = {August},
  year          = {2024},
  eprint        = {2408.08872},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2408.08872},
}
```