
πŸƒ MINT-1T:
Scaling Open-Source Multimodal Data by 10x:
A Multimodal Dataset with One Trillion Tokens

πŸƒ MINT-1T is an open-source Multimodal INTerleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. πŸƒ MINT-1T is designed to facilitate research in multimodal pretraining. πŸƒ MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.

You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump CC-2023-40. For other PDF, HTML, and ArXiv subsets, refer to the 🍃 MINT-1T collection.
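For quick inspection, here is a minimal sketch of streaming this subset with the Hugging Face `datasets` library; the `train` split name and the record schema are assumptions, so check the repository's file layout before relying on them:

```python
from datasets import load_dataset

# Stream the CC-2023-40 PDF subset instead of downloading every shard.
# The "train" split name is an assumption; adjust it to the repo layout.
ds = load_dataset(
    "mlfoundations/MINT-1T-PDF-CC-2023-40",
    split="train",
    streaming=True,
)

for doc in ds.take(1):
    # Field names depend on the shard schema; print them to inspect.
    print(doc.keys())
```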

Updates

9/19/24

We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.

8/8/24

We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.

Dataset Details

Dataset Sources

Uses

Direct Use

πŸƒ MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reson about interleaved text and images sequences such as Idefics2, XGen-MM, and Chameleon.

Out-of-Scope Use

πŸƒ MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content) as well as military applications are all inappropriate use cases of πŸƒ MINT-1T.

Dataset Creation

Curation Rationale

πŸƒ MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.

Source Data

The dataset is a comprehensive collection of multimodal documents from various sources:

  • HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
  • PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
  • ArXiv documents: A subset of papers from the ArXiv repository

In total, πŸƒ MINT-1T contains 1056.8 million documents, broken down as follows:

  • 1029.4 million HTML documents
  • 24.0 million PDF documents
  • 0.6 million ArXiv documents

Data Collection and Processing

The data collection and processing involved several steps:

  1. Document Extraction:

    • HTML documents were parsed from CommonCrawl WARC files
    • PDF documents were extracted from CommonCrawl WAT files
    • ArXiv papers were directly sourced from ArXiv S3 buckets
  2. Filtering Process:

    • Applied text quality filters to ensure content relevance and readability
    • Removed duplicate content at both paragraph and document levels
    • Filtered out undesirable content based on predefined criteria
    • Verified image availability and quality for HTML documents
    • Limited PDF size to 50MB and 50 pages to manage dataset size and quality
  3. Image Processing:

    • Used NSFW image detection to remove pornographic or otherwise undesirable images
    • Removed images smaller than 150 pixels or larger than 20,000 pixels
    • Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures (see the sketch of these image filters after this list)
  4. Text Processing:

    • Used fasttext for language identification, focusing on English content
    • Masked personally identifiable information such as email addresses and IP addresses
    • Applied paragraph and document-level deduplication using Bloom filters
  5. PDF Specific Processing:

    • Used PyMuPDF for parsing PDFs and extracting reading order
    • Clustered text blocks based on columns and ordered from top left to bottom right
  6. ArXiv Specific Processing:

    • Used TexSoup to parse LaTeX source code and interleave images with text
    • Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
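
A minimal sketch of the image filters from step 3, assuming Pillow for reading dimensions; the helper name and the reading of the pixel thresholds as side lengths are illustrative assumptions rather than the exact implementation:

```python
from PIL import Image

# Thresholds stated on this card; whether they apply per side or per
# longest dimension is an assumption made for this sketch.
MIN_SIDE, MAX_SIDE = 150, 20_000
ASPECT_LIMIT = {"html": 2.0, "pdf": 3.0}  # max aspect ratio per source

def keep_image(path: str, source: str) -> bool:
    """Return True if an image passes the size and aspect-ratio filters."""
    with Image.open(path) as im:
        w, h = im.size
    if min(w, h) < MIN_SIDE or max(w, h) > MAX_SIDE:
        return False
    return max(w, h) / min(w, h) <= ASPECT_LIMIT[source]
```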

Various open-source tools were used in this process, including fasttext and PyMuPDF, with DCLM and bff used for deduplication and content filtering.
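
As an illustration of the PDF parsing in step 5, here is a sketch using PyMuPDF to extract text blocks and order them from top left to bottom right; the column clustering described above is simplified here to a plain coordinate sort:

```python
import fitz  # PyMuPDF

def ordered_blocks(pdf_path: str):
    """Yield text blocks per page, sorted top-left to bottom-right.

    A simplified stand-in for column-aware ordering: real multi-column
    layouts would first be clustered by x-coordinate.
    """
    with fitz.open(pdf_path) as doc:
        for page in doc:
            # Each block is (x0, y0, x1, y1, text, block_no, block_type).
            for block in sorted(page.get_text("blocks"), key=lambda b: (b[1], b[0])):
                yield block[4]
```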

Personal and Sensitive Information

Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:

  • Email addresses and IP addresses were masked to protect privacy (a sketch of this masking follows this list)
  • An NSFW image classifier was used to remove inappropriate visual content
  • URLs containing substrings associated with undesirable or sensitive content were filtered out
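
A minimal sketch of the masking step above, using regular expressions to swap email and IPv4 addresses for placeholder tokens; the patterns and placeholders are illustrative assumptions, not the exact rules used to build the dataset:

```python
import re

# Illustrative patterns; the card states only that emails and IPs were
# masked, not which expressions or replacement tokens were used.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mask_pii(text: str) -> str:
    """Replace email addresses and IPv4 addresses with placeholders."""
    return IPV4_RE.sub("<IP>", EMAIL_RE.sub("<EMAIL>", text))

print(mask_pii("Contact admin@example.com from 192.168.0.1"))
# -> Contact <EMAIL> from <IP>
```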

However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.

Bias, Risks, and Limitations

Several potential biases, risks, and limitations have been identified:

  1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.

  2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.

  3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.

  4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.

  5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.

Recommendations

Given these considerations, the following recommendations are provided:

  1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.

  2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.

  3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.

  4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.

License

We release πŸƒ MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.

Citation

@article{awadalla2024mint1t,
      title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens}, 
      author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
      year={2024}
}