---
configs:
  - config_name: af
    data_files: af/*.jsonl
  - config_name: ar
    data_files: ar/*.jsonl
  - config_name: az
    data_files: az/*.jsonl
  - config_name: be
    data_files: be/*.jsonl
  - config_name: bg
    data_files: bg/*.jsonl
  - config_name: bn
    data_files: bn/*.jsonl
  - config_name: ca
    data_files: ca/*.jsonl
  - config_name: cs
    data_files: cs/*.jsonl
  - config_name: cy
    data_files: cy/*.jsonl
  - config_name: da
    data_files: da/*.jsonl
  - config_name: de
    data_files: de/*.jsonl
  - config_name: el
    data_files: el/*.jsonl
  - config_name: en
    data_files: en/*.jsonl
  - config_name: eo
    data_files: eo/*.jsonl
  - config_name: es
    data_files: es/*.jsonl
  - config_name: et
    data_files: et/*.jsonl
  - config_name: eu
    data_files: eu/*.jsonl
  - config_name: fa
    data_files: fa/*.jsonl
  - config_name: fi
    data_files: fi/*.jsonl
  - config_name: fr
    data_files: fr/*.jsonl
  - config_name: ga
    data_files: ga/*.jsonl
  - config_name: gl
    data_files: gl/*.jsonl
  - config_name: gu
    data_files: gu/*.jsonl
  - config_name: hbs
    data_files: hbs/*.jsonl
  - config_name: he
    data_files: he/*.jsonl
  - config_name: hi
    data_files: hi/*.jsonl
  - config_name: hu
    data_files: hu/*.jsonl
  - config_name: hy
    data_files: hy/*.jsonl
  - config_name: id
    data_files: id/*.jsonl
  - config_name: is
    data_files: is/*.jsonl
  - config_name: it
    data_files: it/*.jsonl
  - config_name: ja
    data_files: ja/*.jsonl
  - config_name: ka
    data_files: ka/*.jsonl
  - config_name: kk
    data_files: kk/*.jsonl
  - config_name: kn
    data_files: kn/*.jsonl
  - config_name: ko
    data_files: ko/*.jsonl
  - config_name: ky
    data_files: ky/*.jsonl
  - config_name: la
    data_files: la/*.jsonl
  - config_name: lt
    data_files: lt/*.jsonl
  - config_name: lv
    data_files: lv/*.jsonl
  - config_name: mk
    data_files: mk/*.jsonl
  - config_name: ml
    data_files: ml/*.jsonl
  - config_name: mn
    data_files: mn/*.jsonl
  - config_name: mr
    data_files: mr/*.jsonl
  - config_name: ms
    data_files: ms/*.jsonl
  - config_name: mt
    data_files: mt/*.jsonl
  - config_name: my
    data_files: my/*.jsonl
  - config_name: nb
    data_files: nb/*.jsonl
  - config_name: ne
    data_files: ne/*.jsonl
  - config_name: nl
    data_files: nl/*.jsonl
  - config_name: nn
    data_files: nn/*.jsonl
  - config_name: pa
    data_files: pa/*.jsonl
  - config_name: pl
    data_files: pl/*.jsonl
  - config_name: ps
    data_files: ps/*.jsonl
  - config_name: pt
    data_files: pt/*.jsonl
  - config_name: ro
    data_files: ro/*.jsonl
  - config_name: ru
    data_files: ru/*.jsonl
  - config_name: si
    data_files: si/*.jsonl
  - config_name: sk
    data_files: sk/*.jsonl
  - config_name: sl
    data_files: sl/*.jsonl
  - config_name: so
    data_files: so/*.jsonl
  - config_name: sq
    data_files: sq/*.jsonl
  - config_name: sv
    data_files: sv/*.jsonl
  - config_name: sw
    data_files: sw/*.jsonl
  - config_name: ta
    data_files: ta/*.jsonl
  - config_name: te
    data_files: te/*.jsonl
  - config_name: th
    data_files: th/*.jsonl
  - config_name: tl
    data_files: tl/*.jsonl
  - config_name: tr
    data_files: tr/*.jsonl
  - config_name: tt
    data_files: tt/*.jsonl
  - config_name: uk
    data_files: uk/*.jsonl
  - config_name: ur
    data_files: ur/*.jsonl
  - config_name: uz
    data_files: uz/*.jsonl
  - config_name: vi
    data_files: vi/*.jsonl
  - config_name: zh
    data_files: zh/*.jsonl
pretty_name: CulturaP
annotations_creators:
  - no-annotation
language_creators:
  - found
language:
  - af
  - ar
  - az
  - be
  - bg
  - bn
  - ca
  - cs
  - cy
  - da
  - de
  - el
  - en
  - eo
  - es
  - et
  - eu
  - fa
  - fi
  - fr
  - ga
  - gl
  - gu
  - hbs
  - he
  - hi
  - hu
  - hy
  - id
  - is
  - it
  - ja
  - ka
  - kk
  - kn
  - ko
  - ky
  - la
  - lt
  - lv
  - mk
  - ml
  - mn
  - mr
  - ms
  - mt
  - my
  - nb
  - ne
  - nl
  - nn
  - pa
  - pl
  - ps
  - pt
  - ro
  - ru
  - si
  - sk
  - sl
  - so
  - sq
  - sv
  - sw
  - ta
  - te
  - th
  - tl
  - tr
  - tt
  - uk
  - ur
  - uz
  - vi
  - zh
multilinguality:
  - multilingual
size_categories:
  - n<1K
  - 1K<n<10K
  - 10K<n<100K
  - 100K<n<1M
  - 1M<n<10M
  - 10M<n<100M
  - 100M<n<1B
  - 1B<n<10B
source_datasets:
  - original
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
  - masked-language-modeling
license: cc-by-4.0
---

# CulturaP: A Permissive Multilingual Dataset of 75 Languages

## Dataset Summary

From the team that brought you CulturaX and CulturaY, we present CulturaP, a filtered subset of the multilingual CulturaY dataset that we believe is more likely to be copyright-permissive and usable. CulturaY is in turn based on the HPLT v1.1 dataset. Ultimately, this dataset is based on Common Crawl and the Internet Archive.

We filtered for websites with what we believe are government domain names, domain names of international organizations such as the UN and europa.eu, and Creative Commons licensed data. While we strongly believe that fair use protects machine learning on web-crawled data in the United States, this may not be the case in other countries. Thus we have created this filtered version.

The `cc` column indicates whether we detected keywords associated with a Creative Commons license in the header or footer of the page. We have not confirmed by hand that the presence of these keywords means the page is in fact CC licensed. Please confirm for yourself the type of license you wish to use (CC-BY, CC-BY-SA, CC-BY-NC, etc.).
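As a sketch of what such keyword detection might look like, the snippet below scans a page fragment for common Creative Commons markers. The keyword pattern and function name are illustrative assumptions, not the project's actual code, which is why the resulting flag still needs manual confirmation.

```python
import re

# Illustrative Creative Commons keyword pattern: license URLs, the phrase
# "creative commons", and short license codes like "CC BY" or "CC-BY-SA".
CC_PATTERN = re.compile(
    r"creativecommons\.org/licenses"
    r"|creative\s+commons"
    r"|CC[- ]BY(?:[- ](?:SA|NC|ND))?",
    re.IGNORECASE,
)

def looks_cc_licensed(html_snippet: str) -> bool:
    """Return True if the header/footer snippet contains CC keywords."""
    return bool(CC_PATTERN.search(html_snippet))

print(looks_cc_licensed(
    '<a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a>'
))  # True
print(looks_cc_licensed("<footer>All rights reserved.</footer>"))  # False
```

Note that a keyword match does not identify *which* CC license applies (or whether the license covers the page text at all), so checking the page itself remains necessary.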

## Notes

We used `.gov/`, `.gov.*`, `.gouv.*`, and `.int/`, among other patterns, in the URL filtering. This means that a URL like `http://transcoder.usablenet.com/tt/www.ers.usda.gov/amber-waves/2006-november/new-phytosanitary-regulations-allow-higher-imports-of-avocados.aspx` will match. We believe that a reference to a government URL inside a company's URL likely means the source is also governmental data. However, you may wish to check the `url` field for strictly the types of URLs you wish to use (e.g., filter out all `.com` data from the URLs).
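A minimal sketch of this kind of substring-based URL matching, plus the stricter host-based check suggested above, assuming the patterns listed (the exact pattern set used to build the dataset may differ):

```python
import re
from urllib.parse import urlparse

# Substring patterns approximating the filters described above:
# ".gov/", ".int/" (or at end of URL), ".gov.*", and ".gouv.*".
GOV_PATTERN = re.compile(r"\.(gov|int)(/|$)|\.gov\.|\.gouv\.")

def matches_gov_pattern(url: str) -> bool:
    """True if a government-style pattern appears anywhere in the URL."""
    return bool(GOV_PATTERN.search(url))

def host_is_com(url: str) -> bool:
    """Stricter check on the URL's own host, for filtering .com proxies."""
    host = urlparse(url).hostname or ""
    return host.endswith(".com")

url = ("http://transcoder.usablenet.com/tt/www.ers.usda.gov/amber-waves/"
       "2006-november/new-phytosanitary-regulations-allow-higher-imports"
       "-of-avocados.aspx")
print(matches_gov_pattern(url))  # True: embedded .gov path segment matches
print(host_is_com(url))          # True: yet the actual host is a .com proxy
```

The example URL from the paragraph above illustrates the trade-off: substring matching catches government content served through proxies, while the host-based check lets you exclude such `.com` hosts if you prefer a stricter filter.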

## Disclaimer

This dataset is provided for research purposes only. We make no claims to ownership of the data, and we release our annotations under ODC-By because that was the original RefinedWeb license. We believe usage of web-crawled data is permitted under fair use, but you use this dataset at your own risk, and we disclaim all warranties and liabilities, including warranties of non-infringement. This dataset and datacard are not legal advice.

NO PERSONALLY IDENTIFIABLE INFORMATION REDACTION WAS PERFORMED. ALL SUCH INFORMATION IS FROM THE ORIGINAL DATASET AND THE CORRESPONDING WEB PAGES.

## Contact Us

Please reach out to us in the discussion link above if you have questions.

Our annotations and arrangements are licensed under CC-BY-4.0, and we make the data available for fair-use machine learning research. However, we make no claims as to the underlying copyrights of the works. This data was copied from the HPLT project, which in turn used data from Common Crawl and the Internet Archive.

## Acknowledgement

We thank our collaborators at UONLP, the Natural Language Processing Group at the University of Oregon, and the managers of the Karolina supercomputer for computing resources. We also thank our friends at TurkuNLP for their support.

## Data Breakdown

There are 75 languages, with the following breakdown:

TODO

## Dataset Structure

The dataset has a total of 7 columns, including:

- `text` and `url`, the two main columns of this dataset.
- The remaining columns (`id`, `document_lang`, `scores`, and `langs`), which belong to the original documents in the HPLT v1.1 dataset; they are retained for debugging purposes and will be removed in the future.

Therefore, when using the dataset, please rely only on the two columns `text` and `url`.
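For example, a record from one of the per-language `*.jsonl` files could be reduced to the two supported columns like this (the sample record below is illustrative, not taken from the dataset):

```python
import json

# A made-up JSONL record with the column layout described above.
record_line = json.dumps({
    "id": "doc-0",
    "document_lang": "en",
    "scores": [0.9],
    "langs": ["en"],
    "url": "https://example.gov/page",
    "text": "Example document text.",
})

record = json.loads(record_line)
# Keep only the supported columns; the rest are debug fields that
# may be removed from the dataset in the future.
kept = {key: record[key] for key in ("text", "url")}
print(kept)  # {'text': 'Example document text.', 'url': 'https://example.gov/page'}
```

Dropping the debug columns up front keeps downstream code from depending on fields that are slated for removal.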

## Process for Creating CulturaP

TODO: We will update this section to describe our curation method.

## Citation

To cite CulturaP, please use:

@misc{nguyen2024culturap,
      title={CulturaP: A Permissive Multilingual Dataset of 75 Languages}, 
      author={Huu Nguyen and Thuat Nguyen and Ken Tsui and Thien Nguyen},
      year={2024},
}