---
pretty_name: CulturaP
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- ga
- gl
- gu
- hbs
- he
- hi
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- kn
- ko
- ky
- la
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nb
- ne
- nl
- nn
- pa
- pl
- ps
- pt
- ro
- ru
- si
- sk
- sl
- so
- sq
- sv
- sw
- ta
- te
- th
- tl
- tr
- tt
- uk
- ur
- uz
- vi
- zh
multilinguality:
- multilingual
license: cc-by-4.0
---

## CulturaP: A Permissive Multilingual Dataset of 75 Languages

*** NOTE: We are currently uploading the data. We are having some issues with the upload, so please come back later today, June 6, for updates. ***

### Dataset Summary

From the team that brought you [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) and [CulturaY](https://huggingface.co/datasets/ontocord/CulturaY), we present CulturaP, a filtered subset of the multilingual dataset CulturaY that we believe is more likely to be copyright-permissive and usable. CulturaY is in turn based on the [HPLT v1.1](https://hplt-project.org/datasets/v1.1) dataset. Ultimately, this dataset is based on Common Crawl and the Internet Archive.

We have filtered for websites with what we believe are government domain names, international-organization domain names such as the UN and europa.eu, and Creative Commons licensed data. While we strongly believe that fair use protects machine learning on web-crawled data in the United States, this may not be the case in other countries. Thus we have created this filtered version.

Please note, though, that because laws vary, some countries may not permit free use of government materials, or a government website might have a specific prohibition against certain uses, such as republication. We are exploring ways to detect this and flag it in the dataset. For example, the website https://presidency.gov.sd/eng/page/Yawor, while a governmental website, has the following footer information:

```
The Republican Palace All rights reserved. | Privacy Policy
It's not allowed to re-publish any of the content of this website or transmitted in any way, whether electronic or otherwise without the prior permission from the press office of the Presidency of the Republic - The Republican Palace
```

### Notes

We used `.gov/`, `.gov.*`, `.gouv.*`, and `.int/`, among other patterns, in the URL filtering. This means that a URL like 'http://transcoder.usablenet.com/tt/www.ers.usda.gov/amber-waves/2006-november/new-phytosanitary-regulations-allow-higher-imports-of-avocados.aspx' will be a match. We believe that a reference to a government URL embedded in a company's URL likely means the source is also governmental data. However, you may wish to check the `url` field and restrict to exactly the kinds of URLs you wish to use (e.g., filter all .com data from the URLs).
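To make the filtering rule above concrete, here is a minimal, illustrative sketch of this kind of URL matching in Python. The regular expression and helper function below are simplified assumptions for illustration only; they are not the exact patterns or code used to build the dataset.

```python
import re

# Illustrative only: a simplified approximation of the URL patterns described
# above (.gov/, .gov.*, .gouv.*, .int/, plus europa.eu). Not the exact filter
# used to build CulturaP.
GOV_PATTERNS = re.compile(
    r"\.gov(/|$|\.[a-z]{2,3}(/|$))"   # .gov/ and country variants such as .gov.uk
    r"|\.gouv\.[a-z]{2,3}(/|$)"       # e.g. .gouv.fr
    r"|\.int(/|$)"                    # international organizations, e.g. un.int
    r"|europa\.eu(/|$)"               # EU institutions
)

def looks_governmental(url: str) -> bool:
    """Return True if the URL matches one of the simplified government/IO patterns."""
    return GOV_PATTERNS.search(url.lower()) is not None

# The pattern matches anywhere in the URL, so proxy-style URLs that embed a
# government address (like the usablenet.com example above) also match.
print(looks_governmental(
    "http://transcoder.usablenet.com/tt/www.ers.usda.gov/amber-waves/"
    "2006-november/new-phytosanitary-regulations-allow-higher-imports-of-avocados.aspx"
))  # True
```

Because the match is a substring search over the whole URL, you may want to restrict the check to the host part if you only want pages actually served from a government domain.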
### Disclaimer

This dataset is provided for research purposes only. We make no claims to ownership of the data, and we release any of our annotations under ODC-By because that was the original RefinedWeb license. We believe usage of web-crawled data is permitted under fair use, but you use this dataset at your own risk, and we disclaim all warranties and liabilities, including warranties of non-infringement. This dataset and data card are not legal advice.

NO PERSONALLY IDENTIFIABLE INFORMATION REDACTION WAS PERFORMED. ALL SUCH INFORMATION IS FROM THE ORIGINAL DATASET AND THE CORRESPONDING WEB PAGES.

### Contact Us

Please reach out to us in the discussion link above if you have questions.

### Licensing

Our annotations and arrangements are licensed under CC-BY-4.0, and we make the data available for fair-use machine learning research. However, we make no claims as to the underlying copyrights of the work. This data was copied from the HPLT project, which in turn used data from Common Crawl and the Internet Archive.

### Acknowledgement

We thank our collaborators at [UONLP - The Natural Language Processing Group at the University of Oregon](http://nlp.uoregon.edu/), and the managers of the Karolina supercomputer for the computing resources. We also thank our friends at [TurkuNLP](https://turkunlp.org) for their support.

### Data Breakdown

There are 75 languages, with the following breakdown:

TODO

### Dataset structure

The dataset has a total of 8 columns.

The column `cc` indicates whether we detected keywords associated with a Creative Commons license in the header or footer of the page. We have not confirmed by hand that the presence of these keywords means the page is in fact CC licensed; please confirm for yourself the type of license you wish to use (CC-BY, CC-BY-SA, CC-BY-NC, etc.). There is also a column `en_text`, which contains the first 200 characters of the text translated to English using Google Translate.

- The columns `document_lang, text, url, cc, en_text` are the main columns in this dataset.
- The remaining columns `id, document_lang, scores, langs` belong to the original document in the HPLT v1.1 dataset; they are retained for debugging purposes and will be removed in the future.

Therefore, when using the dataset, please only use the `text` and `url` columns (a minimal usage sketch is given at the end of this card).

### Process for Creating CulturaP

TODO: We will update this section to describe our curation method.

### Citation

To cite CulturaP, please use:

```
@misc{nguyen2024culturap,
      title={CulturaP: A Permissive Multilingual Dataset of 75 Languages},
      author={Huu Nguyen and Thuat Nguyen and Ken Tsui and Thien Nguyen},
      year={2024},
}
```
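For convenience, here is a minimal sketch of how the columns described in the Dataset structure section could be used with the Hugging Face `datasets` library. The repository id, the per-language configuration name, and the assumption that `cc` is a simple truthy flag are guesses for illustration; please check the actual file layout and configuration names on this page before relying on them.

```python
from datasets import load_dataset

# A minimal sketch, assuming the data is published per language on the Hub.
# The repository id ("ontocord/CulturaP") and configuration name ("en") are
# assumptions; adjust them to the actual names shown on this page.
ds = load_dataset("ontocord/CulturaP", "en", split="train", streaming=True)

def keep(example):
    # Keep rows flagged as containing Creative Commons keywords, and drop URLs
    # whose host is a plain .com site, as suggested in the Notes section above.
    url = example["url"]
    host = url.split("/")[2] if "//" in url else url
    return bool(example["cc"]) and not host.endswith(".com")

filtered = ds.filter(keep)

# As recommended above, rely on the `text` and `url` columns.
for row in filtered.take(3):
    print(row["url"], row["text"][:100])
```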