---
license:
- cc-by-sa-4.0
- gfdl
size_categories:
- 1M<n<10M
task_ids:
- language-modeling
- masked-language-modeling
language:
- en
- fr
tags:
- wikimedia
- wikipedia
---

# Dataset Card for Wikimedia Structured Wikipedia

## Dataset Description

* Homepage: [https://enterprise.wikimedia.com/](https://enterprise.wikimedia.com/)

### Dataset Summary

Early beta release of pre-parsed English and French Wikipedia articles, including infoboxes. We invite your feedback.

This dataset contains all articles of the [English](https://en.wikipedia.org/) and [French](https://fr.wikipedia.org/) language editions of Wikipedia, pre-parsed and output as structured JSON files with a consistent schema (JSONL compressed as ZIP). Each JSON line holds the content of one full Wikipedia article, stripped of extra markup and non-prose sections (references, etc.).

#### Invitation for Feedback

The dataset is built as part of the Structured Contents initiative and based on the Wikimedia Enterprise HTML [snapshots](https://enterprise.wikimedia.com/docs/snapshot/). It is an early beta release intended to improve transparency in the development process and to request feedback. This first version includes pre-parsed Wikipedia abstracts, short descriptions, main image links, infoboxes, and article sections, excluding non-prose sections (e.g. references). More elements (such as lists and tables) may be added over time. For updates, follow the project’s [blog](https://enterprise.wikimedia.com/blog/) and our [MediaWiki Quarterly software updates](https://www.mediawiki.org/wiki/Wikimedia_Enterprise#Updates) on MediaWiki.

As this is an early beta release, we highly value your feedback to help us refine and improve this dataset. Please share your thoughts, suggestions, and any issues you encounter either [on the discussion page](https://meta.wikimedia.org/wiki/Talk:Wikimedia_Enterprise) of Wikimedia Enterprise’s homepage on Meta wiki, or on the discussion page for this dataset here on Hugging Face.

The contents of this dataset of Wikipedia articles are collectively written and curated by a global volunteer community. All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL) and the [Creative Commons Attribution-Share-Alike 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/). Some text may be available only under the Creative Commons license; see the Wikimedia [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details. Text written by some authors may be released under additional licenses or into the public domain.

### Supported Tasks and Leaderboards

The dataset in its structured form is generally helpful for a wide variety of tasks, including all phases of model development, from pre-training to alignment, fine-tuning, updating/RAG, as well as testing/benchmarking.

We would love to hear more about your use cases.

### Languages

English
BCP 47 Language Code: en

French/Français
BCP 47 Language Code: fr

There is only one language edition for English, and one for French. Each encompasses national and cultural variations of spelling, vocabulary, grammar, etc. Within a Wikipedia language edition, no national variety is officially preferred over others. The rule of thumb is that the conventions of a particular variety of language should be followed consistently within a given article.

The relevant Manual of Style policy documents are available [for English Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style#National_varieties_of_English) and [for French Wikipedia](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Conventions_de_style#Variantes_g%C3%A9ographiques).

As part of this beta Wikimedia Enterprise Structured Contents project, the team is working to include more language editions: [https://enterprise.wikimedia.com/docs/snapshot/#available-structured-contents-snapshots-beta](https://enterprise.wikimedia.com/docs/snapshot/#available-structured-contents-snapshots-beta)

## Dataset Structure

### Data Instances

An example of a JSON line looks as follows (abbreviated data):

```
{
  "name": "Josephine Baker",
  "identifier": 255083,
  "url": "https://en.wikipedia.org/wiki/Josephine_Baker",
  "date_created": "...",
  "date_modified": "...",
  "is_part_of": {"..."},
  "in_language": {"..."},
  "main_entity": {"identifier": "Q151972", ...},
  "additional_entities": [...],
  "version": {...},
  "description": "American-born French dancer...",
  "abstract": "Freda Josephine Baker, naturalized as...",
  "image": {"content_url": "https://upload.wikimedia.org/wikipedia/...", ...},
  "infobox": [
    {"name": "Infobox person",
     "type": "infobox",
     "has_parts": [
       {"name": "Josephine Baker",
        "type": "section",
        "has_parts": [
          {"name": "Born",
           "type": "field",
           "value": "Freda Josephine McDonald June 3, 1906 St. Louis, Missouri, US",
           "links": [{"url": "https://en.wikipedia.org/wiki/St._Louis",
                      "text": "St. Louis"}, ...]}, ...]}, ...]}],
  "sections": [
    {"name": "Abstract",
     "type": "section",
     "has_parts": [
       {"type": "paragraph",
        "value": "Freda Josephine Baker (née McDonald; June 3, 1906 - April 12, 1975), naturalized as Joséphine Baker...",
        "links": [{"url": "https://en.wikipedia.org/wiki/Siren_...",
                   "text": "Siren of the Tropics"}, ...]}, ...]}, ...],
  "license": [...]
}
```

#### Timestamp

Dataset extracted 16 September 2024.

#### Size

English: enwiki_namespace_0.zip

Size of compressed dataset files: 17.91 GB
Size of uncompressed dataset: 79.57 GB

French: frwiki_namespace_0.zip

Size of compressed dataset files: 6.95 GB
Size of uncompressed dataset: 34.01 GB

The JSONL files are compressed in ZIP; once uncompressed, they are chunked into files of at most 2.15 GB each.

For more guidelines on how to process the snapshots, see [SDKs](https://enterprise.wikimedia.com/docs/#sdk).
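
Because the uncompressed chunks are large, it may be convenient to stream records straight out of the archive rather than extracting it first. Below is a minimal Python sketch, assuming a locally downloaded `enwiki_namespace_0.zip`; the helper name is ours, and the member layout inside the zip is an assumption to adapt to what you actually find.

```python
import json
import zipfile

# Hypothetical local path; adjust to wherever you downloaded the archive.
ARCHIVE = "enwiki_namespace_0.zip"

def iter_articles(archive_path):
    """Yield one article dict per JSON line, streaming each chunked JSONL member."""
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.namelist():
            with zf.open(member) as fh:
                for raw_line in fh:
                    line = raw_line.strip()
                    if line:
                        yield json.loads(line)

# Inspect the first record without loading the whole snapshot into memory.
first = next(iter_articles(ARCHIVE))
print(first["name"], first["url"])
```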

### Data Fields

The data fields are the same across all articles. Noteworthy fields include:

* [name](https://enterprise.wikimedia.com/docs/data-dictionary/#name) - title of the article.
* [identifier](https://enterprise.wikimedia.com/docs/data-dictionary/#identifier) - ID of the article.
* [url](https://enterprise.wikimedia.com/docs/data-dictionary/#url) - URL of the article.
* [version](https://enterprise.wikimedia.com/docs/data-dictionary/#version) - metadata related to the latest specific revision of the article.
  * [version.editor](https://enterprise.wikimedia.com/docs/data-dictionary/#version_editor) - editor-specific signals that can help contextualize the revision.
  * [version.scores](https://enterprise.wikimedia.com/docs/data-dictionary/#version_scores) - assessments by ML models on the likelihood of a revision being reverted.
* [main entity](https://enterprise.wikimedia.com/docs/data-dictionary/#main_entity) - Wikidata QID the article is related to.
* [abstract](https://enterprise.wikimedia.com/docs/data-dictionary/#abstract) - lead section, summarizing what the article is about.
* [description](https://enterprise.wikimedia.com/docs/data-dictionary/#description) - one-sentence description of the article for quick reference.
* [image](https://enterprise.wikimedia.com/docs/data-dictionary/#image) - main image representing the article's subject.
* [infobox](https://enterprise.wikimedia.com/docs/data-dictionary/#infobox) - parsed information from the side panel (infobox) on the Wikipedia article.
* [sections](https://enterprise.wikimedia.com/docs/data-dictionary/#article_sections) - parsed sections of the article, including links.

Note: the dataset excludes other media/images, lists, tables, and references or similar non-prose sections.

The full data dictionary is available here: [https://enterprise.wikimedia.com/docs/data-dictionary/](https://enterprise.wikimedia.com/docs/data-dictionary/)
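
To illustrate how these fields fit together, here is a small Python sketch that flattens one record into plain text for language-modeling use. It assumes the recursive `sections`/`has_parts` nesting shown in the abbreviated example above; the helper names are our own, not part of the schema.

```python
def extract_paragraphs(parts):
    """Recursively collect paragraph values from a list of section parts."""
    texts = []
    for part in parts or []:
        if part.get("type") == "paragraph" and part.get("value"):
            texts.append(part["value"])
        texts.extend(extract_paragraphs(part.get("has_parts")))
    return texts

def article_to_text(article):
    """Join title, abstract, and section paragraphs into one plain-text string."""
    pieces = [article.get("name", ""), article.get("abstract", "")]
    pieces.extend(extract_paragraphs(article.get("sections")))
    return "\n\n".join(p for p in pieces if p)
```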

### Data Splits

[N/A]

## Dataset Creation

### Curation Rationale

This dataset has been created as part of the larger [Structured Contents initiative at Wikimedia Enterprise](https://meta.wikimedia.org/wiki/Wikimedia_Enterprise/FAQ#What_is_Structured_Contents) with the aim of making Wikimedia data more machine-readable. These efforts focus both on pre-parsing Wikipedia snippets and on connecting the different projects more closely together.

Even though Wikipedia is very structured to the human eye, it is a non-trivial task to extract the knowledge lying within it in a machine-readable manner. Projects, languages, and domains all have their own specific community experts and ways of structuring data, bolstered by various templates and best practices. A specific example we’ve addressed in this release is article infoboxes. Infoboxes are panels that commonly appear in the top right corner of a Wikipedia article and summarize key facts and statistics of the article’s subject. The editorial community works hard to keep infoboxes populated with the article’s most pertinent and current metadata, and we’d like to lower the barrier of entry significantly so that this data is also accessible at scale without the need for bespoke parsing systems.

We also include the link to the [Wikidata Q](https://www.wikidata.org/wiki/Wikidata:Identifiers) identifier (the corresponding Wikidata entity), and the links to the main and infobox images, to facilitate easier access to additional information on the specific topics.

You will also find [Credibility Signals fields](https://enterprise.wikimedia.com/blog/understanding-credibility-signals-in-wikimedia-enterprise-api/) included. These can help you decide when, how, and why to use what is in the dataset. These fields mirror the over 20 years of editorial policies created and kept by the Wikipedia editing communities, taking publicly available information and structuring it. As with article structures, because this information is not centralized (neither on a single project nor across them), it is hard to access. Credibility signals shine a light on that blind spot. You will find most of these signals under the ‘version’ object, but other objects like ‘protection’ and ‘watchers_count’ offer similar insight.
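
As an example of putting these signals to work, the sketch below filters records by the revert-risk score mentioned above. The exact shape of `version.scores` is defined in the data dictionary; the `revertrisk`/`probability` layout used here is an assumption for illustration, so verify it against the dictionary before relying on it.

```python
def low_revert_risk(article, threshold=0.1):
    """Keep articles whose latest revision looks unlikely to be reverted.

    The key names below are assumptions; check the Wikimedia Enterprise
    data dictionary for the authoritative structure of version.scores.
    """
    scores = article.get("version", {}).get("scores") or {}
    prob = scores.get("revertrisk", {}).get("probability", {}).get("true")
    # Records without a score are kept rather than silently dropped.
    return prob is None or prob < threshold
```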

This is an early beta release of pre-parsed Wikipedia articles in bulk, as a means to improve transparency in the development process and gather insights into current use cases, so we can follow where the AI community needs us most, as well as collect feedback to develop this further through collaboration. There will be limitations (see the ‘Other Known Limitations’ section below), but in line with [our values](https://wikimediafoundation.org/about/values/#a3-we-welcome-everyone-who-shares-our-vision-and-values), we believe it is better to share early and often, and to respond to feedback.

You can also test out more languages on an article-by-article basis through our [beta Structured Contents On-demand endpoint](https://enterprise.wikimedia.com/docs/on-demand/#article-structured-contents-beta) with a [free account](https://enterprise.wikimedia.com/blog/enhanced-free-api/).

Attribution is core to the sustainability of the Wikimedia projects. It is what drives new editors and donors to Wikipedia. With consistent attribution, this cycle of content creation and reuse ensures that encyclopedic content of high quality, reliability, and verifiability will continue being written on Wikipedia and will ultimately remain available for reuse via datasets such as this one.

As such, we require all users of this dataset to conform to our expectations for proper attribution. Detailed attribution requirements for use of this dataset are outlined below.

Beyond attribution, there are many other ways of contributing to and supporting the Wikimedia movement. To discuss your specific circumstances, please contact Nicholas Perry from the Wikimedia Foundation technical partnerships team at [[email protected]](mailto:[email protected]). You can also contact us on either the [discussion page](https://meta.wikimedia.org/wiki/Talk:Wikimedia_Enterprise) of Wikimedia Enterprise’s homepage on Meta wiki, or on the discussion page for this dataset here on Hugging Face.

### Source Data

#### Initial Data Collection and Normalization

The dataset is built from the Wikimedia Enterprise HTML “snapshots” ([https://enterprise.wikimedia.com/docs/snapshot/](https://enterprise.wikimedia.com/docs/snapshot/)) and focuses on the Wikipedia article namespace ([namespace 0 (main)](https://en.wikipedia.org/wiki/Wikipedia:What_is_an_article%3F#Namespace)).

#### Who are the source language producers?

Wikipedia is a human-generated corpus of free knowledge, written, edited, and curated by a [global community of editors](https://meta.wikimedia.org/wiki/Community_Insights/Community_Insights_2023_Report) since 2001.

It is the largest and most accessed educational resource in history, accessed over 20 billion times by half a billion people each month. Wikipedia represents almost 25 years of work by its community: the creation, curation, and maintenance of millions of articles on distinct topics.

This dataset includes the complete article contents of two Wikipedia language editions, English ([https://en.wikipedia.org/](https://en.wikipedia.org/)) and French ([https://fr.wikipedia.org/](https://fr.wikipedia.org/)), written by the respective communities.

### Annotations

#### Annotation process

[N/A]

The dataset doesn't contain any additional annotations.

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

The Wikipedia community and the Wikimedia Foundation, which operates Wikipedia, establish robust policies and guidelines around personal and sensitive information, both to keep personal/sensitive information out of articles and to strictly protect the privacy of contributors.

The Wikimedia Foundation’s privacy policy is available at: [https://foundation.wikimedia.org/wiki/Policy:Privacy_policy](https://foundation.wikimedia.org/wiki/Policy:Privacy_policy)

Transparency reports covering the Wikimedia Foundation’s responses to requests received to alter or remove content from the projects, and to provide nonpublic information about users, are found here: [https://wikimediafoundation.org/about/transparency/](https://wikimediafoundation.org/about/transparency/)

Among the many editorial policies regarding personal and sensitive information, the community pays particular care to biographies of living people. Details of each language community’s responses and norms can be found here: [https://www.wikidata.org/wiki/Q4663389#sitelinks-wikipedia](https://www.wikidata.org/wiki/Q4663389#sitelinks-wikipedia)

## Considerations for Using the Data

### Social Impact of Dataset

Wikipedia’s articles are read over 20 billion times by half a billion people each month. It does not belong to, or come from, a single culture or language. It is an example of mass international cooperation, across languages and continents. Wikipedia is the only website among the world’s most visited that is not operated by a commercial organization.

Wikimedia projects have been used (and upsampled) as a core source of high-quality data in AI/ML/LLM development. The Wikimedia Foundation has published an article on [the value of Wikipedia in the age of generative AI](https://wikimediafoundation.org/news/2023/07/12/wikipedias-value-in-the-age-of-generative-ai).

There is also a Community Article about [why Wikimedia data matters for ML](https://huggingface.co/blog/frimelle/wikipedias-treasure-trove-ml-data#why-wikimedia-data-for-ml) on the Hugging Face blog. It highlights that Wikimedia data is rich and diverse in content, multimodal, and both community-curated and openly licensed.

### Discussion of Biases

While consciously trying to present an editorially neutral point of view, Wikipedia’s content reflects [the biases of the society it comes from](https://wikimediafoundation.org/our-work/open-the-knowledge/otk-change-the-stats/). This includes various “gaps”, notably in the [proportion of biographies of, and editors who identify as, women](https://wikimediafoundation.org/our-work/open-the-knowledge/wikipedia-needs-more-women/). Other significant gaps include the linguistic and technical accessibility of the websites, and censorship. Because the content is written by its readers, ensuring the widest possible access to the content is crucial to reducing the biases of the content itself. There is continuous work to redress these biases through various social and technical efforts, both centrally and at the grassroots around the world.

### Other Known Limitations

This is an early beta version; the following limitations may apply:

* A small percentage of duplicated, deleted, or missed articles may be part of the snapshot. Duplicates can be filtered out by keeping only the highest "version.identifier", which is the most up-to-date revision of the article (see the sketch after this list).
* Revision discrepancies may happen due to limitations with long articles.
* On occasion, empty sections or values may be returned. This is either because the section contains references or similar; or is made up of structured elements like lists and tables; or the section was left empty by editors.
* Images: only main and infobox image links are supported at the moment. We encourage you to obtain additional information and licensing by following the image link, while we evaluate adding this data directly.
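
For the duplicate case in the first item, here is a minimal Python sketch of the suggested filter, keeping only the record with the highest `version.identifier` per article; the function name is ours:

```python
def deduplicate(articles):
    """Keep, per article identifier, the record with the highest revision ID."""
    latest = {}
    for article in articles:
        key = article["identifier"]
        revision = article.get("version", {}).get("identifier", 0)
        if key not in latest or revision > latest[key][0]:
            latest[key] = (revision, article)
    return [article for _, article in latest.values()]
```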

Please let us know if there are any other limitations that aren't covered above.

## Additional Information

### Dataset Curators

This dataset was created by the [Wikimedia Enterprise](https://enterprise.wikimedia.com/about/) team of the [Wikimedia Foundation](https://wikimediafoundation.org/) as part of the Structured Contents initiative.

### Attribution Information

Wikimedia Enterprise provides this dataset under the assumption that downstream users will adhere to the relevant free culture licenses when the data is reused. In situations where attribution is required, reusers should identify the Wikimedia project from which the content was retrieved as the source of the content. Any attribution should adhere to Wikimedia’s trademark policy (available at [https://foundation.wikimedia.org/wiki/Trademark_policy](https://foundation.wikimedia.org/wiki/Trademark_policy)) and visual identity guidelines (available at [https://foundation.wikimedia.org/wiki/Visual_identity_guidelines](https://foundation.wikimedia.org/wiki/Visual_identity_guidelines)) when identifying Wikimedia as the source of content.

#### How To Attribute Wikipedia

We ask that all content re-users attribute Wikipedia in a way that supports our model. In the spirit of reciprocity, the framework allows you to leverage our brand to signal trust, reliability, and recency, whilst also communicating that our dataset is written entirely by human contributors who have volunteered their time in the spirit of knowledge for all.

For example, for Generative AI, what to include in all attribution cases:

| Wikipedia Attribution                    |
| ---------------------------------------- |
| Application                              |
| Outputs using Wikipedia in-line          |
| Outputs using Wikipedia non-specifically |

Attribution UI Example: Outputs using Wikipedia non-specifically

![](https://huggingface.co/datasets/wikimedia/structured-wikipedia/resolve/main/images/attr1.png)

Attribution UI Example: Outputs using Wikipedia in-line

![](https://huggingface.co/datasets/wikimedia/structured-wikipedia/resolve/main/images/attr2.png)

#### Tools & Resources for Attribution

[W Favicons](https://commons.wikimedia.org/wiki/Category:Wikimedia_Attribution_Guide_Favicons): The Wikipedia ‘W’ icon is a distinctive graphic element derived from the W of Wikipedia, set in the font Hoefler combined with various framing devices.

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/Wikipedia_%22W%22_Square_Black.svg/240px-Wikipedia_%22W%22_Square_Black.svg.png" style="display: inline-block; width: 50px; height:50px;" /><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/cf/Wikipedia_%22W%22_Rounded_Black.svg/240px-Wikipedia_%22W%22_Rounded_Black.svg.png" style="width: 50px; height:50px;display: inline-block;" /><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Wikipedia_%22W%22_Square_Tech.svg/240px-Wikipedia_%22W%22_Square_Tech.svg.png" style="width: 50px; height:50px;display: inline-block;" /><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Wikipedia_%22W%22_Rounded_Tech.svg/240px-Wikipedia_%22W%22_Rounded_Tech.svg.png" style="width: 50px; height:50px;display: inline-block;" /><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/23/Wikipedia_%22W%22_Square_White.svg/240px-Wikipedia_%22W%22_Square_White.svg.png" style="width: 50px; height:50px;display: inline-block;" /><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/23/Wikipedia_%22W%22_Rounded_White.svg/240px-Wikipedia_%22W%22_Rounded_White.svg.png" style="width: 50px; height:50px;display: inline-block;" />

[W Wordmarks](https://foundation.wikimedia.org/wiki/Legal:Visual_identity_guidelines): The distinctive wordmark, with larger case W and A characters. The wordmark alone is the most identifiable and clearly understood version of all of the official marks representing Wikipedia.

![](https://huggingface.co/datasets/wikimedia/structured-wikipedia/resolve/main/images/wikipedia1.png)![](https://huggingface.co/datasets/wikimedia/structured-wikipedia/resolve/main/images/wikipedia2.png)

Wikimedia Product Icons: You can find all currently available icons in the [Assets library](https://www.figma.com/file/1lT9LKOK6wiHLnpraMjP3E/%E2%9D%96-Assets-(Icons%2C-Logos%2C-Illustrations)?node-id=3295-13631&t=XsJ03mZaUOTNMw9j-0) in Figma. We provide a listing of all icons with their IDs for implementation in the [Codex demo](https://doc.wikimedia.org/codex/latest/icons/all-icons.html). Additionally, you can find all icons as single [SVGO production-optimized](https://www.mediawiki.org/wiki/Manual:Coding_conventions/SVG) SVGs for usage outside of MediaWiki.

<img src="https://huggingface.co/datasets/wikimedia/structured-wikipedia/resolve/main/images/svgs.png" style="width: 400px; height:79px"/>

More logo assets and guidance can be found on the [Wikipedia](https://foundation.wikimedia.org/wiki/Legal:Visual_identity_guidelines) and [Wikimedia](https://meta.wikimedia.org/wiki/Brand) brand portals.
Wikipedia favicons can be found on [Wikimedia Commons](https://commons.wikimedia.org/wiki/Category:Wikimedia_Attribution_Guide_Favicons).

To schedule a 30-minute brand attribution walkthrough or to request a customized solution, please email us at [[email protected]](mailto:[email protected]).

### Licensing Information

All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL) and the [Creative Commons Attribution-Share-Alike 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/). Some text may be available only under the Creative Commons license; see the Wikimedia [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details. Text written by some authors may be released under additional licenses or into the public domain.

Derived dataset prepared by [Wikimedia Enterprise](https://enterprise.wikimedia.com/). Wikipedia and Wikimedia Enterprise are registered trademarks of the [Wikimedia Foundation, Inc.](https://www.wikimediafoundation.org/), a non-profit organization.

For more information, see [dumps licensing information](https://dumps.wikimedia.org/legal.html) and [Wikimedia Enterprise Terms](https://enterprise.wikimedia.com/terms/).

### Citation Information

```
@ONLINE{structured-wikipedia,
  author = {Wikimedia Enterprise, Wikimedia Foundation},
  title  = {Structured Wikipedia},
  month  = {sep},
  year   = {2024}
}
```