resquito-wmf committed on
Commit b453fca
1 Parent(s): b3fb07a

Update README.md

Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -15,8 +15,10 @@ tags:
 - wikipedia
 ---
 
+
 # Dataset Card for Wikimedia Structured Wikipedia
 
+
 ## Dataset Description
 
 * Homepage: [https://enterprise.wikimedia.com/](https://enterprise.wikimedia.com/)
@@ -27,7 +29,7 @@ Early beta release of pre-parsed English and French Wikipedia articles including
 
 This dataset contains all articles of the [English](https://en.wikipedia.org/) and [French](https://fr.wikipedia.org/) language editions of Wikipedia, pre-parsed and outputted as structured JSON files with a consistent schema (JSONL compressed as zip). Each JSON line holds the content of one full Wikipedia article stripped of extra markdown and non-prose sections (references, etc.).
 
-#### Invitation for Feedback
+### Invitation for Feedback
 
 The dataset is built as part of the Structured Contents initiative and based on the Wikimedia Enterprise html [snapshots](https://enterprise.wikimedia.com/docs/snapshot/). It is an early beta release to improve transparency in the development process and request feedback. This first version includes pre-parsed Wikipedia abstracts, short descriptions, main images links, infoboxes and article sections, excluding non-prose sections (e.g. references). More elements (such as lists and tables) may be added over time. For updates follow the project’s [blog](https://enterprise.wikimedia.com/blog/) and our [Mediawiki Quarterly software updates](https://www.mediawiki.org/wiki/Wikimedia_Enterprise#Updates) on MediaWiki.
 
@@ -102,7 +104,7 @@ An example of each line of JSON looks as follows (abbreviated data):
 Timestamp
 Dataset extracted 16 September 2024
 
-#### Size
+### Size
 
 English: enwiki_namespace_0.zip
 
@@ -167,11 +169,11 @@ Beyond attribution, there are many ways of contributing to and supporting the Wi
 
 ### Source Data
 
-#### Initial Data Collection and Normalization
+### Initial Data Collection and Normalization
 
 The dataset is built from the Wikimedia Enterprise HTML “snapshots”: [https://enterprise.wikimedia.com/docs/snapshot/](https://enterprise.wikimedia.com/docs/snapshot/) and focusses on the Wikipedia article namespace ([namespace 0 (main))](https://en.wikipedia.org/wiki/Wikipedia:What_is_an_article%3F#Namespace).
 
-#### Who are the source language producers?
+### Who are the source language producers?
 
 Wikipedia is a human generated corpus of free knowledge, written, edited, and curated by a [global community of editors](https://meta.wikimedia.org/wiki/Community_Insights/Community_Insights_2023_Report) since 2001.
 
@@ -181,13 +183,13 @@ This dataset includes the complete article contents of two Wikipedia language ed
 
 ### Annotations
 
-#### Annotation process
+### Annotation process
 
 [N/A]
 
 The dataset doesn't contain any additional annotations
 
-#### Who are the annotators?
+### Who are the annotators?
 
 [N/A]
 
@@ -236,7 +238,7 @@ This dataset was created by the [Wikimedia Enterprise](https://enterprise.wikime
 
 Wikimedia Enterprise provides this dataset under the assumption that downstream users will adhere to the relevant free culture licenses when the data is reused. In situations where attribution is required, reusers should identify the Wikimedia project from which the content was retrieved as the source of the content. Any attribution should adhere to Wikimedia’s trademark policy (available at [https://foundation.wikimedia.org/wiki/Trademark_policy](https://foundation.wikimedia.org/wiki/Trademark_policy)) and visual identity guidelines (available at [https://foundation.wikimedia.org/wiki/Visual_identity_guidelines](https://foundation.wikimedia.org/wiki/Visual_identity_guidelines)) when identifying Wikimedia as the source of content.
 
-#### How To Attribute Wikipedia
+### How To Attribute Wikipedia
 
 We ask that all content re-users attribute Wikipedia in a way that supports our model. In the spirit of reciprocity the framework allows you to leverage our brand to signal trust, reliability and recency whilst also communicating that our dataset is written entirely by human contributors who have volunteered their time in the spirit of knowledge for all.
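
The card edited above describes each dataset file as zip-compressed JSONL, one full Wikipedia article per line. A minimal sketch for peeking at the data follows: the archive name `enwiki_namespace_0.zip` is taken from the card's Size section, but the member layout and the field names are assumptions for illustration, not a documented schema.

```python
import json
import zipfile

# Sketch: stream articles out of the zip-compressed JSONL file described
# in the card. Assumes the archive contains one or more JSONL members;
# field names are not confirmed by the card, so we just inspect the keys.
with zipfile.ZipFile("enwiki_namespace_0.zip") as archive:
    for member in archive.namelist():
        with archive.open(member) as lines:
            for line in lines:
                article = json.loads(line)  # one article per JSON line
                print(sorted(article.keys()))  # discover the actual schema
                break  # first article is enough for a peek
        break  # first member is enough for a peek
```

Dropping the two `break` statements turns the same loop into a full pass over every article in the archive.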