---
pretty_name: ScandiWiki
language:
- da
- sv
- no
- nb
- nn
- is
- fo
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- wikipedia
task_categories:
- fill-mask
- text-generation
- feature-extraction
task_ids:
- language-modeling
---
# Dataset Card for ScandiWiki
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Total amount of disk used:** 4485.90 MB
### Dataset Summary
ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål,
Norwegian Nynorsk, Swedish, Icelandic and Faroese.
### Supported Tasks and Leaderboards
This dataset is intended for general language modelling.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`),
Norwegian Nynorsk (`nn`), Icelandic (`is`) and Faroese (`fo`).
## Dataset Structure
### Data Instances
An example from the `train` split of the `fo` subset looks as follows.
```
{
'id': '3380',
'url': 'https://fo.wikipedia.org/wiki/Enk%C3%B6pings%20kommuna',
'title': 'Enköpings kommuna',
'text': 'Enköpings kommuna (svenskt: Enköpings kommun), er ein kommuna í Uppsala län í Svøríki. Enköpings kommuna hevur umleið 40.656 íbúgvar (2013).\n\nKeldur \n\nKommunur í Svøríki'
}
```
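For convenience, here is a minimal sketch of loading the `fo` subset with the 🤗 `datasets` library; the `alexandrainst/scandi-wiki` identifier is an assumption and may differ from where this card is hosted.
```python
from datasets import load_dataset

# Load the Faroese subset (the dataset identifier is an assumption).
fo_wiki = load_dataset("alexandrainst/scandi-wiki", "fo", split="train")

# Inspect one record; it has the structure shown above.
example = fo_wiki[0]
print(example["title"])
print(example["text"][:200])
```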
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Subsets
| name | samples |
|----------|----------:|
| sv | 2,469,978 |
| nb | 596,593 |
| da | 287,216 |
| nn | 162,776 |
| is | 55,418 |
| fo | 12,582 |
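The per-subset counts above can be checked locally by loading each subset by name (again assuming the `alexandrainst/scandi-wiki` identifier):
```python
from datasets import load_dataset

# Load each language subset and report its size.
# The dataset identifier is an assumption.
for name in ["sv", "nb", "da", "nn", "is", "fo"]:
    subset = load_dataset("alexandrainst/scandi-wiki", name, split="train")
    print(f"{name}: {len(subset):,} samples")
```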
## Dataset Creation
### Curation Rationale
Parsing and deduplicating the Wikipedia dump takes a long time, so this dataset exists
primarily for convenience.
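The parsing and deduplication pipeline itself is not documented in this card; purely as an illustration of the kind of work involved (and not necessarily the method used here), exact-duplicate removal over article texts could look like this:
```python
import hashlib

def deduplicate(articles):
    """Keep the first occurrence of each distinct article text.

    Illustrative sketch only: the actual ScandiWiki pipeline may use a
    different (e.g. near-duplicate) strategy.
    """
    seen = set()
    unique = []
    for article in articles:
        digest = hashlib.sha256(article["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(article)
    return unique
```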
### Source Data
The original data is from the [wikipedia
dataset](https://huggingface.co/datasets/wikipedia).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/), the same license as the
underlying [wikipedia dataset](https://huggingface.co/datasets/wikipedia).