---
annotations_creators:
- expert-generated
language_creators:
- other
languages:
- sv
licenses:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: suc3_1
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
- part-of-speech-tagging
---
# Dataset Card for SUC 3.1
## Dataset Description
- **Homepage:** [https://spraakbanken.gu.se/en/resources/suc3](https://spraakbanken.gu.se/en/resources/suc3)
- **Repository:** [https://github.com/kb-labb/suc3_1](https://github.com/kb-labb/suc3_1)
- **Paper:** [SUC 2.0 manual](http://spraakbanken.gu.se/parole/Docs/SUC2.0-manual.pdf)
- **Point of Contact:**
### Dataset Summary
The dataset is a conversion of the venerable SUC 3.0 corpus into the
Hugging Face ecosystem.
The original dataset does not contain an official train-dev-test split, so one
is introduced here; the NER tag distribution is largely the same across the
three splits.
The dataset provides three different tagsets: manually annotated POS tags,
manually annotated NER tags, and automatically annotated NER tags.
For the automatically annotated NER tags, only sentences where the automatic
and manual annotations agree (in their respective categories) were kept.
Additionally, we provide remixes of the same data with some or all sentences
lowercased.
### Supported Tasks and Leaderboards
- Part-of-Speech tagging
- Named-Entity-Recognition
### Languages
Swedish
## Dataset Structure
### Data Remixes
Each remix comes with either the manual or the automatic NER tags, plus three lowercased variants; a loading sketch follows the list.
- `original_tags` contains the manual NER annotations
  - `lower`: the whole dataset lowercased
  - `lower_mix`: part of the dataset lowercased
  - `lower_both`: every instance included both cased and lowercased
- `simple_tags` contains the automatic NER annotations
  - `lower`: the whole dataset lowercased
  - `lower_mix`: part of the dataset lowercased
  - `lower_both`: every instance included both cased and lowercased
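A minimal loading sketch: the Hub dataset ID and the configuration names below are assumptions based on the repository and remix names above, so adjust them to the actual repository.

```python
from datasets import load_dataset

# Hypothetical Hub ID and configuration name; the configurations are assumed
# to follow the remix names above (e.g. "original_tags", "lower_mix").
dataset = load_dataset("kb-labb/suc3_1", name="original_tags")

print(dataset)              # DatasetDict with the train/dev/test splits
print(dataset["train"][0])  # one instance: id, tokens, pos_tags, ner_tags
```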
### Data Instances
Each instance contains an `id` (with an optional `_lower` suffix marking that
the sentence has been lowercased), a `tokens` list of strings, a `pos_tags`
list of strings with the POS tags, and a `ner_tags` list of strings with the
NER tags. A sketch after the field list below shows how to map the string tags
to integer IDs.
```json
{"id": "e24d782c-e2475603_lower",
"tokens": ["-", "dels", "har", "vi", "inget", "index", "att", "g\u00e5", "efter", ",", "vi", "kr\u00e4ver", "allts\u00e5", "ers\u00e4ttning", "i", "40-talets", "penningv\u00e4rde", "."],
"pos_tags": ["MID", "KN", "VB", "PN", "DT", "NN", "IE", "VB", "PP", "MID", "PN", "VB", "AB", "NN", "PP", "NN", "NN", "MAD"],
"ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]}
```
### Data Fields
- `id`: a string containing the sentence-id
- `tokens`: a list of strings containing the sentence's tokens
- `pos_tags`: a list of strings containing the tokens' POS annotations
- `ner_tags`: a list of strings containing the tokens' NER annotations
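Since the tag columns are stored as plain strings, a token-classification setup typically needs tag-to-ID mappings. A minimal sketch, assuming the dataset has already been loaded into `dataset` as above; the helper name `build_tag_map` is illustrative, not part of the dataset.

```python
def build_tag_map(dataset_dict, column):
    # Collect the distinct string tags across all splits and assign integer IDs.
    tags = sorted(
        {tag for split in dataset_dict.values() for example in split for tag in example[column]}
    )
    return {tag: idx for idx, tag in enumerate(tags)}

ner_tag2id = build_tag_map(dataset, "ner_tags")
pos_tag2id = build_tag_map(dataset, "pos_tags")

# Add integer NER labels next to the original string tags.
dataset = dataset.map(
    lambda example: {"ner_labels": [ner_tag2id[tag] for tag in example["ner_tags"]]}
)
```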
### Data Splits
| Dataset Split | Share of Total Dataset | Number of Instances (`original_tags`) |
| ------------- | ---------------------- | ------------------------------------- |
| train         | 64%                    | 46,026                                |
| dev           | 16%                    | 11,506                                |
| test          | 20%                    | 14,383                                |
The `simple_tags` remix has fewer instances because only sentences whose
automatic and manual annotations agree are included.
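The proportions above can be checked directly from a loaded `DatasetDict` (a quick sketch, reusing the `dataset` variable from the earlier examples):

```python
# Print the size and share of each split.
total = sum(split.num_rows for split in dataset.values())
for name, split in dataset.items():
    print(f"{name}: {split.num_rows} instances ({split.num_rows / total:.0%})")
```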
## Dataset Creation
See the [original webpage](https://spraakbanken.gu.se/en/resources/suc3).
## Additional Information
### Dataset Curators
[Språkbanken](https://spraakbanken.gu.se)
### Licensing Information
CC BY 4.0 (attribution)
### Citation Information
[SUC 2.0 manual](http://spraakbanken.gu.se/parole/Docs/SUC2.0-manual.pdf)
### Contributions
Thanks to [@robinqrtz](https://github.com/robinqrtz) for adding this dataset.