Commit 2915636 by danyaljj
1 Parent(s): 608ab54

adding files

Files changed (3):
  1. README.md +165 -0
  2. dataset_infos.json +1 -0
  3. parsinlu_translation_fa_en.py +143 -0
README.md ADDED
@@ -0,0 +1,165 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - fa
+ - en
+ licenses:
+ - cc-by-nc-sa-4.0
+ multilinguality:
+ - translation
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - extended
+ task_categories:
+ - translation
+ task_ids:
+ - translation
+ ---
+
+ # Dataset Card for ParsiNLU (Machine Translation)
+
+ ## Table of Contents
+ - [Dataset Card for ParsiNLU (Machine Translation)](#dataset-card-for-parsinlu-machine-translation)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
+ - **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
+ - **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
+ - **Leaderboard:**
+ - **Point of Contact:** [email protected]
+
+ ### Dataset Summary
+
+ A Persian machine translation dataset (Persian -> English).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The dataset contains text in Persian (`fa`) and English (`en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Here is an example from the dataset:
+ ```json
+ {
+     "source": "چه زحمت‌ها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
+     "targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
+     "category": "mizan_dev_en_fa"
+ }
+ ```
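+
+ A minimal loading sketch with the `datasets` library (illustrative only; it assumes the `parsinlu_translation_fa_en.py` script from this commit is saved locally, uses the `parsinlu-repo` config name that the script defines, and relies on script-based loading as supported by `datasets` versions contemporary with this commit):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all splits through the loader script added in this commit.
+ dataset = load_dataset("parsinlu_translation_fa_en.py", "parsinlu-repo")
+
+ # Each example has a Persian `source`, a list of English `targets`,
+ # and the originating `category`.
+ print(dataset["validation"][0])
+ ```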
+
+ ### Data Fields
+
+ - `source`: the input sentence, in Persian.
+ - `targets`: the list of gold target translations, in English. A single source may have several gold translations, so evaluation should be multi-reference (see the sketch below).
+ - `category`: the source corpus from which the example was mined.
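+
+ As an illustrative sketch of multi-reference scoring (assuming the `sacrebleu` package, which this commit does not itself reference):
+
+ ```python
+ import sacrebleu
+
+ # A hypothetical system output and the gold targets of one example.
+ hypothesis = "how he toiled to raise funds and promote reforms!"
+ references = ["how toil to raise funds, propagate reforms, initiate institutions!"]
+
+ # Sentence-level BLEU computed against every gold reference at once.
+ score = sacrebleu.sentence_bleu(hypothesis, references)
+ print(score.score)
+ ```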
+
+ ### Data Splits
+
+ The train/dev/test splits contain 1,622,280/2,137/47,744 examples, respectively.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ For details, see [the corresponding paper](https://arxiv.org/abs/2012.06154).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ This dataset is released under the CC BY-NC-SA 4.0 license.
+
+ ### Citation Information
+ ```bibtex
+ @article{khashabi2020parsinlu,
+     title   = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
+     author  = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
+     year    = {2020},
+     journal = {arXiv e-prints},
+     eprint  = {2012.06154},
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"parsinlu-repo": {"description": "A Persian translation dataset (Persian -> English). \n", "citation": "@article{huggingface:dataset,\n title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n year={2020}\n journal = {arXiv e-prints},\n eprint = {2012.06154}, \n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"source": {"dtype": "string", "id": null, "_type": "Value"}, "targets": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "category": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_reading_comprehension", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 273889436, "num_examples": 1622280, "dataset_name": "parsinlu_reading_comprehension"}, "test": {"name": "test", "num_bytes": 23003250, "num_examples": 47744, "dataset_name": "parsinlu_reading_comprehension"}, "validation": {"name": "validation", "num_bytes": 462962, "num_examples": 2137, "dataset_name": "parsinlu_reading_comprehension"}}, "download_checksums": {"https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_fa_en/train.tsv": {"num_bytes": 252797553, "checksum": "8d09f7bf9d58e808c8f93a5a9321b9eb5668b19ed955fe0261bde1f77d4ace2d"}, "https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_fa_en/dev.tsv": {"num_bytes": 435450, "checksum": "b808565327f1fc3c4f2c14e354eb654cb4eab19155e8320543ccf136ebd091ab"}, "https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_fa_en/test.tsv": {"num_bytes": 22332746, "checksum": "67d9da6c6ae1579ec298522454af3c78b7d990fbc3a3745bdd65be4057b32903"}}, "download_size": 275565749, "post_processing_size": null, "dataset_size": 297355648, "size_in_bytes": 572921397}}
parsinlu_translation_fa_en.py ADDED
@@ -0,0 +1,143 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ParsiNLU Persian translation task (Persian -> English)."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @article{khashabi2020parsinlu,
+     title   = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
+     author  = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
+     year    = {2020},
+     journal = {arXiv e-prints},
+     eprint  = {2012.06154},
+ }
+ """
+
+ _DESCRIPTION = """\
+ A Persian translation dataset (Persian -> English).
+ """
+
+ _HOMEPAGE = "https://github.com/persiannlp/parsinlu/"
+
+ _LICENSE = "CC BY-NC-SA 4.0"
+
+ _URL = "https://media.githubusercontent.com/media/persiannlp/parsinlu/master/data/translation/translation_combined_fa_en/"
+ _URLs = {
+     "train": _URL + "train.tsv",
+     "dev": _URL + "dev.tsv",
+     "test": _URL + "test.tsv",
+ }
+
+
+ # Note: the class name is kept as-is so that it matches the builder_name
+ # recorded in dataset_infos.json.
+ class ParsinluReadingComprehension(datasets.GeneratorBasedBuilder):
+     """ParsiNLU Persian translation task (Persian -> English)."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: translation"
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "source": datasets.Value("string"),
+                 "targets": datasets.features.Sequence(datasets.Value("string")),
+                 "category": datasets.Value("string"),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # There is no single canonical (input, target) tuple, so no
+             # supervised keys are declared.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation.
+             homepage=_HOMEPAGE,
+             # License for the dataset.
+             license=_LICENSE,
+             # Citation for the dataset.
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={"filepath": data_dir["train"], "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": data_dir["dev"], "split": "dev"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         logger.info("generating examples from = %s", filepath)
+
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 try:
+                     # Skip the TSV header row.
+                     if id_ == 0:
+                         continue
+                     row = row.split("\t")
+
+                     if len(row) < 3:
+                         logger.warning("Ignoring a line that does not have three columns: %s", row)
+                         continue
+                     source = row[0].replace("\t", "").replace("\n", "")
+                     # A single source may carry several gold translations,
+                     # separated by '///'.
+                     targets = row[1].replace("\t", "").replace("\n", "").split("///")
+                     category = row[2].replace("\t", "").replace("\n", "")
+                     yield id_, {
+                         "source": source,
+                         "targets": targets,
+                         "category": category,
+                     }
+                 except Exception:
+                     logger.warning("Skipping a malformed line: %s", row)
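As a final sanity check, the loader above can be exercised end to end (an illustrative sketch, not part of this commit; it assumes the script is saved locally as `parsinlu_translation_fa_en.py` and that the installed `datasets` version still supports script-based loading):

```python
from datasets import load_dataset

# Build every split via the local loader script.
dataset = load_dataset("parsinlu_translation_fa_en.py", "parsinlu-repo")

# Compare example counts against the numbers recorded in dataset_infos.json.
expected = {"train": 1622280, "validation": 2137, "test": 47744}
for split, n in expected.items():
    assert dataset[split].num_rows == n, f"{split}: {dataset[split].num_rows} != {n}"
print("all split sizes match dataset_infos.json")
```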