parquet-converter committed
Commit 42d3cd9
1 parent: 1cc4854

Update parquet files

.gitattributes DELETED
@@ -1,51 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,132 +0,0 @@
- ---
- language:
- - fa
- license: mit
- multilinguality:
- - monolingual
- size_categories:
- - 30k<n<50k
- task_categories:
- - question-answering
- - text2text-generation
- - text-generation
- task_ids: []
- pretty_name: SynTranFa
- tags:
- - conditional-text-generation
- - conversational-question-answering
- ---
-
- # SynTran-fa
- A syntactically transformed version of Farsi QA datasets that turns questions and short answers into fluent responses. You can load this dataset with the code below:
-
- ```python
- import datasets
- data = datasets.load_dataset('SLPL/syntran-fa', split="train")
- ```
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Dataset Creation](#dataset-creation)
-   - [Source Data](#source-data)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL)
- - **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa)
- - **Point of Contact:** [Sadra Sabouri](mailto:[email protected])
-
- ### Dataset Summary
-
- Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been several efforts to grow the size of Farsi datasets. SynTran-fa is a question-answering dataset that collects the short answers of earlier Farsi QA datasets and provides a complete, fluent answer for each (question, short_answer) pair.
-
- This dataset contains nearly 50,000 question-answer entries. The datasets used as our sources are listed in the [Source Data section](#source-data).
-
- The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), which used a "parser + syntactic rules" module to build different fluent answers from a question and a short answer. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser: we parse each question and generate a response from it using the short answer (a verb-less phrase of up to ~4 words). One could extend this project by generating different permutations of the sentence's parts (thus providing more than one fluent answer per question) or by training a seq2seq model that does what our rule-based system does (defining a new text-to-text task).
-
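A minimal sketch of the parsing step above, assuming stanza's Persian (`fa`) pipeline (the syntactic rules that turn this parse into a fluent answer are the project's own and are not reproduced here):

```python
import stanza

# One-time download of the Persian models, then build a dependency-parsing pipeline.
stanza.download("fa")
nlp = stanza.Pipeline(lang="fa", processors="tokenize,pos,lemma,depparse")

# Parse a question; the syntactic rules would operate on this dependency tree.
doc = nlp("باشگاه هاکی ساوتهمپتون چه نام دارد؟")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.deprel)
```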
- ### Supported Tasks and Leaderboards
-
- This dataset can be used for the question-answering task, especially when you want to generate fluent responses. You can train a seq2seq model on this dataset to generate fluent responses, as done in [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf).
-
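A hedged sketch of such a seq2seq setup follows; the base model (`google/mt5-small`), sequence lengths, and training arguments are illustrative assumptions rather than a configuration taken from the paper:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

data = load_dataset("SLPL/syntran-fa", split="train")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")  # assumed base model
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

def preprocess(batch):
    # Input: the question followed by its short answer; target: the fluent answer.
    inputs = [q + " " + a for q, a in zip(batch["question"], batch["short_answer"])]
    enc = tokenizer(inputs, truncation=True, max_length=128)
    enc["labels"] = tokenizer(text_target=batch["fluent_answer"],
                              truncation=True, max_length=128)["input_ids"]
    return enc

tokenized = data.map(preprocess, batched=True, remove_columns=data.column_names)
trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="syntran-fa-mt5"),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```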
- ### Languages
-
- + Persian (fa)
-
- ## Dataset Structure
- Each row of the dataset looks like the following:
- ```json
- {
-   "id": 0,
-   "question": "باشگاه هاکی ساوتهمپتون چه نام دارد؟",
-   "short_answer": "باشگاه هاکی ساوتهمپتون",
-   "fluent_answer": "باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.",
-   "bert_loss": 1.110097069682014
- }
- ```
- + `id`: the entry id in the dataset
- + `question`: the question
- + `short_answer`: the short answer corresponding to the `question` (the primary answer)
- + `fluent_answer`: the fluent (long) answer generated from both the `question` and the `short_answer` (the secondary answer)
- + `bert_loss`: the loss that [pars-bert](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) reports when given the `fluent_answer` as input; the higher the loss, the less fluent the sentence is likely to be (a scoring sketch appears after the note below)
-
- Note: the dataset is sorted in increasing order of `bert_loss`, so earlier entries are more likely to be fluent.
-
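A rough sketch of how such a score can be computed, using ParsBERT's masked-language-modeling loss on the sentence's own tokens (this reflects our reading of the field, not necessarily the exact script used to build the dataset):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "HooshvareLab/bert-base-parsbert-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
model.eval()

def bert_loss(sentence: str) -> float:
    # Cross-entropy of the MLM head against the sentence's own tokens;
    # a higher value suggests a less fluent sentence.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

print(bert_loss("باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد."))
```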
- ### Data Splits
-
- Currently, the dataset provides only the `train` split. A `test` split will be added soon.
-
- ## Dataset Creation
-
- ### Source Data
- The source datasets we used are as follows:
-
- + [PersianQA](https://github.com/sajjjadayobi/PersianQA)
- + [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
-
- #### Initial Data Collection and Normalization
-
- We extracted all short-answer entries (verb-less phrases of up to ~4 words) from open-source Farsi QA datasets and applied rules based on each question's parse tree to build long (fluent) answers.
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- The dataset is entirely a subset of well-known open-source datasets, so all of its information is already publicly available on the internet. Nevertheless, we do not take responsibility for any of its content.
-
- ## Additional Information
-
- ### Dataset Curators
-
- The dataset was put together entirely during the Asr Gooyesh Pardaz company's summer internship program, under the supervision of Soroush Gooran and Prof. Hossein Sameti and with the mentorship of Sadra Sabouri. This was Farhan Farsi's first internship project.
-
- ### Licensing Information
-
- MIT
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
data/train-00000-of-00001-3e7f957a24fef68a.parquet → SLPL--syntran-fa/parquet-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bdbf5332e5c2fe4d4cf3a65868aee21fc2046c862154498ce63bdd8cbb066606
- size 6677365
+ oid sha256:ef27ce89d81d98d86d2797b00938dd3eb87bf387dea04766219dfed68f93e97a
+ size 6814459
dataset_infos.json DELETED
@@ -1,52 +0,0 @@
- {"SLPL--syntran-fa": {
-   "description": "Syntactic Transformed Version of Farsi QA datasets to make fluent responses from questions and short answers.",
-   "citation": "",
-   "homepage": "[Sharif-SLPL](https://github.com/Sharif-SLPL)",
-   "license": "mit",
-   "features": {
-     "id": {
-       "dtype": "int64",
-       "id": null,
-       "_type": "Value"
-     },
-     "question": {
-       "dtype": "string",
-       "id": null,
-       "_type": "Value"
-     },
-     "short_answer": {
-       "dtype": "string",
-       "id": null,
-       "_type": "Value"
-     },
-     "fluent_answer": {
-       "dtype": "string",
-       "id": null,
-       "_type": "Value"
-     },
-     "bert_loss": {
-       "dtype": "float64",
-       "id": null,
-       "_type": "Value"
-     }
-   },
-   "post_processed": null,
-   "supervised_keys": null,
-   "task_templates": null,
-   "builder_name": null,
-   "config_name": null,
-   "version": null,
-   "splits": {
-     "train": {
-       "name": "train",
-       "num_bytes": 11704035,
-       "num_examples": 48106,
-       "dataset_name": "syntran-fa"
-     }
-   },
-   "download_checksums": null,
-   "download_size": 6677365,
-   "post_processing_size": null,
-   "dataset_size": 11704035,
-   "size_in_bytes": 18381400
- }}