msis committed
Commit 73c56fb
Parent(s): 1e87191

Update README.md

Files changed (1): README.md (+176 -2)

README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+pretty_name: Tarteel AI - EveryAyah Dataset
 dataset_info:
   features:
   - name: audio
@@ -21,7 +22,180 @@ dataset_info:
     num_examples: 23474
   download_size: 117190597305
   dataset_size: 311210584610.23804
+annotations_creators:
+- expert-generated
+language_creators:
+- crowdsourced
+language:
+- ar
+license:
+- mit
+multilinguality:
+- monolingual
+paperswithcode_id: tarteel-everyayah
+size_categories:
+- 100K<n<1M
+source_datasets:
+- original
+task_categories:
+- automatic-speech-recognition
+task_ids: []
+train-eval-index:
+- config: clean
+  task: automatic-speech-recognition
+  task_id: speech_recognition
+  splits:
+    train_split: train
+    eval_split: test
+    validation_split: validation
+  col_mapping:
+    audio: audio
+    text: text
+    reciter: text
+  metrics:
+  - type: wer
+    name: WER
+  - type: cer
+    name: CER
 ---
-# Dataset Card for "everyayah"
+# Dataset Card for Tarteel AI's EveryAyah Dataset
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+  - [Contributions](#contributions)
+
+## Dataset Description
+
+- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
+- **Repository:** [Needs More Information]
+- **Point of Contact:** [Nawar Halabi](mailto:[email protected])
+
+### Dataset Summary
+
+This dataset is a collection of recordings of Quranic verses (ayahs) recited by different reciters, paired with their transcriptions. It is curated by Tarteel AI.
+
+### Supported Tasks and Leaderboards
+
+The dataset is configured for automatic speech recognition (see the `train-eval-index` metadata above), with word error rate (WER) and character error rate (CER) as the declared evaluation metrics. Leaderboards: [Needs More Information]
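+
+As a minimal sketch (not part of the original card; the strings are placeholders), these metrics can be computed with the `evaluate` library:
+
+```python
+# Hypothetical scoring example for the WER/CER metrics named above.
+import evaluate
+
+wer = evaluate.load("wer")
+cer = evaluate.load("cer")
+
+references = ["reference transcription of an ayah"]    # placeholder
+predictions = ["hypothesis transcription of an ayah"]  # placeholder
+
+print("WER:", wer.compute(references=references, predictions=predictions))
+print("CER:", cer.compute(references=references, predictions=predictions))
+```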
+
+### Languages
+
+The audio is in Arabic.
+
+## Dataset Structure
+
+### Data Instances
+
+A typical data point comprises the audio of a single recited ayah, called `audio`, its transcription, called `text`, and the name of the `reciter`.
+An illustrative example (the concrete values below are placeholders, not an actual record):
+```
+{
+ 'audio': {'path': 'path/to/recitation.mp3',
+  'array': array([-0.00048828, -0.00018311, -0.00137329, ...], dtype=float32),
+  'sampling_rate': 16000},
+ 'text': 'بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ',
+ 'reciter': 'reciter name'
+}
+```
+
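+A sketch of loading the dataset with the `datasets` library (the Hub id `tarteel-ai/everyayah` is an assumption inferred from the card name; verify it before use):
+
+```python
+from datasets import load_dataset
+
+# streaming=True avoids downloading the full ~117 GB archive up front.
+everyayah = load_dataset("tarteel-ai/everyayah", split="train", streaming=True)
+
+first = next(iter(everyayah))          # one example: audio, text, reciter
+print(first["text"], first["reciter"])
+```
+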
+### Data Fields
+
+- audio: A dictionary containing the path to the audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short sketch of this access pattern follows the list.
+
+- text: the transcription of the audio file.
+
+- reciter: the reciter of the audio file.
+
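+A minimal sketch of that access pattern (the Hub id and the 16 kHz target rate are assumptions, not part of the card):
+
+```python
+from datasets import Audio, load_dataset
+
+everyayah = load_dataset("tarteel-ai/everyayah", split="train")  # assumed Hub id
+
+# Index the row first so that only this single file is decoded.
+sample = everyayah[0]
+print(sample["audio"]["sampling_rate"], sample["reciter"])
+
+# Resample on access by casting the column, e.g. for 16 kHz ASR models.
+everyayah = everyayah.cast_column("audio", Audio(sampling_rate=16_000))
+```
+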
+### Data Splits
+
+|         | Train  | Test  | Validation |
+| ------- | ------ | ----- | ---------- |
+| dataset | 187785 | 23473 | 23474      |
+
+## Dataset Creation
+
+### Curation Rationale
+
+### Source Data
+
+#### Initial Data Collection and Normalization
+
+#### Who are the source language producers?
+
+### Annotations
+
+#### Annotation process
+
+#### Who are the annotators?
+
+### Personal and Sensitive Information
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
+
+[More Information Needed]
+
+### Other Known Limitations
+
+[Needs More Information]
+
+## Additional Information
+
+### Dataset Curators
+
+### Licensing Information
+
+[MIT](https://opensource.org/licenses/MIT)
+
+### Citation Information
+
+```
+
+```
+
+### Contributions
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+This dataset was created by: