EdwardHayashi-2023 committed on
Commit
caa01d8
1 Parent(s): a87a86d

Update README.md

---
license: cc-by-4.0
---

# Acted Emotional Speech Dynamic Database v1.0

## About

AESDD v1.0 was created in October 2017 in the Laboratory of Electronic Media, School of Journalism and Mass Communications, Aristotle University of Thessaloniki, for the needs of the Speech Emotion Recognition research of the Multidisciplinary Media & Mediated Communication Research Group (M3C, http://m3c.web.auth.gr/). It is a collection of utterances of emotional speech acted by professional actors. This version is the initial state of AESDD. The purpose of this project is the continuous growth of the database through the collaborative effort of the M3C research group and theatrical teams.

## Creation of the Database

For the creation of v.1 of the database, 5 professional actors (3 female and 2 male) were recorded. 19 utterances of ambiguous, out-of-context emotional content were chosen. The actors performed these 19 utterances in each of the 5 chosen emotions. One extra improvised utterance was added for every actor and emotion. The guidance of the actors and the choice of the final recordings were supervised by a scientific expert in dramatology. For some of the utterances, more than one take was qualified. Consequently, around 500 utterances are included in the final database.

UPDATE: Since the AESDD is dynamic by definition, more actors have been recorded and added, following the same naming scheme described in the section "Organising the Database".

## Chosen Emotions

Five emotions were chosen:

- a (anger)
- d (disgust)
- f (fear)
- h (happiness)
- s (sadness)

## Organising the Database

There are five folders, named after the five emotion classes. Every file name in the database has the following form:

    xAA (B)

where x is the first letter of the emotion (a → anger, h → happiness, etc.), AA is the number of the utterance (01, 02, ..., 20), and B is the number of the speaker (1 → 1st speaker, 2 → 2nd speaker, etc.).

e.g. 'a03 (4).wav' is the 3rd utterance spoken by the 4th speaker with anger.

In cases where two takes were qualified for the same utterance, they are distinguished with a lower-case letter.

e.g. 'f18 (5).wav' and 'f18 (5)b.wav' are two different versions of the 5th actor saying the 18th utterance with fear.
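
The naming scheme above can be decoded programmatically. The following is a minimal sketch (not part of the dataset's tooling): a hypothetical `parse_aesdd_name` helper that splits a file name such as 'a03 (4).wav' or 'f18 (5)b.wav' into emotion, utterance number, speaker number, and take letter, assuming the first take carries no letter suffix.

```python
import re

# Emotion codes as defined in the "Chosen Emotions" section.
EMOTIONS = {"a": "anger", "d": "disgust", "f": "fear",
            "h": "happiness", "s": "sadness"}

# xAA (B) with an optional lower-case take suffix, e.g. 'f18 (5)b.wav'.
PATTERN = re.compile(r"^([adfhs])(\d{2}) \((\d+)\)([a-z]?)\.wav$")

def parse_aesdd_name(filename):
    """Return (emotion, utterance_no, speaker_no, take) or None if
    the name does not follow the AESDD scheme. The take is an empty
    string for the default (unsuffixed) recording."""
    m = PATTERN.match(filename)
    if m is None:
        return None
    letter, utterance, speaker, take = m.groups()
    return EMOTIONS[letter], int(utterance), int(speaker), take

print(parse_aesdd_name("a03 (4).wav"))   # ('anger', 3, 4, '')
print(parse_aesdd_name("f18 (5)b.wav"))  # ('fear', 18, 5, 'b')
```

This makes it straightforward to, for example, group all takes of one utterance by speaker or filter the corpus by emotion when building a Speech Emotion Recognition training set.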