Dataset schema (column name, type, observed value range):

  id                   string, 10 characters
  title                string, 19 to 145 characters
  abstract             string, 273 to 1.91k characters
  full_text            dict
  qas                  dict
  figures_and_tables   dict
  question             sequence
  retrieval_gt         sequence
  answer_gt            sequence
  __index_level_0__    int64, 0 to 887
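The rows that follow are flattened JSON records and are easier to inspect programmatically than in this preview. A minimal loading sketch, assuming the records are hosted as a Hugging Face dataset and using the `datasets` library; the repository id below is a placeholder assumption, not the dataset's actual path:

```python
# Minimal inspection sketch. The repository id is a placeholder assumption;
# substitute the dataset's actual path when loading it.
from datasets import load_dataset

ds = load_dataset("org/qasper-style-rag", split="train")  # hypothetical id

print(ds.features)                     # id, title, abstract, full_text, qas, ...
row = ds[0]
print(row["id"], "-", row["title"])

# `question` and `answer_gt` appear as parallel per-row sequences in the preview.
for question, answer in zip(row["question"], row["answer_gt"]):
    print("Q:", question)
    print("A:", answer)
```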
1911.06118
Learning Multi-Sense Word Distributions using Approximate Kullback-Leibler Divergence
Learning word representations has garnered increasing attention in the recent past due to its diverse text applications. Word embeddings encapsulate the syntactic and semantic regularities of sentences. Modelling word embeddings as multi-sense Gaussian mixture distributions additionally captures the uncertainty and polysemy of words. We propose to learn the Gaussian mixture representation of words using a Kullback-Leibler (KL) divergence based objective function. The KL divergence based energy function provides a better distance metric that can effectively capture entailment and distributional similarity among words. Because the KL divergence between Gaussian mixtures is intractable, we use an approximation of the KL divergence between Gaussian mixtures. We perform qualitative and quantitative experiments on benchmark word similarity and entailment datasets, which demonstrate the effectiveness of the proposed approach.
{ "paragraphs": [ [ "Language modelling in its inception had one-hot vector encoding of words. However, it captures only alphabetic ordering but not the word semantic similarity. Vector space models helps to learn word representations in a lower dimensional space and also captures semantic similarity. Learning word embedding aids in natural language processing tasks such as question answering and reasoning BIBREF0, stance detection BIBREF1, claim verification BIBREF2.", "Recent models BIBREF3, BIBREF4 work on the basis that words with similar context share semantic similarity. BIBREF4 proposes a neural probabilistic model which models the target word probability conditioned on the previous words using a recurrent neural network. Word2Vec models BIBREF3 such as continuous bag-of-words (CBOW) predict the target word given the context, and skip-gram model works in reverse of predicting the context given the target word. While, GloVe embeddings were based on a Global matrix factorization on local contexts BIBREF5. However, the aforementioned models do not handle words with multiple meanings (polysemies).", "BIBREF6 proposes a neural network approach considering both local and global contexts in learning word embeddings (point estimates). Their multiple prototype model handles polysemous words by providing apriori heuristics about word senses in the dataset. BIBREF7 proposes an alternative to handle polysemous words by a modified skip-gram model and EM algorithm. BIBREF8 presents a non-parametric based alternative to handle polysemies. However, these approaches fail to consider entailment relations among the words. BIBREF9 learn a Gaussian distribution per word using the expected likelihood kernel. However, for polysemous words, this may lead to word distributions with larger variances as it may have to cover various senses.", "BIBREF10 proposes multimodal word distribution approach. It captures polysemy. However, the energy based objective function fails to consider asymmetry and hence entailment. Textual entailment recognition is necessary to capture lexical inference relations such as causality (for example, mosquito $\\rightarrow $ malaria), hypernymy (for example, dog $\\models $ animal) etc.", "In this paper, we propose to obtain multi-sense word embedding distributions by using a variant of max margin objective based on the asymmetric KL divergence energy function to capture textual entailment. Multi-sense distributions are advantageous in capturing polysemous nature of words and in reducing the uncertainty per word by distributing it across senses. However, computing KL divergence between mixtures of Gaussians is intractable, and we use a KL divergence approximation based on stricter upper and lower bounds. While capturing textual entailment (asymmetry), we have also not compromised on capturing symmetrical similarity between words (for example, funny and hilarious) which will be elucidated in Section $3.1$. We also show the effectiveness of the proposed approach on the benchmark word similarity and entailment datasets in the experimental section." ], [ "Probabilistic representation of words helps one model uncertainty in word representation, and polysemy. 
Given a corpus $V$, containing a list of words each represented as $w$, the probability density for a word $w$ can be represented as a mixture of Gaussians with $C$ components BIBREF10.", "Here, $p_{w,j}$ represents the probability of word $w$ belonging to the component $j$, $\\operatorname{\\mathbf {\\mu }}_{w,j}$ represents $D$ dimensional word representation corresponding to the $j^{th}$ component sense of the word $w$, and $\\Sigma _{w,j}$ represents the uncertainty in representation for word $w$ belonging to component $j$." ], [ "The model parameters (means, covariances and mixture weights) $\\theta $ can be learnt using a variant of max-margin objective BIBREF11.", "Here $E_\\theta (\\cdot , \\cdot )$ represents an energy function which assigns a score to the pair of words, $w$ is a particular word under consideration, $c$ its positive context (same context), and $c^{\\prime }$ the negative context. The objective aims to push the margin of the difference between the energy function of a word $w$ to its positive context $c$ higher than its negative context $c$ by a threshold of $m$. Thus, word pairs in the same context gets a higher energy than the word pairs in the dissimilar context. BIBREF10 consider the energy function to be an expected likelihood kernel which is defined as follows.", "This is similar to the cosine similarity metric over vectors and the energy between two words is maximum when they have similar distributions. But, the expected likelihood kernel is a symmetric metric which will not be suitable for capturing ordering among words and hence entailment." ], [ "As each word is represented by a mixture of Gaussian distributions, KL divergence is a better choice of energy function to capture distance between distributions. Since, KL divergence is minimum when the distributions are similar and maximum when they are dissimilar, energy function is taken as exponentiated negative KL divergence.", "However, computing KL divergence between Gaussian mixtures is intractable and obtaining exact KL value is not possible. One way of approximating the KL is by Monte-Carlo approximation but it requires large number of samples to get a good approximation and is computationally expensive on high dimensional embedding space.", "Alternatively, BIBREF12 presents a KL approximation between Gaussian mixtures where they obtain an upper bound through product of Gaussian approximation method and a lower bound through variational approximation method. In BIBREF13, the authors combine the lower and upper bounds from approximation methods of BIBREF12 to provide a stricter bound on KL between Gaussian mixtures. Lets consider Gaussian mixtures for the words $w$ and $v$ as follows.", "The approximate KL divergence between the Gaussian mixture representations over the words $w$ and $v$ is shown in equation DISPLAY_FORM8. More details on approximation is included in the Supplementary Material.", "where $EL_{ik}(w,w) = \\int f_{w,i} (\\operatorname{\\mathbf {x}}) f_{w,k} (\\operatorname{\\mathbf {x}}) d\\operatorname{\\mathbf {x}}$ and $EL_{ij}(w,v) = \\int f_{w,i} (\\operatorname{\\mathbf {x}}) f_{v,k} (\\operatorname{\\mathbf {x}}) d\\operatorname{\\mathbf {x}}$. Note that the expected likelihood kernel appears component wise inside the approximate KL divergence derivation.", "One advantage of using KL as energy function is that it enables to capture asymmetry in entailment datasets. 
For eg., let us consider the words 'chair' with two senses as 'bench' and 'sling', and 'wood' with two senses as 'trees' and 'furniture'. The word chair ($w$) is entailed within wood ($v$), i.e. chair $\\models $ wood. Now, minimizing the KL divergence necessitates maximizing $\\log {\\sum _j p_{v,j} \\exp ({-KL(f_{w,i} (\\operatorname{\\mathbf {x}})||f_{v,j}(\\operatorname{\\mathbf {x}}))})}$ which in turn minimizes $KL(f_{w,i}(\\operatorname{\\mathbf {x}})||f_{v,j}(\\operatorname{\\mathbf {x}}))$. This will result in the support of the $i^{th}$ component of $w$ to be within the $j^{th}$ component of $v$, and holds for all component pairs leading to the entailment of $w$ within $v$. Consequently, we can see that bench $\\models $ trees, bench $\\models $ furniture, sling $\\models $ trees, and sling $\\models $ furniture. Thus, it introduces lexical relationship between the senses of child word and that of the parent word. Minimizing the KL also necessitates maximizing $\\log {\\sum _j {p_{v,j}} EL_{ij}(w,v)}$ term for all component pairs among $w$ and $v$. This is similar to maximizing expected likelihood kernel, which brings the means of $f_{w,i}(\\operatorname{\\mathbf {x}})$ and $f_{v,j}(\\operatorname{\\mathbf {x}})$ closer (weighted by their co-variances) as discussed in BIBREF10. Hence, the proposed approach captures the best of both worlds, thereby catering to both word similarity and entailment.", "We also note that minimizing the KL divergence necessitates minimizing $\\log {\\sum _k p_{w,k} \\exp ({-KL(f_{w,i}||f_{w,k})})}$ which in turn maximizes $KL(f_{w,i}||f_{w,k})$. This prevents the different mixture components of a word converging to single Gaussian and encourages capturing different possible senses of the word. The same is also achieved by minimizing $\\sum _k {p_{w,k}} EL_{ik}(w,w)$ term and act as a regularization term which promotes diversity in learning senses of a word." ], [ "We train our proposed model GM$\\_$KL (Gaussian Mixture using KL Divergence) on the Text8 dataset BIBREF14 which is a pre-processed data of $17M$ words from wikipedia. Of which, 71290 unique and frequent words are chosen using the subsampling trick in BIBREF15. We compare GM$\\_$KL with the previous approaches w2g BIBREF9 ( single Gaussian model) and w2gm BIBREF10 (mixture of Gaussian model with expected likelihood kernel). For all the models used for experimentation, the embedding size ($D$) was set to 50, number of mixtures to 2, context window length to 10, batch size to 128. The word embeddings were initialized using a uniform distribution in the range of $[-\\sqrt{\\frac{3}{D}}$, $\\sqrt{\\frac{3}{D}}]$ such that the expectation of variance is 1 and mean 0 BIBREF16. One could also consider initializing the word embeddings using other contextual representations such as BERT BIBREF17 and ELMo BIBREF18 in the proposed approach. In order to purely analyze the performance of $\\emph {GM\\_KL}$ over the other models, we have chosen initialization using uniform distribution for experiments. For computational benefits, diagonal covariance is used similar to BIBREF10. Each mixture probability is constrained in the range $[0,1]$, summing to 1 by optimizing over unconstrained scores in the range $(-\\infty ,\\infty )$ and converting scores to probability using softmax function. The mixture scores are initialized to 0 to ensure fairness among all the components. The threshold for negative sampling was set to $10^{-5}$, as recommended in BIBREF3. 
Mini-batch gradient descent with Adagrad optimizer BIBREF19 was used with initial learning rate set to $0.05$.", "Table TABREF9 shows the qualitative results of GM$\\_$KL. Given a query word and component id, the set of nearest neighbours along with their respective component ids are listed. For eg., the word `plane' in its 0th component captures the `geometry' sense and so are its neighbours, and its 1st component captures `vehicle' sense and so are its corresponding neighbours. Other words such as `rock' captures both the `metal' and `music' senses, `star' captures `celebrity' and `astronomical' senses, and `phone' captures `telephony' and `internet' senses.", "We quantitatively compare the performance of the GM$\\_$KL, w2g, and w2gm approaches on the SCWS dataset BIBREF6. The dataset consists of 2003 word pairs of polysemous and homonymous words with labels obtained by an average of 10 human scores. The Spearman correlation between the human scores and the model scores are computed. To obtain the model score, the following metrics are used:", "MaxCos: Maximum cosine similarity among all component pairs of words $w$ and $v$:", "AvgCos: Average component-wise cosine similarity between the words $w$ and $v$.", "KL$\\_$approx: Formulated as shown in (DISPLAY_FORM8) between the words $w$ and $v$.", "KL$\\_$comp: Maximum component-wise negative KL between words $w$ and $v$:", "Table TABREF17 compares the performance of the approaches on the SCWS dataset. It is evident from Table TABREF17 that GM$\\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset.", "Table TABREF18 shows the Spearman correlation values of GM$\\_$KL model evaluated on the benchmark word similarity datasets: SL BIBREF20, WS, WS-R, WS-S BIBREF21, MEN BIBREF22, MC BIBREF23, RG BIBREF24, YP BIBREF25, MTurk-287 and MTurk-771 BIBREF26, BIBREF27, and RW BIBREF28. The metric used for comparison is 'AvgCos'. It can be seen that for most of the datasets, GM$\\_$KL achieves significantly better correlation score than w2g and w2gm approaches. Other datasets such as MC and RW consist of only a single sense, and hence w2g model performs better and GM$\\_$KL achieves next better performance. The YP dataset have multiple senses but does not contain entailed data and hence could not make use of entailment benefits of GM$\\_$KL.", "Table TABREF19 shows the evaluation results of GM$\\_$KL model on the entailment datasets such as entailment pairs dataset BIBREF29 created from WordNet with both positive and negative labels, a crowdsourced dataset BIBREF30 of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset BIBREF31. The 'MaxCos' similarity metric is used for evaluation and the best precision and best F1-score is shown, by picking the optimal threshold. Overall, GM$\\_$KL performs better than both w2g and w2gm approaches." ], [ "We proposed a KL divergence based energy function for learning multi-sense word embedding distributions modelled as Gaussian mixtures. Due to the intractability of the Gaussian mixtures for the KL divergence measure, we use an approximate KL divergence function. 
We also demonstrated that the proposed GM$\\_$KL approaches performed better than other approaches on the benchmark word similarity and entailment datasets.", "tocsectionAppendices" ], [ "KL between gaussian mixtures $f_{w}(\\operatorname{\\mathbf {x}})$ and $f_{v}(\\operatorname{\\mathbf {x}})$ can be decomposed as:", "BIBREF12 presents KL approximation between gaussian mixtures using", "product of gaussian approximation method where KL is approximated using product of component gaussians and", "variational approximation method where KL is approximated by introducing some variational parameters.", "The product of component gaussian approximation method using Jensen's inequality provides upper bounds as shown in equations DISPLAY_FORM23 and .", "The variational approximation method provides lower bounds as shown in equations DISPLAY_FORM24 and DISPLAY_FORM25.", "where $H$ represents the entropy term and the entropy of $i^{th}$ component of word $w$ with dimension $D$ is given as", "In BIBREF13, the authors combine the lower and upper bounds from approximation methods of BIBREF12 to formulate a stricter bound on KL between gaussian mixtures.", "From equations DISPLAY_FORM23 and DISPLAY_FORM25, a stricter lower bound for KL between gaussian mixtures is obtained as shown in equation DISPLAY_FORM26", "From equations and DISPLAY_FORM24, a stricter upper bound for KL between gaussian mixtures is obtained as shown in equation DISPLAY_FORM27", "Finally, the KL between gaussian mixtures is taken as the mean of KL upper and lower bounds as shown in equation DISPLAY_FORM28.", "" ] ], "section_name": [ "Introduction", "Methodology ::: Word Representation", "Objective function", "Objective function ::: Proposed Energy function", "Experimentation and Results", "Conclusion", "Approximation for KL divergence between mixtures of gaussians" ] }
{ "answers": [ { "annotation_id": [ "2ca0c71c7cea5e83b0bd5cc786e8d446ef0f4229", "89ea124108fcdba2e6fa96483b840dc13511d507" ], "answer": [ { "evidence": [ "Table TABREF18 shows the Spearman correlation values of GM$\\_$KL model evaluated on the benchmark word similarity datasets: SL BIBREF20, WS, WS-R, WS-S BIBREF21, MEN BIBREF22, MC BIBREF23, RG BIBREF24, YP BIBREF25, MTurk-287 and MTurk-771 BIBREF26, BIBREF27, and RW BIBREF28. The metric used for comparison is 'AvgCos'. It can be seen that for most of the datasets, GM$\\_$KL achieves significantly better correlation score than w2g and w2gm approaches. Other datasets such as MC and RW consist of only a single sense, and hence w2g model performs better and GM$\\_$KL achieves next better performance. The YP dataset have multiple senses but does not contain entailed data and hence could not make use of entailment benefits of GM$\\_$KL.", "Table TABREF19 shows the evaluation results of GM$\\_$KL model on the entailment datasets such as entailment pairs dataset BIBREF29 created from WordNet with both positive and negative labels, a crowdsourced dataset BIBREF30 of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset BIBREF31. The 'MaxCos' similarity metric is used for evaluation and the best precision and best F1-score is shown, by picking the optimal threshold. Overall, GM$\\_$KL performs better than both w2g and w2gm approaches." ], "extractive_spans": [], "free_form_answer": "Spearman correlation values of GM_KL model evaluated on the benchmark word similarity datasets.\nEvaluation results of GM_KL model on the entailment datasets such as entailment pairs dataset created from WordNet, crowdsourced dataset of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset.", "highlighted_evidence": [ "Table TABREF18 shows the Spearman correlation values of GM$\\_$KL model evaluated on the benchmark word similarity datasets: SL BIBREF20, WS, WS-R, WS-S BIBREF21, MEN BIBREF22, MC BIBREF23, RG BIBREF24, YP BIBREF25, MTurk-287 and MTurk-771 BIBREF26, BIBREF27, and RW BIBREF28. ", "Table TABREF19 shows the evaluation results of GM$\\_$KL model on the entailment datasets such as entailment pairs dataset BIBREF29 created from WordNet with both positive and negative labels, a crowdsourced dataset BIBREF30 of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset BIBREF31" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF9 shows the qualitative results of GM$\\_$KL. Given a query word and component id, the set of nearest neighbours along with their respective component ids are listed. For eg., the word `plane' in its 0th component captures the `geometry' sense and so are its neighbours, and its 1st component captures `vehicle' sense and so are its corresponding neighbours. Other words such as `rock' captures both the `metal' and `music' senses, `star' captures `celebrity' and `astronomical' senses, and `phone' captures `telephony' and `internet' senses." ], "extractive_spans": [ "Given a query word and component id, the set of nearest neighbours along with their respective component ids are listed" ], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF9 shows the qualitative results of GM$\\_$KL. Given a query word and component id, the set of nearest neighbours along with their respective component ids are listed. 
For eg., the word `plane' in its 0th component captures the `geometry' sense and so are its neighbours, and its 1st component captures `vehicle' sense and so are its corresponding neighbours. Other words such as `rock' captures both the `metal' and `music' senses, `star' captures `celebrity' and `astronomical' senses, and `phone' captures `telephony' and `internet' senses." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "c0342f4fa1ae1e4afcb3e4d49be641520d3de53b" ], "answer": [ { "evidence": [ "Table TABREF17 compares the performance of the approaches on the SCWS dataset. It is evident from Table TABREF17 that GM$\\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset." ], "extractive_spans": [ "GM$\\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset." ], "free_form_answer": "", "highlighted_evidence": [ ". It is evident from Table TABREF17 that GM$\\_$KL achieves better correlation than existing approaches for various metrics on SCWS dataset." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "What are the qualitative experiments performed on benchmark datasets?", "How does this approach compare to other WSD approaches employing word embeddings?" ], "question_id": [ "26126068d72408555bcb52977cd669faf660bdf7", "660284b0a21fe3801e64dc9e0e51da5400223fe3" ], "question_writer": [ "f7c76ad7ff9c8b54e8c397850358fa59258c6672", "f7c76ad7ff9c8b54e8c397850358fa59258c6672" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 2: Spearman correlation (ρ * 100) on SCWS.", "Table 1: Qualitative results of GM KL", "Table 3: Spearman correlation results on word similarity datasets.", "Table 4: Results on entailment datasets" ], "file": [ "4-Table2-1.png", "4-Table1-1.png", "5-Table3-1.png", "5-Table4-1.png" ] }
[ "What are the qualitative experiments performed on benchmark datasets?" ]
[ [ "1911.06118-Experimentation and Results-9", "1911.06118-Experimentation and Results-1", "1911.06118-Experimentation and Results-8" ] ]
[ "Spearman correlation values of GM_KL model evaluated on the benchmark word similarity datasets.\nEvaluation results of GM_KL model on the entailment datasets such as entailment pairs dataset created from WordNet, crowdsourced dataset of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset." ]
246
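The record above (1911.06118) hinges on approximating the intractable KL divergence between Gaussian mixtures. As a rough illustration, the sketch below implements the variational approximation referenced in the record's appendix for diagonal-covariance mixtures; it is one of the bounds the paper combines, not the stricter combined bound, and the function names and toy parameters are invented for the example.

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ) for diagonal covariances."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def approx_kl_mixtures(pi, mus, vars_w, omega, nus, vars_v):
    """Variational approximation of the KL between two diagonal Gaussian mixtures
    (one of the bounds cited in the paper, not the combined upper/lower bound)."""
    total = 0.0
    for i in range(len(pi)):
        # Similarity of component i of w to the other components of w ...
        self_term = sum(pi[k] * np.exp(-kl_diag_gauss(mus[i], vars_w[i], mus[k], vars_w[k]))
                        for k in range(len(pi)))
        # ... and to the components of v.
        cross_term = sum(omega[j] * np.exp(-kl_diag_gauss(mus[i], vars_w[i], nus[j], vars_v[j]))
                         for j in range(len(omega)))
        total += pi[i] * (np.log(self_term) - np.log(cross_term))
    return total

# Toy example: two words, two senses each, 3-dimensional embeddings (invented numbers).
rng = np.random.default_rng(0)
pi, omega = np.array([0.6, 0.4]), np.array([0.5, 0.5])
mus, nus = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
vars_w, vars_v = np.ones((2, 3)), np.ones((2, 3))
print(approx_kl_mixtures(pi, mus, vars_w, omega, nus, vars_v))
```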
1907.03187
Applying a Pre-trained Language Model to Spanish Twitter Humor Prediction
Our entry into the HAHA 2019 Challenge placed $3^{rd}$ in the classification task and $2^{nd}$ in the regression task. We describe our system and innovations, and compare our results to a Naive Bayes baseline. A large Twitter-based corpus allowed us to train a Spanish-focused language model from scratch and transfer that knowledge to our competition model. To overcome the inherent errors in some labels, we reduce our class confidence with label smoothing in the loss function. All the code for our project is included in a GitHub repository for easy reference and to enable replication by others.
{ "paragraphs": [ [ "- !`Socorro, me ha picado una víbora!", "- ?`Cobra?", "- No, gratis.[5]", "Google Translation:", "- Help, I was bitten by a snake!", "- Does it charge?", "- Not free.", "[4]https://github.com/bfarzin/haha_2019_final, Accessed on 19 June 2019 [5]https://www.fluentin3months.com/spanish-jokes/, Accessed on 19 June 2019", "Humor does not translate well because it often relies on double-meaning or a subtle play on word choice, pronunciation, or context. These issues are further exacerbated in areas where space is a premium (as frequent on social media platforms), often leading to usage and development of shorthand, in-jokes, and self-reference. Thus, building a system to classify the humor of tweets is a difficult task. However, with transfer-learning and the Fast.ai library, we can build a high quality classifier in a foreign language. Our system outperforms a Naive Bayes Support Vector Machine (NBSVM) baseline, which is frequently considered a \"strong baseline\" for many Natural Language Processing (NLP) related tasks (see Wang et al BIBREF0 ).", "Rather than hand-crafted language features, we have taken an \"end to end\" approach building from the raw text to a final model that achieves the tasks as presented. Our paper lays out the details of the system and our code can be found in a GitHub repository for use by other researchers to extend the state of the art in sentiment analysis." ], [ "The Humor Analysis based on Human Annotation (HAHA) 2019 BIBREF1 competition asked for analysis of two tasks in the Spanish language based on a corpus of publicly collected data described in Castro et al. BIBREF2 :", "The HAHA dataset includes labeled data for 24,000 tweets and a test set of 6,000 tweets (80%/20% train/test split.) Each record includes the raw tweet text (including accents and emoticons), a binary humor label, the number of votes for each of five star ratings and a “Funniness Score” that is the average of the 1 to 5 star votes cast. Examples and data can be found on the CodaLab competition webpage." ], [ "We modify the method of Universal Langage Model Fine-tuning for Text Classification (ULMFiT) presented in Howard and Ruder BIBREF3 . The primary steps are:", "Below we will give more detail on each step and the parameters used to generate our system." ], [ "We collected a corpus for our LM based on Spanish Twitter using tweepy run for three 4-hour sessions and collecting any tweet with any of the terms 'el','su','lo','y' or 'en'. We excluded retweets to minimize repeated examples in our language model training. In total, we collected 475,143 tweets - a data set is nearly 16 times larger than the text provided by the competition alone. The frequency of terms, punctuation and vocabulary used on Twitter can be quite different from the standard Wikipedia corpus that is often used to train an LM from scratch.", "In the fine-tuning step, we combined the train and test text data without labels from the contest data." ], [ "We applied a list of default cleanup functions in sequence (see list below). They are close to the standard clean-up included in the Fast.ai library with the addition of one function for the Twitter dataset. Cleanup of data is key to expressing information in a compact way so that the LM can use the relevant data when trying to predict the next word in a sequence.", "Replace more than 3 repetitions of the same character (ie. 
grrrreat becomes g xxrep r 4 eat)", "Replace repetition at the word level (similar to above)", "Deal with ALL CAPS words replacing with a token and converting to lower case.", "Add spaces between special chars (ie. !!! to ! ! !)", "Remove useless spaces (remove more than 2 spaces in sequence)", "Addition: Move all text onto a single line by replacing new-lines inside a tweet with a reserved word (ie. \\n to xxnl)", "The following example shows the application of this data cleaning to a single tweet:", "Saber, entender y estar convencides que la frase \\", "#LaESILaDefendemosEntreTodes es nuestra linea es nuestro eje.\\", "#AlertaESI!!!!", "Vamos por mas!!! e invitamos a todas aquellas personas que quieran \\", "se parte.", "xxbos saber , entender y estar convencides que la frase \\", "# laesiladefendemosentretodes es nuestra linea es nuestro eje.\\", "xxnl # alertaesi xxrep 4 ! xxnl vamos por mas ! ! ! e invitamos a \\", "todas aquellas personas que quieran se parte." ], [ "We used sentencepiece BIBREF4 to parse into sub-word units and reduce the possible out-of-vocabulary terms in the data set. We selected a vocab size of 30,000 and used the byte-pair encoding (BPE) model. To our knowledge this is the first time that the BPE toenization has been used with ULMFiT in a competition model." ], [ "We train the LM using a 90/10 training/validation split, reporting the validation loss and accuracy of next-word prediction on the validation set. For the LM, we selected an ASGD Weight-Dropped Long Short Term Memory (AWD_LSTM, described in Merity et al. BIBREF5 ) model included in Fast.ai. We replaced the typical Long Short Term Memory (LSTM) units with Quasi Recurrent Neural Network (QRNN, described in Bradbury et al. BIBREF6 ) units. Our network has 2304 hidden-states, 3 layers and a softmax layer to predict the next-word. We tied the embedding weights BIBREF7 on the encoder and decoder for training. We performed some simple tests with LSTM units and a Transformer Language model, finding all models were similar in performance during LM training. We thus chose to use QRNN units due to improved training speed compared to the alternatives. This model has about 60 million trainable parameters.", "Parameters used for training and fine-tuning are shown in Table TABREF21 . For all networks we applied a dropout multiplier which scales the dropout used throughout the network. We used the Adam optimizer with weight decay as indicated in the table.", "Following the work of Smith BIBREF8 we found the largest learning-rate that we could apply and then ran a one-cycle policy for a single epoch. This largest weight is shown in Table TABREF21 under \"Learning Rate.\" Subsequent training epochs were run with one-cycle and lower learning rates indicated in Table TABREF21 under \"Continued Training.\"" ], [ "Again, following the play-book from Howard and Ruder BIBREF3 , we change the pre-trained network head to a softmax or linear output layer (as appropriate for the transfer task) and then load the LM weights for the layers below. We train just the new head from random initialization, then unfreeze the entire network and train with differential learning rates. We layout our training parameters in Table TABREF25 .", "With the same learning rate and weight decay we apply a 5-fold cross-validation on the outputs and take the mean across the folds as our ensemble. We sample 20 random seeds (see more in section SECREF26 ) to find the best initialization for our gradient descent search. 
From these samples, we select the best validation F1 metric or Mean Squared Error (MSE) for use in our test submission.", "For the classifier, we have a hidden layer and softmax head. We over-sample the minority class to balance the outcomes for better training using Synthetic Minority Oversampling Technique (SMOTE, described in Chawla et al. BIBREF9 ). Our loss is label smoothing as described in Pereyra et al. BIBREF10 of the flattened cross-entropy loss. In ULMFiT, gradual unfreezing allows us to avoid catastropic forgetting, focus each stage of training and preventing over-fitting of the parameters to the training cases. We take an alternative approach to regularization and in our experiments found that we got similar results with label smoothing but without the separate steps and learning rate refinement required of gradual unfreezing.", "For the regression task, we fill all #N/A labels with scores of 0. We add a hidden layer and linear output head and MSE loss function." ], [ "For classification and regression, the random seed sets the initial random weights of the head layer. This initialization affects the final F1 metric achievable.", "Across each of the 20 random seeds, we average the 5-folds and obtain a single F1 metric on the validation set. The histogram of 20-seed outcomes is shown in Figure FIGREF27 and covers a range from 0.820 to 0.825 over the validation set. We selected our single best random seed for the test submission. With more exploration, a better seed could likely be found. Though we only use a single seed for the LM training, one could do a similar search with random seeds for LM pre-training, and further select the best down-stream seed similar to Czapla et al BIBREF11 ." ], [ "Table TABREF29 gives three results from our submissions in the competition. The first is the baseline NBSVM solution, with an F1 of 0.7548. Second is our first random seed selected for the classifier which produces a 0.8083 result. While better than the NBSVM solution, we pick the best validation F1 from the 20 seeds we tried. This produced our final submission of 0.8099. Our best model achieved an five-fold average F1 of 0.8254 on the validation set shown in Figure FIGREF27 but a test set F1 of 0.8099 - a drop of 0.0155 in F1 for the true out-of-sample data. Also note that our third place entry was 1.1% worse in F1 score than first place but 1.2% better in F1 than the 4th place entry." ], [ "This paper describes our implementation of a neural net model for classification and regression in the HAHA 2019 challenge. Our solution placed 3rd in Task 1 and 2nd in Task 2 in the final competition standings. We describe the data collection, pre-training, and final model building steps for this contest. Twitter has slang and abbreviations that are unique to the short-format as well as generous use of emoticons. To capture these features, we collected our own dataset based on Spanish Tweets that is 16 times larger than the competition data set and allowed us to pre-train a language model. Humor is subtle and using a label smoothed loss prevented us from becoming overconfident in our predictions and train more quickly without the gradual unfreezing required by ULMFiT. We have open-sourced all code used in this contest to further enable research on this task in the future." ], [ "BF was the primary researcher. PC contributed with suggestions for the random seeds as a hyper-parameters and label smoothing to speed up training. 
JH contributed with suggestion for higher dropout throughout the network for more generalization." ], [ "The author would like to thank all the participants on the fast.ai forums for their ideas and suggestions. Also, Kyle Kastner for his edits, suggestions and recommendations in writing up these results." ] ], "section_name": [ "Introduction", "Task and Dataset Description", "System Description", "Additional Data", "Cleaning", "Tokenization", "LM Training and Fine-tuning", "Classification and Regression Fitting", "Random Seed as a Hyperparamter", "Results", "Conclusion", "Author Contributions", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "9e154f4db2e43af9096728580e49f5505e3bd523" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "320204bfbbf87c73bf9a0b740d9534cfefe5362d", "5de66468c0d2839ddf5d9a7e098625bbf7fa564c" ], "answer": [ { "evidence": [ "Table TABREF29 gives three results from our submissions in the competition. The first is the baseline NBSVM solution, with an F1 of 0.7548. Second is our first random seed selected for the classifier which produces a 0.8083 result. While better than the NBSVM solution, we pick the best validation F1 from the 20 seeds we tried. This produced our final submission of 0.8099. Our best model achieved an five-fold average F1 of 0.8254 on the validation set shown in Figure FIGREF27 but a test set F1 of 0.8099 - a drop of 0.0155 in F1 for the true out-of-sample data. Also note that our third place entry was 1.1% worse in F1 score than first place but 1.2% better in F1 than the 4th place entry." ], "extractive_spans": [ "F1 of 0.8099" ], "free_form_answer": "", "highlighted_evidence": [ "Second is our first random seed selected for the classifier which produces a 0.8083 result. While better than the NBSVM solution, we pick the best validation F1 from the 20 seeds we tried. This produced our final submission of 0.8099.", "Our best model achieved an five-fold average F1 of 0.8254 on the validation set shown in Figure FIGREF27 but a test set F1 of 0.8099 - a drop of 0.0155 in F1 for the true out-of-sample data." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF29 gives three results from our submissions in the competition. The first is the baseline NBSVM solution, with an F1 of 0.7548. Second is our first random seed selected for the classifier which produces a 0.8083 result. While better than the NBSVM solution, we pick the best validation F1 from the 20 seeds we tried. This produced our final submission of 0.8099. Our best model achieved an five-fold average F1 of 0.8254 on the validation set shown in Figure FIGREF27 but a test set F1 of 0.8099 - a drop of 0.0155 in F1 for the true out-of-sample data. Also note that our third place entry was 1.1% worse in F1 score than first place but 1.2% better in F1 than the 4th place entry." ], "extractive_spans": [], "free_form_answer": "F1 score result of 0.8099", "highlighted_evidence": [ "Table TABREF29 gives three results from our submissions in the competition. The first is the baseline NBSVM solution, with an F1 of 0.7548. Second is our first random seed selected for the classifier which produces a 0.8083 result. While better than the NBSVM solution, we pick the best validation F1 from the 20 seeds we tried. This produced our final submission of 0.8099. Our best model achieved an five-fold average F1 of 0.8254 on the validation set shown in Figure FIGREF27 but a test set F1 of 0.8099 - a drop of 0.0155 in F1 for the true out-of-sample data. Also note that our third place entry was 1.1% worse in F1 score than first place but 1.2% better in F1 than the 4th place entry." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "What did the best systems use for their model?", "What were their results on the classification and regression tasks" ], "question_id": [ "b7c3f3942a07c118e57130bc4c3ec4adc431d725", "a5505e25ee9ae84090e1442034ddbb3cedabcf04" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "spanish", "spanish" ], "topic_background": [ "", "" ] }
{ "caption": [ "Table 1. LM Training Parameters", "Table 2. Classification and Regression Training Parameters", "Fig. 1. Histogram of F1 metric averaged across 5-fold metric", "Table 3. Comparative Results" ], "file": [ "5-Table1-1.png", "6-Table2-1.png", "6-Figure1-1.png", "7-Table3-1.png" ] }
[ "What were their results on the classification and regression tasks" ]
[ [ "1907.03187-Results-0" ] ]
[ "F1 score result of 0.8099" ]
248
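The record above (1907.03187) lists the tweet clean-up rules applied before tokenization: repetition tokens, an all-caps marker, padded punctuation, and a newline placeholder. The sketch below approximates those rules with regular expressions; token names such as `xxrep` and `xxnl` follow the examples in the text, `xxup` is assumed from the fastai convention, and the authors' exact implementation may differ.

```python
import re

def clean_tweet(text: str) -> str:
    """Rough approximation of the clean-up rules described above (not the authors' code)."""
    text = text.replace("\n", " xxnl ")                              # newline placeholder
    # Runs of 4+ identical characters -> repetition token, e.g. "!!!!" -> "xxrep 4 !"
    text = re.sub(r"(\S)\1{3,}",
                  lambda m: f" xxrep {len(m.group(0))} {m.group(1)} ", text)
    # ALL-CAPS words -> marker token plus lower-cased word ("xxup" is assumed, fastai-style)
    text = re.sub(r"\b([A-Z]{2,})\b", lambda m: f" xxup {m.group(1).lower()} ", text)
    # Pad punctuation so "!!!" becomes "! ! !"
    text = re.sub(r"([!?.,;:#@])", r" \1 ", text)
    # Drop the extra whitespace introduced above and lower-case the result
    return re.sub(r"\s{2,}", " ", text).strip().lower()

print(clean_tweet("Vamos por mas!!!! #AlertaESI"))
# -> "vamos por mas xxrep 4 ! # alertaesi"
```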
1705.01265
On the effectiveness of feature set augmentation using clusters of word embeddings
Word clusters have been empirically shown to offer important performance improvements on various tasks. Despite their importance, their incorporation into the standard feature-engineering pipeline relies largely on a trial-and-error procedure in which one evaluates several hyper-parameters, such as the number of clusters to be used. In order to better understand the role of such features, we systematically evaluate their effect on four tasks: named entity segmentation and classification, as well as five-point sentiment classification and quantification. Our results strongly suggest that cluster membership features improve performance.
{ "paragraphs": [ [ "Many research attempts have proposed novel features that improve the performance of learning algorithms in particular tasks. Such features are often motivated by domain knowledge or manual labor. Although useful and often state-of-the-art, adapting such solutions on NLP systems across tasks can be tricky and time-consuming BIBREF0 . Therefore, simple yet general and powerful methods that perform well across several datasets are valuable BIBREF1 .", "An approach that has become extremely popular lately in NLP tasks, is to train word embeddings in an unsupervised way. These embeddings are dense vectors that project words or short text spans like phrases in a vector space where dimensions are supposed to capture text properties. Such embeddings can then be used either as features with off-the-shelf algorithms like Support Vector Machines, or to initialize deep learning systems BIBREF2 . However, as shown in BIBREF3 linear architectures perform better in high-dimensional discrete spaces compared to continuous ones. The latter is probably the main reason of the high performance of the vector space model BIBREF4 in tasks like text classification with linear models like SVMs. Using linear algorithms, while taking advantage of the expressiveness of text embeddings is the focus of this work.", "In this paper, we explore a hybrid approach, that uses text embeddings as a proxy to create features. Motivated by the argument that text embeddings manage to encode the semantics of text, we explore how clustering text embeddings can impact the performance of different NLP tasks. Although such an approach has been used in different studies during feature engineering, the selection of word vectors and the number of clusters remain a trial-end-error procedure. In this work we present an empirical evaluation across diverse tasks to verify whether and when such features are useful.", "Word clusters have been used as features in various tasks like Part-of-Speech tagging and NER. Owoputi et al. Owoputi13 use Brown clusters BIBREF5 in a POS tagger showing that this type of features carry rich lexical knowledge as they can substitute lexical resources like gazetteers. Kiritchenko et al. KiritchenkoZM14 discusses their use on sentiment classification while Hee et al. HeeLH16 incorporate them in the task of irony detection in Twitter. Ritter et al. Ritter2011 inject also word clusters in a NER tagger. While these works show that word clusters are beneficial no clear guidelines can be concluded of how and when to use them.", "In this work, we empirically demonstrate that using different types of embeddings on three NLP tasks with twitter data we manage to achieve better or near to the state-of-the art performance on three NLP tasks: (i) Named Entity Recognition (NER) segmentation, (ii) NER classification, (iii) fine-grained sentiment analysis and (iv) fine-grained sentiment quantification. For each of the three tasks, we achieve higher performance than without using features which indicates the effectiveness of the cluster membership features. Importantly, our evaluation compared to previous work BIBREF6 who focus on old and well studied datasets uses recent and challenging datasets composed by tweets. The obtained results across all the tasks permits us to reveal important aspects of the use of word clusters and therefore provide guidelines. 
Although our obtained scores are state-of-the-art, our analysis reveals that the performance in such tasks is far from perfect and, hence, identifies that there is still much space for improvement and future work." ], [ "Word embeddings associate words with dense, low-dimensional vectors. Recently, several models have been proposed in order to obtain these embeddings. Among others, the skipgram (skipgram) model with negative sampling BIBREF7 , the continuous bag-of-words (cbow) model BIBREF7 and Glove (glove) BIBREF8 have been shown to be effective. Training those models requires no annotated data and can be done using big amounts of text. Such a model can be seen as a function INLINEFORM0 that projects a word INLINEFORM1 in a INLINEFORM2 -dimensional space: INLINEFORM3 , where INLINEFORM4 is predefined. Here, we focus on applications using data from Twitter, which pose several difficulties due to being particularly short, using creative vocabulary, abbreviations and slang.", "For all the tasks in our experimental study, we use 36 millions English tweets collected between August and September 2017. A pre-processing step has been applied to replace URLs with a placeholder and to pad punctuation. The final vocabulary size was around 1.6 millions words. Additionally to the in-domain corpus we collected, we use GloVe vectors trained on Wikipedia articles in order to investigate the impact of out-of-domain word-vectors.", "We cluster the embeddings with INLINEFORM0 -Means. The k-means clusters are initialized using “k-means++” as proposed in BIBREF9 , while the algorithm is run for 300 iterations. We try different values for INLINEFORM1 . For each INLINEFORM2 , we repeat the clustering experiment with different seed initialization for 10 times and we select the clustering result that minimizes the cluster inertia." ], [ "We evaluate the proposed approach for augmenting the feature space in four tasks: (i) NER segmentation, (ii) NER classification, (iii) fine-grained sentiment classification and (iv) fine-grained sentiment quantification. The next sections present the evaluation settings we used. For each of the tasks, we use the designated training sets to train the learning algorithms, and we report the scores of the evaluation measures used in the respective test parts." ], [ "NER concerns the classification of textual segments in a predefined set of categories, like persons, organization and locations. We use the data of the last competition in NER for Twitter which released as a part of the 2nd Workshop on Noisy User-generated Text BIBREF10 . More specifically, the organizers provided annotated tweets with 10 named-entity types (person, movie, sportsteam, product etc.) and the task comprised two sub-tasks: 1) the detection of entity bounds and 2) the classification of an entity into one of the 10 types. The evaluation measure for both sub-tasks is the F INLINEFORM0 measure.", "The following is an example of a tweet which contains two named entities. Note that named entities may span several words in the text:", " INLINEFORM0 tonite ... 90 's music .. oldskool night wiith INLINEFORM1 ", "", "Our model for solving the task is a learning to search approach. More specifically we follow BIBREF11 which has been ranked 2nd among 10 participants in the aforementioned competition BIBREF10 . The model uses handcrafted features like n-grams, part-of-speech tags, capitalization and membership in gazetteers. 
The algorithm used belongs to the family of learning to search for structured prediction tasks BIBREF12 . These methods decompose the problem in a search space with states, actions and policies and then learn a hypothesis controlling a policy over the state-action space. The BIO encoding is used for attributing the corresponding labels to the tokens where B-type is used for the first token of the entity, I-type for inside tokens in case of multi-term entities and O for non entity tokens.", "Tables TABREF6 and TABREF7 present the results for the different number of clusters across the three vector models used to induce the clusters. For all the experiments we keep the same parametrization for the learning algorithm and we present the performance of each run on the official test set.", "Regarding the segmentation task we notice that adding word clusters as features improve the performance of the best model up to 1.1 F-score points while it boosts performance in the majority of cases. In only one case, for glove INLINEFORM0 vectors, there is a drop across all number of clusters used.", "As for the number of clusters, the best results are generally obtained between 250 and 1000 classes for all word vector models. These dimensions seem to be sufficient for the three-class sub-task that we deal with. The different models of word vectors perform similarly and thus one cannot privilege a certain type of word vectors. Interestingly, the clusters learned on the Wikipedia GloVe vectors offer competitive performance with respect to the in-domain word vectors used for the other cases showing that one can rely to out-of-domain data for constructing such representations.", "Concerning the classification task (Table TABREF7 ) we generally observe a drop in the performance of the tagger as we deal with 10 classes. This essentially corresponds to a multi-class problem with 21 classes: one for the non-entity type and two classes for each entity type. In this setting we notice that the best results are obtained in most cases for higher number of classes (1000 or 2000) possibly due to a better discriminatory power in higher dimensions. Note also, that in some cases the addition of word cluster features does not necessarily improve the performance. Contrary, it may degrade it as it is evident in the case of glove INLINEFORM0 word clusters. Like in the case of segmentation we do not observe a word vector model that clearly outperforms the rest. Finally, we note the same competitive performance of the Wikipedia word clusters and notably for the glove INLINEFORM1 clusters which obtain the best F1-score." ], [ "The task of fine grained sentiment classification consists in predicting the sentiment of an input text according to a five point scale (sentiment INLINEFORM0 {VeryNegative, Negative, Neutral, Positive, VeryPositive}). We use the setting of task 4 of SemEval2016 “Sentiment Analysis in Twitter” and the dataset released by the organizers for subtask 4 BIBREF13 .", "In total, the training (resp. test) data consist of 9,070 (resp. 20,632) tweets.", "The evaluation measure selected in BIBREF13 for the task in the macro-averaged Mean Absolute Error (MAE INLINEFORM0 ). It is a measure of error, hence lower values are better. The measure's goal is to take into account the order of the classes when penalizing the decision of a classifier. For instance, misclassifying a very negative example as very positive is a bigger mistake than classifying it as negative or neutral. 
Penalizing a classifier according to how far the predictions are from the true class is captured by MAE INLINEFORM1 BIBREF14 . Also, the advantage of using the macro- version instead of the standard version of the measure is the robustness against the class imbalance in the data.", "Learning algorithm To demonstrate the efficiency of cluster membership features we rely on the system of BIBREF15 which was ranked 1st among 11 participants and uses a Logistic Regression as a learning algorithm. We follow the same feature extraction steps which consist of extracting n-gram and character n-gram features, part-of-speech counts as well as sentiment scores using standard sentiment lexicons such as the Bing Liu's BIBREF16 and the MPQA lexicons BIBREF17 . For the full description, we refer the interested reader to BIBREF15 .", "To evaluate the performance of the proposed feature augmentation technique, we present in Table TABREF10 the macro-averaged Mean Absolute Error scores for different settings on the official test set of BIBREF13 . First, notice that the best score in the test data is achieved using cluster membership features, where the word embeddings are trained using the skipgram model. The achieved score improves the state-of-the art on the dataset, which to the best of our knowledge was by BIBREF15 . Also, note that the score on the test data improves for each type of embeddings used, which means that augmenting the feature space using cluster membership features helps the sentiment classification task.", "Note, also, that using the clusters produced by the out-of-domain embeddings trained on wikipedia that were released as part of BIBREF8 performs surprisingly well. One might have expected their addition to hurt the performance. However, their value probably stems from the sheer amount of data used for their training as well as the relatively simple type of words (like awesome, terrible) which are discriminative for this task. Lastly, note that in each of the settings, the best results are achieved when the number of clusters is within INLINEFORM0 as in the NER tasks. Comparing the performance across the different embeddings, one cannot claim that a particular embedding performs better. It is evident though that augmenting the feature space with feature derived using the proposed method, preferably with in-domain data, helps the classification performance and reduces MAE INLINEFORM1 .", "From the results of Table TABREF10 it is clear that the addition of the cluster membership features improves the sentiment classification performance. To better understand though why these clusters help, we manually examined a sample of the words associated with the clusters. To improve the eligibility of those results we first removed the hashtags and we filter the results using an English vocabulary. In Table TABREF11 we present sample words from two of the most characteristic clusters with respect to the task of sentiment classification. Notice how words with positive and negative meanings are put in the respective clusters." ], [ "Quantification is the problem of estimating the prevalence of a class in a dataset. While classification concerns assigning a category to a single instance, like labeling a tweet with the sentiment it conveys, the goal of quantification is, given a set of instances, to estimate the relative frequency of single class. Therefore, sentiment quantification tries to answer questions like “Given a set of tweets about the new iPhone, what is the fraction of VeryPositive ones?”. 
In the rest, we show the effect of the features derived from the word embeddings clusters in the fine-grained classification problem, which was also part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF13 .", "Learning Algorithm To perform the quantification task, we rely on a classify and count approach, which was shown effective in a related binary quantification problem BIBREF15 . The idea is that given a set of instances on a particular subject, one first classifies the instances and then aggregates the counts. To this end, we use the same feature representation steps and data with the ones used for fine grained classification (Section 3.2). Note that the data of the task are associated with subjects (described in full detail at BIBREF13 ), and, hence, quantification is performed for the tweets of a subject. For each of the five categories, the output of the approach is a 5-dimensional vector with the estimated prevalence of the categories.", "The evaluation measure for the problem is the Earth Movers Distance (EMD) BIBREF18 . EMD is a measure of error, hence lower values are better. It assumes ordered categories, which in our problem is naturally defined. Further assuming that the distance of consecutive categories (e.g., Positive and VeryPositive) is 1, the measure is calculated by: INLINEFORM0 ", "", "where INLINEFORM0 is number of categories (five in our case) and INLINEFORM1 and INLINEFORM2 are the true and predicted prevalence respectively BIBREF19 .", "Results Table TABREF13 presents the results of augmenting the feature set with the proposed features. We use Logistic Regression as a base classifier for the classify and count approach. Notice the positive impact of the features in the performance in the task. Adding the features derived from clustering the embeddings consistently improves the performance. Interestingly, the best performance ( INLINEFORM0 ) is achieved using the out-of-domain vectors, as in the NER classification task. Also, notice how the approach improves over the state-of-the-art performance in the challenge ( INLINEFORM1 ) BIBREF13 , held by the method of BIBREF20 . The improvement over the method of BIBREF20 however, does not necessarily mean that classify and count performs better in the task. It implies that the feature set we used is richer, that in turn highlights the value of robust feature extraction mechanisms which is the subject of this paper." ], [ "We have shown empirically the effectiveness of incorporating cluster membership features in the feature extraction pipeline of Named-Entity recognition, sentiment classification and quantification tasks. Our results strongly suggest that incorporating cluster membership features benefit the performance in the tasks. The fact that the performance improvements are consistent in the four tasks we investigated, further highlights their usefulness, both for practitioners and researchers.", "Although our study does not identify a clear winner with respect to the type of word vectors (skipgram, cbow, or GloVe), our findings suggest that one should first try skip-gram embeddings of low dimensionality ( INLINEFORM0 ) and high number of clusters (e.g., INLINEFORM1 ) as the results obtained using these settings are consistently competitive. Our results also suggest that using out-of-domain data, like Wikipedia articles in this case, to construct the word embeddings is a good practice, as the results we obtained with these vectors are also competitive. 
The positive impact of out-of-domain embeddings and their combination with in-domain ones remains to be further studied." ] ], "section_name": [ "Introduction", "Word Clusters", "Experimental Evaluation", "Named-Entity Recognition in Twitter", "Fine-grained Sentiment Analysis", "Fine-Grained Sentiment Quantification", "Conclusion" ] }
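The full text above describes the record's core feature-construction step: cluster pre-trained word embeddings with k-means (k-means++ initialization, 300 iterations, 10 restarts keeping the lowest-inertia solution) and use each word's cluster id as a membership feature. A brief sketch with scikit-learn; the vocabulary and embedding matrix below are invented toy values, not the paper's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-ins for a real vocabulary and its pre-trained embedding matrix.
vocab = ["awesome", "terrible", "great", "awful", "city", "river"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 50))

# Mirrors the procedure described above: k-means++ init, 300 iterations,
# 10 restarts with different seeds, keeping the run with the lowest inertia.
k = 3
km = KMeans(n_clusters=k, init="k-means++", max_iter=300, n_init=10, random_state=0)
cluster_ids = km.fit_predict(embeddings)

# Cluster-membership lookup used to augment each document's feature vector,
# e.g. as one-hot "word belongs to cluster c" indicators.
word_to_cluster = dict(zip(vocab, cluster_ids))
print(word_to_cluster)
```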
{ "answers": [ { "annotation_id": [ "7f369212797fa799ad82dd090487c07e7acd2a88", "abf9fe1b600c548def2415accf646f6851032335" ], "answer": [ { "evidence": [ "For all the tasks in our experimental study, we use 36 millions English tweets collected between August and September 2017. A pre-processing step has been applied to replace URLs with a placeholder and to pad punctuation. The final vocabulary size was around 1.6 millions words. Additionally to the in-domain corpus we collected, we use GloVe vectors trained on Wikipedia articles in order to investigate the impact of out-of-domain word-vectors." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For all the tasks in our experimental study, we use 36 millions English tweets collected between August and September 2017. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "For all the tasks in our experimental study, we use 36 millions English tweets collected between August and September 2017. A pre-processing step has been applied to replace URLs with a placeholder and to pad punctuation. The final vocabulary size was around 1.6 millions words. Additionally to the in-domain corpus we collected, we use GloVe vectors trained on Wikipedia articles in order to investigate the impact of out-of-domain word-vectors." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For all the tasks in our experimental study, we use 36 millions English tweets collected between August and September 2017. " ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "32e6aea9e0a56cfeceffb0fb9265ed7fe881b68f", "93b374d728be18ef6a86218c3b352e0167f4d477" ], "answer": [ { "evidence": [ "We cluster the embeddings with INLINEFORM0 -Means. The k-means clusters are initialized using “k-means++” as proposed in BIBREF9 , while the algorithm is run for 300 iterations. We try different values for INLINEFORM1 . For each INLINEFORM2 , we repeat the clustering experiment with different seed initialization for 10 times and we select the clustering result that minimizes the cluster inertia.", "FLOAT SELECTED: Table 1: Scores on F1-measure for named entities segmentation for the different word embeddings across different number of clusters. For each embedding type, we show its dimension and window size. For instance, glove40,w5 is 40-dimensional glove embeddings with window size 5.", "FLOAT SELECTED: Table 2: Results in terms of F1-score for named entities classification for the different word clusters across different number of clusters.", "FLOAT SELECTED: Table 3: MAEM scores (lower is better) for sentiment classification across different types of word embeddings and number of clusters.", "FLOAT SELECTED: Table 5: Earth Movers Distance for fine-grained sentiment quantification across different types of word embeddings and number of clusters. The score in brackets denotes the best performance achieved in the challenge." ], "extractive_spans": [], "free_form_answer": "number of clusters, seed value in clustering, selection of word vectors, window size and dimension of embedding", "highlighted_evidence": [ "We cluster the embeddings with INLINEFORM0 -Means.", "We try different values for INLINEFORM1 . 
For each INLINEFORM2 , we repeat the clustering experiment with different seed initialization for 10 times and we select the clustering result that minimizes the cluster inertia.", "FLOAT SELECTED: Table 1: Scores on F1-measure for named entities segmentation for the different word embeddings across different number of clusters. For each embedding type, we show its dimension and window size. For instance, glove40,w5 is 40-dimensional glove embeddings with window size 5.", "FLOAT SELECTED: Table 2: Results in terms of F1-score for named entities classification for the different word clusters across different number of clusters.", "FLOAT SELECTED: Table 3: MAEM scores (lower is better) for sentiment classification across different types of word embeddings and number of clusters.", "FLOAT SELECTED: Table 5: Earth Movers Distance for fine-grained sentiment quantification across different types of word embeddings and number of clusters. The score in brackets denotes the best performance achieved in the challenge." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Tables TABREF6 and TABREF7 present the results for the different number of clusters across the three vector models used to induce the clusters. For all the experiments we keep the same parametrization for the learning algorithm and we present the performance of each run on the official test set.", "Note, also, that using the clusters produced by the out-of-domain embeddings trained on wikipedia that were released as part of BIBREF8 performs surprisingly well. One might have expected their addition to hurt the performance. However, their value probably stems from the sheer amount of data used for their training as well as the relatively simple type of words (like awesome, terrible) which are discriminative for this task. Lastly, note that in each of the settings, the best results are achieved when the number of clusters is within INLINEFORM0 as in the NER tasks. Comparing the performance across the different embeddings, one cannot claim that a particular embedding performs better. It is evident though that augmenting the feature space with feature derived using the proposed method, preferably with in-domain data, helps the classification performance and reduces MAE INLINEFORM1 ." ], "extractive_spans": [ "different number of clusters", "different embeddings" ], "free_form_answer": "", "highlighted_evidence": [ "Tables TABREF6 and TABREF7 present the results for the different number of clusters across the three vector models used to induce the clusters. For all the experiments we keep the same parametrization for the learning algorithm and we present the performance of each run on the official test set.", "One might have expected their addition to hurt the performance. However, their value probably stems from the sheer amount of data used for their training as well as the relatively simple type of words (like awesome, terrible) which are discriminative for this task. Lastly, note that in each of the settings, the best results are achieved when the number of clusters is within INLINEFORM0 as in the NER tasks. Comparing the performance across the different embeddings, one cannot claim that a particular embedding performs better. It is evident though that augmenting the feature space with feature derived using the proposed method, preferably with in-domain data, helps the classification performance and reduces MAE INLINEFORM1 ." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "da6660e5a614aa7ea3f57d8c914c8731e4c22892" ], "answer": [ { "evidence": [ "In this paper, we explore a hybrid approach, that uses text embeddings as a proxy to create features. Motivated by the argument that text embeddings manage to encode the semantics of text, we explore how clustering text embeddings can impact the performance of different NLP tasks. Although such an approach has been used in different studies during feature engineering, the selection of word vectors and the number of clusters remain a trial-end-error procedure. In this work we present an empirical evaluation across diverse tasks to verify whether and when such features are useful." ], "extractive_spans": [ "selection of word vectors" ], "free_form_answer": "", "highlighted_evidence": [ "Although such an approach has been used in different studies during feature engineering, the selection of word vectors and the number of clusters remain a trial-end-error procedure. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "9c09cdf437df536bd26c3c52e6d896cf9d7cd93a" ], "answer": [ { "evidence": [ "We cluster the embeddings with INLINEFORM0 -Means. The k-means clusters are initialized using “k-means++” as proposed in BIBREF9 , while the algorithm is run for 300 iterations. We try different values for INLINEFORM1 . For each INLINEFORM2 , we repeat the clustering experiment with different seed initialization for 10 times and we select the clustering result that minimizes the cluster inertia." ], "extractive_spans": [], "free_form_answer": "Word clusters are extracted using k-means on word embeddings", "highlighted_evidence": [ "We cluster the embeddings with INLINEFORM0 -Means. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "", "", "", "" ], "question": [ "Do they report results only on English datasets?", "Which hyperparameters were varied in the experiments on the four tasks?", "Which other hyperparameters, other than number of clusters are typically evaluated in this type of research?", "How were the cluster extracted? " ], "question_id": [ "92d1a6df3041667dc662376938bc65527a5a1c3c", "12159f04e0427fe33fa05af6ba8c950f1a5ce5ea", "a4a1fcef760b133e9aa876ac28145ad98a609927", "63bb2040fa107c5296351c2b5f0312336dad2863" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Table 1: Scores on F1-measure for named entities segmentation for the different word embeddings across different number of clusters. For each embedding type, we show its dimension and window size. For instance, glove40,w5 is 40-dimensional glove embeddings with window size 5.", "Table 2: Results in terms of F1-score for named entities classification for the different word clusters across different number of clusters.", "Table 3: MAEM scores (lower is better) for sentiment classification across different types of word embeddings and number of clusters.", "Table 4: Sample from two clusters that were found useful for the sentiment classification. Words with positive or negative meaning are grouped together.", "Table 5: Earth Movers Distance for fine-grained sentiment quantification across different types of word embeddings and number of clusters. The score in brackets denotes the best performance achieved in the challenge." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "5-Table4-1.png", "5-Table5-1.png" ] }
[ "Which hyperparameters were varied in the experiments on the four tasks?", "How were the cluster extracted? " ]
[ [ "1705.01265-3-Table2-1.png", "1705.01265-Fine-grained Sentiment Analysis-5", "1705.01265-Named-Entity Recognition in Twitter-5", "1705.01265-5-Table5-1.png", "1705.01265-3-Table1-1.png", "1705.01265-4-Table3-1.png", "1705.01265-Word Clusters-2" ], [ "1705.01265-Word Clusters-2" ] ]
[ "number of clusters, seed value in clustering, selection of word vectors, window size and dimension of embedding", "Word clusters are extracted using k-means on word embeddings" ]
250
1906.10225
Compound Probabilistic Context-Free Grammars for Grammar Induction
We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our context-free rule probabilities are modulated by a per-sentence continuous latent variable, which induces marginal dependencies beyond the traditional context-free assumptions. Inference in this grammar is performed by collapsed variational inference, in which an amortized variational posterior is placed on the continuous variable, and the latent trees are marginalized with dynamic programming. Experiments on English and Chinese show the effectiveness of our approach compared to recent state-of-the-art methods for grammar induction.
{ "paragraphs": [ [ " Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFG) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm BIBREF0 , BIBREF1 . While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives BIBREF2 , priors or non-parametric models BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , and manually-engineered features BIBREF7 , BIBREF8 to encourage the desired structures to emerge.", "We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG's rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models BIBREF9 , BIBREF10 , BIBREF11 , and we indeed find that this neural PCFG is significantly easier to optimize than the traditional PCFG. Second, this factored parameterization makes it straightforward to incorporate side information into rule probabilities through a sentence-level continuous latent vector, which effectively allows different contexts in a derivation to coordinate. In this compound PCFG—continuous mixture of PCFGs—the context-free assumptions hold conditioned on the latent vector but not unconditionally, thereby obtaining longer-range dependencies within a tree-based generative process.", "To utilize this approach, we need to efficiently optimize the log marginal likelihood of observed sentences. While compound PCFGs break efficient inference, if the latent vector is known the distribution over trees reduces to a standard PCFG. This property allows us to perform grammar induction using a collapsed approach where the latent trees are marginalized out exactly with dynamic programming. To handle the latent vector, we employ standard amortized inference using reparameterized samples from a variational posterior approximated from an inference network BIBREF12 , BIBREF13 .", "On standard benchmarks for English and Chinese, the proposed approach is found to perform favorably against recent neural network-based approaches to grammar induction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .", "" ], [ " We consider context-free grammars (CFG) consisting of a 5-tuple INLINEFORM0 where INLINEFORM1 is the distinguished start symbol, INLINEFORM2 is a finite set of nonterminals, INLINEFORM3 is a finite set of preterminals, INLINEFORM6 is a finite set of terminal symbols, and INLINEFORM7 is a finite set of rules of the form,", " INLINEFORM0 ", " A probabilistic context-free grammar (PCFG) consists of a grammar INLINEFORM0 and rule probabilities INLINEFORM1 such that INLINEFORM2 is the probability of the rule INLINEFORM3 . 
Letting INLINEFORM4 be the set of all parse trees of INLINEFORM5 , a PCFG defines a probability distribution over INLINEFORM6 via INLINEFORM7 where INLINEFORM8 is the set of rules used in the derivation of INLINEFORM9 . It also defines a distribution over string of terminals INLINEFORM10 via", " INLINEFORM0 ", " where INLINEFORM0 , i.e. the set of trees INLINEFORM1 such that INLINEFORM2 's leaves are INLINEFORM3 . We will slightly abuse notation and use", " INLINEFORM0 ", " to denote the posterior distribution over the unobserved latent trees given the observed sentence INLINEFORM0 , where INLINEFORM1 is the indicator function.", "" ], [ " A compound probability distribution BIBREF19 is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case, for example in factor analysis which assumes the following generative process,", " INLINEFORM0 ", " Compound distributions provide the ability to model rich generative processes, but marginalizing over the latent parameter can be computationally intractable unless conjugacy can be exploited.", "In this work, we study compound probabilistic context-free grammars whose distribution over trees arises from the following generative process: we first obtain rule probabilities via", " INLINEFORM0 ", " where INLINEFORM0 is a prior with parameters INLINEFORM1 (spherical Gaussian in this paper), and INLINEFORM2 is a neural network that concatenates the input symbol embeddings with INLINEFORM3 and outputs the sentence-level rule probabilities INLINEFORM4 ,", " INLINEFORM0 ", " where INLINEFORM0 denotes vector concatenation. Then a tree/sentence is sampled from a PCFG with rule probabilities given by INLINEFORM1 ,", " INLINEFORM0 ", " This can be viewed as a continuous mixture of PCFGs, or alternatively, a Bayesian PCFG with a prior on sentence-level rule probabilities parameterized by INLINEFORM0 . Importantly, under this generative model the context-free assumptions hold conditioned on INLINEFORM3 , but they do not hold unconditionally. This is shown in Figure FIGREF3 (right) where there is a dependence path through INLINEFORM4 if it is not conditioned upon. Compound PCFGs give rise to a marginal distribution over parse trees INLINEFORM5 via", " INLINEFORM0 ", " where INLINEFORM0 . The subscript in INLINEFORM1 denotes the fact that the rule probabilities depend on INLINEFORM2 . Compound PCFGs are clearly more expressive than PCFGs as each sentence has its own set of rule probabilities. However, it still assumes a tree-based generative process, making it possible to learn latent tree structures.", "Our motivation for the compound PCFG is based on the observation that for grammar induction, context-free assumptions are generally made not because they represent an adequate model of natural language, but because they allow for tractable training. We can in principle model richer dependencies through vertical/horizontal Markovization BIBREF21 , BIBREF22 and lexicalization BIBREF23 . However such dependencies complicate training due to the rapid increase in the number of rules. Under this view, we can interpret the compound PCFG as a restricted version of some lexicalized, higher-order PCFG where a child can depend on structural and lexical context through a shared latent vector. 
We hypothesize that this dependence among siblings is especially useful in grammar induction from words, where (for example) if we know that watched is used as a verb then the noun phrase is likely to be a movie.", "In contrast to the usual Bayesian treatment of PCFGs which places priors on global rule probabilities BIBREF3 , BIBREF4 , BIBREF6 , the compound PCFG assumes a prior on local, sentence-level rule probabilities. It is therefore closely related to the Bayesian grammars studied by BIBREF25 and BIBREF26 , who also sample local rule probabilities from a logistic normal prior for training dependency models with valence (DMV) BIBREF27 .", "" ], [ "" ], [ " Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search. See lab:full for the full results (including corpus-level INLINEFORM1 ) broken down by sentence length.", "Table TABREF27 analyzes the learned tree structures. We compare similarity as measured by INLINEFORM0 against gold, left, right, and “self\" trees (top), where self INLINEFORM1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. We find that PRPN is particularly consistent across multiple runs. We also observe that different models are better at identifying different constituent labels, as measured by label recall (Table TABREF27 , bottom). While left as future work, this naturally suggests an ensemble approach wherein the empirical probabilities of constituents (obtained by averaging the predicted binary constituent labels from the different models) are used either to supervise another model or directly as potentials in a CRF constituency parser. Finally, all models seemed to have some difficulty in identifying SBAR/VP constituents which typically span more words than NP constituents.", "" ], [ " Grammar induction has a long and rich history in natural language processing. Early work on grammar induction with pure unsupervised learning was mostly negative BIBREF0 , BIBREF1 , BIBREF74 , though BIBREF75 reported some success on partially bracketed data. BIBREF76 and BIBREF2 were some of the first successful statistical approaches to grammar induction. In particular, the constituent-context model (CCM) of BIBREF2 , which explicitly models both constituents and distituents, was the basis for much subsequent work BIBREF27 , BIBREF7 , BIBREF8 . Other works have explored imposing inductive biases through Bayesian priors BIBREF4 , BIBREF5 , BIBREF6 , modified objectives BIBREF42 , and additional constraints on recursion depth BIBREF77 , BIBREF48 .", "While the framework of specifying the structure of a grammar and learning the parameters is common, other methods exist. BIBREF43 consider a nonparametric-style approach to unsupervised parsing by using random subsets of training subtrees to parse new sentences. BIBREF46 utilize an incremental algorithm to unsupervised parsing which makes local decisions to create constituents based on a complex set of heuristics. 
BIBREF47 induce parse trees through cascaded applications of finite state models.", "More recently, neural network-based approaches to grammar induction have shown promising results on inducing parse trees directly from words. BIBREF14 , BIBREF15 learn tree structures through soft gating layers within neural language models, while BIBREF16 combine recursive autoencoders with the inside-outside algorithm. BIBREF17 train unsupervised recurrent neural network grammars with a structured inference network to induce latent trees, and BIBREF78 utilize image captions to identify and ground constituents.", "Our work is also related to latent variable PCFGs BIBREF79 , BIBREF80 , BIBREF81 , which extend PCFGs to the latent variable setting by splitting nonterminal symbols into latent subsymbols. In particular, latent vector grammars BIBREF82 and compositional vector grammars BIBREF83 also employ continuous vectors within their grammars. However these approaches have been employed for learning supervised parsers on annotated treebanks, in contrast to the unsupervised setting of the current work.", "" ], [ " This work explores grammar induction with compound PCFGs, which modulate rule probabilities with per-sentence continuous latent vectors. The latent vector induces marginal dependencies beyond the traditional first-order context-free assumptions within a tree-based generative process, leading to improved performance. The collapsed amortized variational inference approach is general and can be used for generative models which admit tractable inference through partial conditioning. Learning deep generative models which exhibit such conditional Markov properties is an interesting direction for future work." ], [ " We thank Phil Blunsom for initial discussions which seeded many of the core ideas in the present work. We also thank Yonatan Belinkov and Shay Cohen for helpful feedback, and Andrew Drozdov for providing the parsed dataset from their DIORA model. YK is supported by a Google Fellowship. AMR acknowledges the support of NSF 1704834, 1845664, AWS, and Oracle." ], [ "We associate an input embedding INLINEFORM0 for each symbol INLINEFORM1 on the left side of a rule (i.e. INLINEFORM2 ) and run a neural network over INLINEFORM3 to obtain the rule probabilities. Concretely, each rule type INLINEFORM4 is parameterized as follows, INLINEFORM5 ", " where INLINEFORM0 is the product space INLINEFORM1 , and INLINEFORM2 are MLPs with two residual layers, INLINEFORM3 ", " The bias terms for the above expressions (including for the rule probabilities) are omitted for notational brevity. In Figure FIGREF3 we use the following to refer to rule probabilities of different rule types, INLINEFORM0 ", " where INLINEFORM0 denotes the set of rules with INLINEFORM1 on the left hand side.", "The compound PCFG rule probabilities INLINEFORM0 given a latent vector INLINEFORM1 , INLINEFORM2 ", " Again the bias terms are omitted for brevity, and INLINEFORM0 are as before where the first layer's input dimensions are appropriately changed to account for concatenation with INLINEFORM1 ." ], [ "For completeness we show the corpus-level and sentence-level INLINEFORM0 broken down by sentence length in Table TABREF44 , averaged across 4 different runs of each model." 
], [ "For experiments on supervising RNNGs with induced trees, we use the parameterization and hyperparameters from BIBREF17 , which uses a 2-layer 650-dimensional stack LSTM (with dropout of 0.5) and a 650-dimensional tree LSTM BIBREF88 , BIBREF90 as the composition function.", "Concretely, the generative story is as follows: first, the stack representation is used to predict the next action (shift or reduce) via an affine transformation followed by a sigmoid. If shift is chosen, we obtain a distribution over the vocabulary via another affine transformation over the stack representation followed by a softmax. Then we sample the next word from this distribution and shift the generated word onto the stack using the stack LSTM. If reduce is chosen, we pop the last two elements off the stack and use the tree LSTM to obtain a new representation. This new representation is shifted onto the stack via the stack LSTM. Note that this RNNG parameterization is slightly different than the original from BIBREF53 , which does not ignore constituent labels and utilizes a bidirectional LSTM as the composition function instead of a tree LSTM. As our RNNG parameterization only works with binary trees, we binarize the gold trees with right binarization for the RNNG trained on gold trees (trees from the unsupervised methods explored in this paper are already binary). The RNNG also trains a discriminative parser alongside the generative model for evaluation with importance sampling. We use a CRF parser whose span score parameterization is similar similar to recent works BIBREF89 , BIBREF87 , BIBREF85 : position embeddings are added to word embeddings, and a bidirectional LSTM with 256 hidden dimensions is run over the input representations to obtain the forward and backward hidden states. The score INLINEFORM0 for a constituent spanning the INLINEFORM1 -th and INLINEFORM2 -th word is given by,", " INLINEFORM0 ", "where the MLP has a single hidden layer with INLINEFORM0 nonlinearity followed by layer normalization BIBREF84 .", "For experiments on fine-tuning the RNNG with the unsupervised RNNG, we take the discriminative parser (which is also pretrained alongside the RNNG on induced trees) to be the structured inference network for optimizing the evidence lower bound. We refer the reader to BIBREF17 and their open source implementation for additional details. We also observe that as noted by BIBREF17 , a URNNG trained from scratch on this version of PTB without punctuation failed to outperform a right-branching baseline.", "The LSTM language model baseline is the same size as the stack LSTM (i.e. 2 layers, 650 hidden units, dropout of 0.5), and is therefore equivalent to an RNNG with completely right branching trees. The PRPN/ON baselines for perplexity/syntactic evaluation in Table TABREF30 also have 2 layers with 650 hidden units and 0.5 dropout. Therefore all models considered in Table TABREF30 have roughly the same capacity. For all models we share input/output word embeddings BIBREF86 . Perplexity estimation for the RNNGs and the compound PCFG uses 1000 importance-weighted samples.", "For grammaticality judgment, we modify the publicly available dataset from BIBREF56 to only keep sentence pairs that did not have any unknown words with respect to our PTB vocabulary of 10K words. This results in 33K sentence pairs for evaluation." ], [ "Figure FIGREF50 shows the part-of-speech alignments and Table TABREF46 shows the nonterminal label alignments for the compound PCFG/neural PCFG." 
], [ "Table TABREF53 lists more examples of constituents within each subtree as the top principical component is varied. Due to data sparsity, the subtree analysis is performed on the full dataset. See section UID36 for more details." ] ], "section_name": [ "Introduction", "Probabilistic Context-Free Grammars", "Compound PCFGs", "Experimental Setup", "Results and Discussion", "Related Work", "Conclusion", "Acknowledgments", "Model Parameterization", "Corpus/Sentence F 1 F_1 by Sentence Length", "Experiments with RNNGs", "Nonterminal/Preterminal Alignments", "Subtree Analysis" ] }
{ "answers": [ { "annotation_id": [ "33750f128dea3ec395c2ccd09dab6bebc2204a8b", "821e22e19536d5ec22fb6eec5af9d25e478ca1bb" ], "answer": [ { "evidence": [ "Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search. See lab:full for the full results (including corpus-level INLINEFORM1 ) broken down by sentence length." ], "extractive_spans": [ "INLINEFORM0 scores" ], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 3: Results from training RNNGs on induced trees from various models (Induced RNNG) on the PTB. Induced URNNG indicates fine-tuning with the URNNG objective. We show perplexity (PPL), grammaticality judgment performance (Syntactic Eval.), and unlabeled F1. PPL/F1 are calculated on the PTB test set and Syntactic Eval. is from Marvin and Linzen (2018)’s dataset. Results on top do not make any use of annotated trees, while the bottom two results are trained on binarized gold trees. The perplexity numbers here are not comparable to standard results on the PTB since our models are generative model of sentences and hence we do not carry information across sentence boundaries. Also note that all the RNN-based models above (i.e. LSTM/PRPN/ON/RNNG/URNNG) have roughly the same model capacity (see appendix A.3).", "FLOAT SELECTED: Table 1: Unlabeled sentence-level F1 scores on PTB and CTB test sets. Top shows results from previous work while the rest of the results are from this paper. Mean/Max scores are obtained from 4 runs of each model with different random seeds. Oracle is the maximum score obtainable with binarized trees, since we compare against the non-binarized gold trees per convention. Results with † are trained on a version of PTB with punctuation, and hence not strictly comparable to the present work. For URNNG/DIORA, we take the parsed test set provided by the authors from their best runs and evaluate F1 with our evaluation setup, which ignores punctuation. ough hyperparameter search.13 See appendix A.2 for the full results (including corpus-level F1) broken down by sentence length.", "FLOAT SELECTED: Table 2: (Top) Mean F1 similarity against Gold, Left, Right, and Self trees. Self F1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. (Bottom) Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall)." ], "extractive_spans": [], "free_form_answer": "Unlabeled sentence-level F1, perplexity, grammatically judgment performance", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Results from training RNNGs on induced trees from various models (Induced RNNG) on the PTB. Induced URNNG indicates fine-tuning with the URNNG objective. We show perplexity (PPL), grammaticality judgment performance (Syntactic Eval.), and unlabeled F1. PPL/F1 are calculated on the PTB test set and Syntactic Eval. is from Marvin and Linzen (2018)’s dataset. 
Results on top do not make any use of annotated trees, while the bottom two results are trained on binarized gold trees. The perplexity numbers here are not comparable to standard results on the PTB since our models are generative model of sentences and hence we do not carry information across sentence boundaries. Also note that all the RNN-based models above (i.e. LSTM/PRPN/ON/RNNG/URNNG) have roughly the same model capacity (see appendix A.3).", "FLOAT SELECTED: Table 1: Unlabeled sentence-level F1 scores on PTB and CTB test sets. Top shows results from previous work while the rest of the results are from this paper. Mean/Max scores are obtained from 4 runs of each model with different random seeds. Oracle is the maximum score obtainable with binarized trees, since we compare against the non-binarized gold trees per convention. Results with † are trained on a version of PTB with punctuation, and hence not strictly comparable to the present work. For URNNG/DIORA, we take the parsed test set provided by the authors from their best runs and evaluate F1 with our evaluation setup, which ignores punctuation. ough hyperparameter search.13 See appendix A.2 for the full results (including corpus-level F1) broken down by sentence length.", "FLOAT SELECTED: Table 2: (Top) Mean F1 similarity against Gold, Left, Right, and Self trees. Self F1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. (Bottom) Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "48fb2739344a063b6ffcb3c07547b1482ad9cae9", "b40308d73f3487a15ea5164c43a876b90a024221" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "fef5c59d539c0eed8a37d44ce161d1f380c41dd1" ], "answer": [ { "evidence": [ "Experimental Setup" ], "extractive_spans": [], "free_form_answer": "Answer with content missing: (Data section) Penn Treebank (PTB)", "highlighted_evidence": [ "Experimental Setup" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3d4045906c00176d2b42b243922425178a95723b" ], "answer": [ { "evidence": [ "Experimental Setup" ], "extractive_spans": [], "free_form_answer": "Answer with content missing: (Data section) Chinese with version 5.1 of the Chinese Penn Treebank (CTB)", "highlighted_evidence": [ "Experimental Setup" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "what were the evaluation metrics?", "what are the state of the art methods?", "what english datasets were used?", "which chinese datasets were used?" 
], "question_id": [ "01f4a0a19467947a8f3bdd7ec9fac75b5222d710", "7784d321ccc64db5141113b6783e4ba92fdd4b20", "218615a005f7f00606223005fef22c07057d9d77", "867290103f762e1ddfa6f2ea30dd0a327f595182" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Figure 1: A graphical model-like diagram for the neural PCFG (left) and the compound PCFG (right) for an example tree structure. In the above, A1, A2 ∈ N are nonterminals, T1, T2, T3 ∈ P are preterminals, w1, w2, w3 ∈ Σ are terminals. In the neural PCFG, the global rule probabilities π = πS ∪πN ∪πP are the output from a neural net run over the symbol embeddings EG , where πN are the set of rules with a nonterminal on the left hand side (πS and πP are similarly defined). In the compound PCFG, we have per-sentence rule probabilities πz = πz,S ∪ πz,N ∪ πz,P obtained from running a neural net over a random vector z (which varies across sentences) and global symbol embeddings EG . In this case, the context-free assumptions hold conditioned on z, but they do not hold unconditionally: e.g. when conditioned on z and A2, the variables A1 and T1 are independent; however when conditioned on just A2, they are not independent due to the dependence path through z. Note that the rule probabilities are random variables in the compound PCFG but deterministic variables in the neural PCFG.", "Table 1: Unlabeled sentence-level F1 scores on PTB and CTB test sets. Top shows results from previous work while the rest of the results are from this paper. Mean/Max scores are obtained from 4 runs of each model with different random seeds. Oracle is the maximum score obtainable with binarized trees, since we compare against the non-binarized gold trees per convention. Results with † are trained on a version of PTB with punctuation, and hence not strictly comparable to the present work. For URNNG/DIORA, we take the parsed test set provided by the authors from their best runs and evaluate F1 with our evaluation setup, which ignores punctuation. ough hyperparameter search.13 See appendix A.2 for the full results (including corpus-level F1) broken down by sentence length.", "Table 2: (Top) Mean F1 similarity against Gold, Left, Right, and Self trees. Self F1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. (Bottom) Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall).", "Table 3: Results from training RNNGs on induced trees from various models (Induced RNNG) on the PTB. Induced URNNG indicates fine-tuning with the URNNG objective. We show perplexity (PPL), grammaticality judgment performance (Syntactic Eval.), and unlabeled F1. PPL/F1 are calculated on the PTB test set and Syntactic Eval. is from Marvin and Linzen (2018)’s dataset. Results on top do not make any use of annotated trees, while the bottom two results are trained on binarized gold trees. The perplexity numbers here are not comparable to standard results on the PTB since our models are generative model of sentences and hence we do not carry information across sentence boundaries. Also note that all the RNN-based models above (i.e. LSTM/PRPN/ON/RNNG/URNNG) have roughly the same model capacity (see appendix A.3).", "Figure 2: Alignment of induced nonterminals ordered from top based on predicted frequency (therefore NT-04 is the most frequently-predicted nonterminal). For each nonterminal we visualize the proportion of correctly-predicted constituents that correspond to particular gold labels. For reference we also show the precision (i.e. 
probability of correctly predicting unlabeled constituents) in the rightmost column.", "Table 4: For each query sentence (bold), we show the 5 nearest neighbors based on cosine similarity, where we take the representation for each sentence to be the mean of the variational posterior.", "Table 5: For each subtree, we perform PCA on the variational posterior mean vectors that are associated with that particular subtree and take the top principal component. We then list the top 5 constituents that had the lowest (PC -) and highest (PC +) principal component values.", "Table 6: Average unlabeled F1 for the various models broken down by sentence length on the PTB test set. For example WSJ-10 refers to F1 calculated on the subset of the test set where the maximum sentence length is at most 10. Scores are averaged across 4 runs of the model with different random seeds. Oracle is the performance of binarized gold trees (with right branching binarization). Top shows sentence-level F1 and bottom shows corpus-level F1.", "Figure 3: Preterminal alignment to part-of-speech tags for the compound PCFG (top) and the neural PCFG (bottom).", "Table 7: Analysis of label alignment for nonterminals in the compound PCFG (top) and the neural PCFG (bottom). Label alignment is the proportion of correctly-predicted constistuents that correspond to a particular gold label. We also show the predicted constituent frequency and accuracy (i.e. precision) on the right. Bottom line shows the frequency in the gold trees.", "Table 8: For each subtree (shown at the top of each set of examples), we perform PCA on the variational posterior mean vectors that are associated with that particular subtree and take the top principal component. We then list the top 5 constituents that had the lowest (left) and highest (right) principal component values." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Figure2-1.png", "8-Table4-1.png", "9-Table5-1.png", "14-Table6-1.png", "15-Figure3-1.png", "16-Table7-1.png", "17-Table8-1.png" ] }
[ "what were the evaluation metrics?" ]
[ [ "1906.10225-6-Table1-1.png", "1906.10225-6-Table2-1.png", "1906.10225-7-Table3-1.png" ] ]
[ "Unlabeled sentence-level F1, perplexity, grammatically judgment performance" ]
251
1712.05999
Characterizing Political Fake News in Twitter by its Meta-Data
This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.
{ "paragraphs": [ [ "10pt", "1.10pt", "[ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.", "]" ], [ "While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited.", "Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legit ones, try to confuse readers.", "Such use and their prominent rise has been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlet”.", "Our current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more that 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . 
We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter.", "Specifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views.", "For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties.", "From our results, the following main observations can be made:", "Our findings resonate with similar work done on fake news such as the one from Allcot and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter.", "The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work." ], [ "Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news.", "With respect to social sciences, efforts from psychology, political science and sociology, have been dedicated to understand why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people ability to discern misinformation.", "In relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. 
These authors also noticed that fake news items are more likely shared than legitimate articles coming from trusted sources, and they tend to exhibit a larger level of polarization.", "The conceptual issue of how to define fake news is a serious and unresolved issue. As the focus of our work is not attempting to offer light on this, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7 . The five categories they described, together with illustrative examples from our dataset, are as follows:" ], [ "Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:", "Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset.", "Exposure.", "Characterization.", "Polarization." ], [ "For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016).", "One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times.", "Once we have the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they had several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before). 
This annotated dataset BIBREF8 is publicly available and can be freely reused.", "Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news:", "In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation. To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal." ], [ "The sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window.", "The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth.", "The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered." ], [ "Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time.", "However, in terms of retweets, Figure FIGREF25 shows no apparent difference between containing fake news or not containing them. That is confirmed by the Kolmogorov-Smirnoff test, which does not discard the hypothesis that the associated distributions are equal.", "In relation to the number of favourites, users that generated at least a viral tweet containing fake news appear to have, on average, less favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the difference are not statistically significant.", "Finally, the number of hashtags used in viral fake news appears to be larger than those in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news." ], [ "We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. 
From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have been already deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts.", "Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends than those distributing tweets with no fake news. However, the density distribution of friends from the accounts (Figure FIGREF29 ) shows that there is indeed a statistically significant difference in their distributions.", "If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30 ). In fact, such differences are statistically significant.", "A useful representation for friends and followers is the ratio between friends/followers. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. The distribution of those accounts not generating fake news is more evenly distributed.", "With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain 1 mention, whereas other tweets tend to have two). Such differences are statistically significant.", "The analysis (Figure FIGREF34 ) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant.", "On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 )." ], [ "Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates." ], [ "As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them:", "These findings (related to our initial hypothesis in Table TABREF44 ) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other type of content. This notion seems to resonate with our findings showing that a number of accounts spreading fake news have already been deleted or suspended by Twitter by the time of writing. 
If one considers that researchers using different data have found similar results BIBREF9 , it appears that the lifetime of accounts, together with the age of the questioned viral content, could be useful to identify fake news. In the light of this finding, newly created accounts should probably be put under higher scrutiny than older ones. This, in fact, would be a nice a priori bias for a Bayesian classifier.", "Accounts spreading fake news appear to have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising).", "Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with the findings of Alcott et al. BIBREF9 . This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above mentioned parameters from meta-data, may prove useful for identifying fake news." ], [ "With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news.", "We believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then can the automation of the processes of tagging and blocking these tweets be successfully performed. In the same way that spam was fought, we anticipate fake news will suffer a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle.", "Within the dataset used, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments in an extended sample of tweets (until 4 months after the US election), and test the predictive power of the features we found relevant within our sample." ], [ "No competing financial interests exist." ] ], "section_name": [ null, "Introduction", "Defining Fake news", "Research Hypotheses", "Data and Methodology", "Results", "Exposure", "Characterization", "Polarization", "Discussion", "Conclusions", "Author Disclosure Statement" ] }
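The distributional comparisons reported above rest on the two-sample Kolmogorov-Smirnov test applied feature by feature. As an illustration only (this code is not from the paper), such a comparison could be run in Python with SciPy as sketched below; the feature arrays are hypothetical placeholders standing in for per-tweet values such as follower counts.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-tweet feature values (e.g., follower counts of the
# posting accounts); real values would come from the annotated dataset.
followers_fake = np.array([120, 450, 980, 1500, 3200, 75, 60, 240])
followers_not_fake = np.array([5400, 120000, 89000, 450, 230000, 7600, 15000, 98000])

# Two-sample KS test: the null hypothesis is that both samples come from
# the same distribution (the test applied to each feature in the study).
statistic, p_value = ks_2samp(followers_fake, followers_not_fake)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.4f}")

# A small p-value (e.g., below 0.05) would lead to rejecting the null
# hypothesis, i.e., the two distributions differ for this feature.
```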
{ "answers": [ { "annotation_id": [ "362470973c9da357961960c77061cb50435d135f" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.", "The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered." ], "extractive_spans": [], "free_form_answer": "Distributions of Followers, Friends and URLs are significantly different between the set of tweets containing fake news and those non containing them, but for Favourites, Mentions, Media, Retweets and Hashtags they are not significantly different", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.", " Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "4f041e24c8c9a3afc1a21f5b498b0b012cd0491e", "a2d77c56d9efd579ab0ac2fdc375cff4e6fcd5ed" ], "answer": [ { "evidence": [ "For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties." ], "extractive_spans": [], "free_form_answer": "an expert annotator determined if the tweet fell under a specific category", "highlighted_evidence": [ "For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:", "Exposure.", "Characterization.", "Polarization." ], "extractive_spans": [ "Exposure", "Characterization", "Polarization" ], "free_form_answer": "", "highlighted_evidence": [ "Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:", "Exposure.\n\nCharacterization.\n\nPolarization." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "b8b51c6ffa7c8ea645e38262d0977381a5a934d7", "b8d168bd380cf62721b6093c7b62f207e363c04c" ], "answer": [ { "evidence": [ "One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times." ], "extractive_spans": [], "free_form_answer": "Viral tweets are the ones that are retweeted more than 1000 times", "highlighted_evidence": [ "For our study, we consider that a tweet went viral if it was retweeted more than 1000 times." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties." ], "extractive_spans": [], "free_form_answer": "those that contain a high number of retweets", "highlighted_evidence": [ "For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "ad8ccfc30196dd5d23bda7052db3cb8b9ba4d465", "d0ba2ed5fdf63adf4f5ebfc3d010716889d77731" ], "answer": [ { "evidence": [ "We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. 
From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have been already deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts.", "A useful representation for friends and followers is the ratio between friends/followers. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. The distribution of those accounts not generating fake news is more evenly distributed.", "Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time." ], "extractive_spans": [], "free_form_answer": "Accounts that spread fake news are mostly unverified, recently created and have on average high friends/followers ratio", "highlighted_evidence": [ "From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts.", "Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. ", "Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Accounts spreading fake news appear to have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising)." ], "extractive_spans": [ "have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only" ], "free_form_answer": "", "highlighted_evidence": [ "Accounts spreading fake news appear to have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising)." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "7482b25aaf2fefad5899e9fbe473a60dcdf393d2" ], "answer": [ { "evidence": [ "One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times." ], "extractive_spans": [ "1000" ], "free_form_answer": "", "highlighted_evidence": [ "For our study, we consider that a tweet went viral if it was retweeted more than 1000 times." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "b976fbfd6cab88371d119a0fc158155a72cb74fd" ], "answer": [ { "evidence": [ "The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth." ], "extractive_spans": [], "free_form_answer": "Ground truth is not established in the paper", "highlighted_evidence": [ "Because of this, we do not claim that this dataset can be considered a ground truth." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "", "five", "five", "five" ], "paper_read": [ "", "", "", "no", "no", "no" ], "question": [ "What were their distribution results?", "How did they determine fake news tweets?", "What is their definition of tweets going viral?", "What are the characteristics of the accounts that spread fake news?", "What is the threshold for determining that a tweet has gone viral?", "How is the ground truth for fake news established?" ], "question_id": [ "907b3af3cfaf68fe188de9467ed1260e52ec6cf1", "56a8826cbee49560592b2d4b47b18ada236a12b9", "968b7c3553a668ba88da105eff067d57f393c63f", "f03df5d99b753dc4833ef27b32bb95ba53d790ee", "a8f51b4e334a917702422782329d97304a2fe139", "dca86fbe1d57b44986055b282a03c15ef7882e51" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "twitter", "twitter", "twitter" ], "topic_background": [ "", "", "", "unfamiliar", "unfamiliar", "unfamiliar" ] }
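The question-answer block above follows a nested schema: a top-level list of questions is paired with answer groups, and each annotation carries a free-form answer, extractive spans and supporting evidence. A minimal sketch of how such a record could be traversed is given below; the tiny inline record only mirrors the field names visible above and is not part of the dataset.

```python
import json

# A toy record mirroring the field names seen in the block above
# (heavily truncated; real records carry several questions and annotations).
record = json.loads("""
{
  "question": ["What were their distribution results?"],
  "answers": [
    {
      "answer": [
        {
          "free_form_answer": "Distributions of Followers, Friends and URLs differ significantly.",
          "extractive_spans": [],
          "evidence": ["Table TABREF23 reports the actual differences ..."],
          "unanswerable": false
        }
      ]
    }
  ]
}
""")

# Pair each question with its annotated answers and supporting evidence.
for question, answer_group in zip(record["question"], record["answers"]):
    print("Q:", question)
    for ann in answer_group["answer"]:
        if ann.get("unanswerable"):
            print("  A: <unanswerable>")
            continue
        # Prefer extractive spans when present, otherwise the free-form answer.
        answer_text = ann["extractive_spans"] or ann["free_form_answer"]
        print("  A:", answer_text)
        print("  evidence paragraphs:", len(ann["evidence"]))
```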
{ "caption": [ "Figure 1: Distribution of the date of creation of the tweets that were viral on November 8th. For clarity, the image only shows the year 2016, and no more than 150 tweets per day.", "Figure 2: Density distributions of achieved retweets for tweets in our dataset 1)containing fake news and 2)not containing them. No differences are apparent.", "Table 1: For each one of the selected features, the table shows the difference between the set of tweets containing fake news and those non containing them, and the associated p-value (applying a KolmogorovSmirnov test). The null hypothesis is that both distributions are equal (two sided). Results are ordered by decreasing p-value.", "Figure 3: Density distributions of the number of favourites that the user generating the tweet has. The differences are not statistically significant.", "Figure 4: Distribution of the number of hashtags used in tweets labelled as containing fake news and those labelled as not containing them.", "Figure 5: Tweets labelled as containing fake news mostly come from non-verified users. This contrasts with the opposite pattern for tweets non containing them (which mostly originate from verified accounts).", "Figure 6: Density distributions (for tweets labelled as containing fake news, and tweets labelled as not containing them) of the number of friends that the user generating the tweet has. Difference is statistically significant.", "Figure 9: Density distribution of friends/followers ratio. Note that they do not follow a normal distribution. A higher friends/followers ratio exists for accounts that has at least produced a tweet labelled as containing fake news.", "Figure 10: Number of mentions within tweets labelled as containing fake news and tweets not containing them. There is almost a similar distribution of 1 and 2 mentions for tweets containing fake news. This contrasts with tweets not containing fake news, in which 2 mentions is much more common.", "Figure 7: Density distributions of the number of followers that the accounts generating viral tweets (within our sample) have. Accounts producing fake news have a narrower window of followers.", "Figure 8: Density distribution of friends/followers ratio, showing quartiles. Accounts that generate fake news tend to have a higher ratio value.", "Figure 11: Number of media elements embedded within viral tweets (labelled as containing fake news vs. labelled as not containing them)", "Figure 12: Number of URLs embedded within viral tweets (with fake news vs. without them). Differences are statistically significant with α = 0.05", "Table 2: Summary of our conclusions, and tested hypothesis" ], "file": [ "4-Figure1-1.png", "4-Figure2-1.png", "4-Table1-1.png", "4-Figure3-1.png", "5-Figure4-1.png", "5-Figure5-1.png", "6-Figure6-1.png", "6-Figure9-1.png", "6-Figure10-1.png", "6-Figure7-1.png", "6-Figure8-1.png", "7-Figure11-1.png", "7-Figure12-1.png", "8-Table2-1.png" ] }
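Several of the captions above describe density distributions of a single feature for the two groups of viral tweets. Purely as an illustration of how such a figure could be reproduced (none of this comes from the paper), a kernel-density comparison might look as follows; the data arrays are synthetic placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Hypothetical log-scaled feature values (e.g., log number of followers)
# for the two groups of viral tweets.
rng = np.random.default_rng(0)
fake = rng.normal(loc=3.0, scale=0.8, size=100)
not_fake = rng.normal(loc=4.5, scale=1.2, size=400)

xs = np.linspace(0, 8, 200)
for values, label in [(fake, "containing fake news"), (not_fake, "not containing fake news")]:
    density = gaussian_kde(values)          # kernel density estimate
    plt.plot(xs, density(xs), label=label)  # one curve per group

plt.xlabel("log(feature value)")
plt.ylabel("density")
plt.legend()
plt.savefig("density_comparison.png")
```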
[ "What were their distribution results?", "How did they determine fake news tweets?", "What is their definition of tweets going viral?", "What are the characteristics of the accounts that spread fake news?", "How is the ground truth for fake news established?" ]
[ [ "1712.05999-Results-2", "1712.05999-4-Table1-1.png" ], [ "1712.05999-Research Hypotheses-0", "1712.05999-Research Hypotheses-4", "1712.05999-Research Hypotheses-3", "1712.05999-Introduction-5", "1712.05999-Research Hypotheses-2" ], [ "1712.05999-Introduction-5", "1712.05999-Data and Methodology-1" ], [ "1712.05999-Discussion-2", "1712.05999-Characterization-0", "1712.05999-Exposure-0", "1712.05999-Characterization-3" ], [ "1712.05999-Results-1" ] ]
[ "Distributions of Followers, Friends and URLs are significantly different between the set of tweets containing fake news and those non containing them, but for Favourites, Mentions, Media, Retweets and Hashtags they are not significantly different", "an expert annotator determined if the tweet fell under a specific category", "those that contain a high number of retweets", "Accounts that spread fake news are mostly unverified, recently created and have on average high friends/followers ratio", "Ground truth is not established in the paper" ]
252
1808.03738
Ancient-Modern Chinese Translation with a Large Training Dataset
Ancient Chinese carries the wisdom and spiritual culture of the Chinese nation. Automatic translation from ancient Chinese to modern Chinese helps to inherit and carry forward the quintessence of the ancients. However, the lack of a large-scale parallel corpus limits the study of machine translation in Ancient-Modern Chinese. In this paper, we propose an Ancient-Modern Chinese clause alignment approach based on the characteristics of these two languages. This method combines both lexical-based information and statistical-based information, and achieves 94.2 F1-score on our manually annotated Test set. We use this method to create a new large-scale Ancient-Modern Chinese parallel corpus which contains 1.24M bilingual pairs. To the best of our knowledge, this is the first large high-quality Ancient-Modern Chinese dataset. Furthermore, we analyze and compare the performance of SMT and various NMT models on this dataset and provide a strong baseline for this task.
{ "paragraphs": [ [ "Ancient Chinese is the writing language in ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people to understand and inherit the wisdom of the ancients, but also promotes people to absorb and develop Chinese culture.", "However, it is difficult for modern people to read ancient Chinese. Firstly, compared with modern Chinese, ancient Chinese is more concise and shorter. The grammatical order of modern Chinese is also quite different from that of ancient Chinese. Secondly, most modern Chinese words are double syllables, while the most of the ancient Chinese words are monosyllabic. Thirdly, there is more than one polysemous phenomenon in ancient Chinese. In addition, manual translation has a high cost. Therefore, it is meaningful and useful to study the automatic translation from ancient Chinese to modern Chinese. Through ancient-modern Chinese translation, the wisdom, talent and accumulated experience of the predecessors can be passed on to more people.", "Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 has achieved remarkable performance on many bilingual translation tasks. It is an end-to-end learning approach for machine translation, with the potential to show great advantages over the statistic machine translation (SMT) systems. However, NMT approach has not been widely applied to the ancient-modern Chinese translation task. One of the main reasons is the limited high-quality parallel data resource.", "The most popular method of acquiring translation examples is bilingual text alignment BIBREF5 . This kind of method can be classified into two types: lexical-based and statistical-based. The lexical-based approaches BIBREF6 , BIBREF7 focus on lexical information, which utilize the bilingual dictionary BIBREF8 , BIBREF9 or lexical features. Meanwhile, the statistical-based approaches BIBREF10 , BIBREF11 rely on statistical information, such as sentence length ratio in two languages and align mode probability.", "However, these methods are designed for other bilingual language pairs that are written in different language characters (e.g. English-French, Chinese-Japanese). The ancient-modern Chinese has some characteristics that are quite different from other language pairs. For example, ancient and modern Chinese are both written in Chinese characters, but ancient Chinese is highly concise and its syntactical structure is different from modern Chinese. The traditional methods do not take these characteristics into account. In this paper, we propose an effective ancient-modern Chinese text alignment method at the level of clause based on the characteristics of these two languages. The proposed method combines both lexical-based information and statistical-based information, which achieves 94.2 F1-score on Test set. Recently, a simple longest common subsequence based approach for ancient-modern Chinese sentence alignment is proposed in BIBREF12 . Our experiments showed that our proposed alignment approach performs much better than their method.", "We apply the proposed method to create a large translation parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. 
Furthermore, we test SMT models and various NMT models on the created dataset and provide a strong baseline for this task." ], [ "There are four steps to build the ancient-modern Chinese translation dataset: (i) The parallel corpus crawling and cleaning. (ii) The paragraph alignment. (iii) The clause alignment based on aligned paragraphs. (iv) Augmenting data by merging aligned adjacent clauses. The most critical step is the third step." ], [ "In the clause alignment step, we combine both statistical-based and lexical-based information to measure the score for each possible clause alignment between ancient and modern Chinese strings. The dynamic programming is employed to further find overall optimal alignment paragraph by paragraph. According to the characteristics of the ancient and modern Chinese languages, we consider the following factors to measure the alignment score INLINEFORM0 between a bilingual clause pair:", "Lexical Matching. The lexical matching score is used to calculate the matching coverage of the ancient clause INLINEFORM0 . It contains two parts: exact matching and dictionary matching. An ancient Chinese character usually corresponds to one or more modern Chinese words. In the first part, we carry out Chinese Word segmentation to the modern Chinese clause INLINEFORM1 . Then we match the ancient characters and modern words in the order from left to right. In further matching, the words that have been matched will be deleted from the original clauses.", "However, some ancient characters do not appear in its corresponding modern Chinese words. An ancient Chinese dictionary is employed to address this issue. We preprocess the ancient Chinese dictionary and remove the stop words. In this dictionary matching step, we retrieve the dictionary definition of each unmatched ancient character and use it to match the remaining modern Chinese words. To reduce the impact of universal word matching, we use Inverse Document Frequency (IDF) to weight the matching words. The lexical matching score is calculated as: DISPLAYFORM0 ", " The above equation is used to calculate the matching coverage of the ancient clause INLINEFORM0 . The first term of equation ( EQREF8 ) represents exact matching score. INLINEFORM1 denotes the length of INLINEFORM2 , INLINEFORM3 denotes each ancient character in INLINEFORM4 , and the indicator function INLINEFORM5 indicates whether the character INLINEFORM6 can match the words in the clause INLINEFORM7 . The second term is dictionary matching score. Here INLINEFORM8 and INLINEFORM9 represent the remaining unmatched strings of INLINEFORM10 and INLINEFORM11 , respectively. INLINEFORM12 denotes the INLINEFORM13 -th character in the dictionary definition of the INLINEFORM14 and its IDF score is denoted as INLINEFORM15 . The INLINEFORM16 is a predefined parameter which is used to normalize the IDF score. We tuned the value of this parameter on the Dev set.", "Statistical Information. Similar to BIBREF11 and BIBREF6 , the statistical information contains alignment mode and length information. There are many alignment modes between ancient and modern Chinese languages. If one ancient Chinese clause aligns two adjacent modern Chinese clauses, we call this alignment as 1-2 alignment mode. We show some examples of different alignment modes in Figure FIGREF9 . In this paper, we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes which account for INLINEFORM0 of the Dev set. 
We estimate the probability Pr INLINEFORM1 n-m INLINEFORM2 of each alignment mode n-m on the Dev set. To utilize length information, we make an investigation on length correlation between these two languages. Based on the assumption of BIBREF11 that each character in one language gives rise to a random number of characters in the other language and those random variables INLINEFORM3 are independent and identically distributed with a normal distribution, we estimate the mean INLINEFORM4 and standard deviation INLINEFORM5 from the paragraph aligned parallel corpus. Given a clause pair INLINEFORM6 , the statistical information score can be calculated by: DISPLAYFORM0 ", "where INLINEFORM0 denotes the normal distribution probability density function.", "Edit Distance. Because ancient and modern Chinese are both written in Chinese characters, we also consider using the edit distance. It is a way of quantifying the dissimilarity between two strings by counting the minimum number of operations (insertion, deletion, and substitution) required to transform one string into the other. Here we define the edit distance score as: DISPLAYFORM0 ", "Dynamic Programming. The overall alignment score for each possible clause alignment is as follows: DISPLAYFORM0 ", "Here INLINEFORM0 and INLINEFORM1 are pre-defined interpolation factors. We use dynamic programming to find the overall optimal alignment paragraph by paragraph. Let INLINEFORM2 be total alignment scores of aligning the first to INLINEFORM3 -th ancient Chinese clauses with the first to to INLINEFORM4 -th modern Chinese clauses, and the recurrence then can be described as follows: DISPLAYFORM0 ", "Where INLINEFORM0 denotes concatenate clause INLINEFORM1 to clause INLINEFORM2 . As we discussed above, here we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes." ], [ "Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, and thus ensure the generality of the translated materials.", "Paragraph Alignment. To further ensure the quality of the new dataset, the work of paragraph alignment is manually completed. After data cleaning and manual paragraph alignment, we obtained 35K aligned bilingual paragraphs.", "Clause Alignment. We applied our clause alignment algorithm on the 35K aligned bilingual paragraphs and obtained 517K aligned bilingual clauses. The reason we use clause alignment algorithm instead of sentence alignment is because we can construct more aligned sentences more flexibly and conveniently. To be specific, we can get multiple additional sentence level bilingual pairs by “data augmentation”.", "Data Augmentation. We augmented the data in the following way: Given an aligned clause pair, we merged its adjacent clause pairs as a new sample pair. For example, suppose we have three adjacent clause level bilingual pairs: ( INLINEFORM0 , INLINEFORM1 ), ( INLINEFORM2 , INLINEFORM3 ), and ( INLINEFORM4 , INLINEFORM5 ). We can get some additional sentence level bilingual pairs, such as: ( INLINEFORM6 , INLINEFORM7 ) and ( INLINEFORM8 , INLINEFORM9 ). 
Here INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 are adjacent clauses in the original paragraph, and INLINEFORM13 denotes concatenate clause INLINEFORM14 to clause INLINEFORM15 . The advantage of using this data augmentation method is that compared with only using ( INLINEFORM16 , INLINEFORM17 ) as the training data, we can also use ( INLINEFORM18 , INLINEFORM19 ) and ( INLINEFORM20 , INLINEFORM21 ) as the training data, which can provide richer supervision information for the model and make the model learn the align information between the source language and the target language better. After the data augmentation, we filtered the sentences which are longer than 50 or contain more than four clause pairs.", "Dataset Creation. Finally, we split the dataset into three sets: training (Train), development (Dev) and testing (Test). Note that the unaugmented dataset contains 517K aligned bilingual clause pairs from 35K aligned bilingual paragraphs. To keep all the sentences in different sets come from different articles, we split the 35K aligned bilingual paragraphs into Train, Dev and Test sets following these ratios respectively: 80%, 10%, 10%. Before data augmentation, the unaugmented Train set contains INLINEFORM0 aligned bilingual clause pairs from 28K aligned bilingual paragraphs. Then we augmented the Train, Dev and Test sets respectively. Note that the augmented Train, Dev and Test sets also contain the unaugmented data. The statistical information of the three data sets is shown in Table TABREF17 . We show some examples of data in Figure FIGREF14 ." ], [ "We first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model. The RNN-based NMT with attention mechanism BIBREF0 has achieved remarkable performance on many translation tasks. It consists of encoder and decoder part.", "We firstly introduce the encoder part. The input word sequence of source language are individually mapped into a INLINEFORM0 -dimensional vector space INLINEFORM1 . Then a bi-directional RNN BIBREF15 with GRU BIBREF16 or LSTM BIBREF17 cell converts these vectors into a sequences of hidden states INLINEFORM2 .", "For the decoder part, another RNN is used to generate target sequence INLINEFORM0 . The attention mechanism BIBREF0 , BIBREF18 is employed to allow the decoder to refer back to the hidden state sequence and focus on a particular segment. The INLINEFORM1 -th hidden state INLINEFORM2 of decoder part is calculated as: DISPLAYFORM0 ", "Here g INLINEFORM0 is a linear combination of attended context vector c INLINEFORM1 and INLINEFORM2 is the word embedding of (i-1)-th target word: DISPLAYFORM0 ", "The attended context vector c INLINEFORM0 is computed as a weighted sum of the hidden states of the encoder: DISPLAYFORM0 ", " The probability distribution vector of the next word INLINEFORM0 is generated according to the following: DISPLAYFORM0 ", "We take this model as the basic RNN-based NMT model in the following experiments." ], [ "Recently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder.", "As proposed by BIBREF4 , an attention function maps a query and a set of key-value pairs to an output, where the queries INLINEFORM0 , keys INLINEFORM1 , and values INLINEFORM2 are all vectors. The input consists of queries and keys of dimension INLINEFORM3 , and values of dimension INLINEFORM4 . 
The attention function is given by: DISPLAYFORM0 ", "Multi-head attention mechanism projects queries, keys and values to INLINEFORM0 different representation subspaces and calculates corresponding attention. The attention function outputs are concatenated and projected again before giving the final output. Multi-head attention allows the model to attend to multiple features at different positions.", "The encoder is composed of a stack of INLINEFORM0 identical layers. Each layer has two sub-layers: multi-head self-attention mechanism and position-wise fully connected feed-forward network. Similarly, the decoder is also composed of a stack of INLINEFORM1 identical layers. In addition to the two sub-layers in each encoder layer, the decoder contains a third sub-layer which performs multi-head attention over the output of the encoder stack (see more details in BIBREF4 )." ], [ "Our experiments revolve around the following questions: Q1: As we consider three factors for clause alignment, do all these factors help? How does our method compare with previous methods? Q2: How does the NMT and SMT models perform on this new dataset we build?" ], [ "In order to evaluate our clause alignment algorithm, we manually aligned bilingual clauses from 37 bilingual ancient-modern Chinese articles, and finally got 4K aligned bilingual clauses as the Test set and 2K clauses as the Dev set.", "Metrics. We used F1-score and precision score as the evaluation metrics. Suppose that we get INLINEFORM0 bilingual clause pairs after running the algorithm on the Test set, and there are INLINEFORM1 bilingual clause pairs of these INLINEFORM2 pairs are in the ground truth of the Test set, the precision score is defined as INLINEFORM3 (the algorithm gives INLINEFORM4 outputs, INLINEFORM5 of which are correct). And suppose that the ground truth of the Test set contains INLINEFORM6 bilingual clause pairs, the recall score is INLINEFORM7 (there are INLINEFORM8 ground truth samples, INLINEFORM9 of which are output by the algorithm), then the F1-score is INLINEFORM10 .", "Baselines. Since the related work BIBREF10 , BIBREF11 can be seen as the ablation cases of our method (only statistical score INLINEFORM0 with dynamic programming), we compared the full proposed method with its variants on the Test set for ablation study. In addition, we also compared our method with the longest common subsequence (LCS) based approach proposed by BIBREF12 . To the best of our knowledge, BIBREF12 is the latest related work which are designed for Ancient-Modern Chinese alignment.", "Hyper-parameters. For the proposed method, we estimated INLINEFORM0 and INLINEFORM1 on all aligned paragraphs. The probability Pr INLINEFORM2 n-m INLINEFORM3 of each alignment mode n-m was estimated on the Dev set. For the hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , the grid search was applied to tune them on the Dev set. In order to show the effect of hyper-parameters INLINEFORM7 , INLINEFORM8 , and INLINEFORM9 , we reported the results of various hyper-parameters on the Dev set in Table TABREF26 . Based on the results of grid search on the Dev set, we set INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 in the following experiment. The Jieba Chinese text segmentation is employed for modern Chinese word segmentation.", "Results. The results on the Test set are shown in Table TABREF28 , the abbreviation w/o means removing a particular part from the setting. 
From the results, we can see that the lexical matching score is the most important among these three factors, and the statistical information score is more important than the edit distance score. Moreover, the dictionary term in the lexical matching score significantly improves the performance. From these results, we obtain the best setting that involves all three factors. We used this setting for dataset creation. Furthermore, the proposed method performs much better than LCS BIBREF12 ." ], [ "In this experiment, we analyzed and compared the performance of the SMT and various NMT models on our built dataset. To verify the effectiveness of our data augmentation method, we trained the NMT and SMT models on both the unaugmented dataset (including 0.46M training pairs) and the augmented dataset, and tested all the models on the same augmented Test set. The models to be tested and their configurations are as follows:", "SMT. The state-of-the-art Moses toolkit BIBREF19 was used to train the SMT model. We used KenLM BIBREF20 to train a 5-gram language model, and the GIZA++ toolkit to align the data.", "RNN-based NMT. The basic RNN-based NMT model is based on BIBREF0 which is introduced above. Both the encoder and decoder used a 2-layer RNN with 1024 LSTM cells, and the encoder is a bi-directional RNN. The batch size, threshold of element-wise gradient clipping and initial learning rate of the Adam optimizer BIBREF21 were set to 128, 5.0 and 0.001. When training the model on the augmented dataset, we used a 4-layer RNN. Several techniques were investigated to train the model, including layer-normalization BIBREF22 , RNN-dropout BIBREF23 , and learning rate decay BIBREF1 . The hyper-parameters were chosen empirically and adjusted on the Dev set. Furthermore, we tested the basic NMT model with several techniques, such as target language reversal BIBREF24 (reversing the order of the words in all target sentences, but not source sentences), residual connection BIBREF25 and pre-trained word2vec BIBREF26 . For word embedding pre-training, we collected an external ancient corpus which contains INLINEFORM0 134M tokens.", "Transformer-NMT. We also trained the Transformer model BIBREF4 , which is a strong NMT baseline, on both the augmented and unaugmented parallel corpora. The training configuration of the Transformer model is shown in Table TABREF32 . The hyper-parameters are set based on the settings in the paper BIBREF4 and the sizes of our training sets.", "For the evaluation, we used the average of 1 to 4 gram BLEUs multiplied by a brevity penalty BIBREF27 , which is computed by multi-bleu.perl in Moses, as the metric. The results are reported in Table TABREF34 . For RNN-based NMT, we can see that target language reversal, residual connection, and word2vec can further improve the performance of the basic RNN-based NMT model. However, we find that the word2vec and reversal tricks bring no obvious improvement when training the RNN-based NMT and Transformer models on the augmented parallel corpus. SMT performs better than the NMT models when they are trained on the unaugmented dataset. Nevertheless, when trained on the augmented dataset, both the RNN-based NMT model and the Transformer based NMT model outperform the SMT model. In addition, as with other translation tasks BIBREF4 , the Transformer also performs better than RNN-based NMT.", "Because the Test set contains both augmented and unaugmented data, it is not surprising that the RNN-based NMT model and Transformer based NMT model trained on unaugmented data would perform poorly.
In order to further verify the effect of data augmentation, we report the test results of the models on only unaugmented test data (including 48K test pairs) in Table TABREF35 . From the results, it can be seen that the data augmentation can still improve the models." ], [ "The generated samples of various models are shown in Figure FIGREF36 . Besides BLEU scores, we analyze these examples from a human perspective and draw some conclusions. At the same time, we design different metrics and evaluate on the whole Test set to support our conclusions as follows:", "On the one hand, we further compare the translation results from the perspective of people. We find that although the original meaning can be basically translated by SMT, its translation results are less smooth when compared with the other two NMT models (RNN-based NMT and Transformer). For example, the translations of SMT are usually lack of auxiliary words, conjunctions and function words, which is not consistent with human translation habits. To further confirm this conclusion, the average length of the translation results of the three models are measured (RNN-based NMT:17.12, SMT:15.50, Transformer:16.78, Reference:16.47). We can see that the average length of the SMT outputs is shortest, and the length gaps between the SMT outputs and the references are largest. Meanwhile, the average length of the sentences translated by Transformer is closest to the average length of references. These results indirectly verify our point of view, and show that the NMT models perform better than SMT in this task.", "On the other hand, there still exists some problems to be solved. We observe that translating proper nouns and personal pronouns (such as names, place names and ancient-specific appellations) is very difficult for all of these models. For instance, the ancient Chinese appellation `Zhen' should be translated into `Wo' in modern Chinese. Unfortunately, we calculate the accurate rate of some special words (such as `Zhen',`Chen' and `Gua'), and find that this rate is very low (the accurate rate of translating `Zhen' are: RNN-based NMT:0.14, SMT:0.16, Transformer:0.05). We will focus on this issue in the future." ], [ "We propose an effective ancient-modern Chinese clause alignment method which achieves 94.2 F1-score on Test set. Based on it, we build a large scale parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. In addition, we test the performance of the SMT and various NMT models on our built dataset and provide a strong NMT baseline for this task which achieves 27.16 BLEU score (4-gram). We further analyze the performance of the SMT and various NMT models and summarize some specific problems that machine translation models will encounter when translating ancient Chinese.", "For the future work, firstly, we are going to expand the dataset using the proposed method continually. Secondly, we will focus on solving the problem of proper noun translation and improve the translation system according to the features of ancient Chinese translation. Finally, we plan to introduce some techniques of statistical translation into neural machine translation to improve the performance.", "This work is supported by National Natural Science Fund for Distinguished Young Scholar (Grant No. 61625204) and partially supported by the State Key Program of National Science Foundation of China (Grant Nos. 61836006 and 61432014)." 
] ], "section_name": [ "Introduction", "Overview", "Clause Alignment", "Ancient-Modern Chinese Dataset", "RNN-based NMT model", "Transformer-NMT", "Experiments", "Clause Alignment Results (Q1)", "Translation Results (Q2)", "Analysis", "Conclusion and Future Work" ] }
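The clause alignment procedure described in the record above combines a lexical matching score, a statistical score and an edit-distance score, and then searches for the best paragraph-level alignment with dynamic programming over a fixed set of alignment modes. The sketch below is a simplified illustration of that search, not the authors' implementation: the scoring function is a stub, the mode priors are hypothetical, and only a subset of the modes (1-1, 1-2, 2-1, 1-0, 0-1) is shown.

```python
# Alignment modes considered (ancient clauses, modern clauses) with a
# hypothetical log-prior for each; the paper estimates mode probabilities
# on its Dev set.
MODES = {(1, 1): -0.2, (1, 2): -1.5, (2, 1): -1.6, (1, 0): -4.0, (0, 1): -4.0}

def pair_score(ancient_text, modern_text):
    """Stub for the combined score (lexical + statistical + edit distance).

    A real implementation would follow the paper's weighted combination;
    here a crude length-ratio heuristic stands in purely for illustration.
    """
    if not ancient_text or not modern_text:
        return -5.0
    ratio = len(modern_text) / len(ancient_text)
    return -abs(ratio - 1.5)  # modern Chinese clauses tend to be longer

def align(ancient, modern):
    """Dynamic programming over clause alignments within one paragraph."""
    n, m = len(ancient), len(modern)
    best = [[float("-inf")] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == float("-inf"):
                continue
            for (a, b), prior in MODES.items():
                if i + a > n or j + b > m:
                    continue
                src = "".join(ancient[i:i + a])
                tgt = "".join(modern[j:j + b])
                score = best[i][j] + prior + pair_score(src, tgt)
                if score > best[i + a][j + b]:
                    best[i + a][j + b] = score
    return best[n][m]
```

The forward recurrence fills best[i][j] in row-major order, so every predecessor cell is final before it is extended; recovering the actual clause pairs would additionally require storing back-pointers.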
{ "answers": [ { "annotation_id": [ "3934f19478179ed3df37403f0f3800eb4ba62d77" ], "answer": [ { "evidence": [ "RNN-based NMT model", "We first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model. The RNN-based NMT with attention mechanism BIBREF0 has achieved remarkable performance on many translation tasks. It consists of encoder and decoder part.", "Transformer-NMT", "Recently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder." ], "extractive_spans": [ "RNN-based NMT model", "Transformer-NMT" ], "free_form_answer": "", "highlighted_evidence": [ "RNN-based NMT model\nWe first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model.", "Transformer-NMT\nRecently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "7ab27520d7c99455c0ddb3f8e3af47e4d2ba159f", "be0cb11171ba030a4fee4dba89c90f4169e6bfc9" ], "answer": [ { "evidence": [ "Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, and thus ensure the generality of the translated materials." ], "extractive_spans": [ "ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era" ], "free_form_answer": "", "highlighted_evidence": [ "To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, and thus ensure the generality of the translated materials." ], "extractive_spans": [], "free_form_answer": "Ancient Chinese history records in several dynasties and articles written by celebrities during 1000BC-200BC collected from the internet ", "highlighted_evidence": [ "Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "what NMT models did they compare with?", "Where does the ancient Chinese dataset come from?" ], "question_id": [ "27dbbd63c86d6ca82f251d4f2f030ed3e88f58fa", "b9d07757e2d2c4be41823dd1ea3b9c7f115b5f72" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Fig. 1. Some examples of different alignment modes. There is a pair of aligned ancient–modern Chinese paragraphs. The ancient Chinese paragraph on the left contains nine clauses, and the modern Chinese paragraph on the right contains wight clauses. The lines represent the alignment relation of ancient and modern clauses. The alignment of blue clauses is the 2-2 alignment mode, which covers the cases of sentence disordering. And the alignment of red clauses is the 2-1 alignment mode. The rest are in 1-1 alignment mode.", "Table 1. Statistical Information of the Ancient–Modern Chinese Parallel Corpus", "Fig. 2. Some data samples of the ancient–modern Chinese parallel corpus. The source language is ancient Chinese and the reference is modern Chinese.", "Table 2. Results of Various Hyper-parameters on the Dev Set", "Table 3. Evaluation Results on the Test Set", "Table 4. The Training Configuration of the Transformer", "Table 5. 1- to 4-gram BLEUs Results on Various Models", "Table 6. 1- to 4-gram BLEUs Results of NMT Models Tested on Only Unaugmented Test Data", "Fig. 3. Some generated samples of various models." ], "file": [ "4-Figure1-1.png", "6-Table1-1.png", "6-Figure2-1.png", "8-Table2-1.png", "9-Table3-1.png", "10-Table4-1.png", "10-Table5-1.png", "10-Table6-1.png", "11-Figure3-1.png" ] }
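The captions above include the Transformer training configuration, and the Transformer-NMT section earlier in this record summarizes its core operation, scaled dot-product attention, whose standard form (from BIBREF4) is Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A small NumPy sketch of that standard operation, independent of the paper's actual training code, is:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard Transformer attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n_queries, d_v)

# Toy shapes: 3 queries, 4 keys/values, d_k = 8, d_v = 6.
rng = np.random.default_rng(1)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 6))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 6)
```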
[ "Where does the ancient Chinese dataset come from?" ]
[ [ "1808.03738-Ancient-Modern Chinese Dataset-0" ] ]
[ "Ancient Chinese history records in several dynasties and articles written by celebrities during 1000BC-200BC collected from the internet " ]
254
1910.08293
Follow Alice into the Rabbit Hole: Giving Dialogue Agents Understanding of Human Level Attributes.
For conversational AI and virtual assistants to communicate with humans in a realistic way, they must exhibit human characteristics such as expression of emotion and personality. Current attempts toward constructing human-like dialogue agents have presented significant difficulties. We propose Human Level Attributes (HLAs) based on tropes as the basis of a method for learning dialogue agents that can imitate the personalities of fictional characters. Tropes are characteristics of fictional personalities that are observed recurrently and determined by viewers' impressions. By combining detailed HLA data with dialogue data for specific characters, we present a dataset that models character profiles and gives dialogue agents the ability to learn characters' language styles through their HLAs. We then introduce a three-component system, ALOHA (which stands for Artificial Learning On Human Attributes), that combines character space mapping, character community detection, and language style retrieval to build a character (or personality) specific language model. Our preliminary experiments demonstrate that ALOHA, combined with our proposed dataset, can outperform baseline models at identifying correct dialogue responses of any chosen target character, and is stable regardless of the character's identity, genre of the show, and context of the dialogue.
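The character space mapping outlined in this abstract amounts to embedding every character as a vector in an HLA-derived latent space and ranking other characters by proximity. The snippet below is a purely illustrative sketch of such a nearest-character lookup; the embeddings are random placeholders rather than ALOHA's learned factors.

```python
import numpy as np

# Placeholder character embeddings (rows) in a toy 8-dimensional latent space.
rng = np.random.default_rng(7)
names = ["Sheldon Cooper", "Character B", "Character C", "Character D"]
embeddings = rng.normal(size=(len(names), 8))

def most_similar(target_idx, embeddings, names, k=2):
    """Rank characters by cosine similarity to the target in the latent space."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed[target_idx]
    order = np.argsort(-sims)
    return [(names[i], float(sims[i])) for i in order if i != target_idx][:k]

print(most_similar(0, embeddings, names))
```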
{ "paragraphs": [ [ "Attempts toward constructing human-like dialogue agents have met significant difficulties, such as maintaining conversation consistency BIBREF0. This is largely due to inabilities of dialogue agents to engage the user emotionally because of an inconsistent personality BIBREF1. Many agents use personality models that attempt to map personality attributes into lower dimensional spaces (e.g. the Big Five BIBREF2). However, these represent human personality at a very high-level and lack depth. They prohibit the ability to link specific and detailed personality traits to characters, and to construct large datasets where dialogue is traceable back to these traits.", "For this reason, we propose Human Level Attributes (HLAs), which we define as characteristics of fictional characters representative of their profile and identity. We base HLAs on tropes collected from TV Tropes BIBREF3, which are determined by viewers' impressions of the characters. See Figure FIGREF1 for an example. Based on the hypothesis that profile and identity contribute effectively to language style BIBREF4, we propose that modeling conversation with HLAs is a means for constructing a dialogue agent with stable human-like characteristics. By collecting dialogues from a variety of characters along with this HLA information, we present a novel labelling of this dialogue data where it is traceable back to both its context and associated human-like qualities.", "We also propose a system called ALOHA (Artificial Learning On Human Attributes) as a novel method of incorporating HLAs into dialogue agents. ALOHA maps characters to a latent space based on their HLAs, determines which are most similar in profile and identity, and recovers language styles of specific characters. We test the performance of ALOHA in character language style recovery against four baselines, demonstrating outperformance and system stability. We also run a human evaluation supporting our results. Our major contributions are: (1) We propose HLAs as personality aspects of fictional characters from the audience's perspective based on tropes; (2) We provide a large dialogue dataset traceable back to both its context and associated human-like attributes; (3) We propose a system called ALOHA that is able to recommend responses linked to specific characters. We demonstrate that ALOHA, combined with the proposed dataset, outperforms baselines. ALOHA also shows stable performance regardless of the character's identity, genre of the show, and context of the dialogue. We plan to release all of ALOHA's data and code." ], [ "Task completion chatbots (TCC), or task-oriented chatbots, are dialogue agents used to fulfill specific purposes, such as helping customers book airline tickets, or a government inquiry system. Examples include the AIML based chatbot BIBREF5 and DIVA Framework BIBREF6. While TCC are low cost, easily configurable, and readily available, they are restricted to working well for particular domains and tasks.", "Open-domain chatbots are more generic dialogue systems. An example is the Poly-encoder from BIBREF7 humeau2019real. It outperforms the Bi-encoder BIBREF8, BIBREF9 and matches the performance of the Cross-encoder BIBREF10, BIBREF11 while maintaining reasonable computation time. It performs strongly on downstream language understanding tasks involving pairwise comparisons, and demonstrates state-of-the-art results on the ConvAI2 challenge BIBREF12. 
Feed Yourself BIBREF13 is an open-domain dialogue agent with a self-feeding model. When the conversation goes well, the dialogue becomes part of the training data, and when the conversation does not, the agent asks for feedback. Lastly, Kvmemnn BIBREF14 is a key-value memory network with a knowledge base that uses a key-value retrieval mechanism to train over multiple domains simultaneously. We use all three of these models as baselines for comparison. While these can handle a greater variety of tasks, they do not respond with text that aligns with particular human-like characteristics.", "BIBREF15 li2016persona defines persona (composite of elements of identity) as a possible solution at the word level, using backpropagation to align responses via word embeddings. BIBREF16 bartl2017retrieval uses sentence embeddings and a retrieval model to achieve higher accuracy on dialogue context. BIBREF17 liu2019emotion applies emotion states of sentences as encodings to select appropriate responses. BIBREF18 pichl2018alquist uses knowledge aggregation and hierarchy of sub-dialogues for high user engagement. However, these agents represent personality at a high-level and lack detailed human qualities. LIGHT BIBREF19 models adventure game characters' dialogues, actions, and emotions. It focuses on the agent identities (e.g. thief, king, servant) which includes limited information on realistic human behaviours. BIBREF20 pasunuru2018game models online soccer games as dynamic visual context. BIBREF21 wang2016learning models user dialogue to complete tasks involving certain configurations of blocks. BIBREF22 antol2015vqa models open-ended questions, but is limited to visual contexts. BIBREF23 bordes2016learning tracks user dialogues but is goal-oriented. BIBREF24 ilinykh2019meetup tracks players' dialogues and movements in a visual environment, and is grounded on navigation tasks. All of these perform well in their respective fictional environments, but are not a strong representation of human dialogue in reality." ], [ "We collect HLA data from TV Tropes BIBREF3, a knowledge-based website dedicated to pop culture, containing information on a plethora of characters from a variety of sources. Similar to Wikipedia, its content is provided and edited collaboratively by a massive user-base. These attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics. We believe that TV Tropes is better for our purpose of fictional character modeling than data sources used in works such as BIBREF25 shuster2019engaging because TV Tropes' content providers are rewarded for correctly providing content through community acknowledgement.", "TV Tropes defines tropes as attributes of storytelling that the audience recognizes and understands. We use tropes as HLAs to calculate correlations with specific target characters. We collect data from numerous characters from a variety of TV shows, movies, and anime. We filter and keep characters with at least five HLA, as those with fewer are not complex enough to be correctly modeled due to reasons such as lack of data. We end up eliminating 5.86% of total characters, and end up with 45,821 characters and 12,815 unique HLA, resulting in 945,519 total character-HLA pairs. Each collected character has 20.64 HLAs on average. See Figure FIGREF1 for an example character and their HLAs." 
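As a rough illustration of the filtering step just described, the following sketch keeps only characters with at least five HLAs; the input format (an iterable of (character, trope) pairs) and all names are assumptions made for illustration, not part of any released code.

```python
from collections import defaultdict

def filter_characters(char_hla_pairs, min_hlas=5):
    """Keep only characters with at least `min_hlas` distinct HLAs (tropes)."""
    hlas_by_char = defaultdict(set)
    for character, hla in char_hla_pairs:
        hlas_by_char[character].add(hla)
    return {c: h for c, h in hlas_by_char.items() if len(h) >= min_hlas}

# Toy usage; the real data yields 45,821 characters and 945,519 character-HLA pairs.
pairs = [("Sheldon Cooper", "Insufferable Genius"), ("Sheldon Cooper", "Brutal Honesty")]
kept = filter_characters(pairs, min_hlas=2)
```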
], [ "Our task is the following, where $t$ denotes “target\":", "Given a target character $c_t$ with HLA set $H_t$, recover the language style of $c_t$ without any dialogue of $c_t$ provided.", "For example, if Sheldon Cooper from The Big Bang Theory is $c_t$, then $H_t$ is the set of HLA on the left side of Figure FIGREF1.", "We define the language style of a character as its diction, tone, and speech patterns. It is a character specific language model refined from a general language model. We must learn to recover the language style of $c_t$ without its dialogue as our objective is to imitate human-like qualities, and hence the model must understand the language styles of characters based on their traits. If we feed $c_t$'s dialogue during training, the model will likely not effectively learn to imitate language styles based on HLAs, but based on the correlation between text in the training and testing dialogues BIBREF26.", "We define character space as the character representations within the HLA latent space (see Figure FIGREF4), and the set $C = \\lbrace c_1,c_2,...,c_n\\rbrace $ as the set of all characters. We define Observation (OBS) as the input that is fed into any dialogue model. This can be a single or multiple lines of dialogue along with other information. The goal of the dialogue model is to find the best response to this OBS." ], [ "We propose a three-component system called ALOHA to solve the task (see Figure FIGREF6). The first component, Character Space Module (CSM), generates the character space and calculates confidence levels using singular value decomposition BIBREF27 between characters $c_j$ (for $j = 1$ to $n$ where $j \\ne t$) and $c_t$ in the HLA-oriented neighborhood.", "The second component, Character Community Module (CCM), ranks the similarity between our target character $c_t$ with any other character $c_j$ by the relative distance between them in the character space.", "The third component, Language Style Recovery Module (LSRM), recovers the language style of $c_t$ without its dialogue by training the BERT bi-ranker model BIBREF28 to rank responses from similar characters. Our results demonstrate higher accuracy at retrieving the ground truth response from $c_t$. Our system is also able to pick responses which are correct both in context as well as character space.", "Hence, the overall process for ALOHA works as follows. First, given a set of characters, determine the character space using the CSM. Next, given a specific target character, determine the positive community and negative set of associated characters using the CCM. Lastly, using the positive community and negative set determined above along with a dialogue dataset, recover the language style of the target." ], [ "CSM learns how to rank characters. We can measure the interdependencies between the HLA variables BIBREF29 and rank the similarity between the TV show characters. We use implicit feedback instead of neighborhood models (e.g. cosine similarity) because it can compute latent factors to transform both characters and HLAs into the same latent space, making them directly comparable.", "We define a matrix $P$ that contains binary values, with $P_{u,i} = 1$ if character $u$ has HLA $i$ in our dataset, and $P_{u,i} = 0$ otherwise. We define a constant $\\alpha $ that measures our confidence in observing various character-HLA pairs as positive. $\\alpha $ controls how much the model penalizes the error if the ground truth is $P_{u,i} = 1$. 
If $P_{u,i} = 1$ and the model guesses incorrectly, we penalize by $\\alpha $ times the loss. But if $P_{u,i} = 0$ and the model guesses a value greater than 0, we do not penalize as $\\alpha $ has no impact. This is because $P_{u,i} = 0$ can either represent a true negative or be due to a lack of data, and hence is less reliable for penalization. See Equation DISPLAY_FORM8. We find that using $\\alpha =20$ provides decent results.", "We further define two dense vectors $X_u$ and $Y_i$. We call $X_u$ the “latent factors for character $u$\", and $Y_i$ the “latent factors for HLA $i$\". The dot product of these two vectors produces a value ($X_u^TY_i$) that approximates $P_{u,i}$ (see Figure FIGREF9). This is analogous to factoring the matrix $P$ into two separate matrices, where one contains the latent factors for characters, and the other contains the latent factors for HLAs. We find that $X_u$ and $Y_i$ being 36-dimensional produces decent results. To bring $X_u^TY_i$ as close as possible to $P_{u,i}$, we minimize the following loss function using the Conjugate Gradient Method BIBREF30:", "", "The first term penalizes differences between the model's prediction ($X_u^TY_i$) and the actual value ($P_{u,i}$). The second term is an L2 regularizer to reduce overfitting. We find $\\lambda = 100$ provides decent results for 500 iterations (see Section SECREF26)." ], [ "CCM aims to divide characters (other than $c_t$) into a positive community and a negative set. We define this positive community as characters that are densely connected internally to $c_t$ within the character space, and the negative set as the remaining characters. We can then sample dialogue from characters in the negative set to act as the distractors (essentially negative samples) during LSRM training.", "As community finding is an ill-defined problem BIBREF31, we choose to treat CCM as a simple undirected, unweighted graph. We use the values learned in the CSM for $X_u$ and $Y_i$ for various values of $u$ and $i$, which approximate the matrix $P$. Similar to BIBREF29 hu2008collaborative, we can calculate the correlation between two rows (and hence two characters).", "We then employ a two-level connection representation by ranking all characters against each other in terms of their correlation with $c_t$. For the first level, the set $S^{FL}$ is the top 10% (4582) most highly correlated characters with $c_t$ out of the 45,820 total other characters that we have HLA data for. For the second level, for each character $s_i$ in $S^{FL}$, we determine the 30 most heavily correlated characters with $s_i$ as set $S^{SL}_i$. The positive set $S^{pos}$ are the characters which are present in at least 10 $S^{SL}_i$ sets. We call this value 10 the minimum frequency. All other characters in our dialogue dataset make up the negative set $S^{neg}$. These act as our positive community and negative set, respectively. See Algorithm 1 in Appendix A for details, and Figure FIGREF11 for an example." ], [ "LSRM creates a dialogue agent that aligns with observed characteristics of human characters by using the positive character community and negative set determined in the CCM, along with a dialogue dataset, to recover the language style of $c_t$ without its dialogue. We use the BERT bi-ranker model from the Facebook ParlAI framework BIBREF32, where the model has the ability to retrieve the best response out of 20 candidate responses. BIBREF12, BIBREF19, BIBREF0 choose 20 candidate responses, and for comparison purposes, we do the same." 
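A minimal sketch of the CSM factorisation described earlier in this section, under stated assumptions: the confidence weighting $c_{u,i} = 1 + \alpha P_{u,i}$ follows BIBREF29 (hu2008collaborative), on which the module is based, and plain gradient steps stand in for the Conjugate Gradient Method actually used; hyperparameters follow the quoted values (36-dimensional factors, $\alpha = 20$, $\lambda = 100$, 500 iterations).

```python
import numpy as np

def train_csm(P, d=36, alpha=20.0, lam=100.0, iters=500, lr=1e-4, seed=0):
    """Factorise the binary character-HLA matrix P into X (characters) and Y (HLAs)."""
    rng = np.random.default_rng(seed)
    n_chars, n_hlas = P.shape
    X = rng.normal(scale=0.01, size=(n_chars, d))   # latent factors for characters
    Y = rng.normal(scale=0.01, size=(n_hlas, d))    # latent factors for HLAs
    C = 1.0 + alpha * P                             # confidence: observed pairs weigh more (assumed form)
    for _ in range(iters):
        E = C * (X @ Y.T - P)                       # confidence-weighted prediction error
        X -= lr * (E @ Y + lam * X)                 # gradient step including the L2 regulariser
        Y -= lr * (E.T @ X + lam * Y)
    return X, Y

# X[u] @ Y[i] approximates P[u, i]; the rows of X form the character space used by the CCM.
```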
], [ "BIBREF28 is first trained on massive amounts of unlabeled text data. It jointly conditions on text on both the left and right, which provides a deep bi-directional representation of sentence inference. BERT is proven to perform well on a wide range of tasks by simply fine-tuning on one additional layer. We are interested in its ability to predict the next sentence, called Next Sentence Prediction. We perform further fine-tuning on BERT for our target character language style retrieval task to produce our LSRM model by optimizing both the encoding layers and the additional layer. We use BERT to create vector representations for the OBS and for each candidate response. By passing the first output of BERT's 12 layers through an additional linear layer, these representations can be obtained as 768-dimensional sentence-level embeddings. It uses the dot product between these embeddings to score candidate responses and is trained using the ranking loss." ], [ "is similar to the procedure from previous work done on grounded dialogue agents BIBREF0, BIBREF19. Along with the ground truth response, we randomly sample 19 distractor responses from other characters from a uniform distribution of characters, and call this process uniform character sampling. Based on our observations, this random sampling provides multiple context correct responses. Hence, the BERT bi-ranker model is trained by learning to choose context correct responses, and the model learns to recover a domain-general language model that includes training on every character. This results in a Uniform Model that can select context correct responses, but not responses corresponding to a target character with specific HLAs.", "We then fine-tune on the above model to produce our LSRM model with a modification: we randomly sample the 19 distractor responses from only the negative character set instead. We choose the responses that have similar grammatical structures and semantics to the ground truth response, and call this process negative character sampling. This guides the model away from the language style of these negative characters to improve performance at retrieving responses for target characters with specific HLAs. Our results demonstrate higher accuracy at retrieving the correct response from character $c_t$, which is the ground truth." ], [ "To train the Uniform Model and LSRM, we collect dialogues from 327 major characters (a subset of the 45,821 characters we have HLA data for) in 38 TV shows from various existing sources of clean data on the internet, resulting in a total of 1,042,647 dialogue lines. We use a setup similar to the Persona-Chat dataset BIBREF0 and Cornell Movie-Dialogs Corpus BIBREF33, as our collected dialogues are also paired in terms of valid conversations. See Figure FIGREF1 for an example of these dialogue lines." ], [ "We define HLA Observation Guidance (HLA-OG) as explicitly passing a small subset of the most important HLAs of a given character as part of the OBS rather than just an initial line of dialogue. This is adapted from the process used in BIBREF0 zhang2018personalizing and BIBREF10 wolf2019transfertransfo which we call Persona Profiling. Specifically, we pass four HLAs that are randomly drawn from the top 40 most important HLAs of the character. We use HLA-OG during training of the LSRM and testing of all models. This is because the baselines (see Section SECREF31) already follow a similar process (Persona Profiling) for training. 
For the Uniform Model, we train using Next Sentence Prediction (see Section SECREF12). For testing, HLA-OG is necessary as it provides information about which HLAs the models should attempt to imitate in their response selection. Just passing an initial line of dialogue replicates a typical dialogue response task without HLAs. See Table TABREF19. Further, we also test our LSRM by explicitly passing four HLAs of `none' along with the initial line of dialogue as the OBS (No HLA-OG in Table TABREF19)." ], [ "is trained by us on the Persona-Chat dataset for the ConvAI2 challenge. Similar to BIBREF0 zhang2018personalizing, we cap the length of the OBS at 360 tokens and the length of each candidate response at 72 tokens. We use a batch size of 64, learning rate of 5e-5, and perform warm-up updates for 100 iterations. The learning rate scheduler uses SGD optimizer with Nesterov's accelerated gradient descent BIBREF34 and is set to have a decay of 0.4 and to reduce on plateau." ], [ "is produced by finetuning the BERT bi-ranker on the dialogue data discussed in Section SECREF15 using uniform character sampling. We use the same hyperparameters as the BERT bi-ranker along with half-precision operations (i.e. float16 operations) to increase batch size as recommended BIBREF7." ], [ "is produced by finetuning on the Uniform Model discussed above using negative character sampling. We use the same hyperparameters as the BERT bi-ranker along with half-precision operations (i.e. float16 operations) to increase batch size as recommended." ], [ "We begin by evaluating the ability of the CSM component of our system to correctly generate the character space. To do so, during training, 30% of the character-HLA pairs (which are either 0 or 1) are masked, and this is used as a validation set (see Figure FIGREF9). For each character $c$, the model generates a list of the 12,815 unique HLAs ranked similarly to BIBREF29 hu2008collaborative for $c$. We look at the recall of our CSM model, which measures the percentage of total ground truth HLAs (over all characters $c$) present within the top N ranked HLAs for all $c$ by our model. That is:", "", "where $HLA_{c}^{gt}$ are the ground truth HLAs for $c$, and $HLA_{c}^{tN}$ are the top N ranked HLAs by the model for $c$. We use $N = 100$, and our model achieves 25.08% recall.", "To inspect the CSM performance, we use the T-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF35 to reduce each high-dimensionality data point to two-dimensions via Kullback-Leibler Divergence BIBREF36. This allows us to map our character space into two-dimensions, where similar characters from our embedding space have higher probability of being mapped close by. We sampled characters from four different groups or regions. As seen in Figure FIGREF4, our learned character space effectively groups these characters, as similar characters are adjacent to one another in four regions." ], [ "is used for training and testing of the Uniform Model and LSRM. The folds are divided randomly by the TV shows in our dialogue data. We use the dialogue data for 80% of these shows as the four-folds for training, and the dialogue data for the remaining 20% as the fifth-fold for validation/testing. The dialogue data used is discussed in Section SECREF15. This ensures no matter how our data is distributed, each part of it is tested, allowing our evaluation to be more robust to different characters. See Appendix C for five-fold cross validation details and statistics." 
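From the description of the CSM evaluation above, the recall metric reduces to $\sum _{c} |HLA_{c}^{gt} \cap HLA_{c}^{tN}| / \sum _{c} |HLA_{c}^{gt}|$. A direct sketch, assuming ground-truth HLAs come as a dict of sets and the model's rankings as a dict of score-sorted lists:

```python
def recall_at_n(gt_hlas, ranked_hlas, n=100):
    """Fraction of held-out ground-truth HLAs found in the model's top-n ranked HLAs."""
    hit = total = 0
    for character, gt in gt_hlas.items():
        top_n = set(ranked_hlas[character][:n])
        hit += len(gt & top_n)
        total += len(gt)
    return hit / total   # 0.2508 is reported for n = 100 in the paper
```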
], [ "are chosen, one from each of the five testing sets above. Each is a well-known character from a separate TV show, and acts as a target character $c_t$ for evaluation of every model. We choose Sheldon Cooper from The Big Bang Theory, Jean-Luc Picard from Star Trek, Monica Geller from Friends, Gil Grissom from CSI, and Marge Simpson from The Simpsons. We choose characters of significantly different identities and profiles (intelligent scientist, ship captain, outgoing friend, police leader, and responsible mother, respectively) from shows of a variety of genres to ensure that we can successfully recover the language styles of various types of characters. We choose well-known characters because humans require knowledge on the characters they are evaluating (see Section SECREF40).", "For each of these five evaluation characters, all the dialogue lines from the character act as the ground truth responses. The initial dialogue lines are the corresponding dialogue lines to which these ground truth responses are responding. For each initial dialogue line, we randomly sample 19 other candidate responses from the associated testing set using uniform character sampling. Note that this is for evaluation, and hence we use the same uniform character sampling method for all models including ALOHA. The use of negative character sampling is only in ALOHA's training." ], [ "We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20. For the first three models, we use the provided pretrained (on Persona-Chat) models. We evaluate all four on our five evaluation characters discussed in Section SECREF28." ], [ "is the accuracy of the correct ground truth response being within the top $n$ ranked candidate responses out of $N$ total candidates. We measure Hits@1/20, Hits@5/20, and Hits@10/20." ], [ "is the average rank that a model assigns the ground truth response among the 20 total candidates." ], [ "BIBREF37 looks at the mean of the multiplicative inverses of the rank of each correct answer out of a sample of queries $Q$:", "", "where $rank_i$ refers to the rank position of the correct response for the $i$-th query, and $|Q|$ refers to the total number of queries in $Q$." ], [ "equals $2 * \\frac{precision*recall}{precision+recall}$. For dialogue, precision is the fraction of words in the chosen response contained in the ground truth response, and recall is the fraction of words in the ground truth response contained in the chosen response." ], [ "BIBREF38 generally indicates how close two pieces of text are in content and structure, with higher values indicating greater similarity. We report our final BLEU scores as the average scores of 1 to 4-grams." ], [ "We conduct a human evaluation with 12 participants, 8 male and 4 female, who are affiliated project researchers aged 20-39 at the University of [ANON]. We choose the same five evaluation characters as in Section SECREF28. To control bias, each participant evaluates one or two characters. 
For each character, we randomly select 10 testing samples (each includes an initial line of dialogue along with 20 candidate responses, one of which is the ground truth) from the same testing data for the automatic evaluation discussed in Section SECREF28.", "These ten samples make up a single questionnaire presented in full to each participant evaluating the corresponding character, and the participant is asked to select the single top response they think the character would most likely respond with for each of the ten initial dialogue lines. See Figure FIGREF41 for an example. We mask any character names within the candidate responses to prevent human participants from using names to identify which show the response is from.", "Each candidate is prescreened to ensure they have sufficient knowledge of the character to be a participant. We ask three prescreening questions where the participant has to identify an image, relationship, and occupation of the character. All 12 of our participants passed the prescreening." ], [ "Table TABREF44 shows average results of our automatic and human evaluations. Table TABREF45 shows average Hits@1/20 scores by evaluation character. See Appendix F for detailed evaluation results. ALOHA is the model with HLA-OG during training and testing, and ALOHA (No HLA-OG) is the model with HLA-OG during training but tested with the four HLAs in the OBS marked as `none' (see Section SECREF17). See Appendix G for demo interactions between a human, BERT bi-ranker baseline, and ALOHA for all five evaluation characters." ], [ "The evaluation of our task (retrieving the language style of a specific character) is challenging and hence the five-fold cross validation is necessary for the following reasons:", "The ability to choose a context correct response without attributes of specific characters may be hard to separate from our target metric, which is the ability to retrieve the correct response of a target character by its HLAs. However, from manual observation, we noticed that in the 20 chosen candidate responses, there are typically numerous context correct responses, but only one ground truth for the target character (for an example, see Figure FIGREF41). Hence, a model that only chooses dialogue based on context is distinguishable from one that learns HLAs.", "Retrieving responses for the target character depends on the other candidate responses. For example, dialogue retrieval performance for Grissom from CSI, which is a crime/police context, is higher than for other evaluation characters (see Table TABREF45), potentially due to other candidate responses not falling within the same crime/police context." ], [ "As observed from Table TABREF44, ALOHA has a performance relatively close to humans. Human Hits@1/20 scores have a mean of 40.67% and a median over characters of 40%. The limited human evaluation sample size limits what can be inferred, but it indicates that the problem is solved to the extent that ALOHA is able to perform relatively close to humans on average. Notice that even humans do not perform extremely well, demonstrating that this task of character-based dialogue retrieval is more difficult than typical dialogue retrieval tasks BIBREF19, BIBREF12.", "Looking more closely at each character from Table TABREF45, we can see that human evaluation scores are higher for Sheldon and Grissom. 
This may be due to these characters having more distinct personalities, making them more memorable.", "We also look at Pearson correlation values of the Hits@1/20 scores across the five evaluation characters. For human versus Uniform Model, this is -0.4694, demonstrating that the Uniform Model, without knowledge of HLAs, fails to imitate human impressions. For human versus ALOHA, this is 0.4250, demonstrating that our system is able to retrieve character responses somewhat similarly to human impressions. Lastly, for human versus the difference in scores between ALOHA and Uniform Model, this is 0.7815. The difference between ALOHA and the Uniform Model, which is based on the additional knowledge of the HLAs, is hence shown to improve upon the Uniform Model similarly to human impressions. This demonstrates that HLAs are indeed an accurate method of modeling human impressions of character attributes, and also demonstrates that our system, ALOHA, is able to effectively use these HLAs to improve upon dialogue retrieval performance." ], [ "ALOHA, combined with the HLAs and dialogue dataset, achieves a significant improvement on the target character language style retrieval task compared to the baseline open-domain chatbot models. As observed from Table TABREF44, ALOHA achieves a significant boost in Hits@n/N accuracy and other metrics for retrieving the correct response of five diverse characters with different identities (see Section SECREF28)." ], [ "We observe a noticeable improvement in performance between ALOHA and the Uniform Model in recovering the language styles of specific characters that is consistent across all five folds (see Tables TABREF44 and TABREF45), indicating that lack of knowledge of HLAs limits the ability of the model to successfully recover the language style of specific characters. We claim that, to the best of our knowledge, we have made the first step in using HLA-based character dialogue clustering to improve upon personality learning for chatbots.", "ALOHA demonstrates an accuracy boost for all five evaluation characters, showing that the system is robust and stable and has the ability to recover the dialogue styles of fictional characters regardless of the character's profile and identity, genre of the show, and context of the dialogue." ], [ "As observed from Table TABREF44, ALOHA performs slightly better overall compared to ALOHA (No HLA-OG). Table TABREF45 shows that this slight performance increase is consistent across four of the five evaluation characters. In the case of Sheldon, the HLA-OG model performs a bit worse. This is possibly due to the large number of Sheldon's HLAs (217) compared to the other four evaluation characters (average of 93.75), along with the limited amount of HLAs we are using for guidance due to the models' limited memory. In general, HLA Observation Guidance during testing appears to improve upon the performance of ALOHA, but this improvement is minimal." ], [ "We proposed Human Level Attributes (HLAs) as a novel approach to model human-like attributes of characters, and collected a large volume of dialogue data for various characters with complete and robust profiles. We also proposed and evaluated a system, ALOHA, that uses HLAs to recommend tailored responses traceable to specific characters, and demonstrated its outperformance of the baselines and ability to effectively recover language styles of various characters, showing promise for learning character or personality styles. 
ALOHA was also shown to be stable regardless of the character's identity, genre of show, and context of dialogue.", "Potential directions for future work include training ALOHA with a multi-turn response approach BIBREF0 that tracks dialogue over multiple responses, as we could not acquire multi-turn dialogue data for TV shows. Another potential is the modeling of the dialog counterpart (e.g. the dialogue of other characters speaking to the target character). Further, performing semantic text exchange on the chosen response with a model such as SMERTI BIBREF39 may improve the ability of ALOHA to converse with humans. This is because the response may be context and HLA correct, but incorrect semantically (e.g. the response may say the weather is sunny when it is actually rainy). HLA-aligned generative models is another area of exploration. Typically, generative models produce text that is less fluent, but further work in this area may lead to better results. Lastly, a more diverse and larger participant pool is required due to the limited size of our human evaluation." ] ], "section_name": [ "Introduction", "Related Work", "Methodology ::: Human Level Attributes (HLA)", "Methodology ::: Overall Task", "Methodology ::: ALOHA", "Methodology ::: Character Space Module (CSM)", "Methodology ::: Character Community Module (CCM)", "Methodology ::: Language Style Recovery Module (LSRM)", "Methodology ::: Language Style Recovery Module (LSRM) ::: BERT", "Methodology ::: Language Style Recovery Module (LSRM) ::: Candidate response selection", "Experiment ::: Dialogue Dataset", "Experiment ::: HLA Observation Guidance (HLA-OG)", "Experiment ::: Training Details ::: BERT bi-ranker", "Experiment ::: Training Details ::: Uniform Model", "Experiment ::: Training Details ::: LSRM", "Evaluation ::: CSM Evaluation", "Evaluation ::: Automatic Evaluation Setup ::: Five-Fold Cross Validation", "Evaluation ::: Automatic Evaluation Setup ::: Five Evaluation Characters", "Evaluation ::: Baselines", "Evaluation ::: Key Evaluation Metrics ::: Hits@n/N", "Evaluation ::: Key Evaluation Metrics ::: Mean Rank", "Evaluation ::: Key Evaluation Metrics ::: Mean Reciprocal Rank (MRR)", "Evaluation ::: Key Evaluation Metrics ::: @!START@$F_1$@!END@-score", "Evaluation ::: Key Evaluation Metrics ::: BLEU", "Evaluation ::: Human Evaluation Setup", "Results and Analysis ::: Evaluation Results", "Results and Analysis ::: Evaluation Challenges", "Results and Analysis ::: Performance: ALOHA vs. Humans", "Results and Analysis ::: Performance: ALOHA vs. Baselines", "Results and Analysis ::: Performance: ALOHA vs. Uniform Model", "Results and Analysis ::: Performance: HLA-OG", "Conclusion and Future Work" ] }
{ "answers": [ { "annotation_id": [ "8fa55612fdac777eeb0912ab8fc99eace0fdb3db", "e0c8db5f0c47131fc71dea81d10791c71dfb8197" ], "answer": [ { "evidence": [ "TV Tropes defines tropes as attributes of storytelling that the audience recognizes and understands. We use tropes as HLAs to calculate correlations with specific target characters. We collect data from numerous characters from a variety of TV shows, movies, and anime. We filter and keep characters with at least five HLA, as those with fewer are not complex enough to be correctly modeled due to reasons such as lack of data. We end up eliminating 5.86% of total characters, and end up with 45,821 characters and 12,815 unique HLA, resulting in 945,519 total character-HLA pairs. Each collected character has 20.64 HLAs on average. See Figure FIGREF1 for an example character and their HLAs." ], "extractive_spans": [ "45,821 characters" ], "free_form_answer": "", "highlighted_evidence": [ "We end up eliminating 5.86% of total characters, and end up with 45,821 characters and 12,815 unique HLA, resulting in 945,519 total character-HLA pairs.", " We end up eliminating 5.86% of total characters, and end up with 45,821 characters and 12,815 unique HLA, resulting in 945,519 total character-HLA pairs." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "TV Tropes defines tropes as attributes of storytelling that the audience recognizes and understands. We use tropes as HLAs to calculate correlations with specific target characters. We collect data from numerous characters from a variety of TV shows, movies, and anime. We filter and keep characters with at least five HLA, as those with fewer are not complex enough to be correctly modeled due to reasons such as lack of data. We end up eliminating 5.86% of total characters, and end up with 45,821 characters and 12,815 unique HLA, resulting in 945,519 total character-HLA pairs. Each collected character has 20.64 HLAs on average. See Figure FIGREF1 for an example character and their HLAs." ], "extractive_spans": [ "45,821 characters" ], "free_form_answer": "", "highlighted_evidence": [ "We end up eliminating 5.86% of total characters, and end up with 45,821 characters and 12,815 unique HLA, resulting in 945,519 total character-HLA pairs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "3bdb6810f3afc2235f8c9d1e08190073067d64c5" ], "answer": [ { "evidence": [ "We collect HLA data from TV Tropes BIBREF3, a knowledge-based website dedicated to pop culture, containing information on a plethora of characters from a variety of sources. Similar to Wikipedia, its content is provided and edited collaboratively by a massive user-base. These attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics. We believe that TV Tropes is better for our purpose of fictional character modeling than data sources used in works such as BIBREF25 shuster2019engaging because TV Tropes' content providers are rewarded for correctly providing content through community acknowledgement." 
], "extractive_spans": [ "attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics" ], "free_form_answer": "", "highlighted_evidence": [ "We collect HLA data from TV Tropes BIBREF3, a knowledge-based website dedicated to pop culture, containing information on a plethora of characters from a variety of sources. Similar to Wikipedia, its content is provided and edited collaboratively by a massive user-base. These attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics. We believe that TV Tropes is better for our purpose of fictional character modeling than data sources used in works such as BIBREF25 shuster2019engaging because TV Tropes' content providers are rewarded for correctly providing content through community acknowledgement." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3c8d8b86bd27b426af591c8b09b84ed24a1ec29a" ], "answer": [ { "evidence": [ "Table TABREF44 shows average results of our automatic and human evaluations. Table TABREF45 shows average Hits@1/20 scores by evaluation character. See Appendix F for detailed evaluation results. ALOHA is the model with HLA-OG during training and testing, and ALOHA (No HLA-OG) is the model with HLA-OG during training but tested with the four HLAs in the OBS marked as `none' (see Section SECREF17). See Appendix G for demo interactions between a human, BERT bi-ranker baseline, and ALOHA for all five evaluation characters." ], "extractive_spans": [], "free_form_answer": "Metric difference between Aloha and best baseline score:\nHits@1/20: +0.061 (0.3642 vs 0.3032)\nMRR: +0.0572(0.5114 vs 0.4542)\nF1: -0.0484 (0.3901 vs 0.4385)\nBLEU: +0.0474 (0.2867 vs 0.2393)", "highlighted_evidence": [ "Table TABREF44 shows average results of our automatic and human evaluations." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "39d9a196a8997c101f075a643578b3568ab73cf8", "4a6ba5c71252cd704f94da0dfb306e6412a2a886" ], "answer": [ { "evidence": [ "Open-domain chatbots are more generic dialogue systems. An example is the Poly-encoder from BIBREF7 humeau2019real. It outperforms the Bi-encoder BIBREF8, BIBREF9 and matches the performance of the Cross-encoder BIBREF10, BIBREF11 while maintaining reasonable computation time. It performs strongly on downstream language understanding tasks involving pairwise comparisons, and demonstrates state-of-the-art results on the ConvAI2 challenge BIBREF12. Feed Yourself BIBREF13 is an open-domain dialogue agent with a self-feeding model. When the conversation goes well, the dialogue becomes part of the training data, and when the conversation does not, the agent asks for feedback. Lastly, Kvmemnn BIBREF14 is a key-value memory network with a knowledge base that uses a key-value retrieval mechanism to train over multiple domains simultaneously. We use all three of these models as baselines for comparison. 
While these can handle a greater variety of tasks, they do not respond with text that aligns with particular human-like characteristics.", "We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20. For the first three models, we use the provided pretrained (on Persona-Chat) models. We evaluate all four on our five evaluation characters discussed in Section SECREF28." ], "extractive_spans": [ "the Poly-encoder from BIBREF7 humeau2019real", "Feed Yourself BIBREF13 is an open-domain dialogue agent with a self-feeding model", "Kvmemnn BIBREF14 is a key-value memory network with a knowledge base that uses a key-value retrieval mechanism to train over multiple domains simultaneously", "We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20.", "a BERT bi-ranker" ], "free_form_answer": "", "highlighted_evidence": [ "Open-domain chatbots are more generic dialogue systems. An example is the Poly-encoder from BIBREF7 humeau2019real. It outperforms the Bi-encoder BIBREF8, BIBREF9 and matches the performance of the Cross-encoder BIBREF10, BIBREF11 while maintaining reasonable computation time. It performs strongly on downstream language understanding tasks involving pairwise comparisons, and demonstrates state-of-the-art results on the ConvAI2 challenge BIBREF12. Feed Yourself BIBREF13 is an open-domain dialogue agent with a self-feeding model. When the conversation goes well, the dialogue becomes part of the training data, and when the conversation does not, the agent asks for feedback. Lastly, Kvmemnn BIBREF14 is a key-value memory network with a knowledge base that uses a key-value retrieval mechanism to train over multiple domains simultaneously. We use all three of these models as baselines for comparison. ", "We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20. For the first three models, we use the provided pretrained (on Persona-Chat) models. We evaluate all four on our five evaluation characters discussed in Section SECREF28." ], "extractive_spans": [ "Kvmemnn", " Feed Yourself", "Poly-encoder", "BERT bi-ranker" ], "free_form_answer": "", "highlighted_evidence": [ "We compare against four dialogue system baselines: Kvmemnn, Feed Yourself, Poly-encoder, and a BERT bi-ranker baseline trained on the Persona-Chat dataset using the same training hyperparameters (including learning rate scheduler and length capping settings) described in Section SECREF20." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How many different characters were in dataset?", "How does dataset model character's profiles?", "How big is the difference in performance between proposed model and baselines?", "What baseline models are used?" ], "question_id": [ "808f0ad46ca4eb4ea5492f9e14ca043fe1e206cc", "36ae003c7cb2a1bbfa90b89c671bc286bd3b3dfd", "f0b1d8c0a44dbe8d444a5dbe2d9c3d51e048a6f6", "357eb9f0c07fa45e482d998a8268bd737beb827f" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Example of a character and its associated HLAs (tropes) on the left and dialogue lines on the right.", "Figure 4: Illustration of our Collaborative Filtering procedure. Green check-marks indicate a character having an HLA, and ‘X’ indicates otherwise. We randomly mask 30% of this data for validation, as marked by the ‘?’.", "Figure 2: t-SNE visualization of the character space generated by our Character Space Module (CSM) based on HLAs.", "Figure 3: Overall system architecture.", "Figure 5: Illustration of the two-level connection representation procedure, using a minimum frequency of two. The transparent red circle indicates the first level set (SFL), while the blue ones indicate the sets SSLi . The lines indicate connections between the characters within the community structure of ct.", "Table 1: Example for Persona Profiling, HLA-OG, No HLA-OG, and Next Sentence Prediction. All lines under OBS are fed together as input to the dialogue retrieval model.", "Figure 6: Example of what a human participant sees on each page of the questionnaire, along with their chosen response and the ground truth. As seen, there are multiple context correct (but not necessarily HLA correct) candidate responses.", "Table 2: Average automatic and human evaluation results.", "Table 3: Average Hits@1/20 scores by evaluation character." ], "file": [ "1-Figure1-1.png", "3-Figure4-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Figure5-1.png", "5-Table1-1.png", "6-Figure6-1.png", "6-Table2-1.png", "6-Table3-1.png" ] }
[ "How big is the difference in performance between proposed model and baselines?" ]
[ [ "1910.08293-Results and Analysis ::: Evaluation Results-0" ] ]
[ "Metric difference between Aloha and best baseline score:\nHits@1/20: +0.061 (0.3642 vs 0.3032)\nMRR: +0.0572(0.5114 vs 0.4542)\nF1: -0.0484 (0.3901 vs 0.4385)\nBLEU: +0.0474 (0.2867 vs 0.2393)" ]
255
1909.01296
PolyResponse: A Rank-based Approach to Task-Oriented Dialogue with Application in Restaurant Search and Booking
We present PolyResponse, a conversational search engine that supports task-oriented dialogue. It is a retrieval-based approach that bypasses the complex multi-component design of traditional task-oriented dialogue systems and the use of explicit semantics in the form of task-specific ontologies. The PolyResponse engine is trained on hundreds of millions of examples extracted from real conversations: it learns what responses are appropriate in different conversational contexts. It then ranks a large index of text and visual responses according to their similarity to the given context, and narrows down the list of relevant entities during the multi-turn conversation. We introduce a restaurant search and booking system powered by the PolyResponse engine, currently available in 8 different languages.
{ "paragraphs": [ [ "Task-oriented dialogue systems are primarily designed to search and interact with large databases which contain information pertaining to a certain dialogue domain: the main purpose of such systems is to assist the users in accomplishing a well-defined task such as flight booking BIBREF0, tourist information BIBREF1, restaurant search BIBREF2, or booking a taxi BIBREF3. These systems are typically constructed around rigid task-specific ontologies BIBREF1, BIBREF4 which enumerate the constraints the users can express using a collection of slots (e.g., price range for restaurant search) and their slot values (e.g., cheap, expensive for the aforementioned slots). Conversations are then modelled as a sequence of actions that constrain slots to particular values. This explicit semantic space is manually engineered by the system designer. It serves as the output of the natural language understanding component as well as the input to the language generation component both in traditional modular systems BIBREF5, BIBREF6 and in more recent end-to-end task-oriented dialogue systems BIBREF7, BIBREF8, BIBREF9, BIBREF3.", "Working with such explicit semantics for task-oriented dialogue systems poses several critical challenges on top of the manual time-consuming domain ontology design. First, it is difficult to collect domain-specific data labelled with explicit semantic representations. As a consequence, despite recent data collection efforts to enable training of task-oriented systems across multiple domains BIBREF0, BIBREF3, annotated datasets are still few and far between, as well as limited in size and the number of domains covered. Second, the current approach constrains the types of dialogue the system can support, resulting in artificial conversations, and breakdowns when the user does not understand what the system can and cannot support. In other words, training a task-based dialogue system for voice-controlled search in a new domain always implies the complex, expensive, and time-consuming process of collecting and annotating sufficient amounts of in-domain dialogue data.", "In this paper, we present a demo system based on an alternative approach to task-oriented dialogue. Relying on non-generative response retrieval we describe the PolyResponse conversational search engine and its application in the task of restaurant search and booking. The engine is trained on hundreds of millions of real conversations from a general domain (i.e., Reddit), using an implicit representation of semantics that directly optimizes the task at hand. It learns what responses are appropriate in different conversational contexts, and consequently ranks a large pool of responses according to their relevance to the current user utterance and previous dialogue history (i.e., dialogue context).", "The technical aspects of the underlying conversational search engine are explained in detail in our recent work BIBREF11, while the details concerning the Reddit training data are also available in another recent publication BIBREF12. In this demo, we put focus on the actual practical usefulness of the search engine by demonstrating its potential in the task of restaurant search, and extending it to deal with multi-modal data. We describe a PolyReponse system that assists the users in finding a relevant restaurant according to their preference, and then additionally helps them to make a booking in the selected restaurant. 
Due to its retrieval-based design, with the PolyResponse engine there is no need to engineer a structured ontology, or to solve the difficult task of general language generation. This design also bypasses the construction of dedicated decision-making policy modules. The large ranking model already encapsulates a lot of knowledge about natural language and conversational flow.", "Since retrieved system responses are presented visually to the user, the PolyResponse restaurant search engine is able to combine text responses with relevant visual information (e.g., photos from social media associated with the current restaurant and related to the user utterance), effectively yielding a multi-modal response. This setup of using voice as input, and responding visually is becoming more and more prevalent with the rise of smart screens like Echo Show and even mixed reality. Finally, the PolyResponse restaurant search engine is multilingual: it is currently deployed in 8 languages enabling search over restaurants in 8 cities around the world. System snapshots in four different languages are presented in Figure FIGREF16, while screencast videos that illustrate the dialogue flow with the PolyResponse engine are available at: https://tinyurl.com/y3evkcfz." ], [ "The PolyResponse system is powered by a single large conversational search engine, trained on a large amount of conversational and image data, as shown in Figure FIGREF2. In simple words, it is a ranking model that learns to score conversational replies and images in a given conversational context. The highest-scoring responses are then retrieved as system outputs. The system computes two sets of similarity scores: 1) $S(r,c)$ is the score of a candidate reply $r$ given a conversational context $c$, and 2) $S(p,c)$ is the score of a candidate photo $p$ given a conversational context $c$. These scores are computed as a scaled cosine similarity of a vector that represents the context ($h_c$), and a vector that represents the candidate response: a text reply ($h_r$) or a photo ($h_p$). For instance, $S(r,c)$ is computed as $S(r,c)=C cos(h_r,h_c)$, where $C$ is a learned constant. The part of the model dealing with text input (i.e., obtaining the encodings $h_c$ and $h_r$) follows the architecture introduced recently by Henderson:2019acl. We provide only a brief recap here; see the original paper for further details." ], [ "The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. All input text is first lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 10M subset of the Reddit training set (see Figure FIGREF2) plus the 200K most frequent bigrams in the same random subset.", "During training, we obtain $d$-dimensional feature representations ($d=320$) shared between contexts and replies for each unigram and bigram jointly with other neural net parameters. A state-of-the-art architecture based on transformers BIBREF13 is then applied to unigram and bigram vectors separately, which are then averaged to form the final 320-dimensional encoding. 
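The preprocessing just described can be sketched as follows; the whitespace tokenisation and the names of the sentence-boundary tokens are assumptions for illustration, since they are not spelled out in the demo paper.

```python
import re

def featurise(text):
    """Lower-case, normalise tokens, and return the unigram + bigram features."""
    tokens = text.lower().split()
    norm = []
    for tok in tokens:
        if re.fullmatch(r"\d{5,}", tok):     # numbers with 5 or more digits: digits -> '#'
            tok = "#" * len(tok)
        elif len(tok) > 16:                  # words longer than 16 characters -> wildcard token
            tok = "LONGWORD"
        norm.append(tok)
    norm = ["<s>"] + norm + ["</s>"]         # sentence boundary tokens (names assumed)
    bigrams = [f"{a} {b}" for a, b in zip(norm, norm[1:])]
    return norm + bigrams

print(featurise("Booked a table for 1234567 people"))
```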
That encoding is then passed through three fully-connected non-linear hidden layers of dimensionality $1,024$. The final layer is linear and maps the text into the final $l$-dimensional ($l=512$) representation: $h_c$ and $h_r$. Other standard and more sophisticated encoder models can also be used to provide final encodings $h_c$ and $h_r$, but the current architecture shows a good trade-off between speed and efficacy with strong and robust performance in our empirical evaluations on the response retrieval task using Reddit BIBREF14, OpenSubtitles BIBREF15, and AmazonQA BIBREF16 conversational test data, see BIBREF12 for further details.", "In training the constant $C$ is constrained to lie between 0 and $\\sqrt{l}$. Following Henderson:2017arxiv, the scoring function in the training objective aims to maximise the similarity score of context-reply pairs that go together, while minimising the score of random pairings: negative examples. Training proceeds via SGD with batches comprising 500 pairs (1 positive and 499 negatives)." ], [ "Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \\times 224$ pixels as in BIBREF18. This provides a $1,280 \\times 1.4 = 1,792$-dimensional representation of a photo, which is then passed through a single hidden layer of dimensionality $1,024$ with ReLU activation, before being passed to a hidden layer of dimensionality 512 with no activation to provide the final representation $h_p$." ], [ "For training text representations we use a Reddit dataset similar to AlRfou:2016arxiv. Our dataset is large and provides natural conversational structure: all Reddit data from January 2015 to December 2018, available as a public BigQuery dataset, span almost 3.7B comments BIBREF12. We preprocess the dataset to remove uninformative and long comments by retaining only sentences containing more than 8 and less than 128 word tokens. After pairing all comments/contexts $c$ with their replies $r$, we obtain more than 727M context-reply $(c,r)$ pairs for training, see Figure FIGREF2." ], [ "Once the text encoding sub-networks are trained, a photo encoder is learned on top of a pretrained MobileNet CNN, using data taken from the Yelp Open dataset: it contains around 200K photos and their captions. Training of the multi-modal sub-network then maximises the similarity of captions encoded with the response encoder $h_r$ to the photo representation $h_p$. As a result, we can compute the score of a photo given a context using the cosine similarity of the respective vectors. A photo will be scored highly if it looks like its caption would be a good response to the current context." ], [ "The Yelp dataset is used at inference time to provide text and photo candidates to display to the user at each step in the conversation. Our restaurant search is currently deployed separately for each city, and we limit the responses to a given city. For instance, for our English system for Edinburgh we work with 396 restaurants, 4,225 photos (these include additional photos obtained using the Google Places API without captions), 6,725 responses created from the structured information about restaurants that Yelp provides, converted using simple templates to sentences of the form such as “Restaurant X accepts credit cards.”, 125,830 sentences extracted from online reviews." 
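A sketch of the scaled-cosine scoring and the training objective from the preceding subsections: matching context-reply pairs in a batch are scored against the other replies in the same batch as negatives (500 pairs, i.e. 1 positive and 499 negatives, per batch in the paper). The softmax form of the loss is an assumption; the paper states only that the objective maximises the score of true pairs while minimising the score of random pairings.

```python
import numpy as np

def scaled_cosine(h_c, h_r, C):
    """S(r, c) = C * cos(h_r, h_c), the scoring function used at inference."""
    return C * (h_c @ h_r) / (np.linalg.norm(h_c) * np.linalg.norm(h_r) + 1e-8)

def batch_loss(H_c, H_r, C):
    """H_c, H_r: (B, l) encodings of B matching context-reply pairs (B = 500 during training)."""
    H_c = H_c / np.linalg.norm(H_c, axis=1, keepdims=True)
    H_r = H_r / np.linalg.norm(H_r, axis=1, keepdims=True)
    S = C * (H_c @ H_r.T)                         # (B, B); the diagonal holds the true pairs
    S = S - S.max(axis=1, keepdims=True)          # numerical stability
    log_probs = S - np.log(np.exp(S).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # push true pairs above in-batch negatives
```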
], [ "The system jointly trains two encoding functions (with shared word embeddings) $f(context)$ and $g(reply)$ which produce encodings $h_c$ and $h_r$, so that the similarity $S(c,r)$ is high for all $(c,r)$ pairs from the Reddit training data and low for random pairs. The encoding function $g()$ is then frozen, and an encoding function $t(photo)$ is learnt which makes the similarity between a photo and its associated caption high for all (photo, caption) pairs from the Yelp dataset, and low for random pairs. $t$ is a CNN pretrained on ImageNet, with a shallow one-layer DNN on top. Given a new context/query, we then provide its encoding $h_c$ by applying $f()$, and find plausible text replies and photo responses according to functions $g()$ and $t()$, respectively. These should be responses that look like answers to the query, and photos that look like they would have captions that would be answers to the provided query.", "At inference, finding relevant candidates given a context reduces to computing $h_c$ for the context $c$ , and finding nearby $h_r$ and $h_p$ vectors. The response vectors can all be pre-computed, and the nearest neighbour search can be further optimised using standard libraries such as Faiss BIBREF19 or approximate nearest neighbour retrieval BIBREF20, giving an efficient search that scales to billions of candidate responses.", "The system offers both voice and text input and output. Speech-to-text and text-to-speech conversion in the PolyResponse system is currently supported by the off-the-shelf Google Cloud tools." ], [ "The ranking model lends itself to the one-shot task of finding the most relevant responses in a given context. However, a restaurant-browsing system needs to support a dialogue flow where the user finds a restaurant, and then asks questions about it. The dialogue state for each search scenario is represented as the set of restaurants that are considered relevant. This starts off as all the restaurants in the given city, and is assumed to monotonically decrease in size as the conversation progresses until the user converges to a single restaurant. A restaurant is only considered valid in the context of a new user input if it has relevant responses corresponding to it. This flow is summarised here:", "", "S1. Initialise $R$ as the set of all restaurants in the city. Given the user's input, rank all the responses in the response pool pertaining to restaurants in $R$.", "", "S2. Retrieve the top $N$ responses $r_1, r_2, \\ldots , r_N$ with corresponding (sorted) cosine similarity scores: $s_1 \\ge s_2 \\ge \\ldots \\ge s_N$.", "", "S3. Compute probability scores $p_i \\propto \\exp (a \\cdot s_i)$ with $\\sum _{i=1}^N p_i$, where $a>0$ is a tunable constant.", "", "S4. Compute a score $q_e$ for each restaurant/entity $e \\in R$, $q_e = \\sum _{i: r_i \\in e} p_i$.", "", "S5. Update $R$ to the smallest set of restaurants with highest $q$ whose $q$-values sum up to more than a predefined threshold $t$.", "", "S6. Display the most relevant responses associated with the updated $R$, and return to S2.", "If there are multiple relevant restaurants, one response is shown from each. When only one restaurant is relevant, the top $N$ responses are all shown, and relevant photos are also displayed. The system does not require dedicated understanding, decision-making, and generation modules, and this dialogue flow does not rely on explicit task-tailored semantics. 
The set of relevant restaurants is kept internally while the system narrows it down across multiple dialogue turns. A simple set of predefined rules is used to provide a templatic spoken system response: e.g., an example rule is “One review of $e$ said $r$”, where $e$ refers to the restaurant, and $r$ to a relevant response associated with $e$. Note that while the demo is currently focused on the restaurant search task, the described “narrowing down” dialogue flow is generic and applicable to a variety of applications dealing with similar entity search.", "The system can use a set of intent classifiers to allow resetting the dialogue state, or to activate the separate restaurant booking dialogue flow. These classifiers are briefly discussed in §SECREF4." ], [ "The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade). Selected snapshots are shown in Figure FIGREF16, while we also provide videos demonstrating the use and behaviour of the systems at: https://tinyurl.com/y3evkcfz. A simple MT-based translate-to-source approach at inference time is currently used to enable the deployment of the system in other languages: 1) the pool of responses in each language is translated to English by Google Translate beforehand, and pre-computed encodings of their English translations are used as representations of each foreign language response; 2) a provided user utterance (i.e., context) is translated to English on-the-fly and its encoding $h_c$ is then learned. We plan to experiment with more sophisticated multilingual models in future work." ], [ "An additional functionality enables the user to get parts of the restaurant menu relevant to the current user utterance as responses. This is achieved by performing an additional ranking step of available menu items and retrieving the ones that are semantically relevant to the user utterance using exactly the same methodology as with ranking other responses. An example of this functionality is shown in Figure FIGREF21." ], [ "The restaurant search system needs to support the discrete actions of restarting the conversation (i.e., resetting the set $R$), and should enable transferring to the slot-based table booking flow. This is achieved using two binary intent classifiers, that are run at each step in the dialogue. These classifiers make use of the already-computed $h_c$ vector that represents the user's latest text. A single-layer neural net is learned on top of the 512-dimensional encoding, with a ReLU activation and 100 hidden nodes. To train the classifiers, sets of 20 relevant paraphrases (e.g., “Start again”) are provided as positive examples. Finally, when the system successfully switches to the booking scenario, it proceeds to the slot filling task: it aims to extract all the relevant booking information from the user (e.g., date, time, number of people to dine). The entire flow of the system illustrating both the search phase and the booking phase is provided as the supplemental video material." ], [ "This paper has presented a general approach to search-based dialogue that does not rely on explicit semantic representations such as dialogue acts or slot-value ontologies, and allows for multi-modal responses. 
In future work, we will extend the current demo system to more tasks and languages, and work with more sophisticated encoders and ranking functions. Besides the initial dialogue flow from this work (§SECREF3), we will also work with more complex flows dealing, e.g., with user intent shifts." ] ], "section_name": [ "Introduction and Background", "PolyResponse: Conversational Search", "PolyResponse: Conversational Search ::: Text Representation.", "PolyResponse: Conversational Search ::: Photo Representation.", "PolyResponse: Conversational Search ::: Data Source 1: Reddit.", "PolyResponse: Conversational Search ::: Data Source 2: Yelp.", "PolyResponse: Conversational Search ::: Index of Responses.", "PolyResponse: Conversational Search ::: PolyResponse in a Nutshell.", "Dialogue Flow", "Other Functionality ::: Multilinguality.", "Other Functionality ::: Voice-Controlled Menu Search.", "Other Functionality ::: Resetting and Switching to Booking.", "Conclusion and Future Work" ] }
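For the binary intent classifiers described in the Resetting and Switching section above, a minimal Keras sketch is shown below; the paper specifies only the single 100-node ReLU layer over the 512-dimensional h_c, so the sigmoid output, the binary cross-entropy loss, and the toy training arrays are assumptions made for illustration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Binary intent classifier over the frozen 512-dimensional context encoding h_c.
intent_clf = keras.Sequential([
    keras.Input(shape=(512,)),              # pre-computed h_c vector
    layers.Dense(100, activation="relu"),   # single hidden layer, 100 nodes
    layers.Dense(1, activation="sigmoid"),  # P(intent | h_c)
])
intent_clf.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])

# Toy training data standing in for the encodings of ~20 positive paraphrases
# (e.g., "Start again") plus randomly sampled user turns as negatives.
X = np.random.randn(200, 512).astype("float32")
y = np.random.randint(0, 2, size=(200, 1))
intent_clf.fit(X, y, epochs=5, batch_size=32, verbose=0)
```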
{ "answers": [ { "annotation_id": [ "a5d51117db44e1edb288497f519a3ab450c04f98", "d170ebb3e7be6b44abb3ffb7c3b17f59c8f84e3d" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "This paper has presented a general approach to search-based dialogue that does not rely on explicit semantic representations such as dialogue acts or slot-value ontologies, and allows for multi-modal responses. In future work, we will extend the current demo system to more tasks and languages, and work with more sophisticated encoders and ranking functions. Besides the initial dialogue flow from this work (§SECREF3), we will also work with more complex flows dealing, e.g., with user intent shifts." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "This paper has presented a general approach to search-based dialogue that does not rely on explicit semantic representations such as dialogue acts or slot-value ontologies, and allows for multi-modal responses. In future work, we will extend the current demo system to more tasks and languages, and work with more sophisticated encoders and ranking functions. Besides the initial dialogue flow from this work (§SECREF3), we will also work with more complex flows dealing, e.g., with user intent shifts." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "54848aa0a2b910dc068c9dfab767602365bc66d8" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "a1cc45250d4b7edefcfbe4e5b418aa801ea339c2" ], "answer": [ { "evidence": [ "PolyResponse: Conversational Search ::: Text Representation.", "The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. All input text is first lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 10M subset of the Reddit training set (see Figure FIGREF2) plus the 200K most frequent bigrams in the same random subset.", "PolyResponse: Conversational Search ::: Photo Representation.", "Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \\times 224$ pixels as in BIBREF18. This provides a $1,280 \\times 1.4 = 1,792$-dimensional representation of a photo, which is then passed through a single hidden layer of dimensionality $1,024$ with ReLU activation, before being passed to a hidden layer of dimensionality 512 with no activation to provide the final representation $h_p$." ], "extractive_spans": [ "Henderson:2017", "MobileNet model" ], "free_form_answer": "", "highlighted_evidence": [ "Text Representation.\nThe model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. 
First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams.", "Photo Representation.\nPhotos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \\times 224$ pixels as in BIBREF18." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3a31c824fdad1f2ab881a5d849fa944d71f59ba1", "d8dd14cdd251f30688e9017649f3c69a32aef358" ], "answer": [ { "evidence": [ "The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade). Selected snapshots are shown in Figure FIGREF16, while we also provide videos demonstrating the use and behaviour of the systems at: https://tinyurl.com/y3evkcfz. A simple MT-based translate-to-source approach at inference time is currently used to enable the deployment of the system in other languages: 1) the pool of responses in each language is translated to English by Google Translate beforehand, and pre-computed encodings of their English translations are used as representations of each foreign language response; 2) a provided user utterance (i.e., context) is translated to English on-the-fly and its encoding $h_c$ is then learned. We plan to experiment with more sophisticated multilingual models in future work." ], "extractive_spans": [], "free_form_answer": "English, German, Spanish, Mandarin, Polish, Russian, Korean and Serbian", "highlighted_evidence": [ "The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade). Selected snapshots are shown in Figure FIGREF16, while we also provide videos demonstrating the use and behaviour of the systems at: https://tinyurl.com/y3evkcfz. A simple MT-based translate-to-source approach at inference time is currently used to enable the deployment of the system in other languages: 1) the pool of responses in each language is translated to English by Google Translate beforehand, and pre-computed encodings of their English translations are used as representations of each foreign language response; 2) a provided user utterance (i.e., context) is translated to English on-the-fly and its encoding $h_c$ is then learned. We plan to experiment with more sophisticated multilingual models in future work." ], "extractive_spans": [ "English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade)" ], "free_form_answer": "", "highlighted_evidence": [ "The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade)." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Was PolyReponse evaluated against some baseline?", "What metric is used to evaluate PolyReponse system?", "How does PolyResponse architecture look like?", "In what 8 languages is PolyResponse engine used for restourant search and booking system?" ], "question_id": [ "ad08b215dca538930ef1f50b4e49cd25527028ad", "31101dc9937f108e27e08a5f34be44f0090b8b6b", "e4a315e9c190cf96493eefe04ce4ba6ae6894550", "6263b2cba18207474786b303852d2f0d7068d4b6" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: The PolyResponse ranking model: it encodes conversational contexts, replies, and photos to respective vectors hc, hr, and hp.", "Figure 2: Snapshots of the PolyResponse demo system for restaurant search in four different languages. Restaurant names are anonymised. Translations of non-English sentences are provided in parentheses; they are not part of the system output. The output also comprises relevant photos associated with the current restaurant.", "Figure 3: An example showing how the system can retrieve parts of the menu as responses to the current user utterance (if they are relevant to the utterance)." ], "file": [ "2-Figure1-1.png", "5-Figure2-1.png", "5-Figure3-1.png" ] }
[ "In what 8 languages is PolyResponse engine used for restourant search and booking system?" ]
[ [ "1909.01296-Other Functionality ::: Multilinguality.-0" ] ]
[ "English, German, Spanish, Mandarin, Polish, Russian, Korean and Serbian" ]
256
1801.02073
Analysis of Wikipedia-based Corpora for Question Answering
This paper gives comprehensive analyses of corpora based on Wikipedia for several tasks in question answering. Four recent corpora are collected, WikiQA, SelQA, SQuAD, and InfoboxQA, and first analyzed intrinsically by contextual similarities, question types, and answer categories. These corpora are then analyzed extrinsically by three question answering tasks: answer retrieval, selection, and triggering. An indexing-based method for the creation of a silver-standard dataset for answer retrieval using the entire Wikipedia is also presented. Our analysis shows the uniqueness of these corpora and suggests a better use of them for statistical question answering learning.
{ "paragraphs": [ [ "Question answering (QA) has been a blooming research field for the last decade. Selection-based QA implies a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, answer extraction BIBREF0 , BIBREF1 finds answer phrases whereas answer selection BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and answer triggering BIBREF6 , BIBREF7 find answer sentences instead, although the presence of the answer context is not assumed within the provided document for answer triggering but it is for the other two tasks. Recently, various QA tasks that are not selection-based have been proposed BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ; however, selection-based QA remains still important because of its practical value to real applications (e.g., IBM Watson, MIT Start).", "Several datasets have been released for selection-based QA. wang:07a created the QASent dataset consisting of 277 questions, which has been widely used for benchmarking the answer selection task. feng:15a presented InsuranceQA comprising 16K+ questions on insurance contexts. yang:15a introduced WikiQA for answer selection and triggering. jurczyk:16 created SelQA for large real-scale answer triggering. rajpurkar2016squad presented SQuAD for answer extraction and selection as well as for reading comprehension. Finally, morales-EtAl:2016:EMNLP2016 provided InfoboxQA for answer selection.", "These corpora make it possible to evaluate the robustness of statistical question answering learning. Although all of these corpora target on selection-based QA, they are designed for different purposes such that it is important to understand the nature of these corpora so a better use of them can be made. In this paper, we make both intrinsic and extrinsic analyses of four latest corpora based on Wikipedia, WikiQA, SelQA, SQuAD, and InfoboxQA. We first give a thorough intrinsic analysis regarding contextual similarities, question types, and answer categories (Section SECREF2 ). We then map questions in all corpora to the current version of English Wikipedia and benchmark another selection-based QA task, answer retrieval (Section SECREF3 ). Finally, we present an extrinsic analysis through a set of experiments cross-testing these corpora using a convolutional neural network architecture (Section SECREF4 )." ], [ "Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so more comparable than the others, and have already been used for the evaluation of several QA systems.", "WikiQA BIBREF6 comprises questions selected from the Bing search queries, where user click data give the questions and their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails for some occasions, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection.", "SelQA BIBREF7 is a product of five annotation tasks through crowdsourcing. It consists of about 8K questions where a half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers. 
Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so can be used for the evaluation of answer triggering.", "SQuAD BIBREF12 presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as the pointers to the sentences including the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available although system outputs can be evaluated by their provided script.", "InfoboxQA BIBREF13 gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the gravity of infoboxes, which summary arguably the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection." ], [ "All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. On the contrary, SQuAD's average candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than those two, due to the repetitive nature of infoboxes.", "All corpora show similar average answer candidate lengths ( INLINEFORM0 ), except for InfoboxQA where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths ( INLINEFORM1 ) because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering their questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlaps over questions and answers ( INLINEFORM2 and INLINEFORM3 in Table TABREF2 ), respectively. In terms of the F1-score for overlapping words ( INLINEFORM4 ), SQuAD gives the least portion of overlaps between question-answer pairs although WikiQA comes very close.", "Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on.", "Fig. FIGREF6 shows the distributions of answer categories automatically classified by our Convolutional Neural Network model trained on the data distributed by li:02a. 
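A small sketch of the deterministic, lexicon-based question-type grouping mentioned above; the bucket list is an assumed approximation, as the exact seven groups are the ones plotted in Fig. FIGREF4.

```python
def question_type(question):
    """Bucket a question by its interrogative lexicon.

    The bucket list below is an assumed approximation; the actual seven
    groups are the ones reported in Fig. FIGREF4.
    """
    buckets = ["what", "how", "who", "when", "where", "which", "why"]
    tokens = question.lower().split()
    for b in buckets:
        if b in tokens:
            return b
    return "other"

print(question_type("Who created the WikiQA dataset?"))  # -> "who"
```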
Interestingly, each corpus focuses on different categories, Numeric for WikiQA and SelQA, Entity for SQuAD, and Person for InfoboxQA, which gives enough diversity for statistical learning to build robust models." ], [ "This section describes another selection-based QA task, called answer retrieval, that finds the answer context from a larger dataset, the entire Wikipedia. SQuAD provides no mapping of the answer contexts to Wikipedia, whereas WikiQA and SelQA provide mappings; however, their data do not come from the same version of Wikipedia. We propose an automatic way of mapping the answer contexts from all corpora to the same version of Wikipedia so they can be coherently used for answer retrieval.", "Each paragraph in Wikipedia is first indexed by Lucene using {1,2,3}-grams, where the paragraphs are separated by WikiExtractor and segmented by NLP4J (28.7M+ paragraphs are indexed). Each answer sentence from the corpora in Table TABREF3 is then queried to Lucene, and the top-5 ranked paragraphs are retrieved. The cosine similarity between each sentence in these paragraphs and the answer sentence is measured for INLINEFORM0 -grams, say INLINEFORM1 . A weight is assigned to each INLINEFORM2 -gram score, say INLINEFORM3 , and the weighted sum is measured: INLINEFORM4 . The fixed weights of INLINEFORM5 are used for our experiments, which can be improved.", "If there exists a sentence whose INLINEFORM0 , the paragraph consisting of that sentence is considered the silver-standard answer passage. Table TABREF3 shows how robust these silver-standard passages are based on human judgement ( INLINEFORM1 ) and how many passages are collected ( INLINEFORM2 ) for INLINEFORM3 , where the human judgement is performed on 50 random samples for each case. For answer retrieval, a dataset is created by INLINEFORM4 , which gives INLINEFORM5 accuracy and INLINEFORM6 coverage, respectively. Finally, each question is queried to Lucene and the top- INLINEFORM7 paragraphs are retrieved from the entire Wikipedia. If the answer sentence exists within those retrieved paragraphs according to the silver-standard, it is considered correct.", "Finding a paragraph that includes the answer context out of the entire Wikipedia is an extremely difficult task (1 out of 28.7M+ paragraphs). The last row of Table TABREF3 shows results from answer retrieval. Given INLINEFORM0 , SelQA and SQuAD show about 34% and 35% accuracy, which are reasonable. However, WikiQA shows a significantly lower accuracy of 12.47%; this is because the questions in WikiQA are about half as long as the questions in the other corpora, such that not enough lexicons can be extracted from these questions for the Lucene search." ], [ "Answer selection is evaluated by two metrics, mean average precision (MAP) and mean reciprocal rank (MRR). The bigram CNN introduced by yu:14a is used to generate all the results in Table TABREF11 , where models are trained on either single or combined datasets. Clearly, the questions in WikiQA are the most challenging, and adding more training data from the other corpora hurts accuracy due to the uniqueness of query-based questions in this corpus. The best model is achieved by training on W+S+Q for SelQA; adding InfoboxQA hurts accuracy for SelQA although it gives a marginal gain for SQuAD. Just like WikiQA, InfoboxQA performs the best when it is trained on only itself.
From our analysis, we suggest using models trained on WikiQA and InfoboxQA for short query-like questions, and models trained on SelQA and SQuAD for long natural questions." ], [ "The results of INLINEFORM0 from the answer retrieval task in Section SECREF13 are used to create the datasets for answer triggering, where about 65% of the questions are not expected to find their answer contexts from the provided paragraphs for SelQA and SQuAD, and 87.5% are not expected for WikiQA. Answer triggering is evaluated by the F1 scores as presented in Table TABREF11 , where the three corpora are cross-validated. The results on WikiQA are quite low, as expected from the poor accuracy on the answer retrieval task. Training on SelQA gives the best models for both WikiQA and SelQA. Training on SQuAD gives the best model for SQuAD although the model trained on SelQA is comparable. Since the answer triggering datasets are about 5 times larger than the answer selection datasets, it is computationally too expensive to combine all data for training. We plan to find a strong machine to perform this experiment in the near future." ], [ "Lately, several deep learning approaches have been proposed for question answering. yu:14a presented a CNN model that recognizes the semantic similarity between two sentences. wang-nyberg:2015:ACL-IJCNLP presented a stacked bidirectional LSTM approach that reads words in sequence and then outputs their similarity scores. feng:15a applied a general deep learning framework to non-factoid question answering. santos:16a introduced an attentive pooling mechanism that led to further improvements in selection-based QA." ], [ "We present a comprehensive comparison study of the existing corpora for selection-based question answering. Our intrinsic analysis provides a better understanding of the uniqueness or similarity between these corpora. Our extrinsic analysis shows the strength or weakness of combining these corpora for statistical learning. Additionally, we create a silver-standard dataset for answer retrieval and triggering, which will be publicly available. In the future, we will explore different ways of improving the quality of our silver-standard datasets by fine-tuning the hyper-parameters." ] ], "section_name": [ "Introduction", "Intrinsic Analysis", "Analysis", "Answer Retrieval", "Answer Selection", "Answer Triggering", "Related work", "Conclusion" ] }
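The silver-standard construction described in the Answer Retrieval section can be sketched as follows; the actual {1,2,3}-gram weights and the acceptance threshold sit behind the INLINEFORM placeholders above, so the defaults below are stand-ins, as are the function names.

```python
from collections import Counter
from math import sqrt

def ngram_cosine(a_tokens, b_tokens, n):
    """Cosine similarity between two token lists over their n-gram counts."""
    grams = lambda toks: Counter(tuple(toks[i:i + n])
                                 for i in range(len(toks) - n + 1))
    ga, gb = grams(a_tokens), grams(b_tokens)
    dot = sum(ga[g] * gb[g] for g in ga)
    norm = sqrt(sum(v * v for v in ga.values())) * \
           sqrt(sum(v * v for v in gb.values()))
    return dot / norm if norm else 0.0

def weighted_similarity(sentence, answer, weights):
    """Weighted sum over {1,2,3}-gram cosine scores: s = sum_n w_n * s_n."""
    a, b = sentence.lower().split(), answer.lower().split()
    return sum(w * ngram_cosine(a, b, n)
               for n, w in enumerate(weights, start=1))

def find_silver_passage(paragraphs, answer, weights=(0.2, 0.3, 0.5),
                        threshold=0.7):
    """Return the first retrieved paragraph containing a sentence whose
    weighted score reaches the threshold, else None. Weights and threshold
    are placeholders, not the paper's settings; paragraphs are the top-5
    Lucene hits, each given as a list of sentences."""
    for paragraph in paragraphs:
        for sentence in paragraph:
            if weighted_similarity(sentence, answer, weights) >= threshold:
                return paragraph
    return None
```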
{ "answers": [ { "annotation_id": [ "cdf5e28e55601053a7907e503bf3b7d130626fcf" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "f5d16c5224a95f60d086edba603b9157e4bce349", "ff7252a6d375d16d0464a0e5e12b37b6c1c41cb5" ], "answer": [ { "evidence": [ "If there exists a sentence whose INLINEFORM0 , the paragraph consisting of that sentence is considered the silver-standard answer passage. Table TABREF3 shows how robust these silver-standard passages are based on human judgement ( INLINEFORM1 ) and how many passages are collected ( INLINEFORM2 ) for INLINEFORM3 , where the human judgement is performed on 50 random samples for each case. For answer retrieval, a dataset is created by INLINEFORM4 , which gives INLINEFORM5 accuracy and INLINEFORM6 coverage, respectively. Finally, each question is queried to Lucene and the top- INLINEFORM7 paragraphs are retrieved from the entire Wikipedia. If the answer sentence exists within those retrieved paragraphs according to the silver-standard, it is considered correct." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For answer retrieval, a dataset is created by INLINEFORM4 , which gives INLINEFORM5 accuracy and INLINEFORM6 coverage, respectively." ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "3c06ce195a2b687e328714004f32e78d1ec7a659", "501c31969d9e9222f8bd7f7cb752565ad5d1ab68" ], "answer": [ { "evidence": [ "Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on." ], "extractive_spans": [ "seven " ], "free_form_answer": "", "highlighted_evidence": [ "Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on." ], "extractive_spans": [], "free_form_answer": "7", "highlighted_evidence": [ "Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "490378da70f30c9219706100d311ead3e9ceee83" ], "answer": [ { "evidence": [ "All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. 
On the contrary, SQuAD's average candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than those two, due to the repetitive nature of infoboxes.", "All corpora show similar average answer candidate lengths ( INLINEFORM0 ), except for InfoboxQA where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths ( INLINEFORM1 ) because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering their questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlaps over questions and answers ( INLINEFORM2 and INLINEFORM3 in Table TABREF2 ), respectively. In terms of the F1-score for overlapping words ( INLINEFORM4 ), SQuAD gives the least portion of overlaps between question-answer pairs although WikiQA comes very close." ], "extractive_spans": [], "free_form_answer": "They compare the tasks that the datasets are suitable for, average number of answer candidates per question, number of token types, average answer candidate lengths, average question lengths, question-answer word overlap.", "highlighted_evidence": [ "All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. On the contrary, SQuAD's average candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than those two, due to the repetitive nature of infoboxes.\n\nAll corpora show similar average answer candidate lengths ( INLINEFORM0 ), except for InfoboxQA where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths ( INLINEFORM1 ) because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering their questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlaps over questions and answers ( INLINEFORM2 and INLINEFORM3 in Table TABREF2 ), respectively. In terms of the F1-score for overlapping words ( INLINEFORM4 ), SQuAD gives the least portion of overlaps between question-answer pairs although WikiQA comes very close." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Can their indexing-based method be applied to create other QA datasets in other domains, and not just Wikipedia?", "Do they employ their indexing-based method to create a sample of a QA Wikipedia dataset?", "How many question types do they find in the datasets analyzed?", "How do they analyze contextual similaries across datasets?" ], "question_id": [ "c34e80fbbfda0f1786d3b00e06cef5ada78a3f3c", "a9337636b52de375c852682a2561af2c1db5ec63", "45a5961a4e1d1c22874c4918e5c98bd3c0a670b3", "30e21f5bc1d2f80f422c56d62abca9cd3f2cd4a1" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Comparisons between the four corpora for answer selection. Note that both WIKIQA and SELQA provide separate annotation for answer triggering, which is not shown in this table. The SQUAD column shows statistics excluding the evaluation set, which is not publicly available. AE/AS/AT: annotation for answer extraction/selection/triggering, q/c: # of questions/answer candidates, w/t: # of tokens/token types, µq/c: average length of questions/answer candidates, Ωq/a: macro average in % of overlapping words between question-answer pairs normalized by the questions/answers lengths, Ωf : (2·Ωq·Ωa)/(Ωq+Ωa).", "Table 2: Statistics of the silver-standard dataset (first three rows) and the accuracies of answer retrieval in % (last row). ρ: robustness of the silver-standard in %, γc/p: #/% of retrieved silver-standard passages (coverage).", "Figure 1: Distributions of question types in %.", "Figure 2: Distributions of answer categories in %.", "Table 3: Results for answer selection and triggering in % trained and evaluated across all corpora splits. The first column shows the training source, and the other columns show the evaluation sources. W: WIKIQA, S: SELQA, Q: SQUAD, I: INFOBOXQA." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Figure1-1.png", "3-Figure2-1.png", "4-Table3-1.png" ] }
[ "How many question types do they find in the datasets analyzed?", "How do they analyze contextual similaries across datasets?" ]
[ [ "1801.02073-Analysis-2" ], [ "1801.02073-Analysis-1", "1801.02073-Analysis-0" ] ]
[ "7", "They compare the tasks that the datasets are suitable for, average number of answer candidates per question, number of token types, average answer candidate lengths, average question lengths, question-answer word overlap." ]
258
1801.06482
Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms
Harassment by cyberbullies is a significant phenomenon on social media. Existing works for cyberbullying detection have at least one of the following three bottlenecks. First, they target only one particular social media platform (SMP). Second, they address just one topic of cyberbullying. Third, they rely on carefully handcrafted features of the data. We show that deep learning based models can overcome all three bottlenecks. Knowledge learned by these models on one dataset can be transferred to other datasets. We performed extensive experiments using three real-world datasets: Formspring (12k posts), Twitter (16k posts), and Wikipedia (100k posts). Our experiments provide several useful insights about cyberbullying detection. To the best of our knowledge, this is the first work that systematically analyzes cyberbullying detection on various topics across multiple SMPs using deep learning based models and transfer learning.
{ "paragraphs": [ [ "Cyberbullying has been defined by the National Crime Prevention Council as the use of the Internet, cell phones or other devices to send or post text or images intended to hurt or embarrass another person. Various studies have estimated that between to 10% to 40% of internet users are victims of cyberbullying BIBREF0 . Effects of cyberbullying can range from temporary anxiety to suicide BIBREF1 . Many high profile incidents have emphasized the prevalence of cyberbullying on social media. Most recently in October 2017, a Swedish model Arvida Byström was cyberbullied to the extent of receiving rape threats after she appeared in an advertisement with hairy legs.", "Detection of cyberbullying in social media is a challenging task. Definition of what constitutes cyberbullying is quite subjective. For example, frequent use of swear words might be considered as bullying by the general population. However, for teen oriented social media platforms such as Formspring, this does not necessarily mean bullying (Table TABREF9 ). Across multiple SMPs, cyberbullies attack victims on different topics such as race, religion, and gender. Depending on the topic of cyberbullying, vocabulary and perceived meaning of words vary significantly across SMPs. For example, in our experiments we found that for word `fat', the most similar words as per Twitter dataset are `female' and `woman' (Table TABREF23 ). However, other two datasets do not show such particular bias against women. This platform specific semantic similarity between words is a key aspect of cyberbullying detection across SMPs. Style of communication varies significantly across SMPs. For example, Twitter posts are short and lack anonymity. Whereas posts on Q&A oriented SMPs are long and have option of anonymity (Table TABREF7 ). Fast evolving words and hashtags in social media make it difficult to detect cyberbullying using swear word list based simple filtering approaches. The option of anonymity in certain social networks also makes it harder to identify cyberbullying as profile and history of the bully might not be available.", "Past works on cyberbullying detection have at least one of the following three bottlenecks. First (Bottleneck B1), they target only one particular social media platform. How these methods perform across other SMPs is unknown. Second (Bottleneck B2), they address only one topic of cyberbullying such as racism, and sexism. Depending on the topic, vocabulary and nature of cyberbullying changes. These models are not flexible in accommodating changes in the definition of cyberbullying. Third (Bottleneck B3), they rely on carefully handcrafted features such as swear word list and POS tagging. However, these handcrafted features are not robust against variations in writing style. In contrast to existing bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without doing any explicit feature engineering by developing deep learning based models along with transfer learning.", "We experimented with diverse traditional machine learning models (logistic regression, support vector machine, random forest, naive Bayes) and deep neural network models (CNN, LSTM, BLSTM, BLSTM with Attention) using variety of representation methods for words (bag of character n-gram, bag of word unigram, GloVe embeddings, SSWE embeddings). 
Summary of our findings and research contributions is as follows." ], [ "Please refer to Table TABREF7 for summary of datasets used. We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. Twitter dataset contains examples of racism and sexism. Wikipedia dataset contains examples of personal attack. However, Formspring dataset is not specifically about any single topic. All three datasets have the problem of class imbalance where posts labeled as cyberbullying are in the minority as compared to neutral posts. Variation in the number of posts across datasets also affects vocabulary size that represents the number of distinct words encountered in the dataset. We measure the size of a post in terms of the number of words in the post. For each dataset, there are only a few posts with large size. We truncate such large posts to the size of post ranked at 95 percentile in that dataset. For example, in Wikipedia dataset, the largest post has 2846 words. However, size of post ranked at 95 percentile in that dataset is only 231. Any post larger than size 231 in Wikipedia dataset will be truncated by considering only first 231 words. This truncation affects only a small minority of posts in each dataset. However, it is required for efficiently training various models in our experiments. Details of each dataset are as follows.", "Formspring BIBREF2 : It was a question and answer based website where users could openly invite others to ask and answer questions. The dataset includes 12K annotated question and answer pairs. Each post is manually labeled by three workers. Among these pairs, 825 were labeled as containing cyberbullying content by at least two Amazon Mechanical turk workers.", "Twitter BIBREF3 : This dataset includes 16K annotated tweets. The authors bootstrapped the corpus collection, by performing an initial manual search of common slurs and terms used pertaining to religious, sexual, gender, and ethnic minorities. Of the 16K tweets, 3117 are labeled as sexist, 1937 as racist, and the remaining are marked as neither sexist nor racist.", "Wikipedia BIBREF4 : For each page in Wikipedia, a corresponding talk page maintains the history of discussion among users who participated in its editing. This data set includes over 100k labeled discussion comments from English Wikipedia's talk pages. Each comment was labeled by 10 annotators via Crowdflower on whether it contains a personal attack. There are total 13590 comments labeled as personal attack." ], [ "Please refer to Table TABREF9 . We use the following short forms in this section: B=Bullying, S=Swearing, A=Anonymous. Some of the values for Twitter dataset are undefined as Twitter does not allow anonymous postings. Use of swear words has been repeatedly linked to cyberbullying. However, preliminary analysis of datasets reveals that depending on swear word usage can neither lead to high precision nor high recall for cyberbullying detection. Swear word list based methods will have low precision as P(B INLINEFORM0 S) is not close to 1. In fact, for teen oriented social network Formspring, 78% of the swearing posts are non-bullying. 
Swear words based filtering will be irritating to the users in such SMPs where swear words are used casually. Swear word list based methods will also have a low recall as P(S INLINEFORM1 B) is not close to 1. For Twitter dataset, 82% of bullying posts do not use any swear words. Such passive-aggressive cyberbullying will go undetected with swear word list based methods. Anonymity is another clue that is used for detecting cyberbullying as bully might prefer to hide its identity. Anonymity definitely leads to increased use of swear words (P(S INLINEFORM2 A) INLINEFORM3 P(S)) and cyberbullying (P(B INLINEFORM4 A) INLINEFORM5 P(B), and P(B INLINEFORM6 A&S)) INLINEFORM7 P(B)). However, significant fraction of anonymous posts are non-bullying (P(B INLINEFORM8 A) not close to 1) and many of bullying posts are not anonymous (P(A INLINEFORM9 B) not close to 1). Further, anonymity might not be allowed by many SMPs such as Twitter." ], [ "Cyberbullying is recognized as a phenomenon at least since 2003 BIBREF5 . Use of social media exploded with launching of multiple platforms such as Wikipedia (2001), MySpace (2003), Orkut (2004), Facebook (2004), and Twitter (2005). By 2006, researchers had pointed that cyberbullying was as serious phenomenon as offline bullying BIBREF6 . However, automatic detection of cyberbullying was addressed only since 2009 BIBREF7 . As a research topic, cyberbullying detection is a text classification problem. Most of the existing works fit in the following template: get training dataset from single SMP, engineer variety of features with certain style of cyberbullying as the target, apply a few traditional machine learning methods, and evaluate success in terms of measures such as F1 score and accuracy. These works heavily rely on handcrafted features such as use of swear words. These methods tend to have low precision for cyberbullying detection as handcrafted features are not robust against variations in bullying style across SMPs and bullying topics. Only recently, deep learning has been applied for cyberbullying detection BIBREF8 . Table TABREF27 summarizes important related work." ], [ "We experimented with four DNN based models for cyberbullying detection: CNN, LSTM, BLSTM, and BLSTM with attention. These models are listed in the increasing complexity of their neural architecture and amount of information used by these models. Please refer to Figure 1 for general architecture that we have used across four models. Various models differ only in the Neural Architecture layer while having identical rest of the layers. CNNs are providing state-of-the-results on extracting contextual feature for classification tasks in images, videos, audios, and text. Recently, CNNs were used for sentiment classification BIBREF9 . Long Short Term Memory networks are a special kind of RNN, capable of learning long-term dependencies. Their ability to use their internal memory to process arbitrary sequences of inputs has been found to be effective for text classification BIBREF10 . Bidirectional LSTMs BIBREF11 further increase the amount of input information available to the network by encoding information in both forward and backward direction. By using two directions, input information from both the past and future of the current time frame can be used. Attention mechanisms allow for a more direct dependence between the state of the model at different points in time. 
Importantly, attention mechanism lets the model learn what to attend to based on the input sentence and what it has produced so far.", "The embedding layer processes a fixed size sequence of words. Each word is represented as a real-valued vector, also known as word embeddings. We have experimented with three methods for initializing word embeddings: random, GloVe BIBREF12 , and SSWE BIBREF13 . During the training, model improves upon the initial word embeddings to learn task specific word embeddings. We have observed that these task specific word embeddings capture the SMP specific and topic specific style of cyberbullying. Using GloVe vectors over random vector initialization has been reported to improve performance for some NLP tasks. Most of the word embedding methods such as GloVe, consider only syntactic context of the word while ignoring the sentiment conveyed by the text. SSWE method overcomes this problem by incorporating the text sentiment as one of the parameters for word embedding generation. We experimented with various dimension size for word embeddings. Experimental results reported here are with dimension size as 50. There was no significant variation in results with dimension size ranging from 30 to 200.", "To avoid overfitting, we used two dropout layers, one before the neural architecture layer and one after, with dropout rates of 0.25 and 0.5 respectively. Fully connected layer is a dense output layer with the number of neurons equal to the number of classes, followed by softmax layer that provides softmax activation. All our models are trained using backpropagation. The optimizer used for training is Adam and the loss function is categorical cross-entropy. Besides learning the network weights, these methods also learn task-specific word embeddings tuned towards the bullying labels (See Section SECREF21 ). Our code is available at: https://github.com/sweta20/Detecting-Cyberbullying-Across-SMPs." ], [ "Existing works have heavily relied on traditional machine learning models for cyberbullying detection. However, they do not study the performance of these models across multiple SMPs. We experimented with four models: logistic regression (LR), support vector machine (SVM), random forest (RF), and naive Bayes (NB), as these are used in previous works (Table TABREF27 ). We used two data representation methods: character n-gram and word unigram. Past work in the domain of detecting abusive language have showed that simple n-gram features are more powerful than linguistic and syntactic features, hand-engineered lexicons, and word and paragraph embeddings BIBREF14 . As compared to DNN models, performance of all four traditional machine learning models was significantly lower. Please refer to Table TABREF11 .", "All DNN models reported here were implemented using Keras. We pre-process the data, subjecting it to standard operations of removal of stop words, punctuation marks and lowercasing, before annotating it to assigning respective labels to each comment. For each trained model, we report its performance after doing five-fold cross-validation. We use following short forms." ], [ "The training datasets had a major problem of class imbalance with posts marked as bullying in the minority. As a result, all models were biased towards labeling the posts as non-bullying. To remove this bias, we oversampled the data from bullying class thrice. That is, we replicated bullying posts thrice in the training data. 
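Before looking at the effect of this oversampling, here is a minimal sketch of the replication step; whether the original bullying post counts toward the three copies is an assumption in this sketch.

```python
def oversample(posts, labels, bully_label=1, rate=3):
    """Replicate minority-class (bullying) posts in the training split.

    rate=3 mirrors the replication rate the paper settles on; whether the
    original post counts toward the three copies is an assumption here.
    """
    out_posts, out_labels = [], []
    for post, label in zip(posts, labels):
        copies = rate if label == bully_label else 1
        out_posts.extend([post] * copies)
        out_labels.extend([label] * copies)
    return out_posts, out_labels
```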
This significantly improved the performance of all DNN models with major leap in all three evaluation measures. Table TABREF17 shows the effect of oversampling for a variety of word embedding methods with BLSTM Attention as the detection model. Results for other models are similar BIBREF15 . We can notice that oversampled datasets (F+, T+, W+) have far better performance than their counterparts (F, T, W respectively). Oversampling particularly helps the smallest dataset Formspring where number of training instances for bullying class is quite small (825) as compared to other two datasets (about 5K and 13K). We also experimented with varying the replication rate for bullying posts BIBREF15 . However, we observed that for bullying posts, replication rate of three is good enough." ], [ "Initial word embeddings decide data representation for DNN models. However during the training, DNN models modify these initial word embeddings to learn task specific word embeddings. We have experimented with three methods to initialize word embeddings. Please refer to Table TABREF19 . This table shows the effect of varying initial word embeddings for multiple DNN models across datasets. We can notice that initial word embeddings do not have a significant effect on cyberbullying detection when oversampling of bullying posts is done (rows corresponding to F+, T+, W+). In the absence of oversampling (rows corresponding to F, T W), there is a gap in performance of simplest (CNN) and most complex (BLSTM with attention) models. However, this gap goes on reducing with the increase in the size of datasets.", "Table TABREF20 compares the performance of four DNN models for three evaluation measures while using SSWE as the initial word embeddings. We have noticed that most of the time LSTM performs weaker than other three models. However, performance gap in the other three models is not significant." ], [ "DNN models learn word embeddings over the training data. These learned embeddings across multiple datasets show the difference in nature and style of bullying across cyberbullying topics and SMPs. Here we report results for BLSTM with attention model. Results for other models are similar. We first verify that important words for each topic of cyberbullying form clusters in the learned embeddings. To enable the visualization of grouping, we reduced dimensionality with t-SNE BIBREF16 , a well-known technique for dimensionality reduction particularly well suited for visualization of high dimensional datasets. Please refer to Table TABREF22 . This table shows important clusters observed in t-SNE projection of learned word embeddings. Each cluster shows that words most relevant to a particular topic of bullying form cluster.", "We also observed changes in the meanings of the words across topics of cyberbullying. Table TABREF23 shows most similar words for a given query word for two datasets. Twitter dataset which is heavy on sexism and racism, considers word slave as similar to targets of racism and sexism. However, Wikipedia dataset that is about personal attacks does not show such bias." ], [ "We used transfer learning to check if the knowledge gained by DNN models on one dataset can be used to improve cyberbullying detection performance on other datasets. We report results where BLSTM with attention is used as the DNN model. Results for other models are similar BIBREF15 . 
We experimented with following three flavors of transfer learning.", "Complete Transfer Learning (TL1): In this flavor, a model trained on one dataset was directly used to detect cyberbullying in other datasets without any extra training. TL1 resulted in significantly low recall indicating that three datasets have different nature of cyberbullying with low overlap (Table TABREF25 ). However precision was relatively higher for TL1, indicating that DNN models are cautious in labeling a post as bully (Table TABREF25 ). TL1 also helps to measure similarity in nature of cyberbullying across three datasets. We can observe that bullying nature in Formspring and Wikipedia datasets is more similar to each other than the Twitter dataset. This can be inferred from the fact that with TL1, cyberbullying detection performance for Formspring dataset is higher when base model is Wikipedia (precision =0.51 and recall=0.66)as compared to Twitter as the base model (precision=0.38 and recall=0.04). Similarly, for Wikipedia dataset, Formspring acts as a better base model than Twitter while using TL1 flavor of transfer learning. Nature of SMP might be a factor behind this similarity in nature of cyberbullying. Both Formspring and Wikipedia are task oriented social networks (Q&A and collaborative knowledge repository respectively) that allow anonymity and larger posts. Whereas communication on Twitter is short, free of anonymity and not oriented towards a particular task.", "Feature Level Transfer Learning (TL2): In this flavor, a model was trained on one dataset and only learned word embeddings were transferred to another dataset for training a new model. As compared to TL1, recall score improved dramatically with TL2 (Table TABREF25 ). Improvement in precision was also significant (Table TABREF25 ). These improvements indicate that learned word embeddings are an essential part of knowledge transfer across datasets for cyberbullying detection.", "Model Level Transfer Learning (TL3): In this flavor, a model was trained on one dataset and learned word embeddings, as well as network weights, were transferred to another dataset for training a new model. TL3 does not result in any significant improvement over TL2. This lack of improvement indicates that transfer of network weights is not essential for cyberbullying detection and learned word embeddings is the key knowledge gained by the DNN models.", "DNN based models coupled with transfer learning beat the best-known results for all three datasets. Previous best F1 scores for Wikipedia BIBREF4 and Twitter BIBREF8 datasets were 0.68 and 0.93 respectively. We achieve F1 scores of 0.94 for both these datasets using BLSTM with attention and feature level transfer learning (Table TABREF25 ). For Formspring dataset, authors have not reported F1 score. Their method has accuracy score of 78.5% BIBREF2 . We achieve F1 score of 0.95 with accuracy score of 98% for the same dataset." ], [ "We have shown that DNN models can be used for cyberbullying detection on various topics across multiple SMPs using three datasets and four DNN models. These models coupled with transfer learning beat state of the art results for all three datasets. These models can be further improved with extra data such as information about the profile and social graph of users. Most of the current datasets do not provide any information about the severity of bullying. 
If such fine-grained information is made available, then cyberbullying detection models can be further improved to take a variety of actions depending on the perceived seriousness of the posts." ] ], "section_name": [ "Introduction", "Datasets", "Use of Swear Words and Anonymity", "Related Work", "Deep Neural Network (DNN) Based Models", "Experiments", "Effect of Oversampling Bullying Instances", "Choice of Initial Word Embeddings and Model", "Task Specific Word Embeddings", "Transfer Learning", "Conclusion and Future Work" ] }
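The oversampling of bullying posts described in the experiments above amounts to replicating every minority-class (bullying) post a fixed number of times in the training data. A minimal Python sketch is given below; it is illustrative only (the function and variable names are hypothetical, not from the authors' code), and it assumes a replication rate of three, the rate the authors report as sufficient.

```python
# A minimal sketch (not from the paper's code release) of the oversampling strategy
# described above: posts labeled as bullying are replicated in the training split
# so that the minority class is seen more often during training.

import random

def oversample_bullying(posts, labels, replication_rate=3, seed=42):
    """Replicate minority-class (bullying) posts `replication_rate` times.

    posts  : list of token lists (or raw post strings)
    labels : list of 0/1 labels, 1 = bullying
    """
    augmented_posts, augmented_labels = [], []
    for post, label in zip(posts, labels):
        copies = replication_rate if label == 1 else 1
        augmented_posts.extend([post] * copies)
        augmented_labels.extend([label] * copies)

    # Shuffle so replicated posts are not adjacent within a mini-batch.
    combined = list(zip(augmented_posts, augmented_labels))
    random.Random(seed).shuffle(combined)
    augmented_posts, augmented_labels = map(list, zip(*combined))
    return augmented_posts, augmented_labels
```

In a cross-validation setup such as the one used here, this replication would normally be applied only to the training folds, so that the reported evaluation measures are computed on unaltered data.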
{ "answers": [ { "annotation_id": [ "46415aa8b6fe7d8b6390c4b4e62553f673be9b50" ], "answer": [ { "evidence": [ "DNN based models coupled with transfer learning beat the best-known results for all three datasets. Previous best F1 scores for Wikipedia BIBREF4 and Twitter BIBREF8 datasets were 0.68 and 0.93 respectively. We achieve F1 scores of 0.94 for both these datasets using BLSTM with attention and feature level transfer learning (Table TABREF25 ). For Formspring dataset, authors have not reported F1 score. Their method has accuracy score of 78.5% BIBREF2 . We achieve F1 score of 0.95 with accuracy score of 98% for the same dataset." ], "extractive_spans": [], "free_form_answer": "best model achieves 0.94 F1 score for Wikipedia and Twitter datasets and 0.95 F1 on Formspring dataset", "highlighted_evidence": [ "DNN based models coupled with transfer learning beat the best-known results for all three datasets. Previous best F1 scores for Wikipedia BIBREF4 and Twitter BIBREF8 datasets were 0.68 and 0.93 respectively. We achieve F1 scores of 0.94 for both these datasets using BLSTM with attention and feature level transfer learning (Table TABREF25 ). For Formspring dataset, authors have not reported F1 score. Their method has accuracy score of 78.5% BIBREF2 . We achieve F1 score of 0.95 with accuracy score of 98% for the same dataset." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "3cf642a3999c60782715e529866158dd293841a9", "cd3ed5af4bfedc5c9711a839a3048eb783d5c8ea" ], "answer": [ { "evidence": [ "Past works on cyberbullying detection have at least one of the following three bottlenecks. First (Bottleneck B1), they target only one particular social media platform. How these methods perform across other SMPs is unknown. Second (Bottleneck B2), they address only one topic of cyberbullying such as racism, and sexism. Depending on the topic, vocabulary and nature of cyberbullying changes. These models are not flexible in accommodating changes in the definition of cyberbullying. Third (Bottleneck B3), they rely on carefully handcrafted features such as swear word list and POS tagging. However, these handcrafted features are not robust against variations in writing style. In contrast to existing bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without doing any explicit feature engineering by developing deep learning based models along with transfer learning." ], "extractive_spans": [ "personal attack, racism, and sexism" ], "free_form_answer": "", "highlighted_evidence": [ "In contrast to existing bottlenecks, this work targets three different types of social networks (Formspring: a Q&A forum, Twitter: microblogging, and Wikipedia: collaborative knowledge repository) for three topics of cyberbullying (personal attack, racism, and sexism) without doing any explicit feature engineering by developing deep learning based models along with transfer learning." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Please refer to Table TABREF7 for summary of datasets used. We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. 
We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. Twitter dataset contains examples of racism and sexism. Wikipedia dataset contains examples of personal attack. However, Formspring dataset is not specifically about any single topic. All three datasets have the problem of class imbalance where posts labeled as cyberbullying are in the minority as compared to neutral posts. Variation in the number of posts across datasets also affects vocabulary size that represents the number of distinct words encountered in the dataset. We measure the size of a post in terms of the number of words in the post. For each dataset, there are only a few posts with large size. We truncate such large posts to the size of post ranked at 95 percentile in that dataset. For example, in Wikipedia dataset, the largest post has 2846 words. However, size of post ranked at 95 percentile in that dataset is only 231. Any post larger than size 231 in Wikipedia dataset will be truncated by considering only first 231 words. This truncation affects only a small minority of posts in each dataset. However, it is required for efficiently training various models in our experiments. Details of each dataset are as follows." ], "extractive_spans": [ "racism", "sexism", "personal attack", "not specifically about any single topic" ], "free_form_answer": "", "highlighted_evidence": [ "We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. Twitter dataset contains examples of racism and sexism. Wikipedia dataset contains examples of personal attack. However, Formspring dataset is not specifically about any single topic." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "What were their performance results?", "What cyberbulling topics did they address?" ], "question_id": [ "5c6fa86757410aee6f5a0762328637de03a569e9", "7e38e0279a620d3df05ab9b5e2795044f18d4471" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Table 1. Dataset Statistics", "Table 2. Swear Word Use and Anonymity", "Fig. 1. Model Architecture", "Table 4. Effect of Oversampling Bullying Posts using BLSTM with attention", "Table 5. Effect of Choosing Initial Word Embedding Method on F1 Score", "Table 6. Performance Comparison of Various DNN Models", "Table 8. Most similar words to the query word across platform" ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "5-Figure1-1.png", "7-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png", "9-Table8-1.png" ] }
[ "What were their performance results?" ]
[ [ "1801.06482-Transfer Learning-4" ] ]
[ "best model achieves 0.94 F1 score for Wikipedia and Twitter datasets and 0.95 F1 on Formspring dataset" ]
259
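The three transfer-learning flavors discussed in the record above differ only in which parameters are carried from the source dataset's model to the target dataset's model. The sketch below illustrates feature-level transfer (TL2), where only the learned word-embedding matrix is reused. It is a hedged PyTorch example with hypothetical class and variable names (the authors' model was a BLSTM with attention; a plain BiLSTM with mean pooling stands in here), and it assumes both datasets share the same vocabulary indexing.

```python
# Hedged sketch of feature-level transfer (TL2): only the embedding matrix learned
# on the source dataset is copied into a fresh model trained on the target dataset.

import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        outputs, _ = self.encoder(embedded)
        # Mean-pool over time as a simple stand-in for the attention layer.
        return self.classifier(outputs.mean(dim=1))

vocab_size = 10000
source_model = BiLSTMClassifier(vocab_size)   # assume this was trained on dataset A
target_model = BiLSTMClassifier(vocab_size)   # fresh model for dataset B

# TL2: transfer only the learned embedding matrix, then train target_model as usual.
target_model.embedding.load_state_dict(source_model.embedding.state_dict())

# TL3 would additionally copy the encoder/classifier weights:
# target_model.load_state_dict(source_model.state_dict())
```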
1703.07476
Topic Identification for Speech without ASR
Modern topic identification (topic ID) systems for speech use automatic speech recognition (ASR) to produce speech transcripts, and perform supervised classification on such ASR outputs. However, under resource-limited conditions, the manually transcribed speech required to develop standard ASR systems can be severely limited or unavailable. In this paper, we investigate alternative unsupervised solutions to obtaining tokenizations of speech in terms of a vocabulary of automatically discovered word-like or phoneme-like units, without depending on the supervised training of ASR systems. Moreover, using automatic phoneme-like tokenizations, we demonstrate that a convolutional neural network based framework for learning spoken document representations provides competitive performance compared to a standard bag-of-words representation, as evidenced by comprehensive topic ID evaluations on both single-label and multi-label classification tasks.
{ "paragraphs": [ [ "Topic identification (topic ID) on speech aims to identify the topic(s) for given speech recordings, referred to as spoken documents, where the topics are a predefined set of classes or labels. This task is typically formulated as a three-step process. First, speech is tokenized into words or phones by automatic speech recognition (ASR) systems BIBREF0 , or by limited-vocabulary keyword spotting BIBREF1 . Second, standard text-based processing techniques are applied to the resulting tokenizations, and produce a vector representation for each spoken document, typically a bag-of-words multinomial representation, or a more compact vector given by probabilistic topic models BIBREF2 , BIBREF3 . Finally, topic ID is performed on the spoken document representations by supervised training of classifiers, such as Bayesian classifiers and support vector machines (SVMs).", "However, in the first step, training the ASR system required for tokenization itself requires transcribed speech and pronunciations. In this paper, we focus on a difficult and realistic scenario where the speech corpus of a test language is annotated only with a minimal number of topic labels, i.e., no manual transcriptions or dictionaries for building an ASR system are available. We aim to exploit approaches that enable topic ID on speech without any knowledge of that language other than the topic annotations.", "In this scenario, while previous work demonstrates that the cross-lingual phoneme recognizers can produce reasonable speech tokenizations BIBREF4 , BIBREF5 , the performance is highly dependent on the language and environmental condition (channel, noise, etc.) mismatch between the training and test data. Therefore, we focus on unsupervised approaches that operate directly on the speech of interest. Raw acoustic feature-based unsupervised term discovery (UTD) is one such approach that aims to identify and cluster repeating word-like units across speech based around segmental dynamic time warping (DTW) BIBREF6 , BIBREF7 . BIBREF8 shows that using the word-like units from UTD for spoken document classification can work well; however, the results in BIBREF8 are limited since the acoustic features on which UTD is performed are produced by acoustic models trained from the transcribed speech of its evaluation corpus. In this paper, we investigate UTD-based topic ID performance when UTD operates on language-independent speech representations extracted from multilingual bottleneck networks trained on languages other than the test language BIBREF9 . Another alternative to producing speech tokenizations without language dependency is the model-based approach, i.e., unsupervised learning of hidden Markov model (HMM) based phoneme-like units from untranscribed speech. We exploit the Variational Bayesian inference based acoustic unit discovery (AUD) framework in BIBREF10 that allows parallelized large-scale training. In topic ID tasks, such AUD-based systems have been shown to outperform other systems based on cross-lingual phoneme recognizers BIBREF5 , and this paper aims to further investigate how the performance compares among UTD, AUD and ASR based systems.", "Moreover, after the speech is tokenized, these works BIBREF0 , BIBREF1 , BIBREF4 , BIBREF5 , BIBREF8 , BIBREF9 are limited to using bag-of-words features as spoken document representations. 
While UTD only identifies relatively long (0.5 – 1 sec) repeated terms, AUD/ASR enables full-coverage segmentation of continuous speech into a sequence of units/words, and such a resulting temporal sequence enables another feature learning architecture based on convolutional neural networks (CNNs) BIBREF11 ; instead of treating the sequential tokens as a bag of acoustic units or words, the whole token sequence is encoded as concatenated continuous vectors, and followed by convolution and temporal pooling operations that capture the local and global dependencies. Such continuous space feature extraction frameworks have been used in various language processing tasks like spoken language understanding BIBREF12 , BIBREF13 and text classification BIBREF14 , BIBREF15 . However, three questions are worth investigating in our AUD-based setting: (i) if such a CNN-based framework can perform as well on noisy automatically discovered phoneme-like units as on orthographic words/characters, (ii) if pre-trained vectors of phoneme-like units from word2vec BIBREF16 provide superior performance to random initialization as evidenced by the word-based tasks, and (iii) if CNNs are still competitive in low-resource settings of hundreds to two-thousand training exemplars, rather than the large/medium sized datasets as in previous work BIBREF14 , BIBREF15 .", "Finally, incorporating the different tokenization and feature representation approaches noted above, we perform comprehensive topic ID evaluations on both single-label and multi-label spoken document classification tasks.", "" ], [ "" ], [ "UTD aims to automatically identify and cluster repeated terms (e.g. words or phrases) from speech. To circumvent the exhaustive DTW-based search limited by INLINEFORM0 time BIBREF6 , we exploit the scalable UTD framework in the Zero Resource Toolkit (ZRTools) BIBREF7 , which permits search in INLINEFORM1 time. We briefly describe the UTD procedures in ZRTools by four steps below, and full details can be found in BIBREF7 .", "", "Construct the sparse approximate acoustic similarity matrices between pairs of speech utterances.", "Identify word repetitions via fast diagonal line search and segmental DTW.", "The resulting matches are used to construct an acoustic similarity graph, where nodes represent the matching acoustic segments and edges reflect DTW distances.", "Threshold the graph edges, and each connected component of the graph is a cluster of acoustic segments, which produces a corresponding term (word/phrase) category.", " Finally, the cluster of each discovered term category consists of a list of term occurrences.", "Note that in the third step above, the weight on each graph edge can be exact DTW-based similarity, or other similarity based on heuristics more than DTW distance. For example, we investigate an implementation in ZRTools, where a separate logistic regression model is used to rescore the similarity between identified matches by determining how likely the matching pair is the same underlying word/phrase and is not a filled pause (e.g. “um-hum” and “yeah uh-huh” in English). Filled pauses tend to be acoustically stationary with more phone repeats and thus would match throughout the acoustic similarity matrix, whereas a contentful word (without too many phone repeats) tend to concentrate around the main diagonal; thus, the features in logistic regression contain the numbers of matrix elements in diagonal bands in progressive steps away from the main diagonal. 
Feature weights are learned using a portion of transcribed speech with reference transcripts, and the resulting model can be used for language-independent rescoring.", "" ], [ "We exploit the nonparametric Bayesian AUD framework in BIBREF10 based on variational inference, rather than the maximum likelihood training in BIBREF4 which may oversimplify the parameter estimations, nor the Gibbs Sampling training in BIBREF17 which is not amenable to large scale applications. Specifically, a phone-loop model is formulated where each phoneme-like unit is modeled as an HMM with a Gaussian mixture model of output densities (GMM-HMM). Under the Dirichlet process framework, we consider the phone loop as an infinite mixture of GMM-HMMs, and the mixture weights are based on the stick-breaking construction of Dirichlet process. The infinite number of units in the mixture is truncated in practice, giving zero mixture weight to any unit beyond some large count. We treat such mixture of GMM-HMMs as a single unified HMM and thus the segmentation of the data is performed using standard forward-backward algorithm. Training is fully unsupervised and parallelized; after a fixed number of training iterations, we use Viterbi decoding algorithm to obtain acoustic unit tokenizations of the data." ], [ "After we obtain the tokenizations of speech by either UTD or AUD, each spoken document is represented by a vector of unigram occurrence counts over discovered terms, or a vector of INLINEFORM0 -gram counts over acoustic units, respectively. Each feature vector can be further scaled by inverse document frequency (IDF), producing a TF-IDF feature." ], [ "AUD enables full-coverage tokenization of continuous speech into a sequence of acoustic units, which we can exploit in a CNN-based framework to learn a vector representation for each spoken document. As shown in Figure FIGREF7 , in an acoustic unit sequence a of length INLINEFORM0 , each unit INLINEFORM1 , INLINEFORM2 , is encoded as a fixed dimensional continuous vector, and the whole sequence a is represented as a concatenated vector x. A shared convolutional feature transform INLINEFORM3 spans a fixed-sized INLINEFORM4 -gram window, INLINEFORM5 , and slides over the whole sequence. Then the hidden feature layer INLINEFORM6 with nonlinearities consists of each feature vector INLINEFORM7 extracted from the shared convolutional window centered at each acoustic unit position INLINEFORM8 . Max-pooling is performed on top of each INLINEFORM9 , INLINEFORM10 , to obtain a fixed-dimensional vector representation for the whole sequence a, i.e., a vector representation of the whole spoken document, followed by another hidden layer INLINEFORM11 and a final output layer. Note that this framework needs supervision for training; e.g., the output layer can be a softmax function for single-label classification, and the whole model is trained with categorical cross-entropy loss.", "Also, the vector representation of each unique acoustic unit can be randomly initialized, or pre-trained from other tasks. Specifically, we apply the skip-gram model of word2vec BIBREF18 to pre-train one embedding vector for each acoustic unit, based on the hierarchical softmax with Huffman codes." ], [ "For the bag-of-words representation, we use a stochastic gradient descent (SGD) based linear SVM BIBREF19 , BIBREF20 with hinge loss and INLINEFORM0 / INLINEFORM1 norm regularization. 
For the CNN-based framework, we use a softmax function in the output layer for classification as described in Section SECREF9 .", "For our single-label topic classification experiments, we use the Switchboard Telephone Speech Corpus BIBREF21 , a collection of two-sided telephone conversations. We use the same development (dev) and evaluation (eval) data sets as in BIBREF8 , BIBREF9 . Each whole conversation has two sides and one single topic, and topic ID is performed on each individual-side speech (i.e., each side is seen as one single spoken document). In the 35.7 hour dev data, there are 360 conversation sides evenly distributed across six different topics (recycling, capital punishment, drug testing, family finance, job benefits, car buying), i.e., each topic has equal number of 60 sides. In the 61.6 hour eval data, there are another different six topics (family life, news media, public education, exercise/fitness, pets, taxes) evenly distributed across 600 conversation sides. Algorithm design choices are explored through experiments on dev data. We use manual segmentations provided by the Switchboard corpus to produce utterances with speech activity, and UTD and AUD are operating only on those utterances.", "For UTD, we use the ZRTools BIBREF7 implementation with the default parameters except that, we use cosine similarity threshold INLINEFORM0 , and vary the diagonal median filter duration INLINEFORM1 over INLINEFORM2 ; we try both the exact DTW-based similarity and the rescored similarity as described in Section SECREF1 , and tune the similarity threshold (used to partition the graph edges) over INLINEFORM3 . For AUD, the unsupervised training is performed only on the dev data (10 iterations); after training, we use the learned models to decode both dev and eval data set, and obtain the acoustic unit tokenizations. We use truncation level 200, which implies maximum 200 different acoustic units can be learned from the corpus. For each acoustic unit, we use a 3-state HMM with 2 Gaussians per state. For the stick-breaking construction of Dirichlet process, we vary the concentration parameter INLINEFORM4 over INLINEFORM5 , and other hyperparameters are the same as BIBREF10 . The acoustic features on which UTD and AUD operate are extracted using the same multilingual bottleneck (BN) network as described in BIBREF9 with Kaldi toolkit BIBREF22 . We conduct the multilingual BN training with 10 language collections (Assamese, Bengali, Cantonese, Haitian, Lao, Pashto, Tamil, Tagalog, Vietnamese and Zulu) – 10 hours of transcribed speech per language. Complete specifications can be found in BIBREF9 .", "For SVM-based classification, we use the bag of discovered term unigrams, or bag of acoustic unit trigrams. On dev data, we try using the features of raw counts or scaled by IDF, SVM regularization tuned over INLINEFORM0 / INLINEFORM1 norm, regularization constant tuned over INLINEFORM2 , and SGD epochs tuned over INLINEFORM3 . We further normalize each feature to INLINEFORM4 norm unit length. Each experiment is a run of 10-fold cross validation (CV) on the 360 conversation sides of dev data, or on the 600 sides of eval data, respectively. Note that our data size here is relatively small (only 360 or 600) and the SGD training may give high variance in the performance BIBREF23 . 
Therefore, to report classification accuracy for each configuration (when varying features or models), we repeat each CV experiment 5 times, where each experiment again is a run of 10-fold CV; then for each configuration, the mean and standard deviation of 5 experiments is reported.", "For CNN-based classification, we use the same strategy to report classification accuracy, i.e., repeating experiments 5 times (where each time is a 10-fold CV) for each CNN configuration. Note that the respective 10 folds of both dev and eval data sets are fixed the same for all the SVM and CNN experiments. Additionally, for each 10-fold CV experiment, instead of training on 9 folds and testing on the remaining 1 fold as in SVM, we use 8 folds for CNN training, leave another 1 fold out as validation data; after training each CNN model for up to 100 epochs, the model with the best accuracy on the validation data is used for evaluation on the test set. The acoustic unit sequence (as CNN inputs) are zero-padded to the longest length in each dataset. We implemented the CNNs in Keras BIBREF24 with Theano BIBREF25 backend. CNN architectures are determined through experiments on dev data. For SGD training we use the Adadelta optimizer BIBREF26 and mini-batch size 18. The INLINEFORM0 -gram window size of each convolutional feature transform INLINEFORM1 is 7. The size of each hidden feature vector INLINEFORM2 (extracted from the transform INLINEFORM3 ) is 1024, with rectified linear unit (ReLU) nonlinearities. Thus, after max-pooling over time, we have a 1024-dimensional vector again, which then goes through another hidden layer INLINEFORM4 (also set as 1024-dimensional with ReLU) and finally into a softmax. Dropout BIBREF27 rate 0.2 is used at each layer.", "When we initialize the vector representation of each acoustic unit with a set of pre-trained vectors (instead of random initializations), we apply the skip-gram model of word2vec BIBREF18 to the acoustic unit tokenizations of each data set. We use the gensim implementation BIBREF28 , which includes a vector space of embedding dimension 50 (tuned over INLINEFORM0 ), a skip-gram window of size 5, and SGD over 20 epochs.", "Table TABREF15 shows the topic ID results on Switchboard. For UTD-based classifications, we find that the default rescoring in ZRTools BIBREF7 which is designed to filter out the filled pauses produces comparable performance to the raw DTW similarity scores, but the rescoring can result in much faster connected-component clustering (Section SECREF1 ). Note that this rescoring model is estimated using a portion of transcribed Switchboard, but it is still a legitimate language-independent UTD approach while operating on languages other than English. While a diagonal median filter duration INLINEFORM0 of INLINEFORM1 or INLINEFORM2 gives similar results, INLINEFORM3 produces longer but fewer terms, giving more sparse feature representations. Therefore, we proceed with rescoring and INLINEFORM4 in the following UTD experiments (Section SECREF16 ).", "For AUD-based classifications, CNN without word2vec pre-training usually gives comparable results with SVM; however, using word2vec pre-training, CNN substantially outperforms the competing SVM in all cases. Also as the concentration parameter INLINEFORM0 in AUD increases from INLINEFORM1 to INLINEFORM2 (yielding less concentrated distributions), we have more unique acoustic units in the tokenizations of both data sets, from 184 to 199, and INLINEFORM3 usually produces better results than INLINEFORM4 ." 
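The SVM-based configuration described above — a bag of acoustic-unit trigrams, optional IDF scaling, L2-normalized features, and an SGD-trained linear SVM with hinge loss — can be sketched with standard scikit-learn components as below. The toy documents, labels and regularization constant are placeholders, not the paper's actual data or tuned settings.

```python
# A minimal sketch (scikit-learn; not the authors' exact setup) of the SVM-based
# topic ID pipeline: each spoken document is the AUD tokenization of one
# conversation side, represented as a bag of acoustic-unit trigrams, IDF-scaled,
# normalized to unit L2 length, and classified with an SGD-trained linear SVM.

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.preprocessing import Normalizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Placeholder documents: one symbol per decoded acoustic unit.
docs = ["a1 a7 a7 a3 a1 a9", "a2 a2 a5 a8 a2 a5", "a1 a7 a3 a3 a9 a1"]
topics = [0, 1, 0]

pipeline = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(3, 3)),  # acoustic-unit trigrams
    TfidfTransformer(use_idf=True),                        # IDF scaling
    Normalizer(norm="l2"),                                 # unit-length features
    SGDClassifier(loss="hinge", penalty="l2", alpha=1e-4, max_iter=20),
)
pipeline.fit(docs, topics)
print(pipeline.predict(["a1 a7 a3 a1 a9 a3"]))
```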
], [ "In the setting where each spoken document can be associated with multiple topics/labels, we proceed to perform a multi-label classification task. The baseline approach is the binary relevance method, which independently trains one binary classifier for each label, and the spoken document is evaluated by each classifier to determine if the respective label applies to it. Specifically, we use a set of SVMs (Section SECREF10 ), one for each label, on the bag-of-words features.", "To adapt the CNN-based framework for multi-label classification, we replace the softmax in the output layer with a set of sigmoid output nodes, one for each label, as shown in Figure FIGREF7 . Since a sigmoid naturally provides output values between 0 and 1, we train the neural network (NN) to minimize the binary cross entropy loss defined as INLINEFORM0 , where INLINEFORM1 denotes the NN parameters, x is the feature vector of acoustic unit sequence, y is the target vector of labels, INLINEFORM2 and INLINEFORM3 are the output and the target for label INLINEFORM4 , and the number of unique labels is INLINEFORM5 .", "We further evaluate our topic ID performance on the speech corpora of three languages released by the DARPA LORELEI (Low Resource Languages for Emergent Incidents) Program. For each language there are a number of audio speech files, and each speech file is cut into segments of various lengths (up to 120 seconds). Each speech segment is seen as either in-domain or out-of-domain. In-domain data is defined as any speech segment relating to an incident or incidents, and in-domain data will fall into a set of domain-specific categories; these categories are known as situation types, or in-domain topics. There are 11 situation types: “Civil Unrest or Wide-spread Crime”, “Elections and Politics”, “Evacuation”, “Food Supply”, “Urgent Rescue”, “Utilities, Energy, or Sanitation”, “Infrastructure”, “Medical Assistance”, “Shelter”, “Terrorism or other Extreme Violence”, and “Water Supply”. We consider “Out-of-domain” as the 12th topic label, so each speech segment either corresponds to one or multiple in-domain topics, or is “Out-of-domain”. We use the average precision (AP, equal to the area under the precision-recall curve) as the evaluation metric, and report both the AP across the overall 12 labels, and the AP across 11 situation types, as shown in Table TABREF18 . For each configuration, only a single 10-fold CV result is reported, since we observe less variance in results here than in Switchboard. We have 16.5 hours in-domain data and 8.5 hours out-of-domain data for Turkish, 2.9 and 13.2 hours for Uzbek, and 7.7 and 7.2 hours for Mandarin. We use the same CNN architecture as on Switchboard but make the changes as described in Section SECREF11 . Also we use mini-batch size 30 and fix the training epochs as 100. All CNNs use word2vec pre-training. Additionally, we also implement another two separate topic ID baselines using the decoded word outputs from two supervised ASR systems, trained from 80 hours transcribed Babel Turkish speech BIBREF29 and about 170 hours transcribed HKUST Mandarin telephone speech (LDC2005T32 and LDC2005S15), respectively.", "", "As shown in Table TABREF18 , UTD-based SVMs are more competitive than AUD-based SVMs on the smaller corpora, i.e., Uzbek and Mandarin, while being less competitive on the larger corpus, Turkish. 
We further investigate this behavior on each individual language by varying the amount of training data; we split the data into 10 folds, and perform 10-fold CV 9 times, varying the number of folds for training from 1 to 9. As illustrated in Figure FIGREF19 for Turkish, as we use more folds for training, AUD-based system starts to be more competitive than UTD. Supervised ASR-based systems still give the best results in various cases, while UTD and AUD based systems give comparable performance.", "Note that CNN-based systems outperform SVMs on Turkish and Uzbek while losing on the smaller sized Mandarin, indicating more topic-labeled data is needed to enable competitive CNNs. This also indicates why CNNs on LORELEI corpora do not produce as large a gain over SVMs as on the larger sized Switchboard, since each 15-25 hour LORELEI corpus with 12 topic labels is a relatively small amount of data compared to the 35.7/61.6 hour Switchboard corpus with 6 labels." ], [ "We have demonstrated that both UTD and AUD are viable technologies for producing effective tokenizations of speech that enable topic ID performance comparable to using standard ASR systems, while effectively removing the dependency on transcribed speech required by the ASR alternative. We find that when training data is severely limited the UTD-based classification is superior to AUD-based classification. As the amount of training data increases, performance improves across the board. Finally, with sufficient training data AUD-based CNNs with word2vec pre-training outperform AUD-based SVMs." ] ], "section_name": [ "Introduction", "Unsupervised tokenizations of speech", "Unsupervised term discovery (UTD)", "Acoustic unit discovery (AUD)", "Bag-of-words representation", "Convolutional neural network-based representation", "Single-label classification", "Multi-label classification", "Concluding remarks" ] }
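As a companion to the description above, the following is a hedged tf.keras sketch of the CNN-based spoken document representation: acoustic-unit IDs are embedded, a 7-unit convolutional window produces 1024-dimensional hidden features with ReLU, max-pooling over time gives a fixed-length document vector, and a further 1024-unit hidden layer feeds the output. The softmax/categorical cross-entropy head corresponds to single-label topic ID and the sigmoid/binary cross-entropy variant to the multi-label setting; vocabulary size, embedding dimension and padded length are placeholders (the original work used Keras with a Theano backend and optionally word2vec-initialized unit vectors).

```python
# Hedged sketch of the CNN document encoder over acoustic-unit sequences.
from tensorflow.keras import layers, models, optimizers

num_units = 200        # truncation level of the AUD unit inventory
embed_dim = 50
max_len = 2000         # zero-padded acoustic-unit sequence length
num_topics = 6

def build_model(multi_label=False):
    inputs = layers.Input(shape=(max_len,))
    x = layers.Embedding(num_units + 1, embed_dim)(inputs)
    x = layers.Conv1D(filters=1024, kernel_size=7, activation="relu", padding="same")(x)
    x = layers.Dropout(0.2)(x)
    x = layers.GlobalMaxPooling1D()(x)           # max-pool over time
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    if multi_label:
        # Multi-label variant: one sigmoid per label, binary cross-entropy loss.
        outputs = layers.Dense(num_topics, activation="sigmoid")(x)
        loss = "binary_crossentropy"
    else:
        outputs = layers.Dense(num_topics, activation="softmax")(x)
        loss = "categorical_crossentropy"
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adadelta(), loss=loss, metrics=["accuracy"])
    return model

model = build_model(multi_label=False)
model.summary()
```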
{ "answers": [ { "annotation_id": [ "4c72c49573088afbe92a401a212839f07ad3180c", "e4f0905c68df6952a1a30d63afee3968f459be3e" ], "answer": [ { "evidence": [ "For our single-label topic classification experiments, we use the Switchboard Telephone Speech Corpus BIBREF21 , a collection of two-sided telephone conversations. We use the same development (dev) and evaluation (eval) data sets as in BIBREF8 , BIBREF9 . Each whole conversation has two sides and one single topic, and topic ID is performed on each individual-side speech (i.e., each side is seen as one single spoken document). In the 35.7 hour dev data, there are 360 conversation sides evenly distributed across six different topics (recycling, capital punishment, drug testing, family finance, job benefits, car buying), i.e., each topic has equal number of 60 sides. In the 61.6 hour eval data, there are another different six topics (family life, news media, public education, exercise/fitness, pets, taxes) evenly distributed across 600 conversation sides. Algorithm design choices are explored through experiments on dev data. We use manual segmentations provided by the Switchboard corpus to produce utterances with speech activity, and UTD and AUD are operating only on those utterances.", "We further evaluate our topic ID performance on the speech corpora of three languages released by the DARPA LORELEI (Low Resource Languages for Emergent Incidents) Program. For each language there are a number of audio speech files, and each speech file is cut into segments of various lengths (up to 120 seconds). Each speech segment is seen as either in-domain or out-of-domain. In-domain data is defined as any speech segment relating to an incident or incidents, and in-domain data will fall into a set of domain-specific categories; these categories are known as situation types, or in-domain topics. There are 11 situation types: “Civil Unrest or Wide-spread Crime”, “Elections and Politics”, “Evacuation”, “Food Supply”, “Urgent Rescue”, “Utilities, Energy, or Sanitation”, “Infrastructure”, “Medical Assistance”, “Shelter”, “Terrorism or other Extreme Violence”, and “Water Supply”. We consider “Out-of-domain” as the 12th topic label, so each speech segment either corresponds to one or multiple in-domain topics, or is “Out-of-domain”. We use the average precision (AP, equal to the area under the precision-recall curve) as the evaluation metric, and report both the AP across the overall 12 labels, and the AP across 11 situation types, as shown in Table TABREF18 . For each configuration, only a single 10-fold CV result is reported, since we observe less variance in results here than in Switchboard. We have 16.5 hours in-domain data and 8.5 hours out-of-domain data for Turkish, 2.9 and 13.2 hours for Uzbek, and 7.7 and 7.2 hours for Mandarin. We use the same CNN architecture as on Switchboard but make the changes as described in Section SECREF11 . Also we use mini-batch size 30 and fix the training epochs as 100. All CNNs use word2vec pre-training. Additionally, we also implement another two separate topic ID baselines using the decoded word outputs from two supervised ASR systems, trained from 80 hours transcribed Babel Turkish speech BIBREF29 and about 170 hours transcribed HKUST Mandarin telephone speech (LDC2005T32 and LDC2005S15), respectively." 
], "extractive_spans": [ "Switchboard Telephone Speech Corpus BIBREF21", "LORELEI (Low Resource Languages for Emergent Incidents) Program" ], "free_form_answer": "", "highlighted_evidence": [ "For our single-label topic classification experiments, we use the Switchboard Telephone Speech Corpus BIBREF21 , a collection of two-sided telephone conversations.", "We further evaluate our topic ID performance on the speech corpora of three languages released by the DARPA LORELEI (Low Resource Languages for Emergent Incidents) Program." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We further evaluate our topic ID performance on the speech corpora of three languages released by the DARPA LORELEI (Low Resource Languages for Emergent Incidents) Program. For each language there are a number of audio speech files, and each speech file is cut into segments of various lengths (up to 120 seconds). Each speech segment is seen as either in-domain or out-of-domain. In-domain data is defined as any speech segment relating to an incident or incidents, and in-domain data will fall into a set of domain-specific categories; these categories are known as situation types, or in-domain topics. There are 11 situation types: “Civil Unrest or Wide-spread Crime”, “Elections and Politics”, “Evacuation”, “Food Supply”, “Urgent Rescue”, “Utilities, Energy, or Sanitation”, “Infrastructure”, “Medical Assistance”, “Shelter”, “Terrorism or other Extreme Violence”, and “Water Supply”. We consider “Out-of-domain” as the 12th topic label, so each speech segment either corresponds to one or multiple in-domain topics, or is “Out-of-domain”. We use the average precision (AP, equal to the area under the precision-recall curve) as the evaluation metric, and report both the AP across the overall 12 labels, and the AP across 11 situation types, as shown in Table TABREF18 . For each configuration, only a single 10-fold CV result is reported, since we observe less variance in results here than in Switchboard. We have 16.5 hours in-domain data and 8.5 hours out-of-domain data for Turkish, 2.9 and 13.2 hours for Uzbek, and 7.7 and 7.2 hours for Mandarin. We use the same CNN architecture as on Switchboard but make the changes as described in Section SECREF11 . Also we use mini-batch size 30 and fix the training epochs as 100. All CNNs use word2vec pre-training. Additionally, we also implement another two separate topic ID baselines using the decoded word outputs from two supervised ASR systems, trained from 80 hours transcribed Babel Turkish speech BIBREF29 and about 170 hours transcribed HKUST Mandarin telephone speech (LDC2005T32 and LDC2005S15), respectively.", "As shown in Table TABREF18 , UTD-based SVMs are more competitive than AUD-based SVMs on the smaller corpora, i.e., Uzbek and Mandarin, while being less competitive on the larger corpus, Turkish. We further investigate this behavior on each individual language by varying the amount of training data; we split the data into 10 folds, and perform 10-fold CV 9 times, varying the number of folds for training from 1 to 9. As illustrated in Figure FIGREF19 for Turkish, as we use more folds for training, AUD-based system starts to be more competitive than UTD. Supervised ASR-based systems still give the best results in various cases, while UTD and AUD based systems give comparable performance." 
], "extractive_spans": [], "free_form_answer": "LORELEI datasets of Uzbek, Mandarin and Turkish", "highlighted_evidence": [ "We further evaluate our topic ID performance on the speech corpora of three languages released by the DARPA LORELEI (Low Resource Languages for Emergent Incidents) Program. ", "As shown in Table TABREF18 , UTD-based SVMs are more competitive than AUD-based SVMs on the smaller corpora, i.e., Uzbek and Mandarin, while being less competitive on the larger corpus, Turkish. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "c7cd2fa2ca3724dcd325959876b16a9671f010ed" ], "answer": [ { "evidence": [ "UTD aims to automatically identify and cluster repeated terms (e.g. words or phrases) from speech. To circumvent the exhaustive DTW-based search limited by INLINEFORM0 time BIBREF6 , we exploit the scalable UTD framework in the Zero Resource Toolkit (ZRTools) BIBREF7 , which permits search in INLINEFORM1 time. We briefly describe the UTD procedures in ZRTools by four steps below, and full details can be found in BIBREF7 ." ], "extractive_spans": [ "Zero Resource Toolkit (ZRTools) BIBREF7" ], "free_form_answer": "", "highlighted_evidence": [ "UTD aims to automatically identify and cluster repeated terms (e.g. words or phrases) from speech. To circumvent the exhaustive DTW-based search limited by INLINEFORM0 time BIBREF6 , we exploit the scalable UTD framework in the Zero Resource Toolkit (ZRTools) BIBREF7 , which permits search in INLINEFORM1 time." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "", "" ], "question": [ "What datasets are used to assess the performance of the system?", "How is the vocabulary of word-like or phoneme-like units automatically discovered?" ], "question_id": [ "01e2d10178347d177519f792f86f25575106ddc7", "021bfb7e180d67112b74f05ecb3fa13acc036c86" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Figure 1: CNN-based framework that operates on automatically discovered acoustic units.", "Table 2: Multi-label topic ID average precision on LORELEI languages, with the number of speech segments in parentheses.", "Table 1: Single-label topic ID accuracies on Switchboard." ], "file": [ "2-Figure1-1.png", "4-Table2-1.png", "4-Table1-1.png" ] }
[ "What datasets are used to assess the performance of the system?" ]
[ [ "1703.07476-Multi-label classification-4", "1703.07476-Single-label classification-1", "1703.07476-Multi-label classification-2" ] ]
[ "LORELEI datasets of Uzbek, Mandarin and Turkish" ]
262
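For the acoustic-unit word2vec pre-training mentioned in the preceding record (skip-gram, embedding dimension 50, window size 5, 20 epochs, hierarchical softmax via gensim), a small sketch is shown below; the unit sequences are made-up placeholders and the parameter names assume gensim 4.x.

```python
# Hedged sketch of skip-gram pre-training on acoustic-unit tokenizations,
# used to initialize the CNN's unit vectors (hypothetical data).

from gensim.models import Word2Vec

# Each "sentence" is the AUD token sequence of one utterance or conversation side.
unit_sequences = [
    ["a12", "a7", "a7", "a3", "a150"],
    ["a3", "a12", "a44", "a7", "a9"],
]

w2v = Word2Vec(
    sentences=unit_sequences,
    vector_size=50,   # embedding dimension
    window=5,         # skip-gram window
    sg=1,             # skip-gram (not CBOW)
    hs=1,             # hierarchical softmax with Huffman codes
    min_count=1,
    epochs=20,
)

# The resulting vectors can then seed the embedding layer of the CNN classifier.
print(w2v.wv["a7"].shape)   # (50,)
```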
1906.00346
Pre-training of Graph Augmented Transformers for Medication Recommendation
Medication recommendation is an important healthcare application. It is commonly formulated as a temporal prediction task. Hence, most existing works only utilize longitudinal electronic health records (EHRs) from a small number of patients with multiple visits ignoring a large number of patients with a single visit (selection bias). Moreover, important hierarchical knowledge such as diagnosis hierarchy is not leveraged in the representation learning process. To address these challenges, we propose G-BERT, a new model to combine the power of Graph Neural Networks (GNNs) and BERT (Bidirectional Encoder Representations from Transformers) for medical code representation and medication recommendation. We use GNNs to represent the internal hierarchical structures of medical codes. Then we integrate the GNN representation into a transformer-based visit encoder and pre-train it on EHR data from patients only with a single visit. The pre-trained visit encoder and representation are then fine-tuned for downstream predictive tasks on longitudinal EHRs from patients with multiple visits. G-BERT is the first to bring the language model pre-training schema into the healthcare domain and it achieved state-of-the-art performance on the medication recommendation task.
{ "paragraphs": [ [ "The availability of massive electronic health records (EHR) data and the advances of deep learning technologies have provided unprecedented resource and opportunity for predictive healthcare, including the computational medication recommendation task. A number of deep learning models were proposed to assist doctors in making medication recommendation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . They often learn representations for medical entities (e.g., patients, diagnosis, medications) from patient EHR data, and then use the learned representations to predict medications that are suited to the patient's health condition.", "To provide effective medication recommendation, it is important to learn accurate representation of medical codes. Despite that various considerations were handled in previous works for improving medical code representations BIBREF4 , BIBREF2 , BIBREF3 , there are two limitations with the existing work:", "To mitigate the aforementioned limitations, we propose G-BERT that combines the pre-training techniques and graph neural networks for better medical code representation and medication recommendation. G-BERT is enabled and demonstrated by the following technical contributions:" ], [ "Medication Recommendation Medication Recommendation can be categorized into instance-based and longitudinal recommendation methods BIBREF1 . Instance-based methods focus on current health conditions. Among them, Leap BIBREF9 formulates a multi-instance multi-label learning framework and proposes a variant of sequence-to-sequence model based on content-attention mechanism to predict combination of medicines given patient's diagnoses. Longitudinal-based methods leverage the temporal dependencies among clinical events, see BIBREF10 , BIBREF11 , BIBREF12 . Among them, RETAIN BIBREF10 uses a two-level neural attention model to detect influential past visits and significant clinical variables within those visits for improved medication recommendation.", "Pre-training Techniques The goal of pre-training techniques is to provide model training with good initializations. Pre-training has been shown extremely effective in various areas such as image classification BIBREF13 , BIBREF14 and machine translation BIBREF15 . The unsupervised pre-training can be considered as a regularizer that supports better generalization from the training dataset BIBREF16 . Recently, language model pre-training techniques such as BIBREF5 , BIBREF6 , BIBREF7 have shown to largely improve the performance on multiple NLP tasks. As the most widely used one, BERT BIBREF7 builds on the Transformer BIBREF8 architecture and improves the pre-training using a masked language model for bidirectional representation. In this paper, we adapt the framework of BERT and pre-train our model on each visit of the EHR data to leverage the single-visit data that were not fit for model training in other medication recommendation models.", "Graph Neural Networks (GNN) GNNs are neural networks that learn node or graph representations from graph-structured data. Various graph neural networks have been proposed to encode the graph-structure information, including graph convolutional neural networks (GCN) BIBREF17 , message passing networks (MPNN) BIBREF18 , graph attention networks (GAT) BIBREF19 . GNNs have already been demonstrated useful on EHR modeling BIBREF20 , BIBREF1 . GRAM BIBREF20 represented a medical concept as a combination of its ancestors in the medical ontology using an attention mechanism. 
It differs from G-BERT in two aspects, as described in Section \"Input Representation\" . Another work worth mentioning is GAMENet BIBREF1 , which also used graph neural networks to assist the medication recommendation task. However, GAMENet has a different motivation, which results in using graph neural networks on drug-drug-interaction graphs instead of the medical ontology." ], [ "Definition 1 (Longitudinal Patient Records) In longitudinal EHR data, each patient can be represented as a sequence of multivariate observations: $ \mathcal {X}^{(n)} = \lbrace \mathcal {X}_1^{(n)}, \mathcal {X}_2^{(n)}, \cdots , \mathcal {X}_{T^{(n)}}^{(n)} \rbrace $ where $n\in \lbrace 1,2,\ldots , N\rbrace $ , $N$ is the total number of patients; $T^{(n)}$ is the number of visits of the $n^{th}$ patient. Here we choose two main types of medical codes to represent each visit $\mathcal {X}_t = \mathcal {C}_d^t \cup \mathcal {C}_m^t$ of a patient, which is a union set of the corresponding diagnosis codes $\mathcal {C}_d^t \subset \mathcal {C}_d$ and medication codes $\mathcal {C}_m^t \subset \mathcal {C}_m$ . For simplicity, we use $\mathcal {C}_\ast ^t$ to indicate the unified definition for different types of medical codes and drop the superscript $(n)$ for a single patient whenever it is unambiguous. $\mathcal {C}_\ast $ denotes the medical code set and $|\mathcal {C}_\ast |$ the size of the code set. $c_\ast \in \mathcal {C}_\ast $ is a medical code.", "Definition 2 (Medical Ontology) Medical codes are usually categorized according to a tree-structured classification system such as the ICD-9 ontology for diagnosis and the ATC ontology for medication. We use $\mathcal {O}_d, \mathcal {O}_m$ to denote the ontology for diagnosis and medication. Similarly, we use $\mathcal {O}_\ast $ to indicate the unified definition for different types of medical codes. In detail, $\mathcal {O}_\ast = \overline{\mathcal {C}_\ast } \cup \mathcal {C}_\ast $ where $\overline{\mathcal {C}_\ast }$ denotes the codes excluding leaf codes. For simplicity, we define two functions $pa(\cdot ), ch(\cdot )$ which accept a target medical code and return its ancestor code set and direct child code set.", "Problem Definition (Medication Recommendation) Given diagnosis codes $\mathcal {C}_d^t$ of the visit at time $t$ , patient history $\mathcal {X}_{1:t} = \lbrace \mathcal {X}_1, \mathcal {X}_2, \cdots , \mathcal {X}_{t-1}\rbrace $ , we want to recommend multiple medications by generating multi-label output $\hat{\mathbf {y}}_t \in \lbrace 0,1\rbrace ^{|\mathcal {C}_m|}$ ." ], [ "The overall framework of G-BERT is described in Figure 2 . G-BERT first derives the initial embedding of medical codes from the medical ontology using graph neural networks. Then, in order to fully utilize the rich EHR data, G-BERT constructs an adaptive BERT model on the discarded single-visit data for visit representation. Finally, we add a prediction layer and fine-tune the model on the medication recommendation task. In the following we will describe G-BERT in detail.
But first, we give a brief background of BERT, especially for the two pre-training objectives which will later be adapted to EHR data in Section \"Pre-training\" .", "Background of BERT Based on a multi-layer Transformer encoder BIBREF8 (The transformer architecture has been ubiquitously used in many sequence modeling tasks recently, so we will not introduce the details here), BERT is pre-trained using two unsupervised tasks:", "A typical input to BERT is as follows ( BIBREF7 ):", "Input = [CLS] the man went to [MASK] store [SEP] he bought a gallon [MASK] milk [SEP]", "Label = IsNext", "where [CLS] is the first token of each sentence pair to represent the special classification embedding, i.e. the final state of this token is used as the aggregated sequence representation for classification tasks; [SEP] is used to separate two sentences; [MASK] is used to mask out the predicted words in the masked language model. Using this form, these inputs facilitate the two tasks described above, and they will also be used in our method description in the following section." ], [ "The G-BERT model takes medical codes' ontology embeddings as input, and obtains intermediate representations from a Transformer encoder as the visit embeddings. It is then pre-trained on EHR data from patients who only have one hospital visit. The derived encoder and visit embedding will be fed into a classifier and fine-tuned to make predictions.", "Ontology Embedding We constructed ontology embedding from the diagnosis ontology $\mathcal {O}_d$ and the medication ontology $\mathcal {O}_m$ . Since the medical codes in raw EHR data can be considered as leaf nodes in these ontology trees, we can enhance the medical code embedding using graph neural networks (GNNs) to integrate the ancestors' information of these codes. Here we perform a two-stage procedure with a specially designed GNN for ontology embedding.", "To start, we assign an initial embedding vector to every medical code $c_\ast \in \mathcal {O}_\ast $ with a learnable embedding matrix $\mathbf {W}_e \in \mathbb {R}^{|\mathcal {O}_\ast | \times d}$ where $d$ is the embedding dimension.", "Stage 1. For each non-leaf node $c_\ast \in \overline{\mathcal {C}_\ast }$ , we obtain its enhanced medical embedding $\mathbf {h}_{c_\ast } \in \mathbb {R}^{d}$ as follows: ", "$$\mathbf {h}_{c_\ast } = g(c_\ast , ch(c_\ast ), \mathbf {W}_e)$$ (Eq. 11) ", "where $g(\cdot ,\cdot ,\cdot )$ is an aggregation function which accepts the target medical code $c_\ast $ , its direct child codes $ch(c_\ast )$ and the initial embedding matrix. Intuitively, the aggregation function passes and fuses information into the target node from its direct children, which results in ancestor code embeddings that are more closely related to their child codes' embeddings.", "Stage 2. After obtaining enhanced embeddings, we pass the enhanced embedding matrix $\mathbf {H}_e \in \mathbb {R}^{|\mathcal {O}_\ast | \times d}$ back to get ontology embeddings for leaf codes $c_\ast \in \mathcal {C}_\ast $ as follows: ", "$$\mathbf {o}_{c_\ast } = g(c_\ast , pa(c_\ast ), \mathbf {H}_e)$$ (Eq. 12) ", "where $g(\cdot ,\cdot ,\cdot )$ accepts the ancestor codes of the target medical code $c_\ast $ . Here, we use $pa(c_\ast )$ instead of $ch(c_\ast )$ , since utilizing the ancestors' embedding can indirectly associate all medical codes instead of taking each leaf code as an independent input.", "The choice of the aggregation function $g(\cdot ,\cdot ,\cdot )$ is flexible, including sum and mean.
Here we choose the one from graph attention networks (GAT) BIBREF19 , which has shown efficient embedding learning ability on graph-structured tasks, e.g., node classification and link prediction. In particular, we implement the aggregation function $g(\cdot ,\cdot ,\cdot )$ as follows: ", "$$g (c_\ast , p(c_\ast ), \mathbf {H}_e) = \operatornamewithlimits{\scalebox {1}[1.5]{\parallel }}_{k=1}^K \sigma \left( \sum _{j \in \lbrace c_\ast \rbrace \cup pa(c_\ast )} \alpha _{i,j}^k \mathbf {W}^k \mathbf {h}_j \right)$$ (Eq. 13) ", "where $\operatornamewithlimits{\scalebox {1}[1.5]{\parallel }}$ represents concatenation which enables the multi-head attention mechanism, $\sigma $ is a nonlinear activation function, $\mathbf {W}^k \in \mathbb {R}^{m \times d}$ is the weight matrix for input transformation, and $\alpha _{i,j}^k$ are the corresponding $k$ -th normalized attention coefficients computed as follows: ", "$$\alpha _{i,j}^k = \frac{e^{\left(\text{LeakyReLU}(\mathbf {a}^\intercal [\mathbf {W}^k \mathbf {h}_i || \mathbf {W}^k \mathbf {h}_j])\right)}}{\sum _{k \in \mathcal {N}_i} e^{\left(\text{LeakyReLU}(\mathbf {a}^\intercal [\mathbf {W}^k \mathbf {h}_i || \mathbf {W}^k \mathbf {h}_k])\right)}}$$ (Eq. 14) ", "where $\mathbf {a} \in \mathbb {R}^{2m}$ is a learnable weight vector and LeakyReLU is a nonlinear function (we assume $m = d/K$ ).", "As shown in Figure 2 , we construct the ICD-9 tree for diagnosis and the ATC tree for medication using the same structure. Here the direction of the arrows shows the information flow, where ancestor nodes can get information from their direct children (in stage 1) and similarly leaf nodes can get information from their connected ancestors (in stage 2).", "It is worth mentioning that our graph embedding method on the medical ontology differs from GRAM BIBREF20 in the following two aspects:", "Initialization: we initialize all the node embeddings from a learnable embedding matrix, while GRAM learns them using Glove from the co-occurrence information.", "Updating: we develop a two-step updating function for both leaf nodes and ancestor nodes; while in GRAM, only the leaf nodes are updated (as a combination of their ancestor nodes and themselves).", "Visit Embedding Similar to BERT, we use a multi-layer Transformer architecture BIBREF8 as our visit encoder. The model takes the ontology embeddings as input and derives the visit embedding $\mathbf {v}_\ast ^t \in \mathbb {R}^d$ for a patient at the $t$ -th visit: ", "$$\mathbf {v}_\ast ^t = \mathrm {Transformer}(\lbrace \textit {[CLS]}\rbrace \cup \lbrace \mathbf {o}_{c_\ast }^t| c_\ast \in \mathcal {C}_\ast ^t\rbrace )[0]$$ (Eq. 17) ", "where [CLS] is a special token as in BERT. It is put in the first position of each visit of type $\ast $ and its final state can be used as the representation of the visit. Intuitively, it is more reasonable to use Transformers as encoders (multi-head attention based architecture) than RNN or mean/sum to aggregate multiple medical embeddings into a visit embedding since the set of medical codes within one visit is not ordered.", "It is worth noting that our Transformer encoder is different from the original one in the position embedding part. Position embedding, as an important component in Transformers and BERT, is used to encode the position and order information of each token in a sequence.
However, one big difference between language sentences and EHR sequences is that the medical codes within the same visit do not generally have an order, so we remove the position embedding in our model." ], [ "We adapted the original BERT model to be more suitable for our data and task. In particular, we pre-train the model on each EHR visit (within both single-visit EHR sequences and multi-visit EHR sequences). We modified the input and pre-training objectives of the BERT model: (1) For the input, we built the Transformer encoder on the GNN outputs, i.e. ontology embeddings, for visit embedding. For the original EHR sequence, it means essentially we combine the GNN model with a Transformer to become a new integrated encoder. In addition, we removed the position embedding as we explained before. (2) As for the pre-training procedures, we modified the original pre-training tasks, i.e., the Masked LM (language model) task and the Next Sentence prediction task, to a self-prediction task and a dual-prediction task. The idea behind these tasks is to make the visit embedding absorb enough information about what it is made of and what it is able to predict.", "Thus, for the self-prediction task, we want the visit embedding $v^1_\ast $ to recover what it is made of, i.e., the input medical codes $\mathcal {C}_\ast ^t$ for each visit as follows: ", "$$\begin{aligned}\n&\mathcal {L}_{se}(\mathbf {v}^1_\ast , \mathcal {C}_\ast ^1) = -\log p(\mathcal {C}_\ast ^1 | \mathbf {v}^1_\ast ) \\\n& = - \sum _{c \in \mathcal {C}_\ast ^1}\log p(c_\ast ^1 | \mathbf {v}^1_\ast ) + \sum _{c \in \mathcal {C}_\ast \setminus \mathcal {C}_\ast ^1}\log p(c_\ast ^1 | \mathbf {v}^1_\ast )\n\end{aligned}$$ (Eq. 19) ", "We minimize the binary cross entropy loss $\mathcal {L}_{se}$ , and in practice, $p(c_\ast ^1 | \mathbf {v}^1_\ast )$ is modeled as $\text{Sigmoid}(f(\mathbf {v}_\ast ))$ , where $f(\cdot )$ is a fully connected neural network with one hidden layer. By analogy to the Masked LM task in BERT, we also use the special symbol [MASK] to randomly replace original medical codes $c_\ast \in \mathcal {C}_\ast ^1$ . So $15\%$ of the codes in $\mathcal {C}_\ast $ are replaced randomly, and the model should have the ability to predict the masked codes based on the others.", "Likewise, for the dual-prediction task, since the visit embedding $\mathbf {v}_\ast $ carries the information of medical codes of type $\ast $ , we can further expect it to have the ability to do more task-specific prediction as follows: ", "$$\begin{aligned}\n\mathcal {L}_{du} = -\log p(\mathcal {C}_d^1 | \mathbf {v}_m^1) - \log p(\mathcal {C}_m^1 | \mathbf {v}_d^1)\n\end{aligned}$$ (Eq. 20) ", "where we use the same form of transformation function, $\text{Sigmoid}(f_1(\mathbf {v}_m^1))$ and $\text{Sigmoid}(f_2(\mathbf {v}_d^1))$ with different weight matrices, to transform the visit embeddings, and optimize the binary cross entropy loss $\mathcal {L}_{du}$ , expanded in the same way as $\mathcal {L}_{se}$ in Eq. 19 . This is a direct adaptation of the next sentence prediction task. In BERT, the next sentence prediction task facilitates the prediction of sentence relations, which is a common task in NLP. However, in healthcare, most predictive tasks do not have a sequence pair to classify. Instead, we are often interested in predicting unknown disease or medication codes of the sequence. For example, in medication recommendation, we want to predict multiple medications given only the diagnosis codes.
Inversely, we can also predict unknown diagnosis given the medication codes.", "Thus, our final pre-training optimization objective can simply be the combination of the aforementioned losses, as shown in Eq. 21 . It is used to train on EHR data from all patients who only have one hospital visits.. ", "$$\\begin{aligned}\n\\mathcal {L}_{pr} = \\frac{1}{N} \\sum _{n=1}^N ((\n&\\mathcal {L}_{se}(\\mathbf {v}_d^{1, (n)}, \\mathcal {C}_d^{1, (n)}) + \\mathcal {L}_{se}(\\mathbf {v}_m^{1, (n)}, \\mathcal {C}_m^{1, (n)})\\\\\n& + \\mathcal {L}_{du}^{(n)})\n)\n\\end{aligned}$$ (Eq. 21) " ], [ "After obtaining pre-trained visit representation for each visit, for a prediction task on a multi-visit sequence data, we aggregate all the visit embedding and add a prediction layer for the medication recommendation task. To be specific, from pre-training on all visits, we have a pre-trained Transformer encoder, which can then be used to get the visit embedding $\\mathbf {v}_*^\\tau $ at time $\\tau $ . The known diagnosis codes $\\mathcal {C}_d^t$ at the prediction time $t$ is also represented using the same model as $\\mathbf {v}_*^t$ . Concatenating the mean of previous diagnoses visit embeddings and medication visit embeddings, also the last diagnoses visit embedding, we built an MLP based prediction layer to predict the recommended medication codes as in Equation 23 . ", "$$\\mathbf {y}_t = \\mathrm {Sigmoid}(\\mathbf {W}_1 [(\\frac{1}{t}\\sum _{\\tau < t} \\mathbf {v}_d^\\tau ) || (\\frac{1}{t}\\sum _{\\tau < t} \\mathbf {v}_m^\\tau ) ||\\mathbf {v}_d^t] + b)$$ (Eq. 23) ", "where $\\mathbf {W}_1 \\in \\mathbb {R}^{|\\mathcal {C}_m| \\times 3d} $ is a learnable transformation matrix.", "Given the true labels $\\hat{\\mathbf {y}}_t$ at each time stamp $t$ , the loss function for the whole EHR sequence (i.e. a patient) is ", "$$\\mathcal {L} = - \\frac{1}{T-1} \\sum _{t=2}^T (\\mathbf {y}_t^\\intercal \\log (\\hat{\\mathbf {y}}_t) + (1 - \\mathbf {y}_t^\\intercal ) \\log (1 - \\hat{\\mathbf {y}}_t))$$ (Eq. 24) " ], [ "Data We used EHR data from MIMIC-III BIBREF21 and conducted all our experiments on a cohort where patients have more than one visit. We utilize data from patients with both single visit and multiple visits in the training dataset as pre-training data source (multi-visit data are split into visit slices). In this work, we transform the drug coding from NDC to ATC Third Level for using the ontology information. The statistics of the datasets are summarized in Table 2 .", "Baseline We compared G-BERT with the following baselines. All methods are implemented in PyTorch BIBREF22 and trained on an Ubuntu 16.04 with 8GB memory and Nvidia 1080 GPU.", "We also evaluated three G-BERT variants for model ablation.", "Metrics To measure the prediction accuracy, we used Jaccard Similarity Score (Jaccard), Average F1 (F1) and Precision Recall AUC (PR-AUC). Jaccard is defined as the size of the intersection divided by the size of the union of ground truth set $Y_t^{(k)}$ and predicted set $\\hat{Y}_t^{(k)}$ . ", "$$\\text{Jaccard} = \\frac{1}{\\sum _k^N \\sum _t^{T_k} 1}\\sum _k^N \\sum _t^{T_k} \\frac{|Y_t^{(k)} \\cap \\hat{Y}_t^{(k)}|}{|Y_t^{(k)} \\cup \\hat{Y}_t^{(k)}|}\\nonumber $$ (Eq. 36) ", "where $N$ is the number of patients in test set and $T_k$ is the number of visits of the $k^{th}$ patient.", "Implementation Details We randomly divide the dataset into training, validation and testing set in a $0.6 : 0.2 : 0.2$ ratio. 
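Before the model-specific settings below, the Jaccard score in Eq. 36 above is simple enough to sketch directly; the nested list-of-sets layout for predictions and ground truth is our own assumption for illustration, not the paper's exact data API.

```python
def jaccard_score(y_true, y_pred):
    """Eq. 36: mean per-visit Jaccard over all patients and all of their visits.

    y_true / y_pred: list over patients, each a list over visits,
    each visit a set of medical codes (an assumed layout).
    """
    total, count = 0.0, 0
    for true_visits, pred_visits in zip(y_true, y_pred):
        for truth, pred in zip(true_visits, pred_visits):
            union = truth | pred
            if union:                      # guard against empty visits
                total += len(truth & pred) / len(union)
            count += 1
    return total / max(count, 1)

# toy example: one patient with two visits -> (1/2 + 1/2) / 2 = 0.5
print(jaccard_score([[{"A01", "B02"}, {"C03"}]],
                    [[{"A01"},        {"C03", "D04"}]]))
```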
For G-BERT, the hyperparameters are tuned on the validation set: (1) GAT part: input embedding dimension of 75 and 4 attention heads; (2) BERT part: hidden dimension of 300, position-wise feed-forward dimension of 300, and 2 hidden layers with 4 attention heads each. Specifically, we alternated between pre-training for 5 epochs and fine-tuning for 5 epochs, repeating this 15 times to stabilize training.", "For LR, we use a grid search over a typical range of hyper-parameters, which results in an L1 penalty with weight $1.1$ . For the deep learning models, we implemented the RNN using a gated recurrent unit (GRU) BIBREF24 and applied dropout with a probability of 0.4 on the output of the embedding layer. We tested several embedding choices for the baseline methods and set the medical embedding dimension to 300 and the threshold for the final prediction to 0.3 for better performance. Training is done with Adam BIBREF25 at a learning rate of 5e-4. We select the best model on the validation set within 100 epochs and report its performance on the test set." ], [ "Experimental Results Table 3 compares the performance on the medication recommendation task. Among the variants of G-BERT, $\texttt {G-BERT}_{G^-,P^-}$ performs worse than $\texttt {G-BERT}_{G^-}$ and $\texttt {G-BERT}_{P^-}$ , which demonstrates the effectiveness of using ontology information to obtain enhanced medical embeddings as input and of employing an unsupervised pre-training procedure on larger, more abundant data. Incorporating both the hierarchical ontology information and the pre-training procedure, the end-to-end model G-BERT has more capacity and achieves results comparable to the others.", "As for the baseline models, LR and Leap are worse than our most basic model ( $\texttt {G-BERT}_{G^-,P^-}$ ) on most metrics. Comparing $\texttt {G-BERT}_{P^-}$ and GRAM, which both use medical ontology information without pre-training, the scores of our $\texttt {G-BERT}_{P^-}$ are slightly higher on all metrics. This demonstrates the validity of using Transformer encoders and the task-specific prediction layer for medication recommendation. Our final model G-BERT is also better than the attention-based model RETAIN and the recently published state-of-the-art model GAMENet. Notably, even with the extra information of DDI knowledge and procedure codes, GAMENet still performs worse than G-BERT.", "In addition, we visualized the pre-trained medical code embeddings of $\texttt {G-BERT}_{G^-}$ and G-BERT using the online embedding projector to show the effectiveness of the ontology embedding, shown at (https://raw.githubusercontent.com/jshang123/G-Bert/master/saved/tsne.png/)." ], [ "In this paper we proposed a pre-training model named G-BERT for medical code representation and medication recommendation. To the best of our knowledge, G-BERT is the first model that utilizes language model pre-training techniques in the healthcare domain. It adapts BERT to EHR data and integrates medical ontology information using graph neural networks. By additionally pre-training on the EHR of patients who have only one hospital visit, whose records are generally discarded before model training, G-BERT outperforms all baselines in prediction accuracy on the medication recommendation task. One direction for future work is to add more auxiliary and structural tasks to improve the code representations. Another direction is to adapt our model to even larger datasets with more heterogeneous modalities."
], [ "This work was supported by the National Science Foundation award IIS-1418511, CCF-1533768 and IIS-1838042, the National Institute of Health award 1R01MD011682-01 and R56HL138415." ] ], "section_name": [ "Introduction", "Related Work", "Problem Formalization", "Method", "Input Representation", "Pre-training", "Fine-tuning", "Experiment", "Results", "Conclusion", "Acknowledgment" ] }
{ "answers": [ { "annotation_id": [ "5103743e0b630c27ab4f9edb52bad084202bd16f" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 2: The framework of G-BERT. It consists of three main parts: ontology embedding, BERT and fine-tuned classifier. Firstly, we derive ontology embedding for medical code laid in leaf nodes by cooperating ancestors information by Eq. 1 and 2 based on graph attention networks (Eq. 3, 4). Then we input set of diagnosis and medication ontology embedding separately to shared weight BERT which is pretrained using Eq. 6, 7, 8. Finally, we concatenate the mean of all previous visit embeddings and the last visit embedding as input and fine-tune the prediction layers using Eq. 10 for medication recommendation tasks.", "We adapted the original BERT model to be more suitable for our data and task. In particular, we pre-train the model on each EHR visit (within both single-visit EHR sequences and multi-visit EHR sequences). We modified the input and pre-training objectives of the BERT model: (1) For the input, we built the Transformer encoder on the GNN outputs, i.e. ontology embeddings, for visit embedding. For the original EHR sequence, it means essentially we combine the GNN model with a Transformer to become a new integrated encoder. In addition, we removed the position embedding as we explained before. (2) As for the pre-training procedures, we modified the original pre-training tasks i.e., Masked LM (language model) task and Next Sentence prediction task to self-prediction task and dual-prediction task. The idea to conduct these tasks is to make the visit embedding absorb enough information about what it is made of and what it is able to predict.", "Thus, for the self-prediction task, we want the visit embedding $v^1_\\ast $ to recover what it is made of, i.e., the input medical codes $\\mathcal {C}_\\ast ^t$ for each visit as follows:", "$$\\begin{aligned} &\\mathcal {L}_{se}(\\mathbf {v}^1_\\ast , \\mathcal {C}_\\ast ^1) = -\\log p(\\mathcal {C}_\\ast ^1 | \\mathbf {v}^1_\\ast ) \\\\ & = - \\sum _{c \\in \\mathcal {C}_\\ast ^1}\\log p(c_\\ast ^1 | \\mathbf {v}^1_\\ast ) + \\sum _{c \\in \\mathcal {C}_\\ast \\setminus \\mathcal {C}_\\ast ^1}\\log p(c_\\ast ^1 | \\mathbf {v}^1_\\ast ) \\end{aligned}$$ (Eq. 19)", "Likewise, for the dual-prediction task, since the visit embedding $\\mathbf {v}_\\ast $ carries the information of medical codes of type $\\ast $ , we can further expect it has the ability to do more task-specific prediction as follows:", "$$\\begin{aligned} \\mathcal {L}_{du} = -\\log p(\\mathcal {C}_d^1 | \\mathbf {v}_m^1) - \\log p(\\mathcal {C}_m^1 | \\mathbf {v}_d^1) \\end{aligned}$$ (Eq. 20)", "FLOAT SELECTED: Table 1: Notations used in G-BERT" ], "extractive_spans": [], "free_form_answer": "The graph representation appears to be semi-supervised. It is included in the learning pipeline for the medical recommendation, where the attention model is learned. (There is some additional evidence that is unavailable in parsed text)", "highlighted_evidence": [ "FLOAT SELECTED: Figure 2: The framework of G-BERT. It consists of three main parts: ontology embedding, BERT and fine-tuned classifier. Firstly, we derive ontology embedding for medical code laid in leaf nodes by cooperating ancestors information by Eq. 1 and 2 based on graph attention networks (Eq. 3, 4). Then we input set of diagnosis and medication ontology embedding separately to shared weight BERT which is pretrained using Eq. 6, 7, 8. 
Finally, we concatenate the mean of all previous visit embeddings and the last visit embedding as input and fine-tune the prediction layers using Eq. 10 for medication recommendation tasks.", "For the original EHR sequence, it means essentially we combine the GNN model with a Transformer to become a new integrated encoder.", "Thus, for the self-prediction task, we want the visit embedding $v^1_\\ast $ to recover what it is made of, i.e., the input medical codes $\\mathcal {C}_\\ast ^t$ for each visit as follows:\n\n$$\\begin{aligned} &\\mathcal {L}_{se}(\\mathbf {v}^1_\\ast , \\mathcal {C}_\\ast ^1) = -\\log p(\\mathcal {C}_\\ast ^1 | \\mathbf {v}^1_\\ast ) \\\\ & = - \\sum _{c \\in \\mathcal {C}_\\ast ^1}\\log p(c_\\ast ^1 | \\mathbf {v}^1_\\ast ) + \\sum _{c \\in \\mathcal {C}_\\ast \\setminus \\mathcal {C}_\\ast ^1}\\log p(c_\\ast ^1 | \\mathbf {v}^1_\\ast ) \\end{aligned}$$ (Eq. 19)\n\n", "Likewise, for the dual-prediction task, since the visit embedding $\\mathbf {v}_\\ast $ carries the information of medical codes of type $\\ast $ , we can further expect it has the ability to do more task-specific prediction as follows:\n\n$$\\begin{aligned} \\mathcal {L}_{du} = -\\log p(\\mathcal {C}_d^1 | \\mathbf {v}_m^1) - \\log p(\\mathcal {C}_m^1 | \\mathbf {v}_d^1) \\end{aligned}$$ (Eq. 20)", "FLOAT SELECTED: Table 1: Notations used in G-BERT" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] }, { "annotation_id": [ "87268ef68147daa75667e2e3562d6ce5bc18445a", "cea7fb106f1a9532fb21ff81c8dfe3d87f70efde" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 2: The framework of G-BERT. It consists of three main parts: ontology embedding, BERT and fine-tuned classifier. Firstly, we derive ontology embedding for medical code laid in leaf nodes by cooperating ancestors information by Eq. 1 and 2 based on graph attention networks (Eq. 3, 4). Then we input set of diagnosis and medication ontology embedding separately to shared weight BERT which is pretrained using Eq. 6, 7, 8. Finally, we concatenate the mean of all previous visit embeddings and the last visit embedding as input and fine-tune the prediction layers using Eq. 10 for medication recommendation tasks." ], "extractive_spans": [], "free_form_answer": "There is nothing specific about the approach that depends on medical recommendations. The approach combines graph data and text data into a single embedding.", "highlighted_evidence": [ "FLOAT SELECTED: Figure 2: The framework of G-BERT. It consists of three main parts: ontology embedding, BERT and fine-tuned classifier. Firstly, we derive ontology embedding for medical code laid in leaf nodes by cooperating ancestors information by Eq. 1 and 2 based on graph attention networks (Eq. 3, 4). Then we input set of diagnosis and medication ontology embedding separately to shared weight BERT which is pretrained using Eq. 6, 7, 8. Finally, we concatenate the mean of all previous visit embeddings and the last visit embedding as input and fine-tune the prediction layers using Eq. 10 for medication recommendation tasks." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper we proposed a pre-training model named G-BERT for medical code representation and medication recommendation. To our best knowledge, G-BERT is the first that utilizes language model pre-training techniques in healthcare domain. It adapted BERT to the EHR data and integrated medical ontology information using graph neural networks. 
By additional pre-training on the EHR from patients who only have one hospital visit which are generally discarded before model training, G-BERT outperforms all baselines in prediction accuracy on medication recommendation task. One direction for the future work is to add more auxiliary and structural tasks to improve the ability of code representaion. Another direction may be to adapt our model to be suitable for even larger datasets with more heterogeneous modalities." ], "extractive_spans": [], "free_form_answer": "It learns a representation of medical records. The learned representation (embeddings) can be used for other predictive tasks involving information from electronic health records.", "highlighted_evidence": [ "In this paper we proposed a pre-training model named G-BERT for medical code representation and medication recommendation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499", "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "IS the graph representation supervised?", "Is the G-BERT model useful beyond the task considered?" ], "question_id": [ "d201b9992809142fe59ae74508bc576f8ca538ff", "c4628d965983934d7a2a9797a2de6a411629d5bc" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "transformers", "transformers" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Graphical illustration of ICD-9 Ontology", "Figure 2: The framework of G-BERT. It consists of three main parts: ontology embedding, BERT and fine-tuned classifier. Firstly, we derive ontology embedding for medical code laid in leaf nodes by cooperating ancestors information by Eq. 1 and 2 based on graph attention networks (Eq. 3, 4). Then we input set of diagnosis and medication ontology embedding separately to shared weight BERT which is pretrained using Eq. 6, 7, 8. Finally, we concatenate the mean of all previous visit embeddings and the last visit embedding as input and fine-tune the prediction layers using Eq. 10 for medication recommendation tasks.", "Table 1: Notations used in G-BERT", "Table 2: Statistics of the Data", "Table 3: Performance on Medication Recommendation Task." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "3-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png" ] }
[ "IS the graph representation supervised?", "Is the G-BERT model useful beyond the task considered?" ]
[ [ "1906.00346-3-Table1-1.png", "1906.00346-3-Figure2-1.png", "1906.00346-Pre-training-0" ], [ "1906.00346-3-Figure2-1.png", "1906.00346-Conclusion-0" ] ]
[ "The graph representation appears to be semi-supervised. It is included in the learning pipeline for the medical recommendation, where the attention model is learned. (There is some additional evidence that is unavailable in parsed text)", "It learns a representation of medical records. The learned representation (embeddings) can be used for other predictive tasks involving information from electronic health records." ]
263
1903.11283
Grammatical Error Correction and Style Transfer via Zero-shot Monolingual Translation
Both grammatical error correction and text style transfer can be viewed as monolingual sequence-to-sequence transformation tasks, but the scarcity of directly annotated data for either task makes them unfeasible for most languages. We present an approach that does both tasks within the same trained model, and only uses regular language parallel data, without requiring error-corrected or style-adapted texts. We apply our model to three languages and present a thorough evaluation on both tasks, showing that the model is reliable for a number of error types and style transfer aspects.
{ "paragraphs": [ [ "Sequence-to-sequence (seq2seq) transformations have recently proven to be a successful framework for several natural language processing tasks, like: machine translation (MT) BIBREF0 , BIBREF1 , speech recognition BIBREF2 , speech synthesis BIBREF3 , natural language inference BIBREF4 and others. However, the success of these models depends on the availability of large amounts of directly annotated data for the task at hand (like translation examples, text segments and their speech recordings, etc.). This is a severe limitation for tasks where data is not abundantly available as well as for low-resource languages.", "Here we focus on two such tasks: grammatical error correction (GEC) and style transfer. Modern approaches to GEC learn from parallel corpora of erroneous segments and their manual corrections BIBREF5 , BIBREF6 ; text style transfer also relies on supervised approaches that require texts of the same meaning and different styles BIBREF7 , BIBREF8 or imprecise unsupervised methods BIBREF9 , BIBREF10 .", "In this paper we introduce an approach to performing both GEC and style transfer with the same trained model, while not using any supervised training data for either task. It is based on zero-shot neural machine translation (NMT) BIBREF11 , and as such, the only kind of data it uses is regular parallel corpora (with texts and their translations). However, we apply the model to do monolingual transfer, asking to translate the input segment into the same language. We show, that this “monolingual translation” is what enables the model to correct the errors in the input as well as adapt the output into a desired style. Moreover, the same trained model performs both tasks on several languages.", "Our main contributions are thus: (i) a single method for both style transfer and grammatical error correction, without using annotated data for either task, (ii) support for both tasks on multiple languages within the same model, (iii) a thorough quantitative and qualitative manual evaluation of the model on both tasks, and (iv) highlighting of the model's reliability aspects on both tasks. We used publicly available software and data; an online demo of our results is available, but concealed for anonymization purposes.", "We describe the details of our approach in Section SECREF2 , then evaluate it in terms of performance in grammatical error correction in Section SECREF3 and in style transfer in Section SECREF4 . The paper ends with a review of related work in Section SECREF5 and conclusions in Section SECREF6 ." ], [ "As mentioned in the introduction, our approach is based on the idea of zero-shot MT BIBREF11 . There the authors show that after training a single model to translate from Portuguese to English as well as from English to Spanish, it can also translate Portuguese into Spanish, without seeing any translation examples for this language pair. We use the zero-shot effect to achieve monolingual translation by training the model on bilingual examples in both directions, and then doing translation into the same language as the input: illustrated on Figure FIGREF1 .", "With regular sentences monolingual translation does not seem useful, as its behaviour mainly consists of copying. However, when the input sentence has characteristics unseen or rarely seen by the model at training time (like grammatical errors or different stylistic choices) – the decoder still generates the more regular version of the sentence (thus fixing the errors or adapting the style). 
Furthermore, in case of multilingual multi-domain NMT BIBREF12 , it is possible to switch between different domains or styles at runtime, thus performing “monolingual domain adaptation” or style transfer.", "To create a multilingual multi-domain NMT system we use the self-attention architecture BIBREF13 . Instead of specifying the output language with a token inside the input sequence, as BIBREF11 did, we follow BIBREF12 and use word features (or factors). On one hand, this provides a stronger signal for the model, and on the other – allows for additional parametrization, which in our case is the text domain/style of the corpus.", "As a result, a pre-processed English-Latvian training set sentence pair “Hello!”–“Sveiki!” looks like:", "Here 2lv and 2os specify Latvian and OpenSubtitles as the output language and domain; the output text has no factors to predict. At application time we simply use the same input and output languages, for example the grammatically incorrect input “we is” looks like the following, after pre-processing:", "The intuition behind our approach is that a multilingual shared encoder produces semantically rich latent sentence representations BIBREF14 , which provide a solid ground for the effective style transfer on top.", "Next we present the technical details, the experiment setup and the data we used for training the model used in the experiments." ], [ "We use three languages in our experiments: English, Estonian and Latvian. All three have different characteristics, for example Latvian and (especially) Estonian are morphologically complex and have loose word order, while English has a strict word order and the morphology is much simpler. Most importantly, all three languages have error-corrected corpora for testing purposes, though work on their automatic grammatical error correction is extremely limited (see Section SECREF3 ).", "The corpora we use for training the model are OpenSubtitles2018 BIBREF15 , Europarl BIBREF16 , JRC-Acquis and EMEA BIBREF17 . We assume that there should be sufficient stylistic difference between these corpora, especially between the more informal OpenSubtitles2018 (comprised of movie and TV subtitles) on one hand and Europarl and JRC-Acquis (proceedings and documents of the European Parliament) on the other." ], [ "For Europarl, JRC-Acquis and EMEA we use all data available for English-Estonian, English-Latvian and Estonian-Latvian language pairs. From OpenSubtitles2018 we take a random subset of 3M sentence pairs for English-Estonian, which is still more than English-Latvian and Estonian-Latvian (below 1M; there we use the whole corpus). This is done to balance the corpora representation and to limit the size of training data.", "Details on the model hyper-parameters, data pre-processing and training can be found in Appendix SECREF7 ." ], [ "First, we evaluate our model in the context of MT, as the translation quality can be expected to have influence on the other tasks that the model performs. We use public benchmarks for Estonian-English and Latvian-English translations from the news translation shared tasks of WMT 2017 and 2018 BIBREF18 , BIBREF19 . The BLEU scores for each translation direction and all included styles/domains are shown in Table TABREF6 .", "Some surface notes on these results: the BLEU scores for translation from and into Latvian are below English-Estonian scores, which is likely explained by smaller datasets that include Latvian. 
Also, translation into English has higher scores than into Estonian/Latvian, which is also expected.", "An interesting side-effect we have observed is the model's resilience to code-switching in the input text. The reason is that the model is trained with only the target language (and domain), and not the source language, as a result of which it learns language normalization of sorts. For example, the sentence “Ma tahan two saldējumus.” (“Ma tahan” / “I want” in Estonian, “two” and “saldējumus” / “ice-creams” in genitive, plural in Latvian) is correctly translated into English as “I want two ice creams.”. See more examples in Appendix SECREF8 ." ], [ "In this section we evaluate our model's performance in the GEC task: for example, for the English input “huge fan I are”, our model's output is “I am a huge fan”; this section's goal is to systematically check, how reliable the corrections are.", "Although GEC does not require any distinction in text style, the core idea of this article is to also perform style transfer with the same multilingual multi-domain model. That only means that for GEC we have to select an output domain/style when producing error corrections.", "Naturally, the model only copes with some kinds of errors and fails on others – for instance, word order is restored, as long as it does not affect the perception of the meaning. On the other hand, we do not expect orthographic variations like typos to be fixed reliably, since they affect the sub-word segmentation of the input and thus can hinder the translation.", "Below we present qualitative and quantitative analysis of our model's GEC results, showing its overall performance, as well as which kinds of errors are handled reliably and which are not." ], [ "We use the following error-corrected corpora both for scoring and as basis for manual analysis:", "for English: CoNLL-2014 BIBREF5 and JFLEG BIBREF20 corpora", "for Estonian: the Learner Language Corpus BIBREF21 ", "for Latvian: the Error-annotated Corpus of Latvian BIBREF22 ", "All of these are based on language learner (L2) essays and their manual corrections.", "To evaluate the model quantitatively we used two metrics: the Max-Match (M INLINEFORM0 ) metric from the CoNLL-2014 shared task scorer, and the GLEU score BIBREF23 for the other corpora. The main difference is that M INLINEFORM1 is based on the annotation of error categories, while the GLEU score compares the automatic correction to a reference without any error categorization." ], [ "The M INLINEFORM0 scores are computed based on error-annotated corpora. Since error annotations were only available for English, we calculated the scores on English CoNLL corpus, see Table TABREF12 ).", "Our model gets the M INLINEFORM0 score of 32.1. While it does not reach the score of the best CoNLL model BIBREF24 or the state-of-the-art BIBREF25 , these use annotated corpora to train. Our results count as restricted in CoNLL definitions and are more directly comparable to the classifier-based approach trained on unannotated corpora by BIBREF26 , while requiring even less effort.", "The GLEU scores can be seen in Table TABREF13 . We calculated GLEU for both formal and informal style models for all three languages. For English our model's best score was 45.9 and for Estonian it was 38.1. Latvian corrected output in fact get worse scores than the original uncorrected corpus, which can be explained by smaller training corpora and worse MT quality for Latvian (see Table TABREF6 )." 
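As a brief recap before the qualitative analysis, the factored input described in Section 2 — every source token carrying the desired output language and style (e.g. 2lv, 2os) as word features — can be illustrated with a small helper. The '|' separator and the exact tag strings below are assumptions for illustration only; the concrete on-disk serialization depends on the NMT toolkit's factor format, and the Europarl style tag in particular is not given in the paper.

```python
def add_factors(tokens, tgt_lang, tgt_style):
    """Attach output-language and output-style factors to every source token.

    The '2xx' factor names mirror the paper's examples (2lv, 2os); the '|'
    separator and the '2ep' Europarl tag are our own illustrative choices.
    """
    return " ".join(f"{tok}|2{tgt_lang}|2{tgt_style}" for tok in tokens)

# bilingual training example (English -> Latvian, OpenSubtitles style)
print(add_factors(["Hello", "!"], "lv", "os"))      # Hello|2lv|2os !|2lv|2os

# monolingual application: ask for English output in a more formal (Europarl-like) style
print(add_factors(["we", "is"], "en", "ep"))        # we|2en|2ep is|2en|2ep
```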
], [ "We looked at the automatic corrections for 100 erroneous sentences of English and Estonian each as well as 80 sentences of Latvian. The overall aim was to find the ratio of sentences where (1) all errors have been corrected (2) only some are corrected (3) only some are corrected and part of the meaning is changed and (4) all meaning is lost.", "The analysis was done separately for four error types: spelling and grammatical errors, word choice and word order. In case a sentence included more that one error type it was counted once for each error type. For English the first two types were annotated in the corpus, the rest were annotated by us, separating the original third error category into two new ones. The results can be seen in Table TABREF15 .", "Not all English sentences included errors. 30 sentences remained unchanged, out of which 17 had no mistakes in them. For the changed sentences 87% were fully or partially corrected. In case of Estonian, where all sentences had mistakes, 61 out of the 100 sentences were fully or partially corrected without loss of information. 12 sentences became nonsense, all of which originally had some lexical mistakes. For English the results are similar: the most confusing type of errors that leads to complete loss of meaning is word choice. On the other hand, this was also the most common error type for both languages and errors of that type were fully corrected in 45% of cases for Estonian and 72% for English. Using words in the wrong order is a typical learner's error for Estonian that has rather free word order. It is also difficult to describe or set rules for this type of error. Our model manages this type rather well, correcting 79% of sentences acceptably, only losing some meaning in 2 sentences including this error type.", "A similar experiment using 80 Latvian sentences yielded 17 fully corrected sentences, 15, 22 and 26 respectively for the other categories. As the Latvian model is weaker in general, this also leads to more chances of losing some of the meaning; we exclude it from the more detailed analysis and focus on English and Estonian.", "Our model handles punctuation, word order mistakes and grammatical errors well. For example the subject-verb disagreement in English UID16 and verb-object disagreement in Estonian UID19 have been corrected.", "“When price of gas goes up , the consumer do not want buy gas for fuels”", "“When the price of gas goes up, the consumer doesn't want to buy gas for fuels”", "“Sellepärast ütleb ta filmi lõpus, et tahab oma unistuse tagasi”", "“Sellepärast ütleb ta filmi lõpus, et tahab oma unistust tagasi”", "that's-why says he film at-end, that (he)-wants his-own dream INLINEFORM0 ", "Sentences that include several error types are generally noticeably more difficult to correct. Depending on the error types that have been combined our model manages quite well and corrects all or several errors present. The sentence UID23 includes mistakes with word order and word choice: the argument \"vabaainetele\" (to elective courses) here should precede the verb and the verb \"registreeruma\" (register oneself) takes no such argument. 
Our model corrects both mistakes while also replacing the word \"seejärel\" (after that) with its synonym.", "Seejärel pidi igaüks ennast registreeruma vabaainetele.", "then had-to everyone oneself register-oneself to-free-courses", "Siis pidi igaüks end vabaainetele registreerima.", "then had-to everyone oneself to-free-courses register", "The model fixes typos, but it mainly manages cases where two letters are needed but one is written and vice versa, for example \"detailled\" is corrected to \"detailed\" and ‘planing’ to \"planning\". More complicated mistakes are missed, especially if combined with other error types, and in some sentences a misspelled word is changed into an incorrect form that has a common ending, like \"misundrestood\" to \"misundrested\". The results get better if the input has been automatically spell-checked.", "The system does more changes than are strictly necessary and often replaces correct words and phrases, for example \"frequently\" was changed to to \"often\" or in Estonian “öelda” (\"say\") to “avaldada” (\"publish\"). Sometimes this also confuses the meaning: \"siblings\" was changed to \"friends\".", "To conclude this section, our model reliable corrects grammatical, spelling and word order errors on , with more mixed performance on lexical choice errors and some unnecessary paraphrasing of the input. The error types that the model manages well can be traced back to having a strong monolingual language model, a common trait of a good NMT model. As the model operates on the level of word parts and its vocabulary is limited, this leads to combining wrong word parts, sometimes across languages. This could be fixed by either using character-based NMT or doing automatic spelling correction prior to applying our current model.", "We limit further comparisons to two styles, translating sentences of the OpenSubtitles test set into the style of Europarl and vice versa. Our assumption is that, generally, movie subtitles gravitate towards the more informal style, and parliament proceedings towards the more formal (see examples of translations into those styles in Table TABREF30 ). Preliminary tests showed that JRC-Acquis and EMEA texts resulted in practically the same style as Europarl. We also leave Latvian out of the evaluations, assuming that its performance is weaker, similarly to GEC results.", "Human evaluation was performed on a subset of 100 sentences, 50 of them selected randomly from the OpenSubtitles test set and the other 50 from Europarl. Each sentence was translated into the opposite style. The resulting pairs were presented to participants, who were asked the following questions about each of them: (1) Do the sentences differ in any way? (2) How fluent is the translated sentence? (On a scale of 1 to 4, where 1 is unreadable, and 4 is perfectly fluent); (3) How similar are the sentences in meaning? (With options \"exactly the same\", \"the same with minor changes\", \"more or less the same\", \"completely different or nonsensical\"); (4) Does the translated sentence sound more formal than the original, more informal, or neither? (5) What differences are there between the sentences? (E.g. grammatical, lexical, missing words or phrases, word order, contractions, the use of formal \"you\").", "Two such surveys were conducted, one in English and one in Estonian. 
3 people participated in each of them, each of the three evaluators presented with the same set of examples.", "In evaluation of fluency, all three human evaluators gave the translated sentences the same score in 41 out of 55 cases in English (not taking into account sentences which were simply copied from their originals), and in 51 out of 68 cases in Estonian. In evaluation of direction of style transfer, all three evaluators agreed in 16 cases and at least two agreed in 43 cases in English, and in Estonian in 19 cases all three agreed and in 59 at least two.", "Of the 100 translated sentences, 45 were marked by all participants as being the same as their original sentences in the English set and 32 in Estonian. The remaining 55 and 68, respectively, were used to quantify style transfer quality.", "Being a reasonably strong MT system, our model scores quite high on fluency (3.84 for English, 3.64 for Estonian) and meaning preservation (3.67 for English, 3.35 for Estonian). For meaning preservation, the judgments were converted into a scale of 1-4, where 1 stands for completely different meanings or nonsensical sentences, and 4 for the exact same meaning.", "We evaluated the style transfer itself in the following way. For each pair of sentences, the average score given by three evaluators was calculated, in which the answer that the translated sentence is more formal counts as +1, more informal as -1, and neither as 0. We calculated the root mean square error between these scores and desired values (+1 if we aimed to make the sentence more formal, -1 if more informal). RMSE of 0 would stand for always transferring style in the right direction as judged by all evaluators, and 2 for always transferring style in the wrong direction.", "On the English set, the RMSE is 0.78, and on Estonian 0.89. These numbers show that style transfer generally happens in the right direction, but is not very strong. Of the 55 sentences in English that were different from their source sentences, in 33 cases the sign of the average human score matched the desired one, in 7 it did not, and in 15 no change in style was observed by humans. In Estonian 36 sentences showed the right direction of style transfer, 10 wrong, and 22 no change.", "In English sentences where the direction of style transfer was found to be correct (Figure FIGREF32 ), changes in use of contractions were reported in 19 cases (e.g. I have just been vs. I've just been), lexical changes in 15 cases (e.g. 'cause vs. because, or sure vs. certainly), grammatical in 13 (e.g. replacing no one's gonna with no one will, or method of production with production method), missing or added words or phrases in 8 cases.", "In Estonian correctly transferred sentences (Figure FIGREF32 ), the most frequently reported were lexical substitutions (30 cases), followed by missing of added words or phrases (24 cases), changes in grammar (22 cases) and in word order (16 cases).", "To conclude this section, unlike many style transfer models which produce text with strong style characteristics (e.g. with strong positive or negative sentiment), often at the cost of preserving meaning and fluency, our model gravitates towards keeping the meaning and fluency of the original sentence intact and mimicking some of the desired stylistic traits." ], [ "Next we move on to evaluating the same model in terms of its performance in the context of style transfer.", "At first, we examined how often the sentences change when translated monolingually. 
The assumption is that passing modified style factors should prevent the model from simply copying the source sequences when translating inside a single language, and incentivize it to match its output to certain style characteristics typical for different corpora. Figure FIGREF28 shows the proportions of sentence pairs in the 1000-sentence test sets where there was a significant difference between translations into different styles. We can observe that English texts change less often than Estonian or Latvian, while Europarl sentences are changed more often than those of other corpora.", "To assess whether these changes actually correspond to the model's capability for transferring style, we turned to help of human evaluators." ], [ "Being able to translate between languages and also to modify the output to match the desired style allows the model to essentially perform domain adaptation. When translating from a language which has no formal \"you\" (English) into one that does (Estonian or Latvian), it will quite consistently use the informal variant when the target style is OpenSubtitles and the formal when the target style is Europarl (you rock INLINEFORM0 sa rokid/te rokite). The model is also quite consistent in use of contractions in English (es esmu šeit INLINEFORM1 I am here/I'm here). Some lexical substitutions occur: need on Matti lapsed. INLINEFORM2 those are Matt's kids./these are Matt's children. Word order may change: Where is Anna's bag? is Kus on Anna kott? in the more formal variant, and Kus Anna kott on? in the more informal. This feature is useful, but out of scope of this article, as we focus on monolingual applications." ], [ "Grammatical error correction: there have been four shared tasks for GEC with prepared error-tagged datasets for L2 learners of English in the last decade: HOO BIBREF27 , BIBREF28 and CoNLL BIBREF29 , BIBREF5 . This has given an opportunity to train new models on the shared datasets and get an objective comparison of results. The general approach for grammatical error correction has been to use either rule-based approach, machine learning on error-tagged corpora, MT models on parallel data of erroneous and corrected sentences, or a combination of these BIBREF5 . The top model of the CONLL shared task in 2014 used a combined model of rule-based approach and MT BIBREF24 . All of these require annotated data or considerable effort to create, whereas our model is much more resource-independent. Another focus of the newer research is on creating GEC models without human-annotated resources. For example BIBREF26 combine statistical MT with unsupervised classification using unannotated parallel data for MT and unannotated native data for the classification model. In this case parallel data of erroneous and corrected sentences is still necessary for MT; the classifier uses native data, but still needs definitions of possible error types to classify – this work needs to be done by a human and is difficult for some less clear error types. In our approach there is no need for parallel data nor to specify error types, only for native data.", "There has been little work on Estonian and Latvian GEC, all limited with rule-based approaches BIBREF30 , BIBREF31 . For both languages, as well as any low-resourced languages, our approach gives a feasible way to do grammatical error correction without needing neither parallel nor error tagged corpora. 
Style transfer: Several approaches use directly annotated data: for example, BIBREF7 and BIBREF8 train MT systems on the corpus of modern English Shakespeare to original Shakespeare. BIBREF32 collect a dataset of 110K informal/formal sentence pairs and train rule-based, phrase-based, and neural MT systems using this data.", "One line of work aims at learning a style-independent latent representation of content while building decoders that can generate sentences in the style of choice BIBREF9 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF39 . Unsupervised MT has also been adapted for the task BIBREF10 , BIBREF40 . Our system also does not require parallel data between styles, but leverages the stability of the off-the-shelf supervised NMT to avoid the hassle of training unsupervised NMT systems and making GANs converge. Another problem with many current (both supervised and unsupervised) style transfer methods is that they are bounded to solve a binary task, where only two styles are included (whether because of data or restrictions of the approach). Our method, on the other hand, can be extended to as many styles as needed as long as there are parallel MT corpora in these styles available.", "Notably, BIBREF41 use side constrains in order to translate in polite/impolite German, while we rely on multilingual encoder representations and use the system monolingually at inference time.", "Finally, the most similar to our work conceptually is the approach of BIBREF42 , where they translate a sentence into another language, hoping that it will lose some style indicators, and then translate it back into the original language with a desired style tag attached to the encoder latent space. We also use the MT encoder to obtain rich sentence representations, but learn them directly as a part of a single multilingual translation system." ], [ "We presented a simple approach where a single multilingual NMT model is adapted to monolingual transfer and performs grammatical error correction and style transfer. We experimented with three languages and presented extensive evaluation of the model on both tasks. We used publicly available software and data and believe that our work can be easily reproduced.", "We showed that for GEC our approach reliably corrects spelling, word order and grammatical errors, while being less reliable on lexical choice errors. Applied to style transfer our model is very good at meaning preservation and output fluency, while reliably transferring style for English contractions, lexical choice and grammatical constructions. The main benefit is that no annotated data is used to train the model, thus making it very easy to train it for other (especially under-resourced) languages.", "Future work includes exploring adaptations of this approach to both tasks separately, while keeping the low cost of creating such models." ], [ "After rudimentary cleaning (removing pairs where at least one sentence is longer that 100 tokens, at least one sentence is an empty string or does not contain any alphabetic characters, and pairs with length ratio over 9) and duplication to accommodate both translation directions in each language pair, the total size of the training corpus is 22.9M sentence pairs; training set sizes per language and corpus are given in Table TABREF36 . Validation set consists of 12K sentence pairs, 500 for each combination of translation direction and corpus. 
We also keep a test set of 24K sentence pairs, 1000 for each translation direction and corpus.", "The data preprocessing pipeline consists of tokenization with Moses tokenizer BIBREF44 , true-casing, and segmentation with SentencePiece BIBREF45 with a joint vocabulary of size 32 000.", "We trained a Transformer NMT model using the Sockeye framework BIBREF43 , mostly following the so-called Transformer base model: we used 6 layers, 512 positions, 8 attention heads and ReLU activations for both the encoder and decoder; Adam optimizer was used. Source and target token embeddings were both of size 512, and factors determining target language and style had embeddings of size 4. Batch size was set to 2048 words, initial learning rate to 0.0002, reducing by a factor of 0.7 every time the validation perplexity had not improved for 8 checkpoints, which happened every 4000 updates. The model converged during the 17th epoch, when validation perplexity has not improved for 32 consecutive checkpoints. The parameters of a single best checkpoint were used for all translations, with beam size set to 5." ], [ "We present more examples of translation of code-switched input segments, error correction and style transfer in English, Estonian and Latvian, informal (inf) and formal (fml) output style:" ] ], "section_name": [ "Introduction", "Method", "Languages and Data", "Technical Details", "Evaluation", "Grammatical Error Correction", "Test Data and Metrics", "Results", "Qualitative Analysis", "Style Transfer", "Cross-lingual Style Transfer", "Related Work", "Conclusions", "Model Training: Technical Details", "Output Examples" ] }
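To make the preprocessing pipeline above (Moses tokenization, true-casing, and SentencePiece segmentation with a 32k joint vocabulary) a bit more tangible, here is a rough sketch using the sacremoses and sentencepiece Python packages as stand-ins for the original Moses scripts; the file names and the pre-trained truecasing model are assumptions, not artifacts released with the paper.

```python
import sentencepiece as spm
from sacremoses import MosesTokenizer, MosesTruecaser

# stand-ins for the Moses tokenizer and truecaser scripts referenced in the paper
mtok = MosesTokenizer(lang="en")
mtc = MosesTruecaser("truecase.model")   # assumes a truecasing model trained beforehand

def preprocess(line: str) -> str:
    """Tokenize and true-case one raw sentence."""
    return mtc.truecase(mtok.tokenize(line, return_str=True), return_str=True)

# joint subword model over all languages, vocabulary size 32 000 as in the paper
spm.SentencePieceTrainer.train(
    input="train.tok.true.txt", model_prefix="joint", vocab_size=32000)
sp = spm.SentencePieceProcessor(model_file="joint.model")
pieces = sp.encode(preprocess("Hello world!"), out_type=str)
```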
{ "answers": [ { "annotation_id": [ "7bac40d9be32452b45d8b60a6da347c0db920ff0" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "62e090f352bc09fff84ec07debad61a3ec517393", "a061f87b3e8e3b742b9ead9d6d1e27b10ce29300" ], "answer": [ { "evidence": [ "We use three languages in our experiments: English, Estonian and Latvian. All three have different characteristics, for example Latvian and (especially) Estonian are morphologically complex and have loose word order, while English has a strict word order and the morphology is much simpler. Most importantly, all three languages have error-corrected corpora for testing purposes, though work on their automatic grammatical error correction is extremely limited (see Section SECREF3 )." ], "extractive_spans": [ " all three languages have error-corrected corpora for testing purposes" ], "free_form_answer": "", "highlighted_evidence": [ "Most importantly, all three languages have error-corrected corpora for testing purposes, though work on their automatic grammatical error correction is extremely limited (see Section SECREF3 )." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Test Data and Metrics", "We use the following error-corrected corpora both for scoring and as basis for manual analysis:", "for English: CoNLL-2014 BIBREF5 and JFLEG BIBREF20 corpora", "for Estonian: the Learner Language Corpus BIBREF21", "for Latvian: the Error-annotated Corpus of Latvian BIBREF22", "All of these are based on language learner (L2) essays and their manual corrections." ], "extractive_spans": [], "free_form_answer": "Data already contain errors", "highlighted_evidence": [ "Test Data and Metrics\nWe use the following error-corrected corpora both for scoring and as basis for manual analysis:\n\nfor English: CoNLL-2014 BIBREF5 and JFLEG BIBREF20 corpora\n\nfor Estonian: the Learner Language Corpus BIBREF21\n\nfor Latvian: the Error-annotated Corpus of Latvian BIBREF22\n\nAll of these are based on language learner (L2) essays and their manual corrections." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "83ed3021fb0d9896156658293a3096ccf79ad1de", "8ee5eb53ee373f3e8e7e60a77b3f30cf66a32698" ], "answer": [ { "evidence": [ "To conclude this section, our model reliable corrects grammatical, spelling and word order errors on , with more mixed performance on lexical choice errors and some unnecessary paraphrasing of the input. The error types that the model manages well can be traced back to having a strong monolingual language model, a common trait of a good NMT model. As the model operates on the level of word parts and its vocabulary is limited, this leads to combining wrong word parts, sometimes across languages. This could be fixed by either using character-based NMT or doing automatic spelling correction prior to applying our current model." ], "extractive_spans": [ "grammatical, spelling and word order errors" ], "free_form_answer": "", "highlighted_evidence": [ "To conclude this section, our model reliable corrects grammatical, spelling and word order errors on , with more mixed performance on lexical choice errors and some unnecessary paraphrasing of the input." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "We showed that for GEC our approach reliably corrects spelling, word order and grammatical errors, while being less reliable on lexical choice errors. Applied to style transfer our model is very good at meaning preservation and output fluency, while reliably transferring style for English contractions, lexical choice and grammatical constructions. The main benefit is that no annotated data is used to train the model, thus making it very easy to train it for other (especially under-resourced) languages." ], "extractive_spans": [ "spelling, word order and grammatical errors" ], "free_form_answer": "", "highlighted_evidence": [ "We showed that for GEC our approach reliably corrects spelling, word order and grammatical errors, while being less reliable on lexical choice errors." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "7032c8c9745c10d9566c684259ee235fe829bc48" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How do they measure style transfer success?", "Do they introduce errors in the data or does the data already contain them?", "What error types is their model more reliable for?", "How does their parallel data differ in terms of style?" ], "question_id": [ "a66a275a817f980c36e0b67d2e00bd823f63abf8", "b6f466e0fdcb310ecd212fd90396d9d13e0c0504", "62ea141d0fb342dfb97c69b49d1c978665b93b3c", "a32c792a0cef03218bf66322245677fc2d5e5a31" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Schematic illustration of zero-shot monolingual translation. The model is trained on bilingual data in all translation directions (English-to-Estonian, Estonian-to-English, English-to-Latvian, etc.) and then applied in monolingual directions only (English-toEnglish, etc.), without having seen any sentence pairs for them. The illustration is simplified, as it does not show the style (text domain) parametrization.", "Table 1: BLEU scores of the multilingual MT model on WMT’17 (Latvian↔English) and WMT’18 (Estonian↔English) test sets", "Table 2: M2 scores on the CoNLL corpus, including precision and recall.", "Table 3: GLEU scores for all three languages. No scores have been previously reported elsewhere for Estonian and Latvian.", "Table 4: GEC results by error types; “grammar” stands for grammatical mistakes, “lex” stands for lexical choice and “order” – for word order errors.", "Figure 2: Proportions of sentences with token-wise edit distance ≥ 3 from the original when translated monolingually from and into different styles", "Table 5: Examples of style transfer", "Figure 3: Number of sentences in which evaluators reported different types of changes", "Table 6: Training set sizes (number of sentence pairs)" ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png", "6-Figure2-1.png", "6-Table5-1.png", "7-Figure3-1.png", "11-Table6-1.png" ] }
[ "Do they introduce errors in the data or does the data already contain them?" ]
[ [ "1903.11283-Test Data and Metrics-1", "1903.11283-Languages and Data-0", "1903.11283-Test Data and Metrics-4", "1903.11283-Test Data and Metrics-0" ] ]
[ "Data already contain errors" ]
266
1706.00139
Natural Language Generation for Spoken Dialogue System using RNN Encoder-Decoder Networks
Natural language generation (NLG) is a critical component of a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select and aggregate the semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can jointly learn sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize to a new, unseen domain and to learn from multi-domain datasets.
{ "paragraphs": [ [ "Natural Language Generation (NLG) plays a critical role in Spoken Dialogue Systems (SDS) with task is to convert a meaning representation produced by the Dialogue Manager into natural language utterances. Conventional approaches still rely on comprehensive hand-tuning templates and rules requiring expert knowledge of linguistic representation, including rule-based BIBREF0 , corpus-based n-gram models BIBREF1 , and a trainable generator BIBREF2 .", "Recently, Recurrent Neural Networks (RNNs) based approaches have shown promising performance in tackling the NLG problems. The RNN-based models have been applied for NLG as a joint training model BIBREF3 , BIBREF4 and an end-to-end training model BIBREF5 . A recurring problem in such systems is requiring annotated datasets for particular dialogue acts (DAs). To ensure that the generated utterance representing the intended meaning of the given DA, the previous RNN-based models were further conditioned on a 1-hot vector representation of the DA. BIBREF3 introduced a heuristic gate to ensure that all the slot-value pair was accurately captured during generation. BIBREF4 subsequently proposed a Semantically Conditioned Long Short-term Memory generator (SC-LSTM) which jointly learned the DA gating signal and language model.", "More recently, Encoder-Decoder networks BIBREF6 , BIBREF7 , especially the attentional based models BIBREF8 , BIBREF9 have been explored to solve the NLG tasks. The Attentional RNN Encoder-Decoder BIBREF10 (ARED) based approaches have also shown improved performance on a variety of tasks, e.g., image captioning BIBREF11 , BIBREF12 , text summarization BIBREF13 , BIBREF14 .", "While the RNN-based generators with DA gating-vector can prevent the undesirable semantic repetitions, the ARED-based generators show signs of better adapting to a new domain. However, none of the models show significant advantage from out-of-domain data. To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences. We conducted experiments on four different NLG domains and found that the proposed methods significantly outperformed the state-of-the-art methods regarding BLEU BIBREF15 and slot error rate ERR scores BIBREF4 . The results also showed that our generators could scale to new domains by leveraging the out-of-domain data. To sum up, we make three key contributions in this paper:", "We review related works in Section \"Related Work\" . Following a detail of proposed model in Section \"Recurrent Neural Language Generator\" , Section \"Experiments\" describes datasets, experimental setups, and evaluation metrics. The resulting analysis is presented in Section \"Results and Analysis\" . We conclude with a brief summary and future work in Section \"Conclusion and Future Work\" ." ], [ "Recently, RNNs-based models have shown promising performance in tackling the NLG problems. BIBREF16 proposed a generator using RNNs to create Chinese poetry. BIBREF11 , BIBREF17 , BIBREF18 also used RNNs in a multi-modal setting to solve image captioning tasks. 
The RNN-based sequence-to-sequence models have been applied to solve a variety of tasks: conversational modeling BIBREF6 , BIBREF7 , BIBREF19 , machine translation BIBREF20 , BIBREF21 .", "For task-oriented dialogue systems, BIBREF3 combined a forward RNN generator, a CNN reranker, and a backward RNN reranker to generate utterances. BIBREF4 proposed the SC-LSTM generator, which introduced a control sigmoid gate to the LSTM cell to jointly learn the gating mechanism and language model. A recurring problem in such systems is the lack of sufficient domain-specific annotated data. BIBREF22 proposed an out-of-domain model which was trained on counterfeited data by using semantically similar slots from the target domain instead of the slots belonging to the out-of-domain dataset. The results showed that the model can achieve satisfactory performance with a small amount of in-domain data by fine-tuning the target domain on the out-of-domain trained model.", "More recently, RNN encoder-decoder based models with an attention mechanism BIBREF10 have shown improved performance in various tasks. BIBREF12 proposed a review network for image captioning, which reviews all the information encoded by the encoder and produces a compact thought vector. BIBREF9 proposed an RNN encoder-decoder-based model using two attention layers to jointly train content selection and surface realization. Closer to our work, BIBREF8 proposed an attentive encoder-decoder based generator which computed the attention mechanism over the slot-value pairs. The model showed domain scalability when only a very limited amount of data is available.", "Moving from a limited-domain dialogue system to an open-domain dialogue system raises some issues. Therefore, it is important to build an open-domain dialogue system that can make as much use as possible of existing capabilities from other domains. There have been several works tackling this problem, such as BIBREF23 using RNN-based networks for multi-domain dialogue state tracking, BIBREF22 using a procedure to train multi-domain models via multiple adaptation steps, or BIBREF24 , BIBREF25 adapting SDS components to new domains." ], [ "The recurrent language generator proposed in this paper is based on a neural language generator BIBREF8 , which consists of three main components: (i) an Encoder that incorporates the target meaning representation (MR) as the model inputs, (ii) an Aligner that aligns and controls the semantic elements, and (iii) an RNN Decoder that generates output sentences. The generator architecture is shown in Figure 1 . The Encoder first encodes the MR into input semantic elements which are then aggregated and selected by the Aligner using an attention-based mechanism. The input to the RNN Decoder at each time step is a 1-hot encoding of a token $\textbf {w}_{t}$ and an attentive DA representation $\textbf {d}_{t}$ . At each time step $t$ , the RNN Decoder also computes how much of the feature value vector $\textbf {s}_{t-1}$ is retained for the next computational steps, and adds this information to the RNN output which represents the probability distribution of the next token $\textbf {w}_{t+1}$ . At generation time, we can sample from this conditional distribution to obtain the next token in a generated sentence, and feed it as the next input to the RNN Decoder. This process finishes when an end sign is generated BIBREF17 , or some constraints are reached BIBREF16 . The model can produce a sequence of tokens which can finally be lexicalized to form the required utterance."
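As a preview of the Aligner equations detailed in the next section, the sketch below shows roughly how the attentive DA representation $\textbf {d}_{t}$ could be assembled from the encoded slot-value states at one decoding step. This is a minimal NumPy illustration, not the authors' TensorFlow implementation: the weight names, sizes, and random initialization are assumptions made only for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aligner_step(E, h_prev, a_emb, W_a, U_a, v_a):
    """One Aligner step: score each encoded slot-value state against the
    previous decoder state, normalize, and build the attentive DA vector."""
    scores = np.array([v_a @ np.tanh(W_a @ e_i + U_a @ h_prev) for e_i in E])
    beta = softmax(scores)                     # attention weights over L pairs
    context = (beta[:, None] * E).sum(axis=0)  # weighted sum of encoder states
    return np.concatenate([a_emb, context])    # d_t = action embedding + context

# Illustrative sizes: L slot-value pairs, hidden size n, action-embedding size m.
L, n, m = 4, 8, 5
rng = np.random.default_rng(0)
E = rng.normal(size=(L, n))        # encoded slot-value states e_1..e_L
h_prev = rng.normal(size=n)        # previous decoder hidden state
a_emb = rng.normal(size=m)         # embedding of the DA action type
W_a, U_a, v_a = rng.normal(size=(n, n)), rng.normal(size=(n, n)), rng.normal(size=n)

d_t = aligner_step(E, h_prev, a_emb, W_a, U_a, v_a)
print(d_t.shape)  # (m + n,) -- fed to the RNN Decoder together with the 1-hot token
```

The resulting vector is what the RNN Decoder consumes, together with the 1-hot token encoding, at each time step.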
], [ "The slots and values are separated parameters used in the encoder side. This embeds the source information into a vector representation $\\textbf {z}_{i}$ which is a concatenation of embedding vector representation of each slot-value pair, and is computed by: ", "$$\\textbf {z}_{i} = \\textbf {u}_{i} \\oplus \\textbf {v}_{i}$$ (Eq. 10) ", "where $\\textbf {u}_{i}$ , $\\textbf {v}_{i}$ are the $i$ -th slot and value embedding vectors, respectively, and $\\oplus $ is vector concatenation. The i index runs over the $L$ given slot-value pairs. In this work, we use a 1-layer, Bidirectional LSTM (Bi-LSTM) to encode the sequence of slot-value pairs embedding. The Bi-LSTM consists of forward and backward LSTMs which read the sequence of slot-value pairs from left-to-right and right-to-left to produce forward and backward sequence of hidden states ( $\\overrightarrow{\\textbf {e}_{1}}, .., \\overrightarrow{\\textbf {e}_{L}}$ ), and ( $\\overleftarrow{\\textbf {e}_{1}}, .., \\overleftarrow{\\textbf {e}_{L}}$ ), respectively. We then obtain the sequence of encoded hidden states $\\textbf {E}=(\\textbf {e}_{1}, \\textbf {e}_{2}, .., \\textbf {e}_{L})$ where $\\textbf {\\textbf {e}}_{i}$ is a sum of the forward hidden state $\\overrightarrow{\\textbf {e}_{i}}$ and the backward one $\\textbf {v}_{i}$0 as follows: ", "$$\\textbf {e}_{i}=\\overrightarrow{\\textbf {e}_{i}} + \\overleftarrow{\\textbf {e}_{i}}$$ (Eq. 12) " ], [ "The Aligner utilizes attention mechanism to calculate the DA representation as follows: ", "$$\\beta _{t,i} = \\frac{\\exp e_{t,i} }{\\sum \\nolimits _{j}\\exp e_{t,j}}$$ (Eq. 14) ", "where ", "$$e_{t,i}=a(\\textbf {e}_{i}, \\textbf {h}_{t-1})$$ (Eq. 15) ", "and $\\beta _{t,i}$ is the weight of i-th slot-value pair calculated by the attention mechanism. The alignment model $a$ is computed by: ", "$$a(\\textbf {e}_{i}, \\textbf {h}_{t-1}) = \\textbf {v}_{a}^{\\top }\\tanh (\\textbf {W}_{a}\\textbf {e}_{i} + \\textbf {U}_{a}\\textbf {h}_{t-1})$$ (Eq. 16) ", "where $\\textbf {v}_{a}, \\textbf {W}_{a}, \\textbf {U}_{a}$ are the weight matrices to learn. Finally, the Aligner calculates dialogue act embedding $\\textbf {d}_{t}$ as follows: ", "$$\\textbf {d}_{t} = \\textbf {a} \\oplus \\sum \\nolimits _{i}\\beta _{t,i} \\textbf {e}_{i}$$ (Eq. 17) ", "where a is vector embedding of the action type." ], [ "The proposed semantic RALSTM cell applied for Decoder side consists of three components: a Refinement cell, a traditional LSTM cell, and an Adjustment cell:", "Firstly, instead of feeding the original input token $\\textbf {w}_{t}$ into the RNN cell, the input is recomputed by using a semantic gate as follows: ", "$$\\begin{aligned}\n\\textbf {r}_{t}&=\\sigma (\\textbf {W}_{rd}\\textbf {d}_{t} + \\textbf {W}_{rh}\\textbf {h}_{t-1})\\\\\n\\textbf {x}_{t}&=\\textbf {r}_{t} \\odot \\textbf {w}_{t}\n\\end{aligned}$$ (Eq. 19) ", "where $\\textbf {W}_{rd}$ and $\\textbf {W}_{rh}$ are weight matrices. Element-wise multiplication $\\odot $ plays a part in word-level matching which not only learns the vector similarity, but also preserves information about the two vectors. $\\textbf {W}_{rh}$ acts like a key phrase detector that learns to capture the pattern of generation tokens or the relationship between multiple tokens. In other words, the new input $\\textbf {x}_{t}$ consists of information of the original input token $\\textbf {w}_{t}$ , the DA representation $\\textbf {d}_{t}$ , and the hidden context $\\textbf {h}_{t-1}$ . 
$\textbf {r}_{t}$ is called a Refinement gate because the input tokens are refined by gating information from a combination of the attentive DA representation $\textbf {d}_{t}$ and the previous hidden state $\textbf {h}_{t-1}$ . In this way, we can represent the whole sentence based on the refined inputs.", "Secondly, a traditional LSTM network BIBREF26 is used, in which the input gate $\textbf {i}_{t}$ , forget gate $\textbf {f}_{t}$ and output gate $\textbf {o}_{t}$ are introduced to control the information flow and are computed as follows: ", "$$\begin{aligned}\n\begin{pmatrix}\n\textbf {i}_{t}\n\\ \textbf {f}_{t}\n\\ \textbf {o}_{t}\n\\ \hat{\textbf {c}}_{t}\n\end{pmatrix}\n&=\n\begin{pmatrix}\sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix}\textbf {W}_{4n,4n}\n\begin{pmatrix}\n\textbf {x}_{t}\n\\ \textbf {d}_{t}\n\\ \textbf {h}_{t-1}\n\end{pmatrix}\\\n\end{aligned}$$ (Eq. 20) ", "where $n$ is the hidden layer size and $\textbf {W}_{4n,4n}$ are model parameters. The cell memory value $\textbf {c}_{t}$ is modified to depend on the DA representation as: ", "$$\begin{aligned}\n\textbf {c}_{t}&=\textbf {f}_{t}\odot \textbf {c}_{t-1} +\textbf {i}_{t}\odot \hat{\textbf {c}}_{t} + \tanh (\textbf {W}_{cr}\textbf {r}_{t})\n\\ \tilde{\textbf {h}}_{t}&= \textbf {o}_{t} \odot \tanh (\textbf {c}_{t})\n\end{aligned}$$ (Eq. 21) ", "where $\tilde{\textbf {h}}_{t}$ is the output.", "Thirdly, inspired by the work of BIBREF4 , in which the generator was further conditioned on a 1-hot representation vector $\textbf {s}$ of the given dialogue act, and the work of BIBREF27 , which proposed a visual sentinel gate to make a decision on whether the model should attend to the image or to the sentinel gate, an additional gating cell is introduced on top of the traditional LSTM to gate another controlling vector $\textbf {s}$ . Figure 6 shows how RALSTM controls the DA vector $\textbf {s}$ . First, starting from the 1-hot vector of the DA $\textbf {s}_{0}$ , at each time step $t$ the proposed cell computes how much the LSTM output $\tilde{\textbf {h}}_{t}$ affects the DA vector, as follows: ", "$$\begin{aligned}\n\textbf {a}_{t}&=\sigma (\textbf {W}_{ax}\textbf {x}_{t} +\textbf {W}_{ah}\tilde{\textbf {h}}_{t})\\\n\textbf {s}_{t}&=\textbf {s}_{t-1} \odot \textbf {a}_{t}\n\end{aligned}$$ (Eq. 22) ", "where $\textbf {W}_{ax}$ , $\textbf {W}_{ah}$ are weight matrices to be learned. $\textbf {a}_{t}$ is called an $Adjustment$ gate since its task is to control what information of the given DA has been generated and what information should be retained for future time steps. Second, we consider how much the information preserved in the DA $\textbf {s}_{t}$ can contribute to the output, in which an additional output is computed by applying the output gate $\textbf {o}_{t}$ on the remaining information in $\textbf {s}_{t}$ as follows: ", "$$\begin{aligned}\n\textbf {c}_{a}&=\sigma (\textbf {W}_{os}\textbf {s}_{t})\\\n\tilde{\textbf {h}}_{a}&= \textbf {o}_{t} \odot \tanh (\textbf {c}_{a})\n\end{aligned}$$ (Eq. 23) ", "where $\textbf {W}_{os}$ is a weight matrix that projects the DA representation into the output space, and $\tilde{\textbf {h}}_{a}$ is the Adjustment cell output. The final RALSTM output is a combination of the outputs of the traditional LSTM cell and the Adjustment cell, computed as follows: ", "$$\textbf {h}_{t}=\tilde{\textbf {h}}_{t} + \tilde{\textbf {h}}_{a}$$ (Eq. 
24) ", "Finally, the output distribution is computed by applying a softmax function $g$ , and the distribution can be sampled to obtain the next token, ", "$$\\begin{aligned}\n& P(w_{t+1}\\mid w_{t},...w_{0},\\textbf {DA})=g(\\textbf {W}_{ho}\\textbf {h}_{t}) \\\\\n& w_{t+1} \\sim P(w_{t+1}\\mid w_{t}, w_{t-1},...w_{0},\\textbf {DA})\n\\end{aligned}$$ (Eq. 25) ", "where $\\textbf {DA}=(\\textbf {s}, \\textbf {z})$ ." ], [ "The objective function was the negative log-likelihood and computed by: ", "$$\\textbf {F}(\\theta ) = -\\sum _{t=1}^{T}\\textbf {y}_{t}^{\\top }\\log {\\textbf {p}_{t}}$$ (Eq. 27) ", "where: $\\textbf {y}_{t}$ is the ground truth token distribution, $\\textbf {p}_{t}$ is the predicted token distribution, $T$ is length of the input sentence. The proposed generators were trained by treating each sentence as a mini-batch with $l_{2}$ regularization added to the objective function for every 5 training examples. The models were initialized with a pretrained Glove word embedding vectors BIBREF28 and optimized by using stochastic gradient descent and back propagation through time BIBREF29 . Early stopping mechanism was implemented to prevent over-fitting by using a validation set as suggested in BIBREF30 ." ], [ "The decoding consists of two phases: (i) over-generation, and (ii) reranking. In the over-generation, the generator conditioned on both representations of the given DA use a beam search to generate a set of candidate responses. In the reranking phase, cost of the generator is computed to form the reranking score $\\textbf {R}$ as follows: ", "$$\\textbf {R} = \\textbf {F}(\\theta ) + \\lambda \\textbf {ERR}$$ (Eq. 29) ", "where $\\lambda $ is a trade off constant and is set to a large value in order to severely penalize nonsensical outputs. The slot error rate $\\textbf {ERR}$ , which is the number of slots generated that is either missing or redundant, and is computed by: ", "$$\\textbf {ERR} = \\frac{\\textbf {p} + \\textbf {q}}{\\textbf {N}}$$ (Eq. 30) ", "where $\\textbf {N}$ is the total number of slots in DA, and $\\textbf {p}$ , $\\textbf {q}$ is the number of missing and redundant slots, respectively." ], [ "We extensively conducted a set of experiments to assess the effectiveness of the proposed models by using several metrics, datasets, and model architectures, in order to compare to prior methods." ], [ "We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. The Restaurant and Hotel were collected in BIBREF4 , while the Laptop and TV datasets have been released by BIBREF22 with a much larger input space but only one training example for each DA so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs. This makes the NLG tasks for the Laptop and TV domains become much harder. The dataset statistics are shown in Table 1 ." ], [ "The generators were implemented using the TensorFlow library BIBREF31 and trained with training, validation and testing ratio as 3:1:1. The hidden layer size, beam size were set to be 80 and 10, respectively, and the generators were trained with a $70\\%$ of dropout rate. We performed 5 runs with different random initialization of the network and the training is terminated by using early stopping. We then chose a model that yields the highest BLEU score on the validation set as shown in Table 2 . 
Since the trained models can differ depending on the initialization, we also report results averaged over 5 randomly initialized networks. Note that, except for the results reported in Table 2 , all the results shown were averaged over 5 randomly initialized networks. We set $\lambda $ to 1000 to severely discourage the reranker from selecting utterances which contain either redundant or missing slots. For each DA, we over-generated 20 candidate sentences and selected the top 5 realizations after reranking. Moreover, in order to better understand the effectiveness of our proposed methods, we: (i) performed ablation experiments to demonstrate the contribution of each proposed cell (Tables 2 , 3 ), (ii) trained the models on the Laptop domain with varied proportions of training data, from $10\%$ to $100\%$ (Figure 3 ), (iii) trained general models by merging all the data from the four domains together and tested them on each individual domain (Figure 4 ), and (iv) trained adaptation models on the merged data from the restaurant and hotel domains, then fine-tuned the model on the laptop domain with varied amounts of adaptation data (Figure 5 )." ], [ "The generator performance was assessed on two evaluation metrics, BLEU and the slot error rate ERR, by adopting code from an open-source benchmark toolkit for Natural Language Generation (https://github.com/shawnwun/RNNLG). We compared the proposed models against three strong baselines which have recently been published as state-of-the-art NLG benchmarks:", "HLSTM, proposed by BIBREF3 , which used a heuristic gate to ensure that all of the slot-value information was accurately captured when generating.", "SCLSTM, proposed by BIBREF4 , which can jointly learn the gating signal and language model.", "Enc-Dec, proposed by BIBREF8 , which applied the attention-based encoder-decoder architecture." ], [ "We conducted extensive experiments on our models and compared against the previous methods. Overall, the proposed models consistently achieve better performance on both evaluation metrics across all domains in all test cases.", "The ablation studies (Tables 2 , 3 ) demonstrate the contribution of different model components, in which the models were assessed without the Adjustment cell (w/o A) or without the Refinement cell (w/o R). It is clear that the Adjustment cell contributes to reducing the slot error rate ERR score, since it can effectively prevent undesirable slot-value pair repetitions by gating the DA vector $\textbf {s}$ . A comparison between the ARED-based models (denoted by $^{\sharp }$ in Table 2 ) shows that the proposed models not only achieve better performance with a higher BLEU score but also significantly reduce the slot error rate ERR score by a large margin of about $2\%$ to $4\%$ on every dataset. Moreover, a comparison between the models that gate the DA vector also indicates that the proposed models (w/o R, RALSTM) have significantly improved performance on both evaluation metrics across the four domains compared to the SCLSTM model. The RALSTM cell without the Refinement cell is similar to the SCLSTM cell. However, it obtained much better results than the SCLSTM baselines. This demonstrates the necessity of the LSTM encoder and the Aligner in effectively learning, at least partially, the correlated order between slot-value representations in the DAs, especially for the unseen domain where there is only one training example for each DA. 
Table 3 further demonstrates the stable strength of our models, since the pattern of the results stays unchanged compared to those in Table 2 .", "Figure 3 shows a comparison of three models (Enc-Dec, SCLSTM, and RALSTM) which were trained from scratch on the unseen laptop domain with varied proportions of training data, from $1\%$ to $100\%$ . It clearly shows that the RALSTM outperforms the previous models in all cases, while the Enc-Dec has a much greater ERR score compared to the other two models.", "A comparison of the top responses generated for some input DAs by the different models is shown in Table 4 . While the previous models still produce some errors (missing and misplaced information), the proposed models (RALSTM and the models All2* trained by pooling all datasets together) can generate appropriate sentences. We also found that the proposed models tend to generate more complete and concise sentences than the other models.", "All of these results demonstrate the importance of the proposed components: the Refinement cell in aggregating and selecting the attentive information, and the Adjustment cell in controlling the feature vector (see the examples in Figure 6 ).", "Figure 4 shows a performance comparison of the general models as described in Section \"Experimental Setups\" . The results are consistent with Figure 3 , in which the RALSTM has better performance than the Enc-Dec and SCLSTM on all domains in terms of the BLEU and ERR scores, while the Enc-Dec has difficulties in reducing the ERR score. This indicates the contribution of the proposed Refinement and Adjustment cells to the original ARED architecture: the Refinement cell with attentional gating can effectively select and aggregate the information before passing it into the traditional LSTM cell, while the Adjustment cell, by gating the DA vector, can effectively control the information flow during generation.", "Figure 5 shows the domain scalability of the three models, in which the models were first trained on the merged out-of-domain Restaurant and Hotel datasets, then fine-tuned with varied amounts of in-domain training data (laptop domain). The RALSTM model outperforms the previous models both when sufficient in-domain data is used (Figure 5 -left) and when limited in-domain data is used (Figure 5 -right). Figure 5 -right also indicates that the RALSTM model can adapt to a new, unseen domain faster than the previous models." ], [ "We present an extension of the ARED model, in which an RALSTM component is introduced to select and aggregate the semantic elements produced by the Encoder, and to generate the required sentence. We assessed the proposed models on four NLG domains and compared them to state-of-the-art generators. The proposed models empirically show consistent improvement over the previous methods on both the BLEU and ERR evaluation metrics. The proposed models also show an ability to extend to a new, unseen domain regardless of how much in-domain training data is provided. In the future, it would be interesting to apply the proposed model to other tasks that can be modeled based on the encoder-decoder architecture, e.g., image captioning, reading comprehension, and machine translation. " ] ], "section_name": [ "Introduction", "Related Work", "Recurrent Neural Language Generator", "Encoder", "Aligner", "RALSTM Decoder", "Training", "Decoding", "Experiments", "Datasets", "Experimental Setups", "Evaluation Metrics and Baselines", "Results", "Conclusion and Future Work" ] }
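To make the RALSTM cell described above (Eqs. 19-24) concrete, the following is a minimal NumPy sketch of a single decoding step through the Refinement, LSTM, and Adjustment components. It is an illustrative re-derivation from the equations, not the authors' TensorFlow code: packing the four gates into one matrix for Eq. 20, the weight-matrix names, and all sizes are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ralstm_step(w_t, d_t, h_prev, c_prev, s_prev, P):
    """One RALSTM step: Refinement gate, LSTM gates, Adjustment gate (Eqs. 19-24)."""
    # Refinement cell (Eq. 19): gate the raw token by the DA and previous hidden state.
    r_t = sigmoid(P["W_rd"] @ d_t + P["W_rh"] @ h_prev)
    x_t = r_t * w_t
    # Traditional LSTM gates computed from [x_t; d_t; h_prev] (Eq. 20, packed form assumed).
    z = P["W"] @ np.concatenate([x_t, d_t, h_prev])
    i_t, f_t, o_t, c_hat = np.split(z, 4)
    i_t, f_t, o_t, c_hat = sigmoid(i_t), sigmoid(f_t), sigmoid(o_t), np.tanh(c_hat)
    # Cell memory additionally depends on the refined input (Eq. 21).
    c_t = f_t * c_prev + i_t * c_hat + np.tanh(P["W_cr"] @ r_t)
    h_tilde = o_t * np.tanh(c_t)
    # Adjustment cell: drive the DA vector down as its content gets generated (Eqs. 22-23).
    a_t = sigmoid(P["W_ax"] @ x_t + P["W_ah"] @ h_tilde)
    s_t = s_prev * a_t
    h_adj = o_t * np.tanh(sigmoid(P["W_os"] @ s_t))
    # Final output is the sum of the LSTM and Adjustment outputs (Eq. 24).
    return h_tilde + h_adj, c_t, s_t

# Illustrative sizes: vocab V, DA-embedding d, hidden n, DA-vector S.
V, d, n, S = 10, 6, 8, 5
rng = np.random.default_rng(1)
P = {"W_rd": rng.normal(size=(V, d)), "W_rh": rng.normal(size=(V, n)),
     "W":    rng.normal(size=(4 * n, V + d + n)), "W_cr": rng.normal(size=(n, V)),
     "W_ax": rng.normal(size=(S, V)), "W_ah": rng.normal(size=(S, n)),
     "W_os": rng.normal(size=(n, S))}
w_t = np.eye(V)[3]  # 1-hot token
h, c, s = ralstm_step(w_t, rng.normal(size=d), np.zeros(n), np.zeros(n), np.ones(S), P)
print(h.shape, c.shape, s.shape)  # (8,) (8,) (5,)
```

The key design point the sketch makes visible is that the Adjustment gate multiplies the DA vector element-wise at every step, so slot-value indicators are progressively "spent" as the corresponding words are emitted, which is what suppresses missing or repeated slots.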
{ "answers": [ { "annotation_id": [ "92835f1d26b9e97f0b4f128cdf1b93819dbdde8c" ], "answer": [ { "evidence": [ "While the RNN-based generators with DA gating-vector can prevent the undesirable semantic repetitions, the ARED-based generators show signs of better adapting to a new domain. However, none of the models show significant advantage from out-of-domain data. To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences. We conducted experiments on four different NLG domains and found that the proposed methods significantly outperformed the state-of-the-art methods regarding BLEU BIBREF15 and slot error rate ERR scores BIBREF4 . The results also showed that our generators could scale to new domains by leveraging the out-of-domain data. To sum up, we make three key contributions in this paper:" ], "extractive_spans": [], "free_form_answer": "Introduce a \"Refinement Adjustment LSTM-based component\" to the decoder", "highlighted_evidence": [ "To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "a5b18f5049e570609f001691b615ebbf0fb92023", "d7f3d114688f133fda6b86256b887d3d6adb4d4e" ], "answer": [ { "evidence": [ "We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. The Restaurant and Hotel were collected in BIBREF4 , while the Laptop and TV datasets have been released by BIBREF22 with a much larger input space but only one training example for each DA so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs. This makes the NLG tasks for the Laptop and TV domains become much harder. The dataset statistics are shown in Table 1 ." ], "extractive_spans": [], "free_form_answer": "NLG datasets", "highlighted_evidence": [ "We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. The Restaurant and Hotel were collected in BIBREF4 , while the Laptop and TV datasets have been released by BIBREF22 with a much larger input space but only one training example for each DA so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. 
The Restaurant and Hotel were collected in BIBREF4 , while the Laptop and TV datasets have been released by BIBREF22 with a much larger input space but only one training example for each DA so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs. This makes the NLG tasks for the Laptop and TV domains become much harder. The dataset statistics are shown in Table 1 ." ], "extractive_spans": [], "free_form_answer": "NLG datasets", "highlighted_evidence": [ "We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b", "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "What is the difference of the proposed model with a standard RNN encoder-decoder?", "Does the model evaluated on NLG datasets or dialog datasets?" ], "question_id": [ "981fd79dd69581659cb1d4e2b29178e82681eb4d", "03e9ac1a2d90152cd041342a11293a1ebd33bcc3" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Unrolled presentation of the RNNsbased neural language generator. The Encoder part is a BiLSTM, the Aligner is an attention mechanism over the encoded inputs, and the Decoder is the proposed RALSTM model conditioned on a 1-hot representation vector s. The fading color of the vector s indicates retaining information for future computational time steps.", "Figure 2: The RALSTM cell proposed in this paper, which consists of three components: an Refinement Cell, a traditional LSTM Cell, and an Adjustment Cell. At time step t, while the Refinement cell computes new input tokens xt based on the original input tokens and the attentional DA representation dt, the Adjustment Cell calculates how much information of the slot-value pairs can be generated by the LSTM Cell.", "Table 1: Dataset statistics.", "Table 2: Performance comparison on four datasets in terms of the BLEU and the error rate ERR(%) scores. The results were produced by training each network on 5 random initialization and selected model with the highest validation BLEU score. ♯ denotes the Attention-based Encoder-Decoder model. The best and second best models highlighted in bold and italic face, respectively.", "Table 3: Performance comparison of the proposed models on four datasets in terms of the BLEU and the error rate ERR(%) scores. The results were averaged over 5 randomly initialized networks. bold denotes the best model.", "Figure 3: Performance comparison of the models trained on Laptop domain.", "Figure 4: Performance comparison of the general models on four different domains.", "Figure 5: Performance on Laptop domain with varied amount of the adaptation training data when adapting models trained on Restaurant+Hotel dataset.", "Table 4: Comparison of top responses generated for some input dialogue acts between different models. Errors are marked in color (missing, misplaced information). All2* are general models.", "Figure 6: Example showing how RALSTM drives down the DA feature value vector s step-by-step, in which the model generally shows its ability to detect words and phases describing a corresponding slot-value pair." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Figure3-1.png", "7-Figure4-1.png", "7-Figure5-1.png", "8-Table4-1.png", "8-Figure6-1.png" ] }
[ "What is the difference of the proposed model with a standard RNN encoder-decoder?", "Does the model evaluated on NLG datasets or dialog datasets?" ]
[ [ "1706.00139-Introduction-3" ], [ "1706.00139-Datasets-0" ] ]
[ "Introduce a \"Refinement Adjustment LSTM-based component\" to the decoder", "NLG datasets" ]
268
1912.01852
PitchNet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network
Singing voice conversion converts one singer's voice to another's without changing the singing content. Recent work shows that unsupervised singing voice conversion can be achieved with an autoencoder-based approach [1]. However, the converted singing voice can easily be out of key, showing that the existing approach cannot model the pitch information precisely. In this paper, we propose to advance the existing unsupervised singing voice conversion method proposed in [1] to achieve more accurate pitch translation and flexible pitch manipulation. Specifically, the proposed PitchNet adds an adversarially trained pitch regression network to force the encoder network to learn a pitch-invariant phoneme representation, and a separate module to feed pitch extracted from the source audio to the decoder network. Our evaluation shows that the proposed method can greatly improve the quality of the converted singing voice (2.92 vs 3.75 in MOS). We also demonstrate that the pitch of the converted singing can be easily controlled during generation by changing the levels of the extracted pitch before passing it to the decoder network.
{ "paragraphs": [ [ "Singing is an important way of human expression and the techniques of singing synthesis have broad applications in different prospects including virtual human, movie dubbing and so on. Traditional singing synthesize systems are based on concatenative BIBREF1 or HMM BIBREF2 based approaches. With the success of deep learning in Text-to-Speech, some neural singing synthesis methods have also been proposed recently. For example, BIBREF3 introduces a singing synthesis method using an architecture similar to WaveNet BIBREF4. It adopts lyrics and notes as input and generates vocoder features autoregressively for final singing voice synthesis.", "Singing voice conversion is another way of singing synthesis which extracts musical expression within existing singing and reproduced them with another singer's voice. It is very similar to speech based voice conversion BIBREF5, BIBREF6, BIBREF7, BIBREF8, but compared with speech voice conversion, singing voice conversion needs to deal with a wider range of frequency variations as well as the sharper change of volume and pitch within singing voice. The performance of singing conversion is highly dependent on the musical expression of converted singing and the similarity of the converted voice timbre compared to target singer's voice. There are several singing voice conversion methods to convert one's singing voice to another BIBREF9, BIBREF10, BIBREF11. They generally require parallel data to train the conversion model. To overcome the limitation of the parallel training data for singing voice conversion, an unsupervised method BIBREF0 has been proposed to utilize non-parallel data. This method employs an autoencoder architecture composed of a WaveNet-like encoder, a WaveNet BIBREF4 autoregressive decoder, and a learnable singer embedding table. Voice waveform is passed into the encoder and the output of the encoder will be concatenated with the embedding vector associated with the singer. The concatenated features will be used to condition the WaveNet decoder to reconstruct the input audio. A confusion loss BIBREF12 is also introduced to force the encoder to learn a singer-invariant representation. By switching between embeddings of different singers during generation, the singing voice conversion can be achieved. While this approach could generates singing voice perceptually similar to target singer, the quality of generated singing often suffers due to the difficulty on learning joint representation of phonetic and pitch representation.", "To address the difficulty of learning join phonetic and pitch representation in BIBREF0, we propose to use adversarially trained pitch regression network to encourage the encoder network to learn not only singer-invariant but also pitch-invariant representation, at the same time extract the pitch from source audio to be used as an additional input to the decoder. The proposed method can greatly improve the quality of the converted voice and achieve flexible pitch manipulation at the same time.", "In the following sections, we will introduce our proposed method in section SECREF2. And then section SECREF3 will show that our method is effective by quantitative and qualitative experiments. Finally, we will conclude in section SECREF4 and acknowledgements are in SECREF5." ], [ "Our method follows the autoencoder architecture in BIBREF0 except that there is an additional pitch regression network to separate pitch information out of the latent space. The architecture of PitchNet is illustrated in Fig. 
FIGREF1. It consists of five parts, an encoder, a decoder, a Look Up Table (LUT) of speaker embedding vectors, a singer classification network, and a pitch regression network.", "First, the input waveform is passed through the encoder to extract high-level semantic features. An average pooling of stride 800 is then applied to the features, forming a bottleneck to limit the information passing through the encoder. After that, a singer id is used to retrieve the target singer's embedding vector from LUT, and concatenated with the output of the encoder at each time step to be a sequence of condition vectors. The pitch of the input audio, extracted separately from the network, is fed into the decoder after a linear interpolation as a compensation signal together with the condition vector. Finally, the decoder is conditioned on the condition vector and pitch to generate audio samples. Since the decoder is an autoregressive model, the output will be fed back to the decoder at the next time step. The model is trained on a softmax-based loss to minimize the reconstruction error with teacher-forcing.", "In order to project the output features of the encoder into a singer and pitch invariant latent space, a singer classification network and a pitch regression network are employed to force the encoder not to encode singer and pitch information. The singer classification loss and pitch regression loss are added adversarially to the reconstruction loss to train the entire model end to end." ], [ "To formally describe the model, let $E$ be the encoder network, $D$ be the decoder network, $C_s$ be the singer classification network and $C_p$ be the pitch regression network. Let $v_j$ denote the embedding vector of singer $j$, $s^j$ denote an input audio of singer $j$ and $p(s^j)$ denote the extracted pitch of $s^j$. Now given an input audio sequence $s^j$ and a target singer $k$ where $j,k = 1,2,...,N$ and $N$ is the number of singers, the output of the model is", "Note that $D$ is an autoregressive model which would feed the output back to itself. The reconstruction loss is", "where $\\mathcal {L}_{ce}(o, y)$ is the cross entropy loss applied to each element of $o$ and $y$. However, only reconstruction loss is not enough to train the model to learn to convert singing voice between different singers because it just forces the model to reconstruct the input voice. Therefore, a singer classification loss(also named domain confusion loss BIBREF0) is applied to make the encoder to learn a singer invariant representation", "Furthermore, a pitch regression loss is introduced to force the encoder to learn a pitch-independent representation and make the whole model obtain the pitch information from $p(s^j)$ rather than directly from the input audio", "where $\\mathcal {L}_{mse}(a, b)$ is the mean square error function $\\frac{1}{m}||a-b||_2^2$ and m is the number of elements in $a$. The overall loss we minimize to train the model is", "where $\\lambda $ and $\\mu $ are two weight factors. 
Furthermore, the adversarial loss used to train the singer classifier and pitch regression network is", "In the training process, we minimize $\\mathcal {L}_{ad}$ and $\\mathcal {L}_{total}$ alternately, that is", "Optimize $C_s$ and $C_p$ one step using $\\mathcal {L}_{ad}$ as the objective function.", "Optimize the whole model one step using $\\mathcal {L}_{total}$ as the objective function.", "Go back to step 1.", "Furthermore, backtranslation and mixup techniques BIBREF0 are also used to improve the quality of the converted singing voice." ], [ "The encoder and decoder networks follow the design in BIBREF13. The encoder is a fully convolutional network with three blocks of ten residual-layers which consists of a ReLU activation, a dilated convolution, a ReLU activation, a 1x1 convolution, and a residual summation in order. After three residual blocks, a 1x1 convolution and an average pooling with a kernel size of 800 are applied to get the final output. The decoder is a WaveNet BIBREF4 vocoder which consists of four blocks of ten residual layers. The linear interpolation and nearest-neighbor interpolation are applied to the input pitch and encoder output respectively, upsampling them to be of the same sample rate as the input audio waveform.", "As shown by Fig. FIGREF2, the singer classification network and pitch regression network have the same architecture of a stack of two convolutional neural networks with a kernel size of 3 and channels of 100. Except that the pitch regression network does not average the output of the two convolution networks before passing it into the final fully connected network. A dropout layer is also employed at the beginning of the network to make the training process more stable." ], [ "Here we compare the audio quality between our method and BIBREF0's method (Below we call USVC), and show that the input pitch can affect the output singing voice by qualitative analysis. Since the authors of BIBREF0 do not release their source code and only provide part of the converted results at their website, we implemented USVC by ourselves, denoted by USVC(our) below, to give a more comprehensive comparison. Audio samples are available at our website ." ], [ "NUS-48E BIBREF14 dataset, sung by 6 male singers and 6 female singers, was used to train the models. It contains 48 songs each with length of several minutes. Every singer provided 4 songs. The male part of the dataset was selected to train the models. During testing, We converted each one's singing voice to the other five singer's voice. Before training, We converted the songs to monophonic audio of 16kHz sample rate and PCM-16 bit format. Besides, 8-bit mu-law encoding was employed to reduce the input space to speed up the training process, although it will degrade the audio quality. Kaldi toolkit BIBREF15 was used to extract pitch from the songs with hop length of 100 which means that we could get 1600 pitch samples in an audio segment of one second. Before feeding them into the model, we normalized the value of pitch between 0 and 1." ], [ "We implemented USVC and PitchNet using PyTorch BIBREF16 framework. Both models were trained on two Tesla P40 GPUs for four days. Adam optimizer BIBREF17 was used with a learning rate of $10^{-3}$ and a decay factor of 0.98 every 1000 steps. The batch size was set to 4 and finally the models were trained for 30k steps. $\\lambda $ and $\\mu $ in the training loss (DISPLAY_FORM8)(DISPLAY_FORM9) were set to $0.01$ and $0.1$ respectively. 
The dropout probability in the singer classification network and pitch regression network were both $0.2$.", "During the training process, backtranslation and mixup BIBREF0 were employed to improve the conversion. New training samples were generated by mixing embedding vectors of two different singers A and B with a uniform random weight factor. Then these samples were fed to the model to reconstruct A's voice with the embedding vector of A. The reconstructed voice and original voice were used to calculate the reconstruction loss. After training for 200k steps without backtranslation and mixup, we generated 96 new audio segments every 2k steps and used them to train for 24 steps without the adversarial loss (DISPLAY_FORM9).", "Besides, audio time reversal and phase inversion BIBREF0 were also employed to augment the training data by 4 times." ], [ "To compare the conversions between USVC and PitchNet, we employed an automatic evaluation score and a human evaluation score.", "The automatic score roughly followed the design in BIBREF13. The pitch tracker of librosa package BIBREF18 was employed to extract pitch information of the input and output audio. Then the output pitch was compared to the input pitch using the normalized cross correlation (NCC) which would give a score between 0 and 1. The higher the score is, the better the output pitch matches the input pitch. We conducted the evaluation on USVC (our) and PitchNet. The evaluated automatic scores on conversion and reconstruction tasks are shown in Tab. TABREF14. Our method performed better both on conversion and reconstruction. The scores of reconstruction are higher than conversion since both models were trained using a reconstruction loss. However, the score of our method on conversion is even higher than the score of USVC (Our) on reconstruction.", "Mean Opinion Score (MOS) was used as a subjective metric to evaluate the quality of the converted audio. Two questions were asked: (1) what is quality of the audio? (naturalness) (2) How well does the converted version match the original? (similarity) A score of 1-5 would be given to answer the questions. The evaluation was conducted on USVC (Our) and PitchNet. Besides, the converted samples provided by BIBREF0 was also included to give a more convincing evaluation. As shown by Tab. TABREF15, the naturalness and similarity of our method are both higher than the other two ones. Our implementation of USVC performed slightly lower than the original author's because we cannot fully reproduce the results of them.", "Next we qualitatively analyze the influence of input pitch in our method. We used different pitch as input to observe how the output pitch would change along with the input pitch. The input pitch was multiplied by 0.7, 1.0 and 1.3 respectively. And the output pitch was also extracted by the pitch tracker of the librosa package. Fig. FIGREF16 plots the pitch of input audio and output audio with different pitch as input while keeping the target singer the same. As shown by Fig. FIGREF16, the output pitch changes significantly along with the input pitch. The examples are also presented at our website." ], [ "In this paper, a novel unsupervised singing voice conversion method named PitchNet is proposed. A pitch regression network is employed to render an adversarial loss separating pitch related information from the latent space in autoencoder. 
After the WaveNet-like encoder, a singer- and pitch-invariant representation is generated and then fed into the WaveNet decoder, conditioned on the singer embedding and the extracted pitch, to reconstruct the target singing voice. Our method outperforms the existing unsupervised singing voice conversion method and achieves flexible pitch manipulation." ], [ "The authors would like to thank Kun Xu and other members of the Tencent AI Lab team for discussions and suggestions." ] ], "section_name": [ "Introduction", "Method", "Method ::: Training Loss", "Method ::: The architecture of the Sub-Networks", "Experiments", "Experiments ::: Dataset and Preprocessing", "Experiments ::: Training", "Experiments ::: Evaluation", "Conclusion", "Acknowledgements" ] }
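The training procedure described in the Training Loss section above alternates between updating the adversarial networks on $\mathcal {L}_{ad}$ and updating the autoencoder on $\mathcal {L}_{total}$. The toy PyTorch sketch below shows one way that alternation could be organized. It is not the authors' implementation: the sub-networks are stand-in linear layers rather than the WaveNet-based models, the reconstruction loss is a plain MSE stand-in for the softmax loss over mu-law samples, and subtracting the adversarial terms in the main update (so the encoder maximizes them) is an assumed sign convention, since the exact total-loss formula is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real sub-networks; all sizes are illustrative only.
enc = nn.Linear(64, 16)                 # "encoder" -> latent representation
dec = nn.Linear(16 + 8 + 1, 64)         # latent + singer embedding + pitch -> audio
emb = nn.Embedding(12, 8)               # singer Look Up Table
singer_clf = nn.Linear(16, 12)          # adversarial singer classifier C_s
pitch_reg = nn.Linear(16, 1)            # adversarial pitch regressor C_p

opt_adv = torch.optim.Adam(list(singer_clf.parameters()) + list(pitch_reg.parameters()), lr=1e-3)
main_params = list(enc.parameters()) + list(dec.parameters()) + list(emb.parameters())
opt_main = torch.optim.Adam(main_params, lr=1e-3)
lam, mu = 0.01, 0.1                     # lambda and mu from the paper's training setup

def losses(x, singer_id, pitch):
    z = enc(x)
    recon = dec(torch.cat([z, emb(singer_id), pitch], dim=-1))
    l_recon = ((recon - x) ** 2).mean()                       # stand-in reconstruction loss
    l_singer = nn.functional.cross_entropy(singer_clf(z), singer_id)
    l_pitch = ((pitch_reg(z) - pitch) ** 2).mean()
    return l_recon, l_singer, l_pitch

for step in range(100):                 # random toy data; real training uses waveform segments
    x = torch.randn(4, 64)
    singer_id = torch.randint(0, 12, (4,))
    pitch = torch.rand(4, 1)

    # Step 1: update C_s and C_p one step on the adversarial loss L_ad.
    _, l_singer, l_pitch = losses(x, singer_id, pitch)
    opt_adv.zero_grad()
    (l_singer + l_pitch).backward()
    opt_adv.step()

    # Step 2: update the autoencoder one step; the encoder is pushed to make C_s and C_p fail.
    # Stale gradients from the other step are cleared by the respective zero_grad calls.
    l_recon, l_singer, l_pitch = losses(x, singer_id, pitch)
    opt_main.zero_grad()
    (l_recon - lam * l_singer - mu * l_pitch).backward()
    opt_main.step()
```

Alternating the two optimizers this way mirrors the "optimize $C_s$ and $C_p$ one step, then optimize the whole model one step" schedule stated in the Training Loss section.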
{ "answers": [ { "annotation_id": [ "b32d6637e5f41f208986e0ad8df50bad43f97efe" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "affb04e1b97368285c294d1354a6ad525adefaba", "bea9231e463ddb1ee405aeeb92418e286a4b183a" ], "answer": [ { "evidence": [ "To compare the conversions between USVC and PitchNet, we employed an automatic evaluation score and a human evaluation score.", "The automatic score roughly followed the design in BIBREF13. The pitch tracker of librosa package BIBREF18 was employed to extract pitch information of the input and output audio. Then the output pitch was compared to the input pitch using the normalized cross correlation (NCC) which would give a score between 0 and 1. The higher the score is, the better the output pitch matches the input pitch. We conducted the evaluation on USVC (our) and PitchNet. The evaluated automatic scores on conversion and reconstruction tasks are shown in Tab. TABREF14. Our method performed better both on conversion and reconstruction. The scores of reconstruction are higher than conversion since both models were trained using a reconstruction loss. However, the score of our method on conversion is even higher than the score of USVC (Our) on reconstruction.", "Mean Opinion Score (MOS) was used as a subjective metric to evaluate the quality of the converted audio. Two questions were asked: (1) what is quality of the audio? (naturalness) (2) How well does the converted version match the original? (similarity) A score of 1-5 would be given to answer the questions. The evaluation was conducted on USVC (Our) and PitchNet. Besides, the converted samples provided by BIBREF0 was also included to give a more convincing evaluation. As shown by Tab. TABREF15, the naturalness and similarity of our method are both higher than the other two ones. Our implementation of USVC performed slightly lower than the original author's because we cannot fully reproduce the results of them.", "Next we qualitatively analyze the influence of input pitch in our method. We used different pitch as input to observe how the output pitch would change along with the input pitch. The input pitch was multiplied by 0.7, 1.0 and 1.3 respectively. And the output pitch was also extracted by the pitch tracker of the librosa package. Fig. FIGREF16 plots the pitch of input audio and output audio with different pitch as input while keeping the target singer the same. As shown by Fig. FIGREF16, the output pitch changes significantly along with the input pitch. The examples are also presented at our website." ], "extractive_spans": [ "To compare the conversions between USVC and PitchNet, we employed an automatic evaluation score and a human evaluation score." ], "free_form_answer": "", "highlighted_evidence": [ "To compare the conversions between USVC and PitchNet, we employed an automatic evaluation score and a human evaluation score.", "The automatic score roughly followed the design in BIBREF13. The pitch tracker of librosa package BIBREF18 was employed to extract pitch information of the input and output audio. Then the output pitch was compared to the input pitch using the normalized cross correlation (NCC) which would give a score between 0 and 1. The higher the score is, the better the output pitch matches the input pitch. We conducted the evaluation on USVC (our) and PitchNet. 
The evaluated automatic scores on conversion and reconstruction tasks are shown in Tab. TABREF14. Our method performed better both on conversion and reconstruction. The scores of reconstruction are higher than conversion since both models were trained using a reconstruction loss. However, the score of our method on conversion is even higher than the score of USVC (Our) on reconstruction.", "Mean Opinion Score (MOS) was used as a subjective metric to evaluate the quality of the converted audio. Two questions were asked: (1) what is quality of the audio? (naturalness) (2) How well does the converted version match the original? (similarity) A score of 1-5 would be given to answer the questions. The evaluation was conducted on USVC (Our) and PitchNet. Besides, the converted samples provided by BIBREF0 was also included to give a more convincing evaluation. As shown by Tab. TABREF15, the naturalness and similarity of our method are both higher than the other two ones. Our implementation of USVC performed slightly lower than the original author's because we cannot fully reproduce the results of them.", "Next we qualitatively analyze the influence of input pitch in our method. We used different pitch as input to observe how the output pitch would change along with the input pitch. The input pitch was multiplied by 0.7, 1.0 and 1.3 respectively. And the output pitch was also extracted by the pitch tracker of the librosa package. Fig. FIGREF16 plots the pitch of input audio and output audio with different pitch as input while keeping the target singer the same. As shown by Fig. FIGREF16, the output pitch changes significantly along with the input pitch. The examples are also presented at our website." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To compare the conversions between USVC and PitchNet, we employed an automatic evaluation score and a human evaluation score.", "The automatic score roughly followed the design in BIBREF13. The pitch tracker of librosa package BIBREF18 was employed to extract pitch information of the input and output audio. Then the output pitch was compared to the input pitch using the normalized cross correlation (NCC) which would give a score between 0 and 1. The higher the score is, the better the output pitch matches the input pitch. We conducted the evaluation on USVC (our) and PitchNet. The evaluated automatic scores on conversion and reconstruction tasks are shown in Tab. TABREF14. Our method performed better both on conversion and reconstruction. The scores of reconstruction are higher than conversion since both models were trained using a reconstruction loss. However, the score of our method on conversion is even higher than the score of USVC (Our) on reconstruction.", "Mean Opinion Score (MOS) was used as a subjective metric to evaluate the quality of the converted audio. Two questions were asked: (1) what is quality of the audio? (naturalness) (2) How well does the converted version match the original? (similarity) A score of 1-5 would be given to answer the questions. The evaluation was conducted on USVC (Our) and PitchNet. Besides, the converted samples provided by BIBREF0 was also included to give a more convincing evaluation. As shown by Tab. TABREF15, the naturalness and similarity of our method are both higher than the other two ones. Our implementation of USVC performed slightly lower than the original author's because we cannot fully reproduce the results of them." 
], "extractive_spans": [], "free_form_answer": "Automatic: Normalized cross correlation (NCC)\nManual: Mean Opinion Score (MOS)", "highlighted_evidence": [ "To compare the conversions between USVC and PitchNet, we employed an automatic evaluation score and a human evaluation score.\n\nThe automatic score roughly followed the design in BIBREF13. The pitch tracker of librosa package BIBREF18 was employed to extract pitch information of the input and output audio. Then the output pitch was compared to the input pitch using the normalized cross correlation (NCC) which would give a score between 0 and 1. The higher the score is, the better the output pitch matches the input pitch. ", "Mean Opinion Score (MOS) was used as a subjective metric to evaluate the quality of the converted audio. Two questions were asked: (1) what is quality of the audio? (naturalness) (2) How well does the converted version match the original? (similarity) A score of 1-5 would be given to answer the questions." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "Are there elements, other than pitch, that can potentially result in out of key converted singing?", "How is the quality of singing voice measured?" ], "question_id": [ "bfbd6040cb95b179118557352e8e3899ef25c525", "d6e353e0231d09fd5dcba493544d53706f3fe1ab" ], "question_writer": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Fig. 2: The architecture of the singer classification network and pitch regression network. Left: Singer classification network; Right: Pitch regression network.", "Fig. 1: The overall architecture of PitchNet. PitchNet consists of five parts, an encoder, a decoder, a Look Up Table (LUT) of singer embedding vectors, a singer classification network and a pitch regression network. The audio waveform is directly fed into the encoder. The output of the encoder, the singer embedding vector retrieved from LUT and the input pitch are concatenated together to condition on the WaveNet decoder to output audio waveform.", "Table 1: Automatic quality scores", "Table 2: MOS scores", "Fig. 3: The pitch of the source audio and converted audio with different pitch as input" ], "file": [ "2-Figure2-1.png", "2-Figure1-1.png", "3-Table1-1.png", "3-Table2-1.png", "4-Figure3-1.png" ] }
[ "How is the quality of singing voice measured?" ]
[ [ "1912.01852-Experiments ::: Evaluation-2", "1912.01852-Experiments ::: Evaluation-1", "1912.01852-Experiments ::: Evaluation-0", "1912.01852-Experiments ::: Evaluation-3" ] ]
[ "Automatic: Normalized cross correlation (NCC)\nManual: Mean Opinion Score (MOS)" ]
270
1808.09029
Pyramidal Recurrent Unit for Language Modeling
LSTMs are powerful tools for modeling contextual information, as evidenced by their success at the task of language modeling. However, modeling contexts in very high dimensional space can lead to poor generalizability. We introduce the Pyramidal Recurrent Unit (PRU), which enables learning representations in high dimensional space with more generalization power and fewer parameters. PRUs replace the linear transformation in LSTMs with more sophisticated interactions including pyramidal and grouped linear transformations. This architecture gives strong results on word-level language modeling while reducing the number of parameters significantly. In particular, PRU improves the perplexity of a recent state-of-the-art language model Merity et al. (2018) by up to 1.3 points while learning 15-20% fewer parameters. For similar number of model parameters, PRU outperforms all previous RNN models that exploit different gating mechanisms and transformations. We provide a detailed examination of the PRU and its behavior on the language modeling tasks. Our code is open-source and available at https://sacmehta.github.io/PRU/
{ "paragraphs": [ [ "Long short term memory (LSTM) units BIBREF1 are popular for many sequence modeling tasks and are used extensively in language modeling. A key to their success is their articulated gating structure, which allows for more control over the information passed along the recurrence. However, despite the sophistication of the gating mechanisms employed in LSTMs and similar recurrent units, the input and context vectors are treated with simple linear transformations prior to gating. Non-linear transformations such as convolutions BIBREF2 have been used, but these have not achieved the performance of well regularized LSTMs for language modeling BIBREF3 .", "A natural way to improve the expressiveness of linear transformations is to increase the number of dimensions of the input and context vectors, but this comes with a significant increase in the number of parameters which may limit generalizability. An example is shown in Figure FIGREF1 , where LSTMs performance decreases with the increase in dimensions of the input and context vectors. Moreover, the semantics of the input and context vectors are different, suggesting that each may benefit from specialized treatment.", "Guided by these insights, we introduce a new recurrent unit, the Pyramidal Recurrent Unit (PRU), which is based on the LSTM gating structure. Figure FIGREF2 provides an overview of the PRU. At the heart of the PRU is the pyramidal transformation (PT), which uses subsampling to effect multiple views of the input vector. The subsampled representations are combined in a pyramidal fusion structure, resulting in richer interactions between the individual dimensions of the input vector than is possible with a linear transformation. Context vectors, which have already undergone this transformation in the previous cell, are modified with a grouped linear transformation (GLT) which allows the network to learn latent representations in high dimensional space with fewer parameters and better generalizability (see Figure FIGREF1 ).", "We show that PRUs can better model contextual information and demonstrate performance gains on the task of language modeling. The PRU improves the perplexity of the current state-of-the-art language model BIBREF0 by up to 1.3 points, reaching perplexities of 56.56 and 64.53 on the Penn Treebank and WikiText2 datasets while learning 15-20% fewer parameters. Replacing an LSTM with a PRU results in improvements in perplexity across a variety of experimental settings. We provide detailed ablations which motivate the design of the PRU architecture, as well as detailed analysis of the effect of the PRU on other components of the language model." ], [ "Multiple methods, including a variety of gating structures and transformations, have been proposed to improve the performance of recurrent neural networks (RNNs). We first describe these approaches and then provide an overview of recent work in language modeling." ], [ "We introduce Pyramidal Recurrent Units (PRUs), a new RNN architecture which improves modeling of context by allowing for higher dimensional vector representations while learning fewer parameters. Figure FIGREF2 provides an overview of PRU. We first elaborate on the details of the pyramidal transformation and the grouped linear transformation. We then describe our recurrent unit, PRU." 
], [ "The basic transformation in many recurrent units is a linear transformation INLINEFORM0 defined as: DISPLAYFORM0 ", "where INLINEFORM0 are learned weights that linearly map INLINEFORM1 to INLINEFORM2 . To simplify notation, we omit the biases.", "Motivated by successful applications of sub-sampling in computer vision (e.g., BIBREF22 , BIBREF23 , BIBREF9 , BIBREF24 ), we subsample input vector INLINEFORM0 into INLINEFORM1 pyramidal levels to achieve representation of the input vector at multiple scales. This sub-sampling operation produces INLINEFORM2 vectors, represented as INLINEFORM3 , where INLINEFORM4 is the sampling rate and INLINEFORM5 . We learn scale-specific transformations INLINEFORM6 for each INLINEFORM7 . The transformed subsamples are concatenated to produce the pyramidal analog to INLINEFORM8 , here denoted as INLINEFORM9 : DISPLAYFORM0 ", "where INLINEFORM0 indicates concatenation. We note that pyramidal transformation with INLINEFORM1 is the same as the linear transformation.", "To improve gradient flow inside the recurrent unit, we combine the input and output using an element-wise sum (when dimension matches) to produce residual analog of pyramidal transformation, as shown in Figure FIGREF2 BIBREF25 .", "We sub-sample the input vector INLINEFORM0 into INLINEFORM1 pyramidal levels using the kernel-based approach BIBREF8 , BIBREF9 . Let us assume that we have a kernel INLINEFORM2 with INLINEFORM3 elements. Then, the input vector INLINEFORM4 can be sub-sampled as: DISPLAYFORM0 ", "where INLINEFORM0 represents the stride and INLINEFORM1 .", "The number of parameters learned by the linear transformation and the pyramidal transformation with INLINEFORM0 pyramidal levels to map INLINEFORM1 to INLINEFORM2 are INLINEFORM3 and INLINEFORM4 respectively. Thus, pyramidal transformation reduces the parameters of a linear transformation by a factor of INLINEFORM5 . For example, the pyramidal transformation (with INLINEFORM6 and INLINEFORM7 ) learns INLINEFORM8 fewer parameters than the linear transformation." ], [ "Many RNN architectures apply linear transformations to both the input and context vector. However, this may not be ideal due to the differing semantics of each vector. In many NLP applications including language modeling, the input vector is a dense word embedding which is shared across all contexts for a given word in a dataset. In contrast, the context vector is highly contextualized by the current sequence. The differences between the input and context vector motivate their separate treatment in the PRU architecture.", "The weights learned using the linear transformation (Eq. EQREF9 ) are reused over multiple time steps, which makes them prone to over-fitting BIBREF26 . To combat over-fitting, various methods, such as variational dropout BIBREF26 and weight dropout BIBREF0 , have been proposed to regularize these recurrent connections. To further improve generalization abilities while simultaneously enabling the recurrent unit to learn representations at very high dimensional space, we propose to use grouped linear transformation (GLT) instead of standard linear transformation for recurrent connections BIBREF27 . While pyramidal and linear transformations can be applied to transform context vectors, our experimental results in Section SECREF39 suggests that GLTs are more effective.", "The linear transformation INLINEFORM0 maps INLINEFORM1 linearly to INLINEFORM2 . 
Grouped linear transformations break the linear interactions by factoring the linear transformation into two steps. First, a GLT splits the input vector INLINEFORM3 into INLINEFORM4 smaller groups such that INLINEFORM5 . Second, a linear transformation INLINEFORM6 is applied to map INLINEFORM7 linearly to INLINEFORM8 , for each INLINEFORM9 . The INLINEFORM10 resultant output vectors INLINEFORM11 are concatenated to produce the final output vector INLINEFORM12 . DISPLAYFORM0 ", "GLTs learn representations at low dimensionality. Therefore, a GLT requires INLINEFORM0 fewer parameters than the linear transformation. We note that GLTs are a subset of linear transformations. In a linear transformation, each neuron receives an input from each element in the input vector, while in a GLT, each neuron receives an input from a subset of the input vector. Therefore, a GLT is the same as a linear transformation when INLINEFORM1 ." ], [ "We extend the basic gating architecture of the LSTM with the pyramidal and grouped linear transformations outlined above to produce the Pyramidal Recurrent Unit (PRU), whose improved sequence modeling capacity is evidenced in Section SECREF4 .", "At time INLINEFORM0 , the PRU combines the input vector INLINEFORM1 and the previous context vector (or previous hidden state vector) INLINEFORM2 using the following transformation function: DISPLAYFORM0 ", "where INLINEFORM0 indexes the various gates in the LSTM model, and INLINEFORM1 and INLINEFORM2 represent the pyramidal and grouped linear transformations defined in Eqns. EQREF10 and EQREF15 , respectively.", "We now incorporate INLINEFORM0 into the LSTM gating architecture to produce the PRU. At time INLINEFORM1 , a PRU cell takes INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 as inputs to produce forget INLINEFORM5 , input INLINEFORM6 , output INLINEFORM7 , and content INLINEFORM8 gate signals. The inputs are combined with these gate signals to produce the context vector INLINEFORM9 and cell state INLINEFORM10 . Mathematically, the PRU with the LSTM gating architecture can be defined as: DISPLAYFORM0 ", "where INLINEFORM0 represents the element-wise multiplication operation, and INLINEFORM1 and INLINEFORM2 are the sigmoid and hyperbolic tangent activation functions. We note that the LSTM is a special case of the PRU when INLINEFORM3 = INLINEFORM4 =1." ], [ "To showcase the effectiveness of the PRU, we evaluate its performance on two standard datasets for word-level language modeling and compare with state-of-the-art methods. Additionally, we provide a detailed examination of the PRU and its behavior on language modeling tasks." ], [ "Following recent works, we compare on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20 . For both datasets, we follow the same training, validation, and test splits as in BIBREF0 .", "We extend the language model AWD-LSTM BIBREF0 by replacing its LSTM layers with PRUs. Our model uses 3 layers of PRUs with an embedding size of 400. The number of parameters learned by state-of-the-art methods varies from 18M to 66M, with the majority of the methods learning about 22M to 24M parameters on the PTB dataset. For a fair comparison with state-of-the-art methods, we fix the model size to 19M and vary the value of INLINEFORM0 and the hidden layer sizes so that the total number of learned parameters is similar across different configurations. We use 1000, 1200, and 1400 as hidden layer sizes for values of INLINEFORM1 =1, 2, and 4, respectively. 
We use the same settings for the WT-2 dataset. We set the number of pyramidal levels INLINEFORM2 to two in our experiments and use average pooling for sub-sampling. These values are selected based on our ablation experiments on the validation set (Section SECREF39 ). We measure the performance of our models in terms of word-level perplexity. We follow the same training strategy as in BIBREF0 .", "To understand the effect of regularization methods on the performance of PRUs, we perform experiments under two different settings: (1) Standard dropout: We use standard dropout BIBREF12 with a probability of 0.5 after the embedding layer, on the output between LSTM layers, and on the output of the final LSTM layer. (2) Advanced dropout: We use the same dropout techniques with the same dropout values as in BIBREF0 . We call this model AWD-PRU." ], [ "Table TABREF23 compares the performance of the PRU with state-of-the-art methods. We can see that the PRU achieves the best performance with fewer parameters.", "PRUs achieve the same or better performance than LSTMs. In particular, the performance of PRUs improves with the increasing value of INLINEFORM0 . At INLINEFORM1 , PRUs outperform LSTMs by about 4 points on the PTB dataset and by about 3 points on the WT-2 dataset. This is explained in part by the regularization effect of the grouped linear transformation (Figure FIGREF1 ). With grouped linear and pyramidal transformations, PRUs learn rich representations in very high dimensional space while learning fewer parameters. On the other hand, LSTMs overfit the training data at such high dimensions and learn INLINEFORM2 to INLINEFORM3 more parameters than PRUs.", "With advanced dropout, the performance of PRUs improves by about 4 points on the PTB dataset and 7 points on the WT-2 dataset. This further improves with finetuning on the PTB (about 2 points) and WT-2 (about 1 point) datasets.", "For a similar number of parameters, the PRU with standard dropout outperforms most of the state-of-the-art methods by a large margin on the PTB dataset (e.g. RAN BIBREF7 by 16 points with 4M fewer parameters, QRNN BIBREF33 by 16 points with 1M more parameters, and NAS BIBREF31 by 1.58 points with 6M fewer parameters). With advanced dropout, the PRU delivers the best performance. On both datasets, the PRU improves the perplexity by about 1 point while learning 15-20% fewer parameters.", "The PRU is a drop-in replacement for the LSTM; therefore, it can improve language models with modern inference techniques such as dynamic evaluation BIBREF21 . When we evaluate PRU-based language models (only with standard dropout) with dynamic evaluation on the PTB test set, the perplexity of the PRU ( INLINEFORM0 ) improves from 62.42 to 55.23, while the perplexity of an LSTM ( INLINEFORM1 ) with similar settings improves from 66.29 to 58.79, suggesting that modern inference techniques are equally applicable to PRU-based language models." ], [ "It is shown above that the PRU can learn representations at higher dimensionality with more generalization power, resulting in performance gains for language modeling. A closer analysis of the impact of the PRU in a language modeling system reveals several factors that help explain how the PRU achieves these gains.", "As exemplified in Table TABREF34 , the PRU tends toward more confident decisions, placing more of the probability mass on the top next-word prediction than the LSTM. 
To quantify this effect, we calculate the entropy of the next-token distribution for both the PRU and the LSTM using 3687 contexts from the PTB validation set. Figure FIGREF32 shows a histogram of these entropies, using bins of size 0.23. We see that the PRU more often produces lower-entropy distributions, corresponding to higher confidence in next-token choices. This is evidenced by the mass of the red PRU curve lying in the lower entropy ranges compared to the blue LSTM curve. The PRU can produce confident decisions in part because more information is encoded in the higher dimensional context vectors.", "The PRU has the ability to model individual words at different resolutions through the pyramidal transform, which provides multiple paths for the gradient to the embedding layer (similar to multi-task learning) and improves the flow of information. When considering the embeddings by part of speech, we find that the pyramid level 1 embeddings exhibit higher variance than the LSTM across all POS categories (Figure FIGREF33 ), and that pyramid level 2 embeddings show extremely low variance. We hypothesize that the LSTM must encode both coarse group similarities and individual word differences into the same vector space, reducing the space between individual words of the same category. The PRU can rely on the subsampled embeddings to account for coarse-grained group similarities, allowing for finer individual word distinctions in the embedding layer. This hypothesis is strengthened by the entropy results described above: a model which can make finer distinctions between individual words can more confidently assign probability mass. A model that cannot make these distinctions, such as the LSTM, must spread its probability mass across a larger class of similar words.", "Saliency analysis using gradients helps identify relevant words in a test sequence that contribute to the prediction BIBREF34 , BIBREF35 , BIBREF36 . These approaches compute the relevance as the squared norm of the gradients obtained through back-propagation. Table TABREF34 visualizes the heatmaps for different sequences. PRUs, in general, give more relevance to contextual words than LSTMs, such as southeast (sample 1), cost (sample 2), face (sample 4), and introduced (sample 5), which helps in making more confident decisions. Furthermore, when gradients during back-propagation are visualized BIBREF37 (Table TABREF34 ), we find that PRUs have better gradient coverage than LSTMs, suggesting that PRUs use more features that contribute to the decision than LSTMs do. This also suggests that PRUs update more parameters at each iteration, which results in faster training. The language model in BIBREF0 takes 500 and 750 epochs to converge with the PRU and the LSTM as the recurrent unit, respectively." ], [ "In this section, we provide a systematic analysis of our design choices. Our training methodology is the same as described in Section SECREF19 with standard dropout. For a thorough understanding of our design choices, we use a language model with a single PRU layer and fix the size of the embedding and hidden layers to 600. The word-level perplexities are reported on the validation sets of the PTB and the WT-2 datasets.", "The two hyper-parameters that control the trade-off between performance and number of parameters in PRUs are the number of pyramidal levels INLINEFORM0 and groups INLINEFORM1 . 
Figure FIGREF35 provides a trade-off between perplexity and recurrent unit (RU) parameters.", "Variable INLINEFORM0 and fixed INLINEFORM1 : When we increase the number of pyramidal levels INLINEFORM2 at a fixed value of INLINEFORM3 , the performance of the PRU drops by about 1 to 4 points while reducing the total number of recurrent unit parameters by up to 15%. We note that the PRU with INLINEFORM4 at INLINEFORM5 delivers similar performance to the LSTM while learning about 15% fewer recurrent unit parameters.", "Fixed INLINEFORM0 and variable INLINEFORM1 : When we vary the value of INLINEFORM2 at a fixed number of pyramidal levels INLINEFORM3 , the total number of recurrent unit parameters decreases significantly with a minimal impact on the perplexity. For example, PRUs with INLINEFORM4 and INLINEFORM5 learn 77% fewer recurrent unit parameters while their perplexity (lower is better) increases by about 12% in comparison to LSTMs. Moreover, the decrease in the number of parameters at higher values of INLINEFORM6 enables PRUs to learn representations in high dimensional space with better generalizability (Table TABREF23 ).", "Table TABREF43 shows the impact of different transformations of the input vector INLINEFORM0 and the context vector INLINEFORM1 . We make the following observations: (1) Using the pyramidal transformation for the input vectors improves the perplexity by about 1 point on both the PTB and WT-2 datasets while reducing the number of recurrent unit parameters by about 14% (see R1 and R4). We note that the performance of the PRU drops by up to 1 point when residual connections are not used (R4 and R6). (2) Using the grouped linear transformation for context vectors reduces the total number of recurrent unit parameters by about 75% while the performance drops by about 11% (see R3 and R4). When we use the pyramidal transformation instead of the linear transformation, the performance drops by up to 2% while there is no significant drop in the number of parameters (R4 and R5).", "We set the sub-sampling kernel INLINEFORM0 (Eq. EQREF12 ), with stride INLINEFORM1 and a size of 3 ( INLINEFORM2 ), in four different ways: (1) Skip: We skip every other element in the input vector. (2) Convolution: We initialize the elements of INLINEFORM3 randomly from a normal distribution and learn them while training the model. We limit the output values to between -1 and 1 using the INLINEFORM4 activation function to make training stable. (3) Avg. pool: We initialize the elements of INLINEFORM5 to INLINEFORM6 . (4) Max pool: We select the maximum value in the kernel window INLINEFORM7 .", "Table TABREF45 compares the performance of the PRU with different sampling methods. Average pooling performs the best, while skipping gives comparable performance. Both of these methods enable the network to learn richer word representations while representing the input vector in different forms, thus delivering higher performance. Surprisingly, a convolution-based sub-sampling method does not perform as well as the averaging method. The INLINEFORM0 function used after convolution limits the range of output values, which are further limited by the LSTM gating structure, thereby impeding the flow of information inside the cell. Max pooling forces the network to learn representations from high-magnitude elements, so the distinguishing features between elements vanish, resulting in poor performance." 
], [ "We introduce the Pyramidal Recurrent Unit, which better model contextual information by admitting higher dimensional representations with good generalizability. When applied to the task of language modeling, PRUs improve perplexity across several settings, including recent state-of-the-art systems. Our analysis shows that the PRU improves the flow of gradient and expand the word embedding subspace, resulting in more confident decisions. Here we have shown improvements for language modeling. In future, we plan to study the performance of PRUs on different tasks, including machine translation and question answering. In addition, we will study the performance of the PRU on language modeling with more recent inference techniques, such as dynamic evaluation and mixture of softmax." ], [ "This research was supported by NSF (IIS 1616112, III 1703166), Allen Distinguished Investigator Award, and gifts from Allen Institute for AI, Google, Amazon, and Bloomberg. We are grateful to Aaron Jaech, Hannah Rashkin, Mandar Joshi, Aniruddha Kembhavi, and anonymous reviewers for their helpful comments." ] ], "section_name": [ "Introduction", "Related work", "Pyramidal Recurrent Units", "Pyramidal transformation for input", "Grouped linear transformation for context", "Pyramidal Recurrent Unit", "Experiments", "Set-up", "Results", "Analysis", "Ablation studies", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "bf1de0355349b3447e39b12d068a7f8b8aa3fd66", "fbb7eb94ac1d40d596d26cbd31f1d95e42ddf56c" ], "answer": [ { "evidence": [ "Following recent works, we compare on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20 . For both datasets, we follow the same training, validation, and test splits as in BIBREF0 ." ], "extractive_spans": [ " Penn Treebank", "WikiText2" ], "free_form_answer": "", "highlighted_evidence": [ "Following recent works, we compare on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Following recent works, we compare on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20 . For both datasets, we follow the same training, validation, and test splits as in BIBREF0 ." ], "extractive_spans": [ "Penn Treebank (PTB) ", "WikiText2 (WT-2)" ], "free_form_answer": "", "highlighted_evidence": [ "Following recent works, we compare on two widely used datasets, the Penn Treebank (PTB) BIBREF28 as prepared by BIBREF29 and WikiText2 (WT-2) BIBREF20 . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "d88caf705f7481cbbad6e282a91e390fc583f5cb" ], "answer": [ { "evidence": [ "Table TABREF23 compares the performance of the PRU with state-of-the-art methods. We can see that the PRU achieves the best performance with fewer parameters.", "FLOAT SELECTED: Table 1: Comparison of single model word-level perplexity of our model with state-of-the-art on validation and test sets of Penn Treebank and Wikitext-2 dataset. For evaluation, we select the model with minimum validation loss. Lower perplexity value represents better performance." ], "extractive_spans": [], "free_form_answer": "Variational LSTM, CharCNN, Pointer Sentinel-LSTM, RHN, NAS Cell, SRU, QRNN, RAN, 4-layer skip-connection LSTM, AWD-LSTM, Quantized LSTM", "highlighted_evidence": [ "Table TABREF23 compares the performance of the PRU with state-of-the-art methods. ", "FLOAT SELECTED: Table 1: Comparison of single model word-level perplexity of our model with state-of-the-art on validation and test sets of Penn Treebank and Wikitext-2 dataset. For evaluation, we select the model with minimum validation loss. Lower perplexity value represents better performance." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "what data did they use?", "what previous RNN models do they compare with?" ], "question_id": [ "7bd6a6ec230e1efb27d691762cc0674237dc7967", "6aaf12505add25dd133c7b0dafe8f4fe966d1f1d" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
{ "caption": [ "Figure 1: Comparison of training (solid lines) and validation (dashed lines) perplexities on the Penn Treebank with standard dropout for pyramidal recurrent units (PRU) and LSTM. PRUs learn latent representations in very high-dimensional space with good generalizability and fewer parameters. See Section 3 for more details about PRUs. Best viewed in color.", "Figure 2: Block diagram visualizing the transformations in pyramidal recurrent unit (left) and the LSTM (bottom right) along with the LSTM gating architecture (top right). Blue, red, green (or orange), and purple signify the current input xt, output of the previous cell ht−1, the output of transformations, and the fused output, respectively. The color intensity is used to represent sub-sampling and grouping operations.", "Table 1: Comparison of single model word-level perplexity of our model with state-of-the-art on validation and test sets of Penn Treebank and Wikitext-2 dataset. For evaluation, we select the model with minimum validation loss. Lower perplexity value represents better performance.", "Figure 3: Histogram of the entropies of next-token distributions predicted by the PRU (mean 3.80) and the LSTM (mean 3.93) on the PTB validation set. Lower entropy values indicate higher confidence decisions, which is desirable if decisions are often correct.", "Figure 4: Variance of learned word embeddings for different categories of words on the PTB validation set. We compute the variance of a group of embeddings as the average squared euclidean distance to their mean. Higher variance may allow for better intra-category distinctions. The PRU with pyramid levels 1 and 2 is shown.", "Table 2: Qualitative comparison between the LSTM and the PRU: (a) Gradient-based saliency analysis along with top-5 predicted words. (b) Gradients during back-propagation. For computing the gradients for a given test sequence, the top-1 predicted word was used as the true predicted word. Best viewed in color.", "Figure 5: Impact of number of groups g and pyramidal levels K on the perplexity. Reduction in recurrent unit (RU) parameters is computed with respect to LSTM. Lower perplexity value represents better performance.", "Table 4: Impact of different sub-sampling methods on the word-level perplexity (lower is better). We used g=1 and K=4 in our experiments.", "Table 3: Impact of different transformations used for processing input and context vectors (LT - linear transformation, PT - pyramidal transformation, and GLT - grouped linear transformation). Here, † represents that PT was used without residual connection, PPL represents word-level perplexity (lower is better), and the number of parameters are in million. We used K=g=4 in our experiments." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "6-Table1-1.png", "7-Figure3-1.png", "7-Figure4-1.png", "8-Table2-1.png", "8-Figure5-1.png", "9-Table4-1.png", "9-Table3-1.png" ] }
[ "what previous RNN models do they compare with?" ]
[ [ "1808.09029-6-Table1-1.png", "1808.09029-Results-0" ] ]
[ "Variational LSTM, CharCNN, Pointer Sentinel-LSTM, RHN, NAS Cell, SRU, QRNN, RAN, 4-layer skip-connection LSTM, AWD-LSTM, Quantized LSTM" ]
271
2004.04721
Translation Artifacts in Cross-lingual Transfer Learning
Both human and machine translation play a central role in cross-lingual transfer learning: many multilingual datasets have been created through professional translation services, and using machine translation to translate either the test set or the training set is a widely used transfer technique. In this paper, we show that such a translation process can introduce subtle artifacts that have a notable impact on existing cross-lingual models. For instance, in natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them, which current models are highly sensitive to. We show that some previous findings in cross-lingual transfer learning need to be reconsidered in the light of this phenomenon. Based on the gained insights, we also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively.
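The abstract's claim that translating the premise and the hypothesis independently reduces their lexical overlap suggests a simple diagnostic. The sketch below is our own illustration, not code from the paper: the overlap statistic (Jaccard similarity over lowercased word tokens) is one plausible proxy for the notion of lexical overlap, and the example sentence pairs are invented placeholders.

```python
import re

def word_tokens(text):
    """Lowercased word tokens; a crude stand-in for proper tokenization."""
    return set(re.findall(r"\w+", text.lower()))

def lexical_overlap(premise, hypothesis):
    """Jaccard similarity between premise and hypothesis token sets."""
    p, h = word_tokens(premise), word_tokens(hypothesis)
    return len(p & h) / max(len(p | h), 1)

def mean_overlap(pairs):
    return sum(lexical_overlap(p, h) for p, h in pairs) / max(len(pairs), 1)

# Comparing the statistic on original pairs vs. pairs whose two sides were
# translated independently would expose the drop in overlap discussed in the
# paper; the pairs below are invented placeholders.
original_pairs = [("The dog is sleeping on the sofa.", "The dog is sleeping.")]
translated_pairs = [("The dog sleeps on the couch.", "A dog is asleep.")]
print(mean_overlap(original_pairs), mean_overlap(translated_pairs))
```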
{ "paragraphs": [ [ "While most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desirable properties: it relies on existing professional translation services rather than requiring expertise in multiple languages, and it results in parallel evaluation sets that offer a meaningful measure of the cross-lingual transfer gap of different models. The resulting multilingual datasets are generally used for evaluation only, relying on existing English datasets for training.", "Closely related to that, cross-lingual transfer learning aims to leverage large datasets available in one language—typically English—to build multilingual models that can generalize to other languages. Previous work has explored 3 main approaches to that end: machine translating the test set into English and using a monolingual English model (Translate-Test), machine translating the training set into each target language and training the models on their respective languages (Translate-Train), or using English data to fine-tune a multilingual model that is then transferred to the rest of languages (Zero-Shot).", "The dataset creation and transfer procedures described above result in a mixture of original, human translated and machine translated data when dealing with cross-lingual models. In fact, the type of text a system is trained on does not typically match the type of text it is exposed to at test time: Translate-Test systems are trained on original data and evaluated on machine translated test sets, Zero-Shot systems are trained on original data and evaluated on human translated test sets, and Translate-Train systems are trained on machine translated data and evaluated on human translated test sets.", "Despite overlooked to date, we show that such mismatch has a notable impact in the performance of existing cross-lingual models. By using back-translation BIBREF3 to paraphrase each training instance, we obtain another English version of the training set that better resembles the test set, obtaining substantial improvements for the Translate-Test and Zero-Shot approaches in cross-lingual Natural Language Inference (NLI). While improvements brought by machine translation have previously been attributed to data augmentation BIBREF4, we reject this hypothesis and show that the phenomenon is only present in translated test sets, but not in original ones. Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon." ], [ "Current cross-lingual models work by pre-training multilingual representations using some form of language modeling, which are then fine-tuned on the relevant task and transferred to different languages. 
Some authors leverage parallel data to that end BIBREF5, BIBREF6, but training a model akin to BERT BIBREF7 on the combination of monolingual corpora in multiple languages is also effective BIBREF8. Closely related to our work, BIBREF4 showed that replacing segments of the training data with their translation during fine-tuning is helpful. However, they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide." ], [ "Most benchmarks covering a wide set of languages have been created through translation, as is the case of XNLI BIBREF1 for NLI, PAWS-X BIBREF9 for adversarial paraphrase identification, and XQuAD BIBREF2 and MLQA BIBREF10 for Question Answering (QA). A notable exception is TyDi QA BIBREF0, a contemporaneous QA dataset that was separately annotated in 11 languages. Other cross-lingual datasets leverage existing multilingual resources, as is the case of MLDoc BIBREF11 for document classification and Wikiann BIBREF12 for named entity recognition. Concurrently with our work, BIBREF13 combine some of these datasets into a single multilingual benchmark, and evaluate some well-known methods on it." ], [ "Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings." ], [ "Translated texts are known to have unique features like simplification, explicitation, normalization and interference, which are referred to as translationese BIBREF24. This phenomenon has been reported to have a notable impact on machine translation evaluation BIBREF25, BIBREF26. For instance, back-translation brings large BLEU gains for reversed test sets (i.e. when translationese is on the source side and original text is used as reference), but its effect diminishes in the natural direction BIBREF27. While connected, the phenomenon we analyze is different in that it arises from translation inconsistencies due to the lack of context, and affects cross-lingual transfer learning rather than machine translation." ], [ "Our goal is to analyze the effect of both human and machine translation on cross-lingual models. For that purpose, the core idea of our work is to (i) use machine translation to either translate the training set into other languages, or generate English paraphrases of it through back-translation, and (ii) evaluate the resulting systems on original, human translated and machine translated test sets in comparison with systems trained on original data. We next describe the models used in our experiments (§SECREF6), the specific training variants explored (§SECREF8), and the evaluation procedure followed (§SECREF10)." 
], [ "We experiment with two models that are representative of the state-of-the-art in monolingual and cross-lingual pre-training: (i) Roberta BIBREF28, which is an improved version of BERT that uses masked language modeling to pre-train an English Transformer model, and (ii) XLM-R BIBREF8, which is a multilingual extension of the former pre-trained on 100 languages. In both cases, we use the large models released by the authors under the fairseq repository. As discussed next, we explore different variants of the training set to fine-tune each model on different tasks. At test time, we try both machine translating the test set into English (Translate-Test) and, in the case of XLM-R, using the actual test set in the target language (Zero-Shot)." ], [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI). For sentences occurring multiple times in the training set (e.g. premises repeated for multiple hypotheses), we use the exact same translation for all occurrences, as our goal is to understand the inherent effect of translation rather than its potential application as a data augmentation method.", "In order to train the machine translation systems for MT-XX and BT-XX, we use the big Transformer model BIBREF29 with the same settings as BIBREF30 and SentencePiece tokenization BIBREF31 with a joint vocabulary of 32k subwords. For English-Spanish, we train for 10 epochs on all parallel data from WMT 2013 BIBREF32 and ParaCrawl v5.0 BIBREF33. For English-Finnish, we train for 40 epochs on Europarl and Wiki Titles from WMT 2019 BIBREF34, ParaCrawl v5.0, and DGT, EUbookshop and TildeMODEL from OPUS BIBREF35. In both cases, we remove sentences longer than 250 tokens, with a source/target ratio exceeding 1.5, or for which langid.py BIBREF36 predicts a different language, resulting in a final corpus size of 48M and 7M sentence pairs, respectively. We use sampling decoding with a temperature of 0.5 for inference, which produces more diverse translations than beam search BIBREF37 and performed better in our preliminary experiments." ], [ "We use the following tasks for our experiments:" ], [ "Given a premise and a hypothesis, the task is to determine whether there is an entailment, neutral or contradiction relation between them. We fine-tune our models on MultiNLI BIBREF15 for 10 epochs using the same settings as BIBREF28. In most of our experiments, we evaluate on XNLI BIBREF1, which comprises 2490 development and 5010 test instances in 15 languages. These were originally annotated in English, and the resulting premises and hypotheses were independently translated into the rest of the languages by professional translators. For the Translate-Test approach, we use the machine translated versions from the authors. Following BIBREF8, we select the best epoch checkpoint according to the average accuracy in the development set." ], [ "Given a context paragraph and a question, the task is to identify the span answering the question in the context. We fine-tune our models on SQuAD v1.1 BIBREF38 for 2 epochs using the same settings as BIBREF28, and report test results for the last epoch. 
We use two datasets for evaluation: XQuAD BIBREF2, a subset of the SQuAD development set translated into 10 other languages, and MLQA BIBREF10 a dataset consisting of parallel context paragraphs plus the corresponding questions annotated in English and translated into 6 other languages. In both cases, the translation was done by professional translators at the document level (i.e. when translating a question, the text answering it was also shown). For our BT-XX and MT-XX variants, we translate the context paragraph and the questions independently, and map the answer spans using the same procedure as BIBREF39. For the Translate-Test approach, we use the official machine translated versions of MLQA, run inference over them, and map the predicted answer spans back to the target language.", "Both for NLI and QA, we run each system 5 times with different random seeds and report the average results. Space permitting, we also report the standard deviation across the 5 runs." ], [ "We next discuss our main results in the XNLI development set (§SECREF15, §SECREF16), run additional experiments to better understand the behavior of our different variants (§SECREF17, §SECREF22, §SECREF25), and compare our results to previous work in the XNLI test set (§SECREF30)." ], [ "We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another." ], [ "We next analyze the results for the Zero-Shot approach. In this case, inference is done in the test set in each target language which, in the case of XNLI, was human translated from English. As such, different from the Translate-Test approach, neither training on original data (Orig) nor training on machine translated data (BT-XX and MT-XX) makes use of the exact same type of text that the system is exposed to at test time. However, as shown in Table TABREF9, both BT-XX and MT-XX outperform Orig by approximately 2 points, which suggests that our (back-)translated versions of the training set are more similar to the human translated test sets than the original one. This also provides a new perspective on the Translate-Train approach, which was reported to outperform Orig in previous work BIBREF5: while the original motivation was to train the model on the same language that it is tested on, our results show that machine translating the training set is beneficial even when the target language is different." ], [ "So as to understand whether the improvements observed so far are limited to translated test sets or apply more generally, we conduct additional experiments comparing translated test sets to original ones. 
However, to the best of our knowledge, all existing non-English NLI benchmarks were created through translation. For that reason, we build a new test set that mimics XNLI, but is annotated in Spanish rather than English. We first collect the premises from a filtered version of CommonCrawl BIBREF42, taking a subset of 5 websites that represent a diverse set of genres: a newspaper, an economy forum, a celebrity magazine, a literature blog, and a consumer magazine. We then ask native Spanish annotators to generate an entailment, a neutral and a contradiction hypothesis for each premise. We collect a total of 2490 examples using this procedure, which is the same size as the XNLI development set. Finally, we create a human translated and a machine translated English version of the dataset using professional translators from Gengo and our machine translation system described in §SECREF8, respectively. We report results for the best epoch checkpoint on each set.", "As shown in Table TABREF18, both BT-XX and MT-XX clearly outperform Orig in all test sets created through translation, which is consistent with our previous results. In contrast, the best results on the original English set are obtained by Orig, and neither BT-XX nor MT-XX obtain any clear improvement on the one in Spanish either. This confirms that the underlying phenomenon is limited to translated test sets. In addition, it is worth mentioning that the results for the machine translated test set in English are slightly better than those for the human translated one, which suggests that the difficulty of the task does not only depend on the translation quality. Finally, it is also interesting that MT-ES is only marginally better than MT-FI in both Spanish test sets, even if it corresponds to the Translate-Train approach, whereas MT-FI needs to Zero-Shot transfer from Finnish into Spanish. This reinforces the idea that it is training on translated data rather than training on the target language that is key in Translate-Train." ], [ "In order to better understand how systems trained on original and translated data differ, we run additional experiments on the NLI Stress Tests BIBREF19, which were designed to test the robustness of NLI models to specific linguistic phenomena in English. The benchmark consists of a competence test, which evaluates the ability to understand antonymy relation and perform numerical reasoning, a distraction test, which evaluates the robustness to shallow patterns like lexical overlap and the presence of negation words, and a noise test, which evaluates robustness to spelling errors. Just as with previous experiments, we report results for the best epoch checkpoint in each test set.", "As shown in Table TABREF23, Orig outperforms BT-FI and MT-FI on the competence test by a large margin, but the opposite is true on the distraction test. In particular, our results show that BT-FI and MT-FI are less reliant on lexical overlap and the presence of negative words. This feels intuitive, as translating the premise and hypothesis independently—as BT-FI and MT-FI do—is likely to reduce the lexical overlap between them. More generally, the translation process can alter similar superficial patterns in the data, which NLI models are sensitive to (§SECREF2). This would explain why the resulting models have a different behavior on different stress tests." 
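The BT-XX training variants described above amount to a round trip through a pivot language (Spanish or Finnish), with a single fixed translation reused for every occurrence of a repeated sentence. The sketch below illustrates that pipeline under clearly stated assumptions: `to_pivot` and `from_pivot` are placeholder callables standing in for the actual NMT systems (which the paper decodes with sampling at temperature 0.5); they are not a real API.

```python
from functools import lru_cache
from typing import Callable, List, Tuple

def make_backtranslator(to_pivot: Callable[[str], str],
                        from_pivot: Callable[[str], str]) -> Callable[[str], str]:
    """Return a cached EN -> pivot -> EN paraphraser.

    Caching guarantees that a premise repeated across many hypotheses is
    always mapped to the same paraphrase, mirroring the paper's choice of
    reusing one translation per distinct sentence.
    """
    @lru_cache(maxsize=None)
    def backtranslate(sentence: str) -> str:
        return from_pivot(to_pivot(sentence))
    return backtranslate

def backtranslate_nli(examples: List[Tuple[str, str, str]],
                      backtranslate: Callable[[str], str]) -> List[Tuple[str, str, str]]:
    """Paraphrase premise and hypothesis independently; keep the label."""
    return [(backtranslate(prem), backtranslate(hyp), label)
            for prem, hyp, label in examples]

# Placeholder "translators" for demonstration only; a real setup would call
# trained MT models instead of these identity-like functions.
to_es = lambda s: f"<es> {s}"
from_es = lambda s: s[len("<es> "):]
bt_es = make_backtranslator(to_es, from_es)
train_bt_es = backtranslate_nli(
    [("A man is eating.", "Someone is having a meal.", "entailment")], bt_es)
```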
], [ "With the aim to understand the effect of the previous phenomenon in cross-lingual settings, we look at the output class distribution of our different models in the XNLI development set. As shown in Table TABREF28, the predictions of all systems are close to the true class distribution in the case of English. Nevertheless, Orig is strongly biased for the rest of languages, and tends to underpredict entailment and overpredict neutral. This can again be attributed to the fact that the English test set is original, whereas the rest are human translated. In particular, it is well-known that NLI models tend to predict entailment when there is a high lexical overlap between the premise and the hypothesis (§SECREF2). However, the degree of overlap will be smaller in the human translated test sets given that the premise and the hypothesis were translated independently, which explains why entailment is underpredicted. In contrast, BT-FI and MT-FI are exposed to the exact same phenomenon during training, which explains why they are not that heavily affected.", "So as to measure the impact of this phenomenon, we explore a simple approach to correct this bias: having fine-tuned each model, we adjust the bias term added to the logit of each class so the model predictions match the true class distribution for each language. As shown in Table TABREF29, this brings large improvements for Orig, but is less effective for BT-FI and MT-FI. This shows that the performance of Orig was considerably hindered by this bias, which BT-FI and MT-FI effectively mitigate." ], [ "So as to put our results into perspective, we compare our best variant to previous work on the XNLI test set. As shown in Table TABREF31, our method improves the state-of-the-art for both the Translate-Test and the Zero-Shot approaches by 4.3 and 2.8 points, respectively. It also obtains the best overall results published to date, with the additional advantage that the previous state-of-the-art required a machine translation system between English and each of the 14 target languages, whereas our method uses a single machine translation system between English and Finnish (which is not one of the target languages). While the main goal of our work is not to design better cross-lingual models, but to analyze their behavior in connection to translation, this shows that the phenomenon under study is highly relevant, to the extent that it can be exploited to improve the state-of-the-art." ], [ "So as to understand whether our previous findings apply to other tasks besides NLI, we run additional experiments on QA. As shown in Table TABREF32, BT-FI and BT-ES do indeed outperform Orig for the Translate-Test approach on MLQA. The improvement is modest, but very consistent across different languages, models and runs. The results for MT-ES and MT-FI are less conclusive, presumably because mapping the answer spans across languages might introduce some noise. In contrast, we do not observe any clear improvement for the Zero-Shot approach on this dataset. Our XQuAD results in Table TABREF33 are more positive, but still inconclusive.", "These results can partly be explained by the translation procedure used to create the different benchmarks: the premises and hypotheses of XNLI were translated independently, whereas the questions and context paragraphs of XQuAD were translated together. Similarly, MLQA made use of parallel contexts, and translators were shown the sentence containing each answer when translating the corresponding question. 
As a result, one can expect both QA benchmarks to have more consistent translations than XNLI, which would in turn diminish this phenomenon. In contrast, the questions and context paragraphs are independently translated when using machine translation, which explains why BT-ES and BT-FI outperform Orig for the Translate-Test approach. We conclude that the translation artifacts revealed by our analysis are not exclusive to NLI, as they also show up on QA for the Translate-Test approach, but their actual impact can be highly dependent on the translation procedure used and the nature of the task." ], [ "Our analysis prompts to reconsider previous findings in cross-lingual transfer learning as follows:" ], [ "Given the parallel nature of XNLI, accuracy differences across languages are commonly interpreted as the loss of performance when generalizing from English to the rest of languages. However, our work shows that there is another factor that can have a much larger impact: the loss of performance when generalizing from original to translated data. Our results suggest that the real cross-lingual generalization ability of XLM-R is considerably better than what the accuracy numbers in XNLI reflect." ], [ "The original motivation for Translate-Train was to train the model on the same language it is tested on. However, we show that it is training on translated data, rather than training on the target language, that is key for this approach to outperform Zero-Shot as reported by previous authors." ], [ "The method by BIBREF4 combines machine translated premises and hypotheses in different languages (§SECREF2), resulting in an effect similar to BT-XX and MT-XX. As such, we believe that this method should be analyzed from the point of view of dataset artifacts rather than data augmentation, as the authors do. From this perspective, having the premise and the hypotheses in different languages can reduce the superficial patterns between them, which would explain why this approach is better than using examples in a single language." ], [ "The previous best results for Translate-Test on XNLI lagged behind the state-of-the-art by 4.6 points. Our work reduces this gap to only 0.8 points by addressing the underlying translation artifacts. The reason why Translate-Test is more severely affected by this phenomenon is twofold: (i) the effect is doubled by first using human translation to create the test set and then machine translation to translate it back to English, and (ii) Translate-Train was inadvertently mitigating this issue (see above), but equivalent techniques were never applied to Translate-Test." ], [ "The evaluation issues raised by our analysis do not have a simple solution. In fact, while we use the term translation artifacts to highlight that they are an unintended effect of translation that impacts final evaluation, one could also argue that it is the original datasets that contain the artifacts, which translation simply alters or even mitigates. In any case, this is a more general issue that falls beyond the scope of cross-lingual transfer learning, so we argue that it should be carefully controlled when evaluating cross-lingual models. In the absence of more robust datasets, we recommend that future multilingual benchmarks should at least provide consistent test sets for English and the rest of languages. 
This can be achieved by (i) using original annotations in all languages, (ii) using original annotations in a non-English language and translating them into English and other languages, or (iii) if translating from English, doing so at the document level to minimize translation inconsistencies." ], [ "In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English." ], [ "We thank Nora Aranberri and Uxoa Iñurrieta for helpful discussion during the development of this work, as well as the rest of our colleagues from the IXA group that worked as annotators for our NLI dataset.", "This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017‐91692‐EXP MCIU/AEI/FEDER, UE), Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018), and the NVIDIA GPU grant program." ] ], "section_name": [ "Introduction", "Related work ::: Cross-lingual transfer learning.", "Related work ::: Multilingual benchmarks.", "Related work ::: Annotation artifacts.", "Related work ::: Translationese.", "Experimental design", "Experimental design ::: Models and transfer methods", "Experimental design ::: Training variants", "Experimental design ::: Tasks and evaluation procedure", "Experimental design ::: Tasks and evaluation procedure ::: Natural Language Inference (NLI).", "Experimental design ::: Tasks and evaluation procedure ::: Question Answering (QA).", "NLI experiments", "NLI experiments ::: Translate-Test results", "NLI experiments ::: Zero-Shot results", "NLI experiments ::: Original vs. translated test sets", "NLI experiments ::: Stress tests", "NLI experiments ::: Output class distribution", "NLI experiments ::: Comparison with the state-of-the-art", "QA experiments", "Discussion", "Discussion ::: The cross-lingual transfer gap on XNLI was overestimated.", "Discussion ::: Overcoming the cross-lingual gap is not what makes Translate-Train work.", "Discussion ::: Improvements previously attributed to data augmentation should be reconsidered.", "Discussion ::: The potential of Translate-Test was underestimated.", "Discussion ::: Future evaluation should better account for translation artifacts.", "Conclusions", "Acknowledgments" ] }
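The class-distribution correction used in the output class distribution analysis above (adjusting the bias added to each class logit so that predictions match the true label distribution) can be sketched as a small post-hoc search. The code below is a hedged illustration: the iterative heuristic, the learning rate, and the toy data are ours, and they represent one straightforward reading of the procedure rather than the authors' exact implementation.

```python
import numpy as np

def adjust_class_bias(logits, target_dist, steps=200, lr=0.5):
    """Search per-class logit offsets so argmax predictions match target_dist.

    logits: (n_examples, n_classes) array of classifier scores.
    target_dist: desired proportion of predictions per class (sums to 1).
    Heuristic: raise the bias of under-predicted classes and lower that of
    over-predicted ones until the gap is small or the step budget runs out.
    """
    n_classes = logits.shape[1]
    bias = np.zeros(n_classes)
    target = np.asarray(target_dist, dtype=float)
    for _ in range(steps):
        preds = np.argmax(logits + bias, axis=1)
        pred_dist = np.bincount(preds, minlength=n_classes) / len(preds)
        gap = target - pred_dist
        if np.abs(gap).max() < 1e-3:
            break
        bias += lr * gap
    return bias

# Toy usage: a model biased towards the third class (e.g. over-predicting
# "neutral") is rebalanced towards a uniform three-class distribution.
rng = np.random.default_rng(0)
toy_logits = rng.standard_normal((5010, 3)) + np.array([0.0, 0.0, 0.8])
bias = adjust_class_bias(toy_logits, [1 / 3, 1 / 3, 1 / 3])
balanced = np.argmax(toy_logits + bias, axis=1)
print(np.bincount(balanced, minlength=3) / len(balanced))
```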
{ "answers": [ { "annotation_id": [ "614ee0ce06fe50e9cb88236d8cea36cb17949e01", "7588d968ed7f7ae07154c931e718d90fcd4b7548" ], "answer": [ { "evidence": [ "Despite overlooked to date, we show that such mismatch has a notable impact in the performance of existing cross-lingual models. By using back-translation BIBREF3 to paraphrase each training instance, we obtain another English version of the training set that better resembles the test set, obtaining substantial improvements for the Translate-Test and Zero-Shot approaches in cross-lingual Natural Language Inference (NLI). While improvements brought by machine translation have previously been attributed to data augmentation BIBREF4, we reject this hypothesis and show that the phenomenon is only present in translated test sets, but not in original ones. Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon.", "In order to better understand how systems trained on original and translated data differ, we run additional experiments on the NLI Stress Tests BIBREF19, which were designed to test the robustness of NLI models to specific linguistic phenomena in English. The benchmark consists of a competence test, which evaluates the ability to understand antonymy relation and perform numerical reasoning, a distraction test, which evaluates the robustness to shallow patterns like lexical overlap and the presence of negation words, and a noise test, which evaluates robustness to spelling errors. Just as with previous experiments, we report results for the best epoch checkpoint in each test set." ], "extractive_spans": [ "the degree of lexical overlap between them", "presence of negation words" ], "free_form_answer": "", "highlighted_evidence": [ "Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon.", "The benchmark consists of a competence test, which evaluates the ability to understand antonymy relation and perform numerical reasoning, a distraction test, which evaluates the robustness to shallow patterns like lexical overlap and the presence of negation words, and a noise test, which evaluates robustness to spelling errors. Just as with previous experiments, we report results for the best epoch checkpoint in each test set." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. 
For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings." ], "extractive_spans": [ "hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length", "NLI models tend to predict entailment for sentence pairs with a high lexical overlap" ], "free_form_answer": "", "highlighted_evidence": [ "For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "34aed0ccb31ddac045030332d77df359aa499f56", "821a687d872ac34d23ef186e1d82a956ee4c5375" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases.", "We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another.", "FLOAT SELECTED: Table 5: XNLI dev results with class distribution unbiasing (average acc across all languages). Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI." ], "extractive_spans": [], "free_form_answer": "English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases.", "We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. 
Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. ", "FLOAT SELECTED: Table 5: XNLI dev results with class distribution unbiasing (average acc across all languages). Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI.", "As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI). For sentences occurring multiple times in the training set (e.g. premises repeated for multiple hypotheses), we use the exact same translation for all occurrences, as our goal is to understand the inherent effect of translation rather than its potential application as a data augmentation method." ], "extractive_spans": [ "English", "Spanish", "Finnish" ], "free_form_answer": "", "highlighted_evidence": [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "ab09687569601d6df67231c03e994048ed73ad36" ], "answer": [ { "evidence": [ "In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning." 
], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "1844a61d51d6b6f8c586e8c0abfa6c040cf36335", "e99b4356aba51f5b6dc1d500eeef7a3feb408e59" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "As shown in Table TABREF23, Orig outperforms BT-FI and MT-FI on the competence test by a large margin, but the opposite is true on the distraction test. In particular, our results show that BT-FI and MT-FI are less reliant on lexical overlap and the presence of negative words. This feels intuitive, as translating the premise and hypothesis independently—as BT-FI and MT-FI do—is likely to reduce the lexical overlap between them. More generally, the translation process can alter similar superficial patterns in the data, which NLI models are sensitive to (§SECREF2). This would explain why the resulting models have a different behavior on different stress tests." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "This feels intuitive, as translating the premise and hypothesis independently—as BT-FI and MT-FI do—is likely to reduce the lexical overlap between them. More generally, the translation process can alter similar superficial patterns in the data, which NLI models are sensitive to (§SECREF2). This would explain why the resulting models have a different behavior on different stress tests." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "104b1b58cc612bd32bb3bd8cd4417d2b84e43a87" ], "answer": [ { "evidence": [ "In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "8714926834a33556ea0ef855965d2afdd89b26e9" ], "answer": [ { "evidence": [ "Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. 
Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings." ], "extractive_spans": [ "hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length", "NLI models tend to predict entailment for sentence pairs with a high lexical overlap" ], "free_form_answer": "", "highlighted_evidence": [ "For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "954bf962319c2b46d8aa5acc1f93daa52e590d8d" ], "answer": [ { "evidence": [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI). For sentences occurring multiple times in the training set (e.g. premises repeated for multiple hypotheses), we use the exact same translation for all occurrences, as our goal is to understand the inherent effect of translation rather than its potential application as a data augmentation method." ], "extractive_spans": [ "English", "Spanish", "Finnish" ], "free_form_answer": "", "highlighted_evidence": [ "We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five", "five", "five", "five" ], "paper_read": [ "", "", "no", "no", "no", "no", "no" ], "question": [ "What are examples of these artificats?", "What are the languages they use in their experiment?", "Does the professional translation or the machine translation introduce the artifacts?", "Do they recommend translating the premise and hypothesis together?", "Is the improvement over state-of-the-art statistically significant?", "What are examples of these artifacts?", "What languages do they use in their experiments?" 
], "question_id": [ "73906462bd3415f23d6378590a5ba28709b17605", "5bc1dc6ebcb88fd0310b21d2a74939e35a4c1a11", "88bf368491f9613767f696f84b4bb1f5a7d7cb48", "0737954caf66f2b4c898b356d2a3c43748b9706b", "664b3eadc12c8dde309e8bbd59e9af961a433cde", "b3307d5b68c57a074c483636affee41054be06d1", "bfc1de5fa4da2f0e301fd22aea19cf01e2bb5b31" ], "question_writer": [ "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255" ], "search_query": [ "professional machine translation artifact", "professional machine translation artifact", "professional machine translation artifact", "professional machine translation artifact", "professional machine translation artifact", "professional machine translation artifact", "professional machine translation" ], "topic_background": [ "", "", "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: XNLI dev results (acc). BT-XX and MT-XX consistently outperform ORIG in all cases.", "Table 2: NLI results on original (OR), human translated (HT) and machine translated (MT) sets (acc). BT-XX and MT-XX outperform ORIG in translated sets, but do not get any clear improvement in original ones.", "Table 3: NLI Stress Test results (combined matched & mismatched acc). AT = antonymy, NR = numerical reasoning, WO = word overlap, NG = negation, LN = length mismatch, SE = spelling error. BT-FI and MT-FI are considerably weaker than ORIG in the competence test, but substantially stronger in the distraction test.", "Table 4: Output class distribution on XNLI dev. All systems are close to the true distribution in English, but ORIG is biased toward neu and con in the transfer languages. BT-FI and MT-FI alleviate this issue.", "Table 5: XNLI dev results with class distribution unbiasing (average acc across all languages). Adjusting the bias term of the classifier to match the true class distribution brings large improvements for ORIG, but is less effective for BT-FI and MT-FI.", "Table 6: XNLI test results (acc). Results for other methods are taken from their respective papers or, if not provided, from Conneau et al. (2019). For those with multiple variants, we select the one with the best results.", "Table 7: MLQA test results (F1 / exact match).", "Table 8: XQuAD results (F1). Results for the exact match metric are similar." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "8-Table7-1.png", "8-Table8-1.png" ] }
[ "What are the languages they use in their experiment?" ]
[ [ "2004.04721-6-Table5-1.png", "2004.04721-NLI experiments ::: Translate-Test results-0", "2004.04721-4-Table1-1.png", "2004.04721-Experimental design ::: Training variants-0" ] ]
[ "English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish" ]
272
1905.07791
Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction
Modern NLP systems require high-quality annotated data. In specialized domains, expert annotations may be prohibitively expensive. An alternative is to rely on crowdsourcing to reduce costs at the risk of introducing noise. In this paper we demonstrate that directly modeling instance difficulty can be used to improve model performance, and to route instances to appropriate annotators. Our difficulty prediction model combines two learned representations: a `universal' encoder trained on out-of-domain data, and a task-specific encoder. Experiments on a complex biomedical information extraction task using expert and lay annotators show that: (i) simply excluding from the training data instances predicted to be difficult yields a small boost in performance; (ii) using difficulty scores to weight instances during training provides further, consistent gains; (iii) assigning instances predicted to be difficult to domain experts is an effective strategy for task routing. Our experiments confirm the expectation that for specialized tasks expert annotations are higher quality than crowd labels, and hence preferable to obtain if practical. Moreover, augmenting small amounts of expert data with a larger set of lay annotations leads to further improvements in model performance.
{ "paragraphs": [ [ "Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-quality annotations. A practical approach for collecting a sufficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk). However, crowd workers in general are likely to provide noisy annotations BIBREF0 , BIBREF1 , BIBREF2 , an issue exacerbated by the technical nature of specialized content. Some of this noise may reflect worker quality and can be modeled BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 , but for some instances lay people may simply lack the domain knowledge to provide useful annotation.", "In this paper we report experiments on the EBM-NLP corpus comprising crowdsourced annotations of medical literature BIBREF5 . We operationalize the concept of annotation difficulty and show how it can be exploited during training to improve information extraction models. We then obtain expert annotations for the abstracts predicted to be most difficult, as well as for a similar number of randomly selected abstracts. The annotation of highly specialized data and the use of lay and expert annotators allow us to examine the following key questions related to lay and expert annotations in specialized domains:", "Can we predict item difficulty? We define a training instance as difficult if a lay annotator or an automated model disagree on its labeling. We show that difficulty can be predicted, and that it is distinct from inter-annotator agreement. Further, such predictions can be used during training to improve information extraction models.", "Are there systematic differences between expert and lay annotations? We observe decidedly lower agreement between lay workers as compared to domain experts. Lay annotations have high precision but low recall with respect to expert annotations in the new data that we collected. More generally, we expect lay annotations to be lower quality, which may translate to lower precision, recall, or both, compared to expert annotations. Can one rely solely on lay annotations? Reasonable models can be trained using lay annotations alone, but similar performance can be achieved using markedly less expert data. This suggests that the optimal ratio of expert to crowd annotations for specialized tasks will depend on the cost and availability of domain experts. Expert annotations are preferable whenever its collection is practical. But in real-world settings, a combination of expert and lay annotations is better than using lay data alone.", "Does it matter what data is annotated by experts? We demonstrate that a system trained on combined data achieves better predictive performance when experts annotate difficult examples rather than instances selected at i.i.d. random.", "Our contributions in this work are summarized as follows. We define a task difficulty prediction task and show how this is related to, but distinct from, inter-worker agreement. We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. We show that predicting annotation difficulty can be used to improve the task routing and model performance for a biomedical information extraction task. 
Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here." ], [ "Crowdsourcing annotation is now a well-studied problem BIBREF7 , BIBREF0 , BIBREF1 , BIBREF2 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 .", "There are also several surveys of crowdsourcing in biomedicine specifically BIBREF8 , BIBREF9 , BIBREF10 . Some work in this space has contrasted model performance achieved using expert vs. crowd annotated training data BIBREF11 , BIBREF12 , BIBREF13 . Dumitrache et al. Dumitrache:2018:CGT:3232718.3152889 concluded that performance is similar under these supervision types, finding no clear advantage from using expert annotators. This differs from our findings, perhaps owing to differences in design. The experts we used already hold advanced medical degrees, for instance, while those in prior work were medical students. Furthermore, the task considered here would appear to be of greater difficulty: even a system trained on $\\sim $ 5k instances performs reasonably, but far from perfect. By contrast, in some of the prior work where experts and crowd annotations were deemed equivalent, a classifier trained on 300 examples can achieve very high accuracy BIBREF12 .", "More relevant to this paper, prior work has investigated methods for `task routing' in active learning scenarios in which supervision is provided by heterogeneous labelers with varying levels of expertise BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 . The related question of whether effort is better spent collecting additional annotations for already labeled (but potentially noisily so) examples or novel instances has also been addressed BIBREF18 . What distinguishes the work here is our focus on providing an operational definition of instance difficulty, showing that this can be predicted, and then using this to inform task routing." ], [ "Our specific application concerns annotating abstracts of articles that describe the conduct and results of randomized controlled trials (RCTs). Experimentation in this domain has become easy with the recent release of the EBM-NLP BIBREF5 corpus, which includes a reasonably large training dataset annotated via crowdsourcing, and a modest test set labeled by individuals with advanced medical training. More specifically, the training set comprises 4,741 medical article abstracts with crowdsourced annotations indicating snippets (sequences) that describe the Participants (p), Interventions (i), and Outcome (o) elements of the respective RCT, and the test set is composed of 191 abstracts with p, i, o sequence annotations from three medical experts.", "Table 1 shows an example of difficult and easy examples according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. In the difficult examples, crowd workers marked text distinct from these reference annotations; whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon.", "An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. 
We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively." ], [ "The test set includes annotations from both crowd workers and domain experts. We treat the latter as ground truth and then define the difficulty of sentences in terms of the observed agreement between expert and lay annotators. Formally, for annotation task $t$ and instance $i$ : ", "$$\\text{Difficulty}_{ti} = \\frac{\\sum _{j=1}^n{f(\\text{label}_{ij}, y_i})}{n}$$ (Eq. 3) ", "where $f$ is a scoring function that measures the quality of the label from worker $j$ for sentence $i$ , as compared to a ground truth annotation, $y_i$ . The difficulty score of sentence $i$ is taken as an average over the scores for all $n$ layworkers. We use Spearmans' correlation coefficient as a scoring function. Specifically, for each sentence we create two vectors comprising counts of how many times each token was annotated by crowd and expert workers, respectively, and calculate the correlation between these. Sentences with no labels are treated as maximally easy; those with only either crowd worker or expert label(s) are assumed maximally difficult.", "The training set contains only crowdsourced annotations. To label the training data, we use a 10-fold validation like setting. We iteratively retrain the LSTM-CRF-Pattern sequence tagger of Patel et al. patel2018syntactic on 9 folds of the training data and use that trained model to predict labels for the 10th. In this way we obtain predictions on the full training set. We then use predicted spans as proxy `ground truth' annotations to calculate the difficulty score of sentences as described above; we normalize these to the [ $0, 1$ ] interval. We validate this approximation by comparing the proxy scores against reference scores over the test set, the Pearson's correlation coefficients are 0.57 for Population, 0.71 for Intervention and 0.68 for Outcome.", "There exist many sentences that contain neither manual nor predicted annotations. We treat these as maximally easy sentences (with difficulty scores of 0). Such sentences comprise 51%, 42% and 36% for Population, Interventions and Outcomes data respectively, indicating that it is easier to identify sentences that have no Population spans, but harder to identify sentences that have no Interventions or Outcomes spans. This is intuitive as descriptions of the latter two tend to be more technical and dense with medical jargon.", "We show the distribution of the automatically labeled scores for sentences that do contain spans in Figure 1 . The mean of the Population (p) sentence scores is significantly lower than that for other types of sentences (i and o), again indicating that they are easier on average to annotate. This aligns with a previous finding that annotating Interventions and Outcomes is more difficult than annotating Participants BIBREF5 .", "Many sentences contain spans tagged by the LSTM-CRF-Pattern model, but missed by all crowd workers, resulting in a maximally difficult score (1). Inspection of such sentences revealed that some are truly difficult examples, but others are tagging model errors. In either case, such sentences have confused workers and/or the model, and so we retain them all as `difficult' sentences.", "Content describing the p, i and o, respectively, is quite different. 
As such, one sentence usually contains (at most) only one of these three content types. We thus treat difficulty prediction for the respective label types as separate tasks." ], [ "Our definition of difficulty is derived from agreement between expert and crowd annotations for the test data, and agreement between a predictive model and crowd annotations in the training data. It is reasonable to ask if these measures are related to inter-annotator agreement, a metric often used in language technology research to identify ambiguous or difficult items. Here we explicitly verify that our definition of difficulty only weakly correlates with inter-annotator agreement.", "We calculate inter-worker agreement between crowd and expert annotators using Spearman's correlation coefficient. As shown in Table 2 , average agreement between domain experts are considerably higher than agreements between crowd workers for all three label types. This is a clear indication that the crowd annotations are noisier.", "Furthermore, we compare the correlation between inter-annotator agreement and difficulty scores in the training data. Given that the majority of sentences do not contain a PICO span, we only include in these calculations those that contain a reference label. Pearson's r are 0.34, 0.30 and 0.31 for p, i and o, respectively, confirming that inter-worker agreement and our proposed difficulty score are quite distinct." ], [ "We treat difficulty prediction as a regression problem, and propose and evaluate neural model variants for the task. We first train RNN BIBREF19 and CNN BIBREF20 models.", "We also use the universal sentence encoder (USE) BIBREF6 to induce sentence representations, and train a model using these as features. Following BIBREF6 , we then experiment with an ensemble model that combines the `universal' and task-specific representations to predict annotation difficulty. We expect these universal embeddings to capture general, high-level semantics, and the task specific representations to capture more granular information. Figure 2 depicts the model architecture. Sentences are fed into both the universal sentence encoder and, separately, a task specific neural encoder, yielding two representations. We concatenate these and pass the combined vector to the regression layer." ], [ "We trained models for each label type separately. Word embeddings were initialized to 300d GloVe vectors BIBREF21 trained on common crawl data; these are fine-tuned during training. We used the Adam optimizer BIBREF22 with learning rate and decay set to 0.001 and 0.99, respectively. We used batch sizes of 16.", "We used the large version of the universal sentence encoder with a transformer BIBREF23 . We did not update the pretrained sentence encoder parameters during training. All hyperparamaters for all models (including hidden layers, hidden sizes, and dropout) were tuned using Vizier BIBREF24 via 10-fold cross validation on the training set maximizing for F1.", "As a baseline, we also trained a linear Support-Vector Regression BIBREF25 model on $n$ -gram features ( $n$ ranges from 1 to 3).", "Table 3 reports Pearson correlation coefficients between the predictions with each of the neural models and the ground truth difficulty scores. Rows 1-4 correspond to individual models, and row 5 reports the ensemble performance. Columns correspond to label type. Results from all models outperform the baseline SVR model: Pearson's correlation coefficients range from 0.550 to 0.622. 
The SVR regression baseline yields the lowest correlations.", "The RNN model realizes the strongest performance among the stand-alone (non-ensemble) models, outperforming variants that exploit CNN and USE representations. Combining the RNN and USE further improves results. We hypothesize that this is due to complementary sentence information encoded in universal representations.", "For all models, correlations for Intervention and Outcomes are higher than for Population, which is expected given the difficulty distributions in Figure 1 . In these, the sentences are more uniformly distributed, with a fair number of difficult and easier sentences. By contrast, in Population there are a greater number of easy sentences and considerably fewer difficult sentences, which makes the difficulty ranking task particularly challenging." ], [ "We next present experiments in which we attempt to use the predicted difficulty during training to improve models for information extraction of descriptions of Population, Interventions and Outcomes from medical article abstracts. We investigate two uses: (1) simply removing the most difficult sentences from the training set, and, (2) re-weighting the most difficult sentences.", "We again use LSTM-CRF-Pattern as the base model and experiment on the EBM-NLP corpus BIBREF5 . This is trained on either (1) the training set with difficult sentences removed, or (2) the full training set but with instances re-weighted in proportion to their predicted difficulty score. Following BIBREF5 , we use the Adam optimizer with learning rate of 0.001, decay 0.9, batch size 20 and dropout 0.5. We use pretrained 200d GloVe vectors BIBREF21 to initialize word embeddings, and use 100d hidden char representations. Each word is thus represented with 300 dimensions in total. The hidden size is 100 for the LSTM in the character representation component, and 200 for the LSTM in the information extraction component. We train for 15 epochs, saving parameters that achieve the best F1 score on a nested development set." ], [ "We first evaluate changes in performance induced by training the sequence labeling model using less data by removing difficult sentences prior to training. The hypothesis here is that these difficult instances are likely to introduce more noise than signal. We used a cross-fold approach to predict sentence difficulties, training on 9/10ths of the data and scoring the remaining 1/10th at a time. We then sorted sentences by predicted difficulty scores, and experimented with removing increasing numbers of these (in order of difficulty) prior to training the LSTM-CRF-Pattern model.", "Figure 3 shows the results achieved by the LSTM-CRF-Pattern model after discarding increasing amounts of the training data: the $x$ and $y$ axes correspond to the percentage of data removed and F1 scores, respectively. We contrast removing sentences predicted to be difficult with removing them (a) randomly (i.i.d.), and, (b) in inverse order of predicted inter-annotator agreement. The agreement prediction model is trained in exactly the same way as the difficulty prediction model, simply replacing the difficulty score with annotation agreement. F1 scores actually improve (marginally) when we remove the most difficult sentences, up until we drop 4% of the data for Population and Interventions, and 6% for Outcomes. Removing training points at i.i.d. random degrades performance, as expected. 
Removing sentences in order of disagreement seems to have similar effect as removing them by difficulty score when removing small amount of the data, but the F1 scores drop much faster when removing more data. These findings indicate that sentences predicted to be difficult are indeed noisy, to the extent that they do not seem to provide the model useful signal." ], [ "We showed above that removing a small number of the most difficult sentences does not harm, and in fact modestly improves, medical IE model performance. However, using the available data we are unable to test if this will be useful in practice, as we would need additional data to determine how many difficult sentences should be dropped.", "We instead explore an alternative, practical means of exploiting difficulty predictions: we re-weight sentences during training inversely to their predicted difficulty. Formally, we weight sentence $i$ with difficulty scores above $\\tau $ according to: $1-a\\cdot (d_i-\\tau )/(1-\\tau )$ , where $d_i$ is the difficulty score for sentence $i$ , and $a$ is a parameter codifying the minimum weight value. We set $\\tau $ to 0.8 so as to only re-weight sentences with difficulty in the top 20th percentile, and we set $a$ to 0.5. The re-weighting is equivalent to down-sampling the difficult sentences. LSTM-CRF-Pattern is our base model.", "Table 4 reports the precision, recall and F1 achieved both with and without sentence re-weighting. Re-weighting improves all metrics modestly but consistently. All F1 differences are statistically significant under a sign test ( $p<0.01$ ). The model with best precision is different for Patient, Intervention and Outcome labels. However re-weighting by difficulty does consistently yield the best recall for all three extraction types, with the most notable improvement for i and o, where recall improved by 10 percentage points. This performance increase translated to improvements in F1 across all types, as compared to the base model and to re-weighting by agreement." ], [ "The preceding experiments demonstrate that re-weighting difficult sentences annotated by the crowd generally improves the extraction models. Presumably the performance is influenced by the annotation quality.", "We now examine the possibility that the higher quality and more consistent annotations of domain experts on the difficult instances will benefit the extraction model. This simulates an annotation strategy in which we route difficult instances to domain experts and easier ones to crowd annotators. We also contrast the value of difficult data to that of an i.i.d. random sample of the same size, both annotated by experts." ], [ "We re-annotate by experts a subset of most difficult instances and the same number of random instances. As collecting annotations from experts is slow and expensive, we only re-annotate the difficult instances for the interventions extraction task. We re-annotate the abstracts which cover the sentences with predicted difficulty scores in the top 5 percentile. We rank the abstracts from the training set by the count of difficult sentences, and re-annotate the abstracts that contain the most difficult sentences. Constrained by time and budget, we select only 2000 abstracts for re-annotation; 1000 of these are top-ranked, and 1000 are randomly sampled. This re-annotation cost $3,000. 
We have released the new annotation data at: https://github.com/bepnye/EBM-NLP.", "Following BIBREF5 , we recruited five medical experts via Up-work with advanced medical training and strong technical reading/writing skills. The expert annotator were asked to read the entire abstract and highlight, using the BRAT toolkit BIBREF26 , all spans describing medical Interventions. Each abstract is only annotated by one expert. We examined 30 re-annotated abstracts to ensure the annotation quality before hiring the annotator.", "Table 5 presents the results of LSTM-CRF-Pattern model trained on the reannotated difficult subset and the random subset. The first two rows show the results for models trained with expert annotations. The model trained on random data has a slightly better F1 than that trained on the same amount of difficult data. The model trained on random data has higher precision but lower recall.", "Rows 3 and 4 list the results for models trained on the same data but with crowd annotation. Models trained with expert-annotated data are clearly superior to those trained with crowd labels with respect to F1, indicating that the experts produced higher quality annotations. For crowdsourced annotations, training the model with data sampled at i.i.d. random achieves 2% higher F1 than when difficult instances are used. When expert annotations are used, this difference is less than 1%. This trend in performance may be explained by differences in annotation quality: the randomly sampled set was more consistently annotated by both experts and crowd because the difficult set is harder. However, in both cases expert annotations are better, with a bigger difference between the expert and crowd models on the difficult set.", "The last row is the model trained on all 5k abstracts with crowd annotations. Its F1 score is lower than either expert model trained on only 20% of data, suggesting that expert annotations should be collected whenever possible. Again the crowd model on complete data has higher precision than expert models but its recall is much lower." ], [ "So far a system was trained on one type of data, either labeled by crowd or experts. We now examine the performance of a system trained on data that was routed to either experts or crowd annotators depending on their predicted difficult. Given the results presented so far mixing annotators may be beneficial given their respective trade-offs of precision and recall. We use the annotations from experts for an abstract if it exists otherwise use crowd annotations. The results are presented in Table 6 .", "Rows 1 and 2 repeat the performance of the models trained on difficult subset and random subset with expert annotations only respectively. The third row is the model trained by combining difficult and random subsets with expert annotations. There are around 250 abstracts in the overlap of these two sets, so there are total 1.75k abstracts used for training the D+R model. Rows 4 to 6 are the models trained on all 5k abstracts with mixed annotations, where Other means the rest of the abstracts with crowd annotation only.", "The results show adding more training data with crowd annotation still improves at least 1 point F1 score in all three extraction tasks. The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 score, much larger than when a random set of expert annotations are added. 
The model trained with re-annotating the difficult subset (D+Other) also outperforms the model with re-annotating the random subset (R+Other) by 2 points in F1. The model trained with re-annotating both of difficult and random subsets (D+R+Other), however, achieves only marginally higher F1 than the model trained with the re-annotated difficult subset (D+Other). In sum, the results clearly indicate that mixing expert and crowd annotations leads to better models than using solely crowd data, and better than using expert data alone. More importantly, there is greater gain in performance when instances are routed according to difficulty, as compared to randomly selecting the data for expert annotators. These findings align with our motivating hypothesis that annotation quality for difficult instances is important for final model performance. They also indicate that mixing annotations from expert and crowd could be an effective way to achieve acceptable model performance given a limited budget." ], [ "We established that crowd annotation are still useful in supplementing expert annotations for medical IE. Obtaining expert annotations for the one thousand most difficult instances greatly improved the model performance. However the choice of how many difficult instances to annotate was an uninformed choice. Here we check if less expert data would have yielded similar gains. Future work will need to address how best to choose this parameter for a routing system.", "We simulate a routing scenario in which we send consecutive batches of the most difficult examples to the experts for annotation. We track changes in performance as we increase the number of most-difficult-articles sent to domain experts. As shown in Figure 4 , adding expert annotations for difficult articles consistently increases F1 scores. The performance gain is mostly from increased recall; the precision changes only a bit with higher quality annotation. This observation implies that crowd workers often fail to mark target tokens, but do not tend to produce large numbers of false positives. We suspect such failures to identify relevant spans/tokens are due to insufficient domain knowledge possessed by crowd workers.", "The F1 score achieved after re-annotating the 600 most-difficult articles reaches 68.1%, which is close to the performance when re-annotating 1000 random articles. This demonstrates the effectiveness of recognizing difficult instances. The trend when we use up all expert data is still upward, so adding even more expert data is likely to further improve performance. Unfortunately we exhausted our budget and were not able to obtain additional expert annotations. It is likely that as the size of the expert annotations increases, the value of crowd annotations will diminish. This investigation is left for future work." ], [ "We have introduced the task of predicting annotation difficulty for biomedical information extraction (IE). We trained neural models using different learned representations to score texts in terms of their difficulty. Results from all models were strong with Pearson’s correlation coefficients higher than 0.45 in almost all evaluations, indicating the feasibility of this task. 
An ensemble model combining universal and task specific feature sentence vectors yielded the best results.", "Experiments on biomedical IE tasks show that removing up to $\\sim $ 10% of the sentences predicted to be most difficult did not decrease model performance, and that re-weighting sentences inversely to their difficulty score during training improves predictive performance. Simulations in which difficult examples are routed to experts and other instances to crowd annotators yields the best results, outperforming the strategy of randomly selecting data for expert annotation, and substantially improving upon the approach of relying exclusively on crowd annotations. In future work, routing strategies based on instance difficulty could be further investigated for budget-quality trade-off." ], [ "This work has been partially supported by NSF1748771 grant. Wallace was support in part by NIH/NLM R01LM012086." ] ], "section_name": [ "Introduction", "Related Work", "Application Domain", "Quantifying Task Difficulty", "Difficulty is not Worker Agreement", "Predicting Annotation Difficulty", "Experimental Setup and Results", "Better IE with Difficulty Prediction", "Removing Difficult Examples", "Re-weighting by Difficulty", "Involving Expert Annotators", "Expert annotations of Random and Difficult Instances", "Routing To Experts or Crowd", "How Many Expert Annotations?", "Conclusions", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "68067e71ea21eb923edb91f128ad0aa6dd656eb4" ], "answer": [ { "evidence": [ "The results show adding more training data with crowd annotation still improves at least 1 point F1 score in all three extraction tasks. The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 score, much larger than when a random set of expert annotations are added. The model trained with re-annotating the difficult subset (D+Other) also outperforms the model with re-annotating the random subset (R+Other) by 2 points in F1. The model trained with re-annotating both of difficult and random subsets (D+R+Other), however, achieves only marginally higher F1 than the model trained with the re-annotated difficult subset (D+Other). In sum, the results clearly indicate that mixing expert and crowd annotations leads to better models than using solely crowd data, and better than using expert data alone. More importantly, there is greater gain in performance when instances are routed according to difficulty, as compared to randomly selecting the data for expert annotators. These findings align with our motivating hypothesis that annotation quality for difficult instances is important for final model performance. They also indicate that mixing annotations from expert and crowd could be an effective way to achieve acceptable model performance given a limited budget." ], "extractive_spans": [ "improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 score, much larger than when a random set of expert annotations are added" ], "free_form_answer": "", "highlighted_evidence": [ "The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 score, much larger than when a random set of expert annotations are added." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "2d5eb3b62fdd55fc54d7803863a493e795720f0d" ] }, { "annotation_id": [ "e2310072a53cefe9d2cecc45549a77c226e44005" ], "answer": [ { "evidence": [ "So far a system was trained on one type of data, either labeled by crowd or experts. We now examine the performance of a system trained on data that was routed to either experts or crowd annotators depending on their predicted difficult. Given the results presented so far mixing annotators may be beneficial given their respective trade-offs of precision and recall. We use the annotations from experts for an abstract if it exists otherwise use crowd annotations. The results are presented in Table 6 ." ], "extractive_spans": [], "free_form_answer": "Annotations from experts are used if they have already been collected.", "highlighted_evidence": [ "We use the annotations from experts for an abstract if it exists otherwise use crowd annotations. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fe09bc2ef2737a3258f978e26226dcbac1b3f948" ] }, { "annotation_id": [ "79bee97756195400dd40f12b417ff9be403f24ec", "8ab549d6dfd42de9b6f5e271827336c0e5fe740d" ], "answer": [ { "evidence": [ "Our contributions in this work are summarized as follows. We define a task difficulty prediction task and show how this is related to, but distinct from, inter-worker agreement. We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. 
We show that predicting annotation difficulty can be used to improve the task routing and model performance for a biomedical information extraction task. Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here.", "An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively." ], "extractive_spans": [], "free_form_answer": "57,505 sentences", "highlighted_evidence": [ "We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task.", "In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively." ], "extractive_spans": [], "free_form_answer": "57,505 sentences", "highlighted_evidence": [ "In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively.", "57,505" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "4857c606a55a83454e8d81ffe17e05cf8bc4b75f", "fe09bc2ef2737a3258f978e26226dcbac1b3f948" ] }, { "annotation_id": [ "04a224bd787db5dbebb41764f9aa37e55b85c41a", "c1ac982e02f0fdeff02a0bd6ac2a7ec52262d837" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "fe09bc2ef2737a3258f978e26226dcbac1b3f948" ] }, { "annotation_id": [ "5e21bb14b1290612bae6a44edd8447181c3dbfb0" ], "answer": [ { "evidence": [ "Table 1 shows an example of difficult and easy examples according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. In the difficult examples, crowd workers marked text distinct from these reference annotations; whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon." ], "extractive_spans": [ "sentence" ], "free_form_answer": "", "highlighted_evidence": [ "Table 1 shows an example of difficult and easy examples according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. 
In the difficult examples, crowd workers marked text distinct from these reference annotations; whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "4857c606a55a83454e8d81ffe17e05cf8bc4b75f" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "How much higher quality is the resulting annotated data?", "How do they match annotators to instances?", "How much data is needed to train the task-specific encoder?", "What kind of out-of-domain data?", "Is an instance a sentence or an IE tuple?" ], "question_id": [ "12d7055baf5bffb6e9e95e977c000ef2e77a4362", "498c0229f831c82a5eb494cdb3547452112a66a0", "8c48c726bb17a17d70ab29db4d65a93030dd5382", "89497e93980ab6d8c34a6d95ebf8c1e1d98ba43f", "06b5272774ec43ee5facfa7111033386f06cf448" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "information extraction", "information extraction", "information extraction", "information extraction", "information extraction" ], "topic_background": [ "research", "research", "research", "research", "research" ] }
{ "caption": [ "Table 1: Example sentences are difficult or easy to annotate for crowd workers. The underlined text are reference annotations from domain experts.", "Table 2: Average inter-worker agreement.", "Figure 1: Distributions of difficulty scores over all sentences that contain any span annotations", "Table 3: Pearson correlation coefficients of sentence difficulty predictions.", "Figure 2: Model architecture.", "Table 4: Medical IE performance by re-weighting sentences according to predicted agreement or difficulty scores.", "Figure 3: F1 scores achieved when removing increasingly large fractions of the training data.", "Figure 4: Precision/Recall/F1 as a function of the number of articles re-annotated by expert, in decreasing order of difficulty.", "Table 5: Interventions IE model performance trained crowd or expert. The first four models are trained with a subset of 1k abstracts and the base model is trained with all 5k abstracts.", "Table 6: Interventions IE model performance trained by mixing annotations from experts and crowd workers. [D]: Difficult-Expert; [R]: Random-Expert; [Other]: the rest of the abstracts with crowd annotation only." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "4-Figure1-1.png", "5-Table3-1.png", "5-Figure2-1.png", "6-Table4-1.png", "6-Figure3-1.png", "8-Figure4-1.png", "8-Table5-1.png", "8-Table6-1.png" ] }
[ "How do they match annotators to instances?", "How much data is needed to train the task-specific encoder?" ]
[ [ "1905.07791-Routing To Experts or Crowd-0" ], [ "1905.07791-Application Domain-2", "1905.07791-Introduction-5" ] ]
[ "Annotations from experts are used if they have already been collected.", "57,505 sentences" ]
273
2002.04181
Performance Comparison of Crowdworkers and NLP Tools on Named-Entity Recognition and Sentiment Analysis of Political Tweets
We report results of a comparison of the accuracy of crowdworkers and seven Natural Language Processing (NLP) toolkits in solving two important NLP tasks, named-entity recognition (NER) and entity-level sentiment (ELS) analysis. We here focus on a challenging dataset, 1,000 political tweets that were collected during the U.S. presidential primary election in February 2016. Each tweet refers to at least one of four presidential candidates, i.e., four named entities. The groundtruth, established by experts in political communication, has entity-level sentiment information for each candidate mentioned in the tweet. We tested several commercial and open-source tools. Our experiments show that, for our dataset of political tweets, the most accurate NER system, Google Cloud NL, performed almost on par with crowdworkers, but the most accurate ELS analysis system, TensiStrength, did not match the accuracy of crowdworkers by a large margin of more than 30 percent points.
{ "paragraphs": [ [ "As social media, specially Twitter, takes on an influential role in presidential elections in the U.S., natural language processing of political tweets BIBREF0 has the potential to help with nowcasting and forecasting of election results as well as identifying the main issues with a candidate – tasks of much interest to journalists, political scientists, and campaign organizers BIBREF1. As a methodology to obtain training data for a machine learning system that analyzes political tweets, BIBREF2 devised a crowdsourcing scheme with variable crowdworker numbers based on the difficulty of the annotation task. They provided a dataset of tweets where the sentiments towards political candidates were labeled both by experts in political communication and by crowdworkers who were likely not domain experts. BIBREF2 revealed that crowdworkers can match expert performance relatively accurately and in a budget-efficient manner. Given this result, the authors envisioned future work in which groundtruth labels would be crowdsourced for a large number of tweets and then used to design an automated NLP tool for political tweet analysis.", "The question we address here is: How accurate are existing NLP tools for political tweet analysis? These tools would provide a baseline performance that any new machine learning system for political tweet analysis would compete against. We here explore whether existing NLP systems can answer the questions \"What sentiment?\" and \"Towards whom?\" accurately for the dataset of political tweets provided by BIBREF2. In our analysis, we include NLP tools with publicly-available APIs, even if the tools were not specifically designed for short texts like tweets, and, in particular, political tweets.", "Our experiments reveal that the task of entity-level sentiment analysis is difficult for existing tools to answer accurately while the recognition of the entity, here, which politician, was easier." ], [ "NLP toolkits typically have the following capabilities: tokenization, part-of-speech (PoS) tagging, chunking, named entity recognition and sentiment analysis. In a study by BIBREF3, it is shown that the well-known NLP toolkits NLTK BIBREF4, Stanford CoreNLP BIBREF5, and TwitterNLP BIBREF6 have tokenization, PoS tagging and NER modules in their pipelines. There are two main approaches for NER: (1) rule-based and (2) statistical or machine learning based. The most ubiquitous algorithms for sequence tagging use Hidden Markov Models BIBREF7, Maximum Entropy Markov Models BIBREF7, BIBREF8, or Conditional Random Fields BIBREF9. Recent works BIBREF10, BIBREF11 have used recurrent neural networks with attention modules for NER.", "Sentiment detection tools like SentiStrength BIBREF12 and TensiStrength BIBREF13 are rule-based tools, relying on various dictionaries of emoticons, slangs, idioms, and ironic phrases, and set of rules that can detect the sentiment of a sentence overall or a targeted sentiment. Given a list of keywords, TensiStrength (similar to SentiStrength) reports the sentiment towards selected entities in a sentence, based on five levels of relaxation and five levels of stress.", "Among commercial NLP toolkits (e.g., BIBREF14, BIBREF15, BIBREF16), we selected BIBREF17 and BIBREF18 for our experiments, which, to the best of our knowledge, are the only publicly accessible commercial APIs for the task of entity-level sentiment analysis that is agnostic to the text domain. 
We also report results of TensiStrength BIBREF13, TwitterNLP BIBREF6, BIBREF19, CogComp-NLP BIBREF20, and Stanford NLP NER BIBREF21." ], [ "We used the 1,000-tweet dataset by BIBREF2 that contains the named-entities labels and entity-level sentiments for each of the four 2016 presidential primary candidates Bernie Sanders, Donald Trump, Hillary Clinton, and Ted Cruz, provided by crowdworkers, and by experts in political communication, whose labels are considered groundtruth. The crowdworkers were located in the US and hired on the BIBREF22 platform. For the task of entity-level sentiment analysis, a 3-scale rating of \"negative,\" \"neutral,\" and \"positive\" was used by the annotators.", "BIBREF2 proposed a decision tree approach for computing the number of crowdworkers who should analyze a tweet based on the difficulty of the task. Tweets are labeled by 2, 3, 5, or 7 workers based on the difficulty of the task and the level of disagreement between the crowdworkers. The model computes the number of workers based on how long a tweet is, the presence of a link in a tweet, and the number of present sarcasm signals. Sarcasm is often used in political tweets and causes disagreement between the crowdworkers. The tweets that are deemed to be sarcastic by the decision tree model, are expected to be more difficult to annotate, and hence are allocated more crowdworkers to work on.", "We conducted two sets of experiments. In the first set, we used BIBREF23, BIBREF17, and BIBREF18, for entity-level sentiment analysis; in the second set, BIBREF17, BIBREF19, BIBREF24, BIBREF25, and BIBREF26, BIBREF18 for named-entity recognition.", "In the experiments that we conducted with TwitterNLP for named-entity recognition, we worked with the default values of the model. Furthermore, we selected the 3-class Stanford NER model, which uses the classes “person,” “organization,” and “location” because it resulted in higher accuracy compared to the 7-class model. For CogComp-NLP NER we used Ontonotes 5.0 NER model BIBREF27. For spaCy NER we used the `en_core_web_lg' model.", "We report the experimental results for our two tasks in terms of the correct classification rate (CCR). For sentiment analysis, we have a three-class problem (positive, negative, and neutral), where the classes are mutually exclusive. The CCR, averaged for a set of tweets, is defined to be the number of correctly-predicted sentiments over the number of groundtruth sentiments in these tweets. For NER, we consider that each tweet may reference up to four candidates, i.e., targeted entities. The CCR, averaged for a set of tweets, is the number of correctly predicted entities (candidates) over the number of groundtruth entities (candidates) in this set." ], [ "The dataset of 1,000 randomly selected tweets contains more than twice as many tweets about Trump than about the other candidates. In the named-entity recognition experiment, the average CCR of crowdworkers was 98.6%, while the CCR of the automated systems ranged from 77.2% to 96.7%. For four of the automated systems, detecting the entity Trump was more difficult than the other entities (e.g., spaCy 72.7% for the entity Trump vs. above 91% for the other entities). An example of incorrect NER is shown in Figure FIGREF1 top. 
The difficulties the automated tools had in NER may be explained by the fact that the tools were not trained on tweets, except for TwitterNLP, which was not in active development when the data was created BIBREF1.", "In the sentiment analysis experiments, we found that a tweet may contain multiple sentiments. The groundtruth labels contain 210 positive sentiments, 521 neutral sentiments, and 305 negative sentiments to the candidates. We measured the CCR, across all tweets, to be 31.7% for Rosette Text Analytics, 43.2% for Google Cloud, 44.2% for TensiStrength, and 74.7% for the crowdworkers. This means the difference between the performance of the tools and the crowdworkers is significant – more than 30 percent points.", "Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive, and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but 10.5% of the positive, and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments and 40.9% of positive sentiments. The lowest and highest CCR pertains to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect ELS analysis is shown in Figure FIGREF1 bottom." ], [ "Our results show that existing NLP systems cannot accurately perform sentiment analysis of political tweets in the dataset we experimented with. Labeling by humans, even non-expert crowdworkers, yields accuracy results that are well above the results of existing automated NLP systems. In future work we will therefore use a crowdworker-labeled dataset to train a new machine-learning based NLP system for tweet analysis. We will ensure that the training data is balanced among classes. Our plan is to use state-of-the-art deep neural networks and compare their performance for entity-level sentiment analysis of political tweets." ], [ "Partial support of this work by the Hariri Institute for Computing and Computational Science & Engineering at Boston University (to L.G.) and a Google Faculty Research Award (to M.B. and L.G.) is gratefully acknowledged. Additionally, we would like to thank Daniel Khashabi for his help in running the CogComp-NLP Python API and Mike Thelwal for his help with TensiStrength. We are also grateful to the Stanford NLP group for clarifying some of the questions we had with regards to the Stanford NER tool." ] ], "section_name": [ "Introduction", "NLP Toolkits", "Dataset and Analysis Methodology", "Results and Discussion", "Conclusions and Future Work", "Acknowledgments" ] }
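The correct classification rate (CCR) defined in the methodology above reduces to a simple counting procedure. Below is a minimal Python sketch of that computation; the record layout (a dict per tweet mapping each mentioned candidate to a sentiment label, and a set of detected candidates per tweet for NER) is an assumption for illustration, not the format the authors actually used.

```python
from typing import Dict, List, Set

# Hypothetical record layout: for each tweet, groundtruth and prediction map
# every mentioned candidate (entity) to "positive", "neutral", or "negative".
def ccr_sentiment(gold: List[Dict[str, str]], pred: List[Dict[str, str]]) -> float:
    # Correctly-predicted sentiments over the number of groundtruth sentiments.
    correct = total = 0
    for g, p in zip(gold, pred):
        for entity, label in g.items():
            total += 1
            correct += int(p.get(entity) == label)
    return correct / total if total else 0.0

# Hypothetical layout for NER: one set of detected candidates per tweet.
def ccr_ner(gold: List[Set[str]], pred: List[Set[str]]) -> float:
    # Correctly-predicted entities (candidates) over groundtruth entities.
    correct = sum(len(g & p) for g, p in zip(gold, pred))
    total = sum(len(g) for g in gold)
    return correct / total if total else 0.0

# Example with made-up labels: 2 of 3 sentiments correct -> CCR = 0.667
gold_s = [{"Trump": "negative", "Clinton": "neutral"}, {"Sanders": "positive"}]
pred_s = [{"Trump": "negative", "Clinton": "positive"}, {"Sanders": "positive"}]
print(round(ccr_sentiment(gold_s, pred_s), 3))
```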
{ "answers": [ { "annotation_id": [ "11cb45be2f31c798cba3297bdb21f4220b2f9ae4", "a34d9fb6bd92497a83dff63bc11c53cc7bd001ff" ], "answer": [ { "evidence": [ "We used the 1,000-tweet dataset by BIBREF2 that contains the named-entities labels and entity-level sentiments for each of the four 2016 presidential primary candidates Bernie Sanders, Donald Trump, Hillary Clinton, and Ted Cruz, provided by crowdworkers, and by experts in political communication, whose labels are considered groundtruth. The crowdworkers were located in the US and hired on the BIBREF22 platform. For the task of entity-level sentiment analysis, a 3-scale rating of \"negative,\" \"neutral,\" and \"positive\" was used by the annotators." ], "extractive_spans": [], "free_form_answer": "people in the US that use Amazon Mechanical Turk", "highlighted_evidence": [ "The crowdworkers were located in the US and hired on the BIBREF22 platform." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We used the 1,000-tweet dataset by BIBREF2 that contains the named-entities labels and entity-level sentiments for each of the four 2016 presidential primary candidates Bernie Sanders, Donald Trump, Hillary Clinton, and Ted Cruz, provided by crowdworkers, and by experts in political communication, whose labels are considered groundtruth. The crowdworkers were located in the US and hired on the BIBREF22 platform. For the task of entity-level sentiment analysis, a 3-scale rating of \"negative,\" \"neutral,\" and \"positive\" was used by the annotators." ], "extractive_spans": [ "located in the US", "hired on the BIBREF22 platform" ], "free_form_answer": "", "highlighted_evidence": [ "The crowdworkers were located in the US and hired on the BIBREF22 platform. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "ea4394112c1549185e6b763d6f36733a9f2ed794" ] }, { "annotation_id": [ "3203d87962e25c51b41bd0ac9eb8fada861c06b4", "ebb99811bea1f434cd87a11b6ec1ad9d6b0cb67c" ], "answer": [ { "evidence": [ "Among commercial NLP toolkits (e.g., BIBREF14, BIBREF15, BIBREF16), we selected BIBREF17 and BIBREF18 for our experiments, which, to the best of our knowledge, are the only publicly accessible commercial APIs for the task of entity-level sentiment analysis that is agnostic to the text domain. We also report results of TensiStrength BIBREF13, TwitterNLP BIBREF6, BIBREF19, CogComp-NLP BIBREF20, and Stanford NLP NER BIBREF21." ], "extractive_spans": [ "BIBREF17", "BIBREF18", "TensiStrength BIBREF13", "TwitterNLP BIBREF6", "BIBREF19", "CogComp-NLP BIBREF20", "Stanford NLP NER BIBREF21" ], "free_form_answer": "", "highlighted_evidence": [ "Among commercial NLP toolkits (e.g., BIBREF14, BIBREF15, BIBREF16), we selected BIBREF17 and BIBREF18 for our experiments, which, to the best of our knowledge, are the only publicly accessible commercial APIs for the task of entity-level sentiment analysis that is agnostic to the text domain. We also report results of TensiStrength BIBREF13, TwitterNLP BIBREF6, BIBREF19, CogComp-NLP BIBREF20, and Stanford NLP NER BIBREF21." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We conducted two sets of experiments. In the first set, we used BIBREF23, BIBREF17, and BIBREF18, for entity-level sentiment analysis; in the second set, BIBREF17, BIBREF19, BIBREF24, BIBREF25, and BIBREF26, BIBREF18 for named-entity recognition." 
], "extractive_spans": [ "BIBREF23", "BIBREF17", "BIBREF18", "BIBREF19", "BIBREF24", "BIBREF25", "BIBREF26" ], "free_form_answer": "", "highlighted_evidence": [ "In the first set, we used BIBREF23, BIBREF17, and BIBREF18, for entity-level sentiment analysis; in the second set, BIBREF17, BIBREF19, BIBREF24, BIBREF25, and BIBREF26, BIBREF18 for named-entity recognition." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "ea4394112c1549185e6b763d6f36733a9f2ed794" ] }, { "annotation_id": [ "f712b6d20ef963bdb1cbe0e82f5417da48c5b21a" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Average Correct Classification Rate (CCR) for named-entity recognition (NER) of four presidential candidates and entity-level sentiment (ELS) analysis by NLP tools and crowdworkers", "Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive, and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but 10.5% of the positive, and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments and 40.9% of positive sentiments. The lowest and highest CCR pertains to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect ELS analysis is shown in Figure FIGREF1 bottom." ], "extractive_spans": [], "free_form_answer": "neutral sentiment", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Average Correct Classification Rate (CCR) for named-entity recognition (NER) of four presidential candidates and entity-level sentiment (ELS) analysis by NLP tools and crowdworkers", "Crowdworkers correctly identified 62% of the neutral, 85% of the positive, and 92% of the negative sentiments. Google Cloud correctly identified 88% of the neutral sentiments, but only 3% of the positive, and 19% of the negative sentiments. TensiStrength correctly identified 87.2% of the neutral sentiments, but 10.5% of the positive, and 8.1% of the negative sentiments. Rosette Text Analytics correctly identified 22.7% of neutral sentiments, 38.1% of negative sentiments and 40.9% of positive sentiments. The lowest and highest CCR pertains to tweets about Trump and Sanders for both Google Cloud and TensiStrength, Trump and Clinton for Rosette Text Analytics, and Clinton and Cruz for crowdworkers. An example of incorrect ELS analysis is shown in Figure FIGREF1 bottom." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "d857039ba5e8a1d6d962bda50f9bdfd969b85fe2" ], "answer": [ { "evidence": [ "In the sentiment analysis experiments, we found that a tweet may contain multiple sentiments. The groundtruth labels contain 210 positive sentiments, 521 neutral sentiments, and 305 negative sentiments to the candidates. We measured the CCR, across all tweets, to be 31.7% for Rosette Text Analytics, 43.2% for Google Cloud, 44.2% for TensiStrength, and 74.7% for the crowdworkers. This means the difference between the performance of the tools and the crowdworkers is significant – more than 30 percent points." 
], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In the sentiment analysis experiments, we found that a tweet may contain multiple sentiments. The groundtruth labels contain 210 positive sentiments, 521 neutral sentiments, and 305 negative sentiments to the candidates." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "94c73cbec1e34fc4259966d4b4f8e6055847c426" ], "answer": [ { "evidence": [ "We report the experimental results for our two tasks in terms of the correct classification rate (CCR). For sentiment analysis, we have a three-class problem (positive, negative, and neutral), where the classes are mutually exclusive. The CCR, averaged for a set of tweets, is defined to be the number of correctly-predicted sentiments over the number of groundtruth sentiments in these tweets. For NER, we consider that each tweet may reference up to four candidates, i.e., targeted entities. The CCR, averaged for a set of tweets, is the number of correctly predicted entities (candidates) over the number of groundtruth entities (candidates) in this set." ], "extractive_spans": [ "correct classification rate (CCR)" ], "free_form_answer": "", "highlighted_evidence": [ "We report the experimental results for our two tasks in terms of the correct classification rate (CCR). For sentiment analysis, we have a three-class problem (positive, negative, and neutral), where the classes are mutually exclusive. The CCR, averaged for a set of tweets, is defined to be the number of correctly-predicted sentiments over the number of groundtruth sentiments in these tweets. For NER, we consider that each tweet may reference up to four candidates, i.e., targeted entities. The CCR, averaged for a set of tweets, is the number of correctly predicted entities (candidates) over the number of groundtruth entities (candidates) in this set." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "two", "two", "five", "five", "five" ], "paper_read": [ "no", "no", "somewhat", "somewhat", "somewhat" ], "question": [ "Who are the crowdworkers?", "Which toolkits do they use?", "Which sentiment class is the most accurately predicted by ELS systems?", "Is datasets for sentiment analysis balanced?", "What measures are used for evaluation?" ], "question_id": [ "08b57deb237f15061e4029b6718f1393fa26acce", "9b7655d39c7a19a23eb8944568eb5618042b9026", "cd06d775f491b4a17c9d616a8729fd45aa2e79bf", "1329280df5ee9e902b2742bde4a97bc3e6573ff3", "58c6737070ef559e9220a8d08adc481fdcd53a24" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Incorrect NER by spaCy (top) and incorrect ELS analysis by Google Cloud (bottom)", "Table 1: Average Correct Classification Rate (CCR) for named-entity recognition (NER) of four presidential candidates and entity-level sentiment (ELS) analysis by NLP tools and crowdworkers" ], "file": [ "3-Figure1-1.png", "3-Table1-1.png" ] }
[ "Who are the crowdworkers?", "Which sentiment class is the most accurately predicted by ELS systems?" ]
[ [ "2002.04181-Dataset and Analysis Methodology-0" ], [ "2002.04181-3-Table1-1.png", "2002.04181-Results and Discussion-2" ] ]
[ "people in the US that use Amazon Mechanical Turk", "neutral sentiment" ]
274
1908.06264
EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation
In this paper, we investigate the emotion recognition ability of the pre-trained language model BERT. Owing to the two-sentence structure of the BERT framework, we adapt BERT to continuous dialogue emotion prediction tasks, which rely heavily on sentence-level context-aware understanding. The experiments show that by mapping the continuous dialogue into causal utterance pairs, each constructed from an utterance and its reply utterance, models can better capture the emotion of the reply utterance. The present method achieves micro F1 scores of 0.815 and 0.885 on the testing datasets of Friends and EmotionPush, respectively.
{ "paragraphs": [ [ "Emotion detection has long been a topic of interest to scholars in the natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and group similar texts together. An emotion classifier can not only help understand each user's feelings but also be extended to various applications, for example, identifying the motivation behind a user's interests BIBREF0. With the release of large text corpora on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have been proposed and have achieved high precision so far. For example, DeepMoji BIBREF3 utilizes transfer learning to enhance the understanding of emotions and sarcasm behind the target sentence. CARER BIBREF4 learns contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts.", "As methods mature, text-based emotion detection can be extended from a single utterance to a dialogue composed of a series of utterances. Table TABREF2 illustrates the difference between single-utterance and dialogue emotion recognition. As the utterances in Table TABREF2 show, even when the same person says the same sentence, the emotion it conveys may vary, depending on the background of the conversation, the tone of speaking, or the speaker's personality. Therefore, for emotion detection, the information from preceding utterances in a conversation is critical.", "In SocialNLP 2019 EmotionX, the challenge is to recognize emotions for all utterances in the EmotionLines dataset, a dataset consisting of dialogues. To take the context into account, we develop two classification models, FriendsBERT and ChatBERT, inspired by bidirectional encoder representations from transformers (BERT) BIBREF5. In this paper, we introduce our approaches, including causal utterance modeling, model pre-training, and fine-tuning." ], [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”.", "For the datasets, there are some properties worth mentioning. Although Friends and EmotionPush share the same data format, they are quite different in nature. Friends is a speech-based dataset consisting of annotated dialogues from the TV sitcom, which means most of the utterances are generated by a few main characters. The personality of a character often affects the way of speaking, and therefore “who is the speaker\" might provide extra clues for emotion prediction. In contrast, EmotionPush does not have this trait due to the anonymization mechanism. In addition, features such as typos, hyperlinks, and emojis that only appear in chat-based data need some domain-specific techniques to process.", "Note that the objective of the challenge is to predict the emotion of each utterance.
According to the EmotionX 2019 specification, only four emotions are selected as label candidates, namely Joy, Sadness, Anger, and Neutral. Only these emotions are considered during performance evaluation. The technical details will be introduced and discussed in Section SECREF13 and Section SECREF26." ], [ "For this challenge, we adapt BERT, proposed by BIBREF5, to help the model understand the context. Technically, BERT, designed as an end-to-end architecture, is a deep pre-trained transformer encoder that dynamically provides language representations, and it has already achieved multiple state-of-the-art results on the GLUE benchmark BIBREF7 and many other tasks. A quick recap of BERT's architecture and its pre-training tasks is given in the following subsections." ], [ "BERT, the Bidirectional Encoder Representations from Transformers, consists of several transformer encoder layers that enable the model to extract very deep language features at both the token level and the sentence level. Each transformer encoder contains multi-head self-attention layers that provide the ability to learn multiple attention features of each word from its bidirectional context. The transformer and its self-attention mechanism were proposed by BIBREF8. The self-attention mechanism can be interpreted as a key-value mapping given a query. Given the embedding vectors of the input tokens, the query ($Q$), key ($K$), and value ($V$) are produced by projections with three parameter matrices, where $W^Q \in \mathbb {R}^{d_{{\rm model}} \times d_{k}}, W^K \in \mathbb {R}^{d_{\rm model} \times d_{k}}$ and $W^V \in \mathbb {R}^{d_{\rm model} \times d_{v}}$. The self-attention BIBREF8 is formally represented as ${\rm Attention}(Q, K, V) = {\rm softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$.", "We have $ d_k = d_v = d_{\rm model} = 1024$ in the large version of BERT and 768 in the base version. Once the model can extract attention features, one self-attention head can be extended into multi-head self-attention; this extension allows sub-space features to be extracted at the same time. Overall, the multi-head attention mechanism is adopted in each transformer encoder, and several encoder layers are stacked together to form a deep transformer encoder.", "For the model input, BERT allows us to take either one sentence or two sentences together as one input sequence, and the maximum length of the input sequence is 512. BERT was designed this way to give the model both sentence-level and token-level understanding. In the two-sentence case, a special token ([SEP]) is inserted between the two sentences. In addition, the first input token is also a special token ([CLS]), and its corresponding output vector is used for classification during fine-tuning. The outputs of the last encoder layer corresponding to each input token can be treated as word representations for each token, and the word representation of the first token ([CLS]) is considered as the classification (output) representation for further fine-tuning tasks. In BERT, this vector is denoted as $C \in \mathbb {R}^{d_{\rm model}} $, and a classification layer is denoted as $ W \in \mathbb {R}^{K \times d_{\rm model}}$, where $K$ is the number of classification labels. Finally, the prediction $P$ of BERT is represented as $P = {\rm softmax}(CW^T)$." ], [ "In pre-training, instead of using unidirectional language models, BERT introduces two pre-training tasks: (1) Masked LM (cloze test) and (2) Next Sentence Prediction.
In the first pre-training task, bidirectional language modeling is achieved through cloze-like pre-training. In detail, 15% of the tokens in the input sequence are masked at random, and the model needs to predict those masked tokens. The encoder tries to learn contextual representations from all given tokens because the masking is random: the model does not know which part of the input is going to be masked, so the information of each masked token has to be inferred from the remaining tokens. In Next Sentence Prediction, two sentences concatenated together are taken as the model input. In order for the model to achieve good natural language understanding, knowing the relationship between sentences is an important ability. When generating input sequences, 50% of the time sentence B actually follows sentence A, and the other 50% of the time sentence B is picked randomly from the dataset; the model needs to predict whether sentence B is the next sentence of sentence A. That is, the attention information is shared between sentences. Such sentence-level understanding may be difficult to learn from the first pre-training task (Masked LM); therefore, the second pre-training task (NSP) is developed to capture the cross-sentence relationship.", "In this competition, limited by the size of the dataset and the challenge of contextual emotion recognition, we consider that BERT with both pre-training tasks can give a good starting point for capturing the emotion changes during dialogue-like conversations. The second pre-training task might be especially important for dialogue-like conversations, where the emotion may vary with the context of continuous utterances. That is, given a set of continuous conversations, the emotion of the current utterance might be influenced by the previous utterance. Based on this assumption and with support from the experimental results of BERT, we can take sentence A as a one-sentence context and consider sentence B as the target sentence for emotion prediction. The details are described in Section SECREF4." ], [ "The main goal of the present work is to predict the emotion of each utterance within the dialogue. The following are the four major difficulties we are concerned about:", "The emotion of the utterances depends not only on the text but also on the interaction that happened earlier.", "The sources of the two datasets are different. Friends consists of speech-based dialogues and EmotionPush of chat-based dialogues. This makes the datasets possess different characteristics.", "There are only $1,000$ dialogues in each training dataset, which is not large enough for the stability of training a complex neural-based model.", "The prediction targets (emotion labels) are highly unbalanced.", "The proposed approach, which aims to overcome these challenges, is summarized in Figure FIGREF3. The framework can be separated into three steps, described as follows:" ], [ "Given a dialogue $D^{(i)}$, which includes a sequence of utterances denoted as $D^{(i)}=(u^{(i)}_{1}, u^{(i)}_{2}, ..., u^{(i)}_{n})$, where $i$ is the index in the dataset and $n$ is the number of utterances in the given dialogue, we rearrange each two consecutive utterances $u_{t}, u_{t-1}$ into a single sentence representation $x_{t}$ in order to conserve the emotional information of both the utterance and the conversation.", "The corresponding sentence representation corpus $X^{(i)}$ is denoted as $X^{(i)}=(x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{n})$.
Note that the first utterance within a conversation does not have a causal utterance (previous sentence); therefore, its causal utterance is set to [None]. A practical example of the sentence representation is shown in Table TABREF11.", "Since the characteristics of the two datasets are not identical, we customize different causal utterance modeling strategies to refine the information in the text.", "For Friends, there are two specific properties. The first one is that most dialogues revolve around the six main characters, namely Rachel, Monica, Phoebe, Joey, Chandler, and Ross. The ratio of utterances given by the six roles is up to $83.4\%$. Second, the personal characteristics of the six characters are very clear, and each leading role has its own pattern of emotional fluctuation. To make use of these features, we introduce personality tokenization, which helps the model learn the personality of the six characters. Personality tokenization concatenates the speaker and says tokens before the input utterance if the speaker is one of the six characters. An example is shown in Table TABREF12.", "For EmotionPush, the texts are informal chats that include slang, acronyms, typos, hyperlinks, and emojis. Another characteristic is that specific named entities are tokenized with random indices (e.g., “organization_80”, “person_01”, and “time_12”). We consider some of these informal expressions to be related to expressing emotion, such as repeated typing, purposeful capitalization, and emojis (e.g., “:D”, “:(”, and “<3”). Therefore, we keep most informal expressions and only process hyperlinks, empty utterances, and named entities by unifying their tokens." ], [ "Since the sizes of both datasets are not large enough for training a complex neural-based model, and the BERT model is only pre-trained on formal text datasets, the issues of overfitting and domain bias are important considerations when designing the pre-training process.", "To avoid overfitting on the training data and to increase the understanding of informal text, we adapt BERT and derive two models, namely FriendsBERT and ChatBERT, with different pre-training tasks before the formal training process for the Friends and EmotionPush datasets, respectively. The pre-training strategies are described below.", "For pre-training FriendsBERT, we collect the complete scripts of all ten seasons of the Friends TV show from emorynlp, which include 3,107 scenes within 61,309 utterances. All the utterances follow the preprocessing methods mentioned above to compose the corpus for the masked language model pre-training task. The consecutive utterances in the same scene are treated as consecutive sentences to pre-train the Next Sentence Prediction task. In the pre-training process, the training loss is the sum of the mean likelihoods of the two pre-training tasks.", "For pre-training ChatBERT, we pre-train our model on a Twitter dataset, since the text and writing style on Twitter are close to chat text, as both may involve many informal words or emoticons. The Twitter emotion dataset, covering the 8 basic emotions from the emotion wheel BIBREF1, was collected with the Twitter streaming API using specific emotion-related hashtags, such as #anger, #joy, #cry, and #sad. The hashtags in the tweets are treated as emotion labels for model fine-tuning. The tweets were preprocessed following the rules in BIBREF9, BIBREF4, including removing duplicate tweets and requiring the emotion hashtag to appear in the last position of a tweet.
The statistics of the tweets are summarized in Table TABREF17. Each tweet and its corresponding emotion label compose an emotion classification dataset for pre-training." ], [ "Since our emotion recognition task is treated as a sequence-level classification task, the model is fine-tuned on the processed training data. Following the BERT construction, we take the first embedding vector, which corresponds to the special token [CLS], from the final hidden state of the Transformer encoder. This vector represents the embedding vector of the corresponding conversation utterances and is denoted as $\mathbf {C} \in \mathbb {R}^{H}$, where $H$ is the embedding size. A dense neural layer is treated as a classification layer, which consists of parameters $\mathbf {W} \in \mathbb {R}^{K\times H}$ and $\mathbf {b} \in \mathbb {R}^{K}$, where $K$ is the number of emotion classes. The emotion prediction probabilities $\mathbf {P} \in \mathbb {R}^{K}$ are computed by a softmax activation function as $\mathbf {P} = {\rm softmax}(\mathbf {W}\mathbf {C} + \mathbf {b})$.", "All the parameters in BERT and the classification layer are fine-tuned together to minimize the Negative Log Likelihood (NLL) loss function, as in Equation (DISPLAY_FORM22), based on the ground truth emotion label $c$.", "In order to tackle the problem of highly unbalanced emotion labels, we apply weighted balanced warming on the NLL loss function, as in Equation (DISPLAY_FORM23), in the first epoch of the fine-tuning procedure.", "where $\mathbf {w}$ are the weights of the corresponding emotion labels $c$, which are computed and normalized by the frequency of each label.", "By adding the weighted balanced warming on the NLL loss, the model can learn to predict the minor emotions (e.g., anger and sadness) earlier, and the training process becomes more stable. Since the major evaluation metric, the micro F1-score, is affected by the number of samples of each label, we only apply the weighted balanced warming in the first epoch to optimize the performance." ], [ "Since the EmotionX challenge only provides the gold labels for the training data, we pick the best-performing model (weights) to predict the testing data. In this section, we present the experiment and evaluation results." ], [ "The EmotionX challenge consists of $1,000$ dialogues for both Friends and EmotionPush. In all of our experiments, each dataset is separated into the first 800 dialogues for training and the last 200 dialogues for validation. Since the EmotionX challenge considers only the four emotions (anger, joy, neutral, and sadness) in the evaluation stage, we directly ignore all the data points corresponding to other emotions. The details of the emotion distributions are shown in Table TABREF18.", "The hyperparameters and training setup of our models (FriendsBERT and ChatBERT) are shown in Table TABREF25. Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model. All the experiment results are based on the best performances of validation results." ], [ "The experiment results of validation on Friends are shown in Table TABREF19.
The proposed model and baselines are evaluated based on the Precision (P.), Recall (R.), and F1-measure (F1).", "For the traditional baselines, namely BOW and TFIDF, we observe that they achieve surprisingly high F1 scores around $0.81$; however, the scores for Anger and Sadness are lower. This shows that traditional approaches tend to predict the labels with large sample sizes, such as Joy and Neutral, but fail to take care of scarce samples, even when an ensemble random forest classifier is adopted. In order to prevent unbalanced learning, we choose the weighted loss mechanism for both TextCNN and causal modeling TextCNN (C-TextCNN); these models suffer less than the traditional baselines and achieve a slightly more balanced performance, with around 15% and 7% improvements on Anger and Sadness, respectively. We then apply the causal utterance modeling to the original TextCNN, feeding the previous utterance as well as the target utterance into the model. The causal utterance modeling improves C-TextCNN over TextCNN by 6%, 2%, and 1% on Anger, Joy, and the overall F1 score, respectively. Motivated by these preliminary experiments, the proposed FriendsBERT also adopts the ideas of both weighted loss and causal utterance modeling. Compared to the original single-sentence BERT (FriendsBERT-base-s), the proposed FriendsBERT-base improves by 1% for Joy and overall F1, and by 2% for Sadness. For the final validation performance, our proposed approach achieves the highest scores, which are $0.85$ and $0.86$ for FriendsBERT-base and FriendsBERT-large, respectively.", "Overall, the proposed FriendsBERT successfully captures the sentence-level context-aware information and outperforms all the baselines, achieving high performance not only on labels with large sample sizes but also on those with small sample sizes. Similar settings are also applied to the EmotionPush dataset for the final evaluation." ], [ "The testing dataset consists of 240 dialogues including $3,296$ and $3,536$ utterances in Friends and EmotionPush, respectively. We re-train our FriendsBERT and ChatBERT with the first 920 training dialogues and predict the evaluation results using the model with the best validation results. The results are shown in Table TABREF29 and Table TABREF30. The present method achieves $81.5\%$ and $88.5\%$ micro F1-score on the testing datasets of Friends and EmotionPush, respectively." ], [ "In the present work, we propose FriendsBERT and ChatBERT for the multi-utterance emotion recognition task on the EmotionLines dataset. The proposed models are adapted from BERT BIBREF5 with three main improvements during the model training procedure, which are the causal utterance modeling mechanism, specific model pre-training, and the weighted loss. The causal utterance modeling takes advantage of the sentence-level context information during model inference. The specific model pre-training helps to counteract the bias of different text domains. The weighted loss prevents our model from only predicting the labels with large sample sizes. The effectiveness and generalizability of the proposed methods are demonstrated in the experiments.", "In future work, we consider including the conditional probabilistic constraint $P ({\rm Emo}_{B} | \hat{\rm Emo}_{A})$: the model should predict the emotion based on a certain understanding of the context emotions. This might be more reasonable for guiding the model than just predicting the emotion of ${\rm Sentence}_B$ directly.
In addition, due to the limitation of the BERT input format, supporting an arbitrary number of input sentences becomes an important design requirement for our future work. Also, developing personality embeddings will be another direction of future work for emotion recognition. The personality embedding would be treated as a sentence embedding injected into the word embeddings, and this additional information could potentially contribute some improvement." ] ], "section_name": [ "Introduction", "Dataset", "Model Description", "Model Description ::: Model Architecture", "Model Description ::: Pre-training Tasks", "Methodology", "Methodology ::: Causal Utterance Modeling", "Methodology ::: Model Pre-training", "Methodology ::: Fine-tuning", "Experiments", "Experiments ::: Experimental Setup", "Experiments ::: Performance", "Experiments ::: Evaluation Results", "Conclusion and Future work" ] }
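To make the causal utterance modeling and personality tokenization described in the Methodology section of this record concrete, here is a minimal Python sketch. The [None] placeholder for the first turn and the six main characters follow the paper; the exact textual format of the pairs and of the speaker/says prefix is an assumption based on the descriptions around Tables TABREF11 and TABREF12, not the authors' released code.

```python
from typing import List, Tuple

MAIN_CHARACTERS = {"Rachel", "Monica", "Phoebe", "Joey", "Chandler", "Ross"}

def personality_tokenize(speaker: str, utterance: str) -> str:
    # Prepend the speaker and a "says" token for the six main characters only
    # (Friends subset); the exact token format is an assumption for illustration.
    if speaker in MAIN_CHARACTERS:
        return f"{speaker} says {utterance}"
    return utterance

def build_causal_pairs(dialogue: List[Tuple[str, str]],
                       use_personality: bool = True) -> List[Tuple[str, str]]:
    # Map a dialogue [(speaker, utterance), ...] into (causal, target) pairs.
    # The causal utterance of the first turn is the placeholder [None],
    # as described in the causal utterance modeling subsection.
    pairs = []
    previous = "[None]"
    for speaker, utterance in dialogue:
        text = personality_tokenize(speaker, utterance) if use_personality else utterance
        pairs.append((previous, text))
        previous = text
    return pairs

# Toy example; each pair would be fed to BERT as "causal [SEP] target".
dialogue = [("Monica", "There is nothing to tell!"),
            ("Joey", "Come on, you are going out with the guy!")]
for causal, target in build_causal_pairs(dialogue):
    print(causal, "||", target)
```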
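The weighted balanced warming of the NLL loss described in the fine-tuning subsection can likewise be sketched compactly. The following framework-agnostic NumPy version is an illustration only: the paper's Equations (DISPLAY_FORM23) and (DISPLAY_FORM24) are not reproduced in this text, so the inverse-frequency weighting used here is one plausible reading of "computed and normalized by the frequency", not the authors' exact formula.

```python
import numpy as np

def class_weights(labels: np.ndarray, num_classes: int) -> np.ndarray:
    # Inverse-frequency weights normalized to sum to one; a plausible reading
    # of the paper's description, not its exact rule.
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    weights = 1.0 / np.maximum(counts, 1.0)
    return weights / weights.sum()

def weighted_nll(probs: np.ndarray, labels: np.ndarray, weights=None) -> float:
    # probs: (N, K) softmax outputs; labels: (N,) ground-truth class indices.
    # Plain NLL when weights is None; weighted balanced NLL otherwise.
    p_true = probs[np.arange(len(labels)), labels]
    nll = -np.log(p_true + 1e-12)
    if weights is None:
        return float(nll.mean())
    return float((weights[labels] * nll).mean())

# During fine-tuning, the weighted loss would be used only in the first
# (warm-up) epoch, then training would switch to the unweighted NLL.
labels = np.array([0, 1, 1, 1, 2])    # toy, highly unbalanced labels
probs = np.full((5, 3), 1.0 / 3.0)    # uniform predictions
w = class_weights(labels, num_classes=3)
print(weighted_nll(probs, labels), weighted_nll(probs, labels, weights=w))
```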
{ "answers": [ { "annotation_id": [ "492f64e7a80e3178020158504f6174c930339b26", "83bff7b48419bc536af057c15261a1095fd22faa" ], "answer": [ { "evidence": [ "The experiment results of validation on Friends are shown in Table TABREF19. The proposed model and baselines are evaluated based on the Precision (P.), Recall (R.), and F1-measure (F1).", "FLOAT SELECTED: Table 6: Validation Results (Friends)" ], "extractive_spans": [], "free_form_answer": "BOW-LR, BOW-RF. TFIDF-RF, TextCNN, C-TextCNN", "highlighted_evidence": [ "The experiment results of validation on Friends are shown in Table TABREF19. ", "FLOAT SELECTED: Table 6: Validation Results (Friends)" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The hyperparameters and training setup of our models (FriendsBERT and ChatBERT) are shown in Table TABREF25. Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model. All the experiment results are based on the best performances of validation results." ], "extractive_spans": [ "bag-of-words (BOW)", "term frequency–inverse document frequency (TFIDF)", "neural-based word embedding", "Logistic Regression (LR)", "Random Forest (RF)", "TextCNN BIBREF10 with initial word embedding as GloVe" ], "free_form_answer": "", "highlighted_evidence": [ "Some common and easily implemented methods are selected as the baselines embedding methods and classification models. The baseline embedding methods are including bag-of-words (BOW), term frequency–inverse document frequency (TFIDF), and neural-based word embedding. The classification models are including Logistic Regression (LR), Random Forest (RF), TextCNN BIBREF10 with initial word embedding as GloVe BIBREF11, and our proposed model." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "12490fb15ae38a779e863e09d7e3f65e4a8dc31b", "465ede59977b3ee4f116d00b04d9e686e3e94add" ], "answer": [ { "evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”." ], "extractive_spans": [ "Friends", "EmotionPush" ], "free_form_answer": "", "highlighted_evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”." ], "extractive_spans": [ "EmotionLines BIBREF6" ], "free_form_answer": "", "highlighted_evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3bed83fbe9369d3cdded238b1c129c225fb1fa35" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 6: Validation Results (Friends)", "FLOAT SELECTED: Table 7: Experimental Setup of Proposed Model" ], "extractive_spans": [], "free_form_answer": "BERT-base, BERT-large, BERT-uncased, BERT-cased", "highlighted_evidence": [ "FLOAT SELECTED: Table 6: Validation Results (Friends)", "FLOAT SELECTED: Table 7: Experimental Setup of Proposed Model" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "c943c52bb6b44d4593e49d788a8862f8b724f352" ], "answer": [ { "evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”." ], "extractive_spans": [ "Friends TV sitcom", "Facebook messenger chats" ], "free_form_answer": "", "highlighted_evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "637ddeb319cbc2f85d5d095293f35613dc54c1c8" ], "answer": [ { "evidence": [ "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. 
All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral. If none of the emotion gets more than three votes, the utterance will be marked as “non-neutral”." ], "extractive_spans": [ "Ekman’s six basic emotions", " neutral" ], "free_form_answer": "", "highlighted_evidence": [ "Annotator votes for one of the seven emotions, namely Ekman’s six basic emotions BIBREF1, plus the neutral." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "", "", "five", "five", "five" ], "paper_read": [ "", "", "no", "no", "no" ], "question": [ "what were the baselines?", "what datasets were used?", "What BERT models are used?", "What are the sources of the datasets?", "What labels does the dataset have?" ], "question_id": [ "0af16b164db20d8569df4ce688d5a62c861ace0b", "78a4ec72d76f0a736a4a01369a42b092922203b6", "6a14379fee26a39631aebd0e14511ce3756e42ad", "81588e0e207303c2867c896f3911a54a1ef7c874", "dd09db5eb321083dba16c2550676e60682f9a0cd" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "BERT", "BERT", "BERT" ], "topic_background": [ "", "", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Emotions depending on the context", "Figure 1: Framework", "Table 2: An example of sentence representation", "Table 4: Statistics for Twitter Dataset", "Table 5: Emotions Distribution of two dataset", "Table 3: An example of personality tokenization", "Table 6: Validation Results (Friends)", "Table 7: Experimental Setup of Proposed Model", "Table 8: Evaluation (Testing) Results of Friends", "Table 9: Evaluation (Testing) Results of EmotionPush" ], "file": [ "1-Table1-1.png", "3-Figure1-1.png", "4-Table2-1.png", "4-Table4-1.png", "4-Table5-1.png", "4-Table3-1.png", "5-Table6-1.png", "5-Table7-1.png", "6-Table8-1.png", "6-Table9-1.png" ] }
[ "what were the baselines?", "What BERT models are used?" ]
[ [ "1908.06264-Experiments ::: Experimental Setup-1", "1908.06264-5-Table6-1.png", "1908.06264-Experiments ::: Performance-0" ], [ "1908.06264-5-Table7-1.png", "1908.06264-5-Table6-1.png" ] ]
[ "BOW-LR, BOW-RF. TFIDF-RF, TextCNN, C-TextCNN", "BERT-base, BERT-large, BERT-uncased, BERT-cased" ]
275
1709.10367
Structured Embedding Models for Grouped Data
Word embeddings are a powerful approach for analyzing language, and exponential family embeddings (EFE) extend them to other types of data. Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data. We study how the word usage of U.S. Congressional speeches varies across states and party affiliation, how words are used differently across sections of the ArXiv, and how the co-purchase patterns of groceries can vary across seasons. Key to the success of our method is that the groups share statistical information. We develop two sharing strategies: hierarchical modeling and amortization. We demonstrate the benefits of this approach in empirical studies of speeches, abstracts, and shopping baskets. We show how S-EFE enables group-specific interpretation of word usage, and outperforms EFE in predicting held-out data.
{ "paragraphs": [ [ "Word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 are unsupervised learning methods for capturing latent semantic structure in language. Word embedding methods analyze text data to learn distributed representations of the vocabulary that capture its co-occurrence statistics. These representations are useful for reasoning about word usage and meaning BIBREF7 , BIBREF8 . Word embeddings have also been extended to data beyond text BIBREF9 , BIBREF10 , such as items in a grocery store or neurons in the brain. efe is a probabilistic perspective on embeddings that encompasses many existing methods and opens the door to bringing expressive probabilistic modeling BIBREF11 , BIBREF12 to the problem of learning distributed representations.", "We develop sefe, an extension of efe for studying how embeddings can vary across groups of related data. We will study several examples: in U.S. Congressional speeches, word usage can vary across states or party affiliations; in scientific literature, the usage patterns of technical terms can vary across fields; in supermarket shopping data, co-purchase patterns of items can vary across seasons of the year. We will see that sefe discovers a per-group embedding representation of objects. While the naïve approach of fitting an individual embedding model for each group would typically suffer from lack of data—especially in groups for which fewer observations are available—we develop two methods that can share information across groups.", "Figure FIGREF1 illustrates the kind of variation that we can capture. We fit an sefe to ArXiv abstracts grouped into different sections, such as computer science (cs), quantitative finance (q-fin), and nonlinear sciences (nlin). sefe results in a per-section embedding of each term in the vocabulary. Using the fitted embeddings, we illustrate similar words to the word intelligence. We can see that how intelligence is used varies by field: in computer science the most similar words include artificial and ai; in finance, similar words include abilities and consciousness.", "In more detail, embedding methods posit two representation vectors for each term in the vocabulary; an embedding vector and a context vector. (We use the language of text for concreteness; as we mentioned, efe extend to other types of data.) The idea is that the conditional probability of each observed word depends on the interaction between the embedding vector and the context vectors of the surrounding words. In sefe, we posit a separate set of embedding vectors for each group but a shared set of context vectors; this ensures that the embedding vectors are in the same space.", "We propose two methods to share statistical strength among the embedding vectors. The first approach is based on hierarchical modeling BIBREF13 , which assumes that the group-specific embedding representations are tied through a global embedding. The second approach is based on amortization BIBREF14 , BIBREF15 , which considers that the individual embeddings are the output of a deterministic function of a global embedding representation. We use stochastic optimization to fit large data sets.", "Our work relates closely to two threads of research in the embedding literature. One is embedding methods that study how language evolves over time BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Time can be thought of as a type of “group”, though with evolutionary structure that we do not consider.
The second thread is multilingual embeddings BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 ; our approach is different in that most words appear in all groups and we are interested in the variations of the embeddings across those groups.", "Our contributions are thus as follows. We introduce the sefe model, extending efe to grouped data. We present two techniques to share statistical strength among the embedding vectors, one based on hierarchical modeling and one based on amortization. We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood. Qualitatively, we demonstrate how sefe discovers which words are used most differently across U.S. states and political parties, and show how word usage changes in different ArXiv disciplines." ], [ "In this section, we develop sefe, a model that builds on efe BIBREF10 to capture semantic variations across groups of data. In embedding models, we represent each object (e.g., a word in text, or an item in shopping data) using two sets of vectors, an embedding vector and a context vector. In this paper, we are interested in how the embeddings vary across groups of data, and for each object we want to learn a separate embedding vector for each group. Having a separate embedding for each group allows us to study how the usage of a word like 1.10intelligence varies across categories of the ArXiv, or which words are used most differently by U.S. Senators depending on which state they are from and whether they are Democrats or Republicans.", "The sefe model extends efe to grouped data, by having the embedding vectors be specific for each group, while sharing the context vectors across all groups. We review the efe model in Section SECREF4 . We then formalize the idea of sharing the context vectors in Section SECREF8 , where we present two approaches to build a hierarchical structure over the group-specific embeddings." ], [ "In exponential family embeddings, we have a collection of objects, and our goal is to learn a vector representation of these objects based on their co-occurrence patterns.", "Let us consider a dataset represented as a (typically sparse) matrix INLINEFORM0 , where columns are datapoints and rows are objects. For example, in text, each column corresponds to a location in the text, and each entry INLINEFORM1 is a binary variable that indicates whether word INLINEFORM2 appears at location INLINEFORM3 .", "In efe, we represent each object INLINEFORM0 with two sets of vectors, embeddings vectors INLINEFORM1 and context vectors INLINEFORM2 , and we posit a probability distribution of data entries INLINEFORM3 in which these vectors interact. The definition of the efe model requires three ingredients: a context, a conditional exponential family, and a parameter sharing structure. We next describe these three components.", "Exponential family embeddings learn the vector representation of objects based on the conditional probability of each observation, conditioned on the observations in its context. The context INLINEFORM0 gives the indices of the observations that appear in the conditional probability distribution of INLINEFORM1 . The definition of the context varies across applications. 
In text, it corresponds to the set of words in a fixed-size window centered at location INLINEFORM2 .", "Given the context INLINEFORM0 and the corresponding observations INLINEFORM1 indexed by INLINEFORM2 , the distribution for INLINEFORM3 is in the exponential family, DISPLAYFORM0 ", " with sufficient statistics INLINEFORM0 and natural parameter INLINEFORM1 . The parameter vectors interact in the conditional probability distributions of each observation INLINEFORM2 as follows. The embedding vectors INLINEFORM3 and the context vectors INLINEFORM4 are combined to form the natural parameter, DISPLAYFORM0 ", " where INLINEFORM0 is the link function. Exponential family embeddings can be understood as a bank of glm. The context vectors are combined to give the covariates, and the “regression coefficients” are the embedding vectors. In Eq. EQREF6 , the link function INLINEFORM1 plays the same role as in glm and is a modeling choice. We use the identity link function.", "The third ingredient of the efe model is the parameter sharing structure, which indicates how the embedding vectors are shared across observations. In the standard efe model, we use INLINEFORM0 and INLINEFORM1 for all columns of INLINEFORM2 . That is, each unique object INLINEFORM3 has a shared representation across all instances.", "The objective function. In efe, we maximize the objective function, which is given by the sum of the log-conditional likelihoods in Eq. EQREF5 . In addition, we add an INLINEFORM0 -regularization term (we use the notation of the log Gaussian pdf) over the embedding and context vectors, yielding DISPLAYFORM0 ", "Note that maximizing the regularized conditional likelihood is not equivalent to maximum a posteriori. Rather, it is similar to maximization of the pseudo-likelihood in conditionally specified models BIBREF26 , BIBREF10 ." ], [ "Here, we describe the sefe model for grouped data. In text, some examples of grouped data are Congressional speeches grouped into political parties or scientific documents grouped by discipline. Our goal is to learn group-specific embeddings from data partitioned into INLINEFORM0 groups, i.e., each instance INLINEFORM1 is associated with a group INLINEFORM2 . The sefe model extends efe to learn a separate set of embedding vectors for each group.", "To build the sefe model, we impose a particular parameter sharing structure over the set of embedding and context vectors. We posit a structured model in which the context vectors are shared across groups, i.e., INLINEFORM0 (as in the standard efe model), but the embedding vectors are only shared at the group level, i.e., for an observation INLINEFORM1 belonging to group INLINEFORM2 , INLINEFORM3 . Here, INLINEFORM4 denotes the embedding vector corresponding to group INLINEFORM5 . We show a graphical representation of the sefe in Figure FIGREF1 .", "Sharing the context vectors INLINEFORM0 has two advantages. First, the shared structure reduces the number of parameters, while the resulting sefe model is still flexible to capture how differently words are used across different groups, as INLINEFORM1 is allowed to vary. Second, it has the important effect of uniting all embedding parameters in the same space, as the group-specific vectors INLINEFORM4 need to agree with the components of INLINEFORM5 . 
While one could learn a separate embedding model for each group, as has been done for text grouped into time slices BIBREF16 , BIBREF17 , BIBREF18 , this approach would require ad-hoc postprocessing steps to align the embeddings.", "When there are INLINEFORM0 groups, the sefe model has INLINEFORM1 times as many embedding vectors than the standard embedding model. This may complicate inferences about the group-specific vectors, especially for groups with less data. Additionally, an object INLINEFORM2 may appear with very low frequency in a particular group. Thus, the naïve approach for building the sefe model without additional structure may be detrimental for the quality of the embeddings, especially for small-sized groups. To address this problem, we propose two different methods to tie the individual INLINEFORM3 together, sharing statistical strength among them. The first approach consists in a hierarchical embedding structure. The second approach is based on amortization. In both methods, we introduce a set of global embedding vectors INLINEFORM4 , and impose a particular structure to generate INLINEFORM5 from INLINEFORM6 .", "Hierarchical embedding structure. Here, we impose a hierarchical structure that allows sharing statistical strength among the per-group variables. For that, we assume that each INLINEFORM0 , where INLINEFORM1 is a fixed hyperparameter. Thus, we replace the efe objective function in Eq. EQREF7 with DISPLAYFORM0 ", "where the INLINEFORM0 -regularization term now applies only on INLINEFORM1 and the global vectors INLINEFORM2 .", "Fitting the hierarchical model involves maximizing Eq. EQREF11 with respect to INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 . We note that we have not reduced the number of parameters to be inferred; rather, we tie them together through a common prior distribution. We use stochastic gradient ascent to maximize Eq. EQREF11 .", "Amortization. The idea of amortization has been applied in the literature to develop amortized inference algorithms BIBREF14 , BIBREF15 . The main insight behind amortization is to reuse inferences about past experiences when presented with a new task, leveraging the accumulated knowledge to quickly solve the new problem. Here, we use amortization to control the number of parameters of the sefe model. In particular, we set the per-group embeddings INLINEFORM0 to be the output of a deterministic function of the global embedding vectors, INLINEFORM1 . We use a different function INLINEFORM2 for each group INLINEFORM3 , and we parameterize them using neural networks, similarly to other works on amortized inference BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 . Unlike standard uses of amortized inference, in sefe the input to the functions INLINEFORM4 is unobserved and must be estimated together with the parameters of the functions INLINEFORM5 .", "Depending on the architecture of the neural networks, the amortization can significantly reduce the number of parameters in the model (as compared to the non-amortized model), while still having the flexibility to model different embedding vectors for each group. The number of parameters in the sefe model is INLINEFORM0 , where INLINEFORM1 is the number of groups, INLINEFORM2 is the dimensionality of the embedding vectors, and INLINEFORM3 is the number of objects (e.g., the vocabulary size). With amortization, we reduce the number of parameters to INLINEFORM4 , where INLINEFORM5 is the number of parameters of the neural network. 
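As a concrete illustration of the amortized construction, the sketch below maps a global embedding vector through a small one-hidden-layer network to obtain a group-specific vector, in either a feed-forward or a residual form; the precise architectures and nonlinearity are specified in the following paragraphs, and all shapes and names here are illustrative assumptions rather than the released code.

```python
import numpy as np

rng = np.random.default_rng(1)

D, H = 50, 25                    # embedding dimension, hidden units

def init_group_network():
    """One hidden layer of H units: two weight matrices per group (biases omitted)."""
    return {"W1": rng.normal(scale=0.1, size=(H, D)),
            "W2": rng.normal(scale=0.1, size=(D, H))}

def feed_forward(rho_global, params):
    """Feed-forward amortization: the group vector is a nonlinear transform of the global one."""
    return params["W2"] @ np.tanh(params["W1"] @ rho_global)

def residual(rho_global, params):
    """Residual amortization: the network models only the deviation from the global vector."""
    return rho_global + params["W2"] @ np.tanh(params["W1"] @ rho_global)

groups = {s: init_group_network() for s in range(3)}
rho_v = rng.normal(size=D)       # global embedding of one vocabulary item

# With all weights at zero, the residual variant returns the global vector unchanged.
zero = {"W1": np.zeros((H, D)), "W2": np.zeros((D, H))}
assert np.allclose(residual(rho_v, zero), rho_v)

group_vectors = {s: residual(rho_v, p) for s, p in groups.items()}
```

The zero-weight check above matches the behaviour of the two architectures contrasted below.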
Since typically INLINEFORM6 , this corresponds to a significant reduction in the number of parameters, even when INLINEFORM7 scales linearly with INLINEFORM8 .", "In the amortized sefe model, we need to introduce a new set of parameters INLINEFORM0 for each group INLINEFORM1 , corresponding to the neural network parameters. Given these, the group-specific embedding vectors INLINEFORM2 are obtained as DISPLAYFORM0 ", " We compare two architectures for the function INLINEFORM0 : fully connected feed-forward neural networks and residual networks BIBREF32 . For both, we consider one hidden layer with INLINEFORM1 units. Hence, the network parameters INLINEFORM2 are two weight matrices, DISPLAYFORM0 ", " i.e., INLINEFORM0 parameters. The neural network takes as input the global embedding vector INLINEFORM1 , and it outputs the group-specific embedding vectors INLINEFORM2 . The mathematical expression for INLINEFORM3 for a feed-forward neural network and a residual network is respectively given by DISPLAYFORM0 ", " where we have considered the hyperbolic tangent nonlinearity. The main difference between both network architectures is that the residual network focuses on modeling how the group-specific embedding vectors INLINEFORM0 differ from the global vectors INLINEFORM1 . That is, if all weights were set to 0, the feed-forward network would output 0, while the residual network would output the global vector INLINEFORM2 for all groups.", "The objective function under amortization is given by DISPLAYFORM0 ", "We maximize this objective with respect to INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 using stochastic gradient ascent. We implement the hierarchical and amortized sefe models in TensorFlow BIBREF33 , which allows us to leverage automatic differentiation.", "Example: structured Bernoulli embeddings for grouped text data. Here, we consider a set of documents broken down into groups, such as political affiliations or scientific disciplines. We can represent the data as a binary matrix INLINEFORM0 and a set of group indicators INLINEFORM1 . Since only one word can appear in a certain position INLINEFORM2 , the matrix INLINEFORM3 contains one non-zero element per column. In embedding models, we ignore this one-hot constraint for computational efficiency, and consider that the observations are generated following a set of conditional Bernoulli distributions BIBREF2 , BIBREF10 . Given that most of the entries in INLINEFORM4 are zero, embedding models typically downweigh the contribution of the zeros to the objective function. BIBREF2 use negative sampling, which consists in randomly choosing a subset of the zero observations. This corresponds to a biased estimate of the gradient in a Bernoulli exponential family embedding model BIBREF10 .", "The context INLINEFORM0 is given at each position INLINEFORM1 by the set of surrounding words in the document, according to a fixed-size window.", "Example: structured Poisson embeddings for grouped shopping data. efe and sefe extend to applications beyond text and we use sefe to model supermarket purchases broken down by month. For each market basket INLINEFORM0 , we have access to the month INLINEFORM1 in which that shopping trip happened. Now, the rows of the data matrix INLINEFORM2 index items, while columns index shopping trips. Each element INLINEFORM3 denotes the number of units of item INLINEFORM4 purchased at trip INLINEFORM5 . Unlike text, each column of INLINEFORM6 may contain more than one non-zero element. 
The context INLINEFORM7 corresponds to the set of items purchased in trip INLINEFORM8 , excluding INLINEFORM9 .", "In this case, we use the Poisson conditional distribution, which is more appropriate for count data. In Poisson sefe, we also downweigh the contribution of the zeros in the objective function, which provides better results because it allows the inference to focus on the positive signal of the actual purchases BIBREF10 , BIBREF2 ." ], [ "In this section, we describe the experimental study. We fit the sefe model on three datasets and compare it against the efe BIBREF10 . Our quantitative results show that sharing the context vectors provides better results, and that amortization and hierarchical structure give further improvements.", "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "ArXiv papers: This dataset contains the abstracts of papers published on the ArXiv under the 19 different tags between April 2007 and June 2015. We treat each tag as a group and fit sefe with the goal of uncovering which words have the strongest shift in usage. We split the abstracts into training, validation, and test sets, with proportions of INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.", "Senate speeches: This dataset contains U.S. Senate speeches from 1994 to mid 2009. In contrast to the ArXiv collection, it is a transcript of spoken language. We group the data into state of origin of the speaker and his or her party affiliation. Only affiliations with the Republican and Democratic Party are considered. As a result, there are 83 groups (Republicans from Alabama, Democrats from Alabama, Republicans from Arkansas, etc.). Some of the state/party combinations are not available in the data, as some of the 50 states have only had Senators with the same party affiliation. We split the speeches into training ( INLINEFORM0 ), validation ( INLINEFORM1 ), and testing ( INLINEFORM2 ).", "Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "For the text corpora, we fix the vocabulary to the 15k most frequent terms and remove all words that are not in the vocabulary. Following BIBREF2 , we additionally remove each word with probability INLINEFORM0 , where INLINEFORM1 is the word frequency. This downsamples especially the frequent words and speeds up training. (Sizes reported in Table TABREF17 are the number of words remaining after preprocessing.)", "Models. Our goal is to fit the sefe model on these datasets. For the text data, we use the Bernoulli distribution as the conditional exponential family, while for the shopping data we use the Poisson distribution, which is more appropriate for count data.", "On each dataset, we compare four approaches based on sefe with two efe BIBREF10 baselines. All are fit using sgd BIBREF34 . 
In particular, we compare the following methods:" ] ], "section_name": [ "Introduction", "Model Description", "Background: Exponential Family Embeddings", "Structured Exponential Family Embeddings", "Empirical Study" ] }
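Since these methods are compared in terms of held-out log-likelihood, the following schematic sketches one plausible way such a quantity could be accumulated for a fitted grouped Bernoulli model; the treatment of zero entries (the number of negatives and their weight) is an assumption made for illustration and is not taken from the study.

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def heldout_pseudo_loglik(positions, rho, alpha, n_negatives=10, zero_weight=0.1, rng=None):
    """Accumulate Bernoulli log-likelihood over held-out positions.

    `positions` is an iterable of (word, context_words, group) triples; each observed
    word is scored with x = 1, and a few randomly drawn words are scored with x = 0,
    downweighted in the spirit of the training objective."""
    if rng is None:
        rng = np.random.default_rng(0)
    S, V, D = rho.shape
    total = 0.0
    for word, context_words, group in positions:
        covariates = alpha[context_words].sum(axis=0)
        eta_pos = rho[group, word] @ covariates
        total += np.log(sigmoid(eta_pos))                        # observed entry, x = 1
        negatives = rng.integers(0, V, size=n_negatives)
        eta_neg = rho[group, negatives] @ covariates
        total += zero_weight * np.log(sigmoid(-eta_neg)).sum()   # subsampled zeros, x = 0
    return total

# Illustrative call with random parameters and two held-out positions.
rng = np.random.default_rng(2)
rho = rng.normal(scale=0.1, size=(3, 1000, 50))
alpha = rng.normal(scale=0.1, size=(1000, 50))
print(heldout_pseudo_loglik([(42, np.array([7, 99]), 1),
                             (5, np.array([11, 12, 13]), 0)], rho, alpha))
```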
{ "answers": [ { "annotation_id": [ "1d75b61c6e1083c2b5d201a2aa0111eda0c84cb0", "cfe781ff996de916753fff939d1eea485284bd4b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "FLOAT SELECTED: Table 1: Group structure and size of the three corpora analyzed in Section 3." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "FLOAT SELECTED: Table 1: Group structure and size of the three corpora analyzed in Section 3." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "1faedd7606db854b1645d4ea07313b2b022074ec", "a09df34e5d09bc0fb463a622f63204860275d79f" ], "answer": [ { "evidence": [ "In this section, we describe the experimental study. We fit the sefe model on three datasets and compare it against the efe BIBREF10 . Our quantitative results show that sharing the context vectors provides better results, and that amortization and hierarchical structure give further improvements.", "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "ArXiv papers: This dataset contains the abstracts of papers published on the ArXiv under the 19 different tags between April 2007 and June 2015. We treat each tag as a group and fit sefe with the goal of uncovering which words have the strongest shift in usage. We split the abstracts into training, validation, and test sets, with proportions of INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.", "Senate speeches: This dataset contains U.S. Senate speeches from 1994 to mid 2009. In contrast to the ArXiv collection, it is a transcript of spoken language. We group the data into state of origin of the speaker and his or her party affiliation. Only affiliations with the Republican and Democratic Party are considered. 
As a result, there are 83 groups (Republicans from Alabama, Democrats from Alabama, Republicans from Arkansas, etc.). Some of the state/party combinations are not available in the data, as some of the 50 states have only had Senators with the same party affiliation. We split the speeches into training ( INLINEFORM0 ), validation ( INLINEFORM1 ), and testing ( INLINEFORM2 ).", "Grocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "For the text corpora, we fix the vocabulary to the 15k most frequent terms and remove all words that are not in the vocabulary. Following BIBREF2 , we additionally remove each word with probability INLINEFORM0 , where INLINEFORM1 is the word frequency. This downsamples especially the frequent words and speeds up training. (Sizes reported in Table TABREF17 are the number of words remaining after preprocessing.)", "Models. Our goal is to fit the sefe model on these datasets. For the text data, we use the Bernoulli distribution as the conditional exponential family, while for the shopping data we use the Poisson distribution, which is more appropriate for count data.", "On each dataset, we compare four approaches based on sefe with two efe BIBREF10 baselines. All are fit using sgd BIBREF34 . In particular, we compare the following methods:" ], "extractive_spans": [ "On each dataset, we compare four approaches based on sefe with two efe BIBREF10 baselines. All are fit using sgd BIBREF34 . In particular, we compare the following methods:" ], "free_form_answer": "", "highlighted_evidence": [ "In this section, we describe the experimental study. We fit the sefe model on three datasets and compare it against the efe BIBREF10 . Our quantitative results show that sharing the context vectors provides better results, and that amortization and hierarchical structure give further improvements.", "Data. We apply the sefe on three datasets: ArXiv papers, U.S. Senate speeches, and purchases on supermarket grocery shopping data. We describe these datasets below, and we provide a summary of the datasets in Table TABREF17 .", "ArXiv papers: This dataset contains the abstracts of papers published on the ArXiv under the 19 different tags between April 2007 and June 2015. We treat each tag as a group and fit sefe with the goal of uncovering which words have the strongest shift in usage. We split the abstracts into training, validation, and test sets, with proportions of INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.\n\nSenate speeches: This dataset contains U.S. Senate speeches from 1994 to mid 2009. In contrast to the ArXiv collection, it is a transcript of spoken language. We group the data into state of origin of the speaker and his or her party affiliation. Only affiliations with the Republican and Democratic Party are considered. As a result, there are 83 groups (Republicans from Alabama, Democrats from Alabama, Republicans from Arkansas, etc.). Some of the state/party combinations are not available in the data, as some of the 50 states have only had Senators with the same party affiliation. 
We split the speeches into training ( INLINEFORM0 ), validation ( INLINEFORM1 ), and testing ( INLINEFORM2 ).\n\nGrocery shopping data: This dataset contains the purchases of INLINEFORM0 customers. The data covers a period of 97 weeks. After removing low-frequency items, the data contains INLINEFORM1 unique items at the 1.10upc (Universal Product Code) level. We split the data into a training, test, and validation sets, with proportions of INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 , respectively. The training data contains INLINEFORM5 shopping trips and INLINEFORM6 purchases in total.", "For the text corpora, we fix the vocabulary to the 15k most frequent terms and remove all words that are not in the vocabulary. Following BIBREF2 , we additionally remove each word with probability INLINEFORM0 , where INLINEFORM1 is the word frequency. This downsamples especially the frequent words and speeds up training. (Sizes reported in Table TABREF17 are the number of words remaining after preprocessing.)", "Models. Our goal is to fit the sefe model on these datasets. For the text data, we use the Bernoulli distribution as the conditional exponential family, while for the shopping data we use the Poisson distribution, which is more appropriate for count data.", "On each dataset, we compare four approaches based on sefe with two efe BIBREF10 baselines. All are fit using sgd BIBREF34 . In particular, we compare the following methods:" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our contributions are thus as follows. We introduce the sefe model, extending efe to grouped data. We present two techniques to share statistical strength among the embedding vectors, one based on hierarchical modeling and one based on amortization. We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood. Qualitatively, we demonstrate how sefe discovers which words are used most differently across U.S. states and political parties, and show how word usage changes in different ArXiv disciplines." ], "extractive_spans": [], "free_form_answer": "Calculate test log-likelihood on the three considered datasets", "highlighted_evidence": [ "We carry out a thorough experimental study on two text databases, ArXiv papers by section and U.S. Congressional speeches by home state and political party. Using Poisson embeddings, we study market basket data from a large grocery store, grouped by season. On all three data sets, sefe outperforms efe in terms of held-out log-likelihood." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "b4766137d62a4bffe4ad893d8d4642175eb56ba5" ], "answer": [ { "evidence": [ "We propose two methods to share statistical strength among the embedding vectors. The first approach is based on hierarchical modeling BIBREF13 , which assumes that the group-specific embedding representations are tied through a global embedding. The second approach is based on amortization BIBREF14 , BIBREF15 , which considers that the individual embeddings are the output of a deterministic function of a global embedding representation. We use stochastic optimization to fit large data sets." 
], "extractive_spans": [ "the group-specific embedding representations are tied through a global embedding" ], "free_form_answer": "", "highlighted_evidence": [ "The first approach is based on hierarchical modeling BIBREF13 , which assumes that the group-specific embedding representations are tied through a global embedding." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "b2cf6a6d6dd8eec21f9d1ca58db1a60ca31c9190" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "2d00bb5489c8e80ac2b82ba92e9ab5d8ccc759e7" ], "answer": [ { "evidence": [ "Figure FIGREF1 illustrates the kind of variation that we can capture. We fit an sefe to ArXiv abstracts grouped into different sections, such as computer science (cs), quantitative finance (q-fin), and nonlinear sciences (nlin). sefe results in a per-section embedding of each term in the vocabulary. Using the fitted embeddings, we illustrate similar words to the word 1.10intelligence. We can see that how 1.10intelligence is used varies by field: in computer science the most similar words include 1.10artificial and 1.10ai; in finance, similar words include 1.10abilities and 1.10consciousness." ], "extractive_spans": [ "intelligence" ], "free_form_answer": "", "highlighted_evidence": [ "We can see that how 1.10intelligence is used varies by field: in computer science the most similar words include 1.10artificial and 1.10ai; in finance, similar words include 1.10abilities and 1.10consciousness." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "Do they evaluate on English only datasets?", "What experiments are used to demonstrate the benefits of this approach?", "What hierarchical modelling approach is used?", "How do co-purchase patterns vary across seasons?", "Which words are used differently across ArXiv?" ], "question_id": [ "40c0f97c3547232d6aa039fcb330f142668dea4b", "777217e025132ddc173cf33747ee590628a8f62f", "2dbf6fe095cd879a9bf40f110b7b72c8bdde9475", "7d483077ed7f2f504d59f4fc2f162741fa5ac23b", "de830c534c23f103288c198eb19174c76bfd38a1" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: (a) INTELLIGENCE is used differently across the ArXiv sections. Words with the closest embedding to the query are listed for 5 sections. (The embeddings were obtained by fitting an amortized S-EFE.) The method automatically orders the sections along the horizontal axis by their similarity in the usage of INTELLIGENCE. See Section 3 additional for details. (b) Graphical representation of S-EFE for data in S categories. The embedding vectors ρ(s)v are specific to each group, and the context vectors αv are shared across all categories.", "Table 1: Group structure and size of the three corpora analyzed in Section 3.", "Table 2: Test log-likelihood on the three considered datasets. S-EFE consistently achieves the highest held-out likelihood. The competing methods are the global EFE, which can not capture group variations, and the separate EFE, which cannot share information across groups.", "Table 3: List of the three most different words for different groups for the Congressional speeches. S-EFE uncovers which words are used most differently by Republican Senators (red) and Democratic Senators (blue) from different states. The complete table is in the Appendix." ], "file": [ "2-Figure1-1.png", "6-Table1-1.png", "8-Table2-1.png", "9-Table3-1.png" ] }
[ "What experiments are used to demonstrate the benefits of this approach?" ]
[ [ "1709.10367-Empirical Study-2", "1709.10367-Empirical Study-6", "1709.10367-Empirical Study-1", "1709.10367-Empirical Study-3", "1709.10367-Introduction-6", "1709.10367-Empirical Study-5", "1709.10367-Empirical Study-7", "1709.10367-Empirical Study-4", "1709.10367-Empirical Study-0" ] ]
[ "Calculate test log-likelihood on the three considered datasets" ]
276
1908.06267
Message Passing Attention Networks for Document Understanding
Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad .
{ "paragraphs": [ [ "The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14.", "The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework.", "GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, a few studies only have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Evaluation on 10 document classification datasets show that our architectures learn representations that are competitive with the state-of-the-art. Furthermore, ablation experiments shed light on the impact of various architectural choices.", "In what follows, we first provide some background about the MP framework (in sec. SECREF2), thoroughly describe and explain MPAD (sec. SECREF3), present our experimental framework (sec. SECREF4), report and interpret our results (sec. SECREF5), and provide a review of the relevant literature (sec. SECREF6)." ], [ "BIBREF13 proposed a MP framework under which many of the recently introduced GNNs can be reformulated. MP consists in an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \\in V$. At time $t+1$, a message vector $\\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\\mathcal {N}(v)$ of $v$:", "", "The new representation $\\mathbf {h}^{t+1}_v$ of $v$ is then computed by combining its current feature vector $\\mathbf {h}^{t}_v$ with the message vector $\\mathbf {m}_v^{t+1}$:", "", "Messages are passed for $T$ time steps. Each step is implemented by a different layer of the MP network. Hence, iterations correspond to network depth. The final feature vector $\\mathbf {h}_v^T$ of $v$ is based on messages propagated from all the nodes in the subtree of height $T$ rooted at $v$. It captures both the topology of the neighborhood of $v$ and the distribution of the vertex representations in it.", "If a graph-level feature vector is needed, e.g., for classification or regression, a READOUT pooling function, that must be invariant to permutations, is applied:", "", "Next, we present the MP network we developed for document understanding." ], [ "We represent a document as a statistical word co-occurrence network BIBREF18, BIBREF19 with a sliding window of size 2 overspanning sentences. Let us denote that graph $G(V,E)$. 
Each unique word in the preprocessed document is represented by a node in $G$, and an edge is added between two nodes if they are found together in at least one instantiation of the window. $G$ is directed and weighted: edge directions and weights respectively capture text flow and co-occurrence counts.", "$G$ is a compact representation of its document. In $G$, immediate neighbors are consecutive words in the same sentence. That is, paths of length 2 correspond to bigrams. Paths of length more than 2 can correspond either to traditional $n$-grams or to relaxed $n$-grams, that is, words that never appear in the same sentence but co-occur with the same word(s). Such nodes are linked through common neighbors.", "Master node. Inspired by BIBREF3, our $G$ also includes a special document node, linked to all other nodes via unit weight bi-directional edges. In what follows, let us denote by $n$ the number of nodes in $G$, including the master node." ], [ "We formulate our AGGREGATE function as:", "", "where $\\mathbf {H}^t \\in \\mathbb {R}^{n \\times d}$ contains node features ($d$ is a hyperparameter), and $\\mathbf {A} \\in \\mathbb {R}^{n \\times n}$ is the adjacency matrix of $G$. Since $G$ is directed, $\\mathbf {A}$ is asymmetric. Also, $\\mathbf {A}$ has zero diagonal as we choose not to consider the feature of the node itself, only that of its incoming neighbors, when updating its representation. Since $G$ is weighted, the $i^{th}$ row of $A$ contains the weights of the edges incoming on node $v_i$. $\\mathbf {D} \\in \\mathbb {R}^{n \\times n}$ is the diagonal in-degree matrix of $G$. MLP denotes a multi-layer perceptron, and $\\mathbf {M}^{t+1} \\in \\mathbb {R}^{n \\times d}$ is the message matrix.", "The use of a MLP was motivated by the observation that for graph classification, MP neural nets with 1-layer perceptrons are inferior to their MLP counterparts BIBREF14. Indeed, 1-layer perceptrons are not universal approximators of multiset functions. Note that like in BIBREF14, we use a different MLP at each layer.", "Renormalization. The rows of $\\mathbf {D}^{-1}\\mathbf {A}$ sum to 1. This is equivalent to the renormalization trick of BIBREF9, but using only the in-degrees. That is, instead of computing a weighted sum of the incoming neighbors' feature vectors, we compute a weighted average of them. The coefficients are proportional to the strength of co-occurrence between words. One should note that by averaging, we lose the ability to distinguish between different neighborhood structures in some special cases, that is, we lose injectivity. Such cases include neighborhoods in which all nodes have the same representations, and neighborhoods of different sizes containing various representations in equal proportions BIBREF14. As suggested by the results of an ablation experiment, averaging is better than summing in our application (see subsection SECREF30). Note that instead of simply summing/averaging, we also tried using GAT-like attention BIBREF11 in early experiments, without obtaining better results.", "As far as our COMBINE function, we use the Gated Recurrent Unit BIBREF20, BIBREF21:", "", "Omitting biases for readability, we have:", "", "where the $\\mathbf {W}$ and $\\mathbf {U}$ matrices are trainable weight matrices not shared across time steps, $\\sigma (\\mathbf {x}) = 1/(1+\\exp (-\\mathbf {x}))$ is the sigmoid function, and $\\mathbf {R}$ and $\\mathbf {Z}$ are the parameters of the reset and update gates. 
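A minimal PyTorch sketch of one such message passing iteration is shown below. It is not the released MPAD implementation: the GRUCell serves as a stand-in for the gated update written out above, and its exact gate parameterization may differ slightly from those equations.

```python
import torch
import torch.nn as nn

class MPStep(nn.Module):
    """One message passing iteration: an MLP over the weighted average of incoming-neighbor
    features, followed by a gated (GRU-style) update of the node representations."""

    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.gru = nn.GRUCell(d, d)   # stand-in for the gated combine described above

    def forward(self, A, H):
        # A: (n, n) weighted adjacency with zero diagonal, row i = weights of edges into node i.
        # H: (n, d) current node features.
        in_degree = A.sum(dim=1, keepdim=True).clamp(min=1e-8)
        M = self.mlp(A @ H / in_degree)   # messages: weighted average of incoming neighbors
        return self.gru(M, H)             # combine messages with the previous features

# Toy example: 4 nodes (the last one playing the role of the master node), d = 8.
n, d = 4, 8
A = torch.tensor([[0., 1., 0., 1.],
                  [2., 0., 1., 1.],
                  [0., 1., 0., 1.],
                  [1., 1., 1., 0.]])
H = torch.randn(n, d)
H_next = MPStep(d)(A, H)
print(H_next.shape)   # torch.Size([4, 8])
```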
The reset gate controls the amount of information from the previous time step (in $\\mathbf {H}^t$) that should propagate to the candidate representations, $\\tilde{\\mathbf {H}}^{t+1}$. The new representations $\\mathbf {H}^{t+1}$ are finally obtained by linearly interpolating between the previous and the candidate ones, using the coefficients returned by the update gate.", "Interpretation. Updating node representations through a GRU should in principle allow nodes to encode a combination of local and global signals (low and high values of $t$, resp.), by allowing them to remember about past iterations. In addition, we also explicitly consider node representations at all iterations when reading out (see Eq. DISPLAY_FORM18).", "" ], [ "After passing messages and performing updates for $T$ iterations, we obtain a matrix $\\mathbf {H}^T \\in \\mathbb {R}^{n \\times d}$ containing the final vertex representations. Let $\\hat{G}$ be graph $G$ without the special document node, and matrix $\\mathbf {\\hat{H}}^T \\in \\mathbb {R}^{(n-1) \\times d}$ be the corresponding representation matrix (i.e., $\\mathbf {H}^T$ without the row of the document node).", "We use as our READOUT function the concatenation of self-attention applied to $\\mathbf {\\hat{H}}^T$ with the final document node representation. More precisely, we apply a global self-attention mechanism BIBREF22 to the rows of $\\mathbf {\\hat{H}}^T$. As shown in Eq. DISPLAY_FORM17, $\\mathbf {\\hat{H}}^T$ is first passed to a dense layer parameterized by matrix $\\mathbf {W}_A^T \\in \\mathbb {R}^{d \\times d}$. An alignment vector $\\mathbf {a}$ is then derived by comparing, via dot products, the rows of the output of the dense layer $\\mathbf {Y}^T \\in \\mathbb {R}^{(n-1) \\times d}$ with a trainable vector $\\mathbf {v}^T \\in \\mathbb {R}^d$ (initialized randomly) and normalizing with a softmax. The normalized alignment coefficients are finally used to compute the attentional vector $\\mathbf {u}^T \\in \\mathbb {R}^d$ as a weighted sum of the final representations $\\mathbf {\\hat{H}}^T$.", "", "", "Note that we tried with multiple context vectors, i.e., with a matrix $\\mathbf {V}^T$ instead of a vector $\\mathbf {v}^T$, like in BIBREF22, but results were not convincing, even when adding a regularization term to the loss to favor diversity among the rows of $\\mathbf {V}^T$.", "Master node skip connection. $\\mathbf {h}_G^T \\in \\mathbb {R}^{2d}$ is obtained by concatenating $\\mathbf {u}^T$ and the final master node representation. That is, the master node vector bypasses the attention mechanism. This is equivalent to a skip or shortcut connection BIBREF23. The reason behind this choice is that we expect the special document node to learn a high-level summary about the document, such as its size, vocabulary, etc. (more details are given in subsection SECREF30). Therefore, by making the master node bypass the attention layer, we directly inject global information about the document into its final representation.", "Multi-readout. BIBREF14, inspired by Jumping Knowledge Networks BIBREF12, recommend to not only use the final representations when performing readout, but also that of the earlier steps. Indeed, as one iterates, node features capture more and more global information. However, retaining more local, intermediary information might be useful too. 
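To illustrate the readout just described, the following PyTorch sketch applies self-attention to the word-node features produced at a single step and concatenates the attentional summary with the master node representation, which bypasses the attention; the extension to all time steps is described next. Variable names and layer details are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class AttentionReadout(nn.Module):
    """Self-attention readout over word-node features, concatenated with the
    master (document) node representation via a skip connection."""

    def __init__(self, d):
        super().__init__()
        self.dense = nn.Linear(d, d)
        self.v = nn.Parameter(torch.randn(d))        # trainable context vector

    def forward(self, H, master_index=-1):
        # H: (n, d) node features; the master node bypasses the attention mechanism.
        mask = torch.ones(H.size(0), dtype=torch.bool)
        mask[master_index] = False
        H_words = H[mask]                             # word nodes only
        scores = self.dense(H_words) @ self.v         # dot products with the context vector
        a = torch.softmax(scores, dim=0)              # normalized alignment coefficients
        u = a @ H_words                               # attentional summary of the word nodes
        return torch.cat([u, H[master_index]])        # (2d,) graph representation

d = 8
H = torch.randn(5, d)                                 # 4 word nodes + 1 master node
print(AttentionReadout(d)(H).shape)                   # torch.Size([16])
```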
Thus, instead of applying the readout function only to $t=T$, we apply it to all time steps and concatenate the results, finally obtaining $\\mathbf {h}_G \\in \\mathbb {R}^{T \\times 2d}$ :", "", "In effect, with this modification, we take into account features based on information aggregated from subtrees of different heights (from 1 to $T$), corresponding to local and global features." ], [ "Through the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\\rightarrow $ bigrams $\\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described.", "MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention.", "MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained.", "MPAD-path. This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document." ], [ "We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.", "(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes).", "(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.", "(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.", "(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).", "(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.", "(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.", "(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.", "(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. 
They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).", "(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.", "(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge." ], [ "We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants.", "doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier.", "CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions).", "DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax.", "Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees.", "DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees.", "LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations.", "C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM.", "SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents.", "WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used.", "S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance.", "Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space.", "LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector.", "HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level." ], [ "We preprocess all datasets using the code of BIBREF38. On Yelp2013, we also replace all tokens appearing strictly less than 6 times with a special UNK token, like in BIBREF27. We then build a directed word co-occurrence network from each document, with a window of size 2.", "We use two MP iterations ($T$=2) for the basic MPAD, and two MP iterations at each level, for the hierarchical variants. We set $d$ to 64, except on IMDB and Yelp on which $d=128$, and use a two-layer MLP. The final graph representations are passed through a softmax for classification. We train MPAD in an end-to-end fashion by minimizing the cross-entropy loss function with the Adam optimizer BIBREF48 and an initial learning rate of 0.001.", "To regulate potential differences in magnitude, we apply batch normalization after concatenating the feature vector of the master node with the self-attentional vector, that is, after the skip connection (see subsection SECREF16). 
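A compact sketch of this classification head and optimization setup is given below, using the d = 64 and two message passing iterations reported above; the number of classes, batch size, and all variable names are illustrative assumptions rather than the released implementation, and further regularization details follow in the text.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Batch-normalize the concatenated graph representation and map it to class logits."""

    def __init__(self, repr_dim, n_classes):
        super().__init__()
        self.bn = nn.BatchNorm1d(repr_dim)   # applied after the skip connection, as described above
        self.out = nn.Linear(repr_dim, n_classes)

    def forward(self, h_graph):
        return self.out(self.bn(h_graph))    # the softmax is folded into the cross-entropy loss

repr_dim = 2 * 2 * 64                         # T = 2 readout steps, each of size 2d with d = 64
head = ClassificationHead(repr_dim, n_classes=10)
optimizer = torch.optim.Adam(head.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

h_graph = torch.randn(32, repr_dim)           # a batch of 32 document representations
labels = torch.randint(0, 10, (32,))
loss = criterion(head(h_graph), labels)       # cross-entropy on the logits
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```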
To prevent overfitting, we use dropout BIBREF49 with a rate of 0.5. We select the best epoch, capped at 200, based on the validation accuracy. When cross-validation is used (see 3rd column of Table TABREF21), we construct a validation set by randomly sampling 10% of the training set of each fold.", "On all datasets except Yelp2013, we use the publicly available 300-dimensional pre-trained Google News vectors ($D$=300) BIBREF50 to initialize the node representations $\\mathbf {H}^0$. On Yelp2013, we follow BIBREF27 and learn our own word vectors from the training and validation sets with the gensim implementation of word2vec BIBREF51.", "MPAD was implemented in Python 3.6 using the PyTorch library BIBREF52. All experiments were run on a single machine consisting of a 3.4 GHz Intel Core i7 CPU with 16 GB of RAM and an NVidia GeForce Titan Xp GPU." ], [ "Experimental results are shown in Table TABREF28. For the baselines, the best scores reported in each original paper are shown. MPAD reaches best performance on 7 out of 10 datasets, and is close second elsewhere. Moreover, the 7 datasets on which MPAD ranks first widely differ in training set size, number of categories, and prediction task (topic, sentiment, subjectivity), which indicates that MPAD can perform well in different settings.", "MPAD vs. hierarchical variants. On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture, highlighting the benefit of explicitly modeling the hierarchical nature of documents.", "However, on Subjectivity, standard MPAD outperforms all hierarchical variants. On TREC, it reaches the same accuracy. We hypothesize that in some cases, using a different graph to separately encode each sentence might be worse than using one single graph to directly encode the document. Indeed, in the single document graph, some words that never appear in the same sentence can be connected through common neighbors, as was explained in subsection SECREF7. So, this way, some notion of cross-sentence context is captured while learning representations of words, bigrams, etc. at each MP iteration. This creates better informed representations, resulting in a better document embedding. With the hierarchical variants, on the other hand, each sentence vector is produced in isolation, without any contextual information about the other sentences in the document. Therefore, the final sentence embeddings might be of lower quality, and as a group might also contain redundant/repeated information. When the sentence vectors are finally combined into a document representation, it is too late to take context into account." ], [ "To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). 
These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.", "Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation.", "No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document.", "No renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\\mathbf {M}^{t+1} = \\textsc {MLP}^{t+1}\\big (\\mathbf {A}\\mathbf {H}^{t}\\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using sum instead of mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important that capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents.", "Neighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\\mathbf {H}^{t+1}$=$\\mathbf {M}^{t+1}$. Since $\\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood." ], [ "In what follows, we offer a brief review of relevant studies, ranked by increasing order of similarity with our work.", "BIBREF9, BIBREF54, BIBREF11, BIBREF10 conduct some node classification experiments on citation networks, where nodes are scientific papers, i.e., textual data. However, text is only used to derive node feature vectors. The external graph structure, which plays a central role in determining node labels, is completely unrelated to text.", "On the other hand, BIBREF55, BIBREF7 experiment on traditional document classification tasks. They both build $k$-nearest neighbor similarity graphs based on the Gaussian diffusion kernel. 
More precisely, BIBREF55 build one single graph where nodes are documents and distance is computed in the BoW space. Node features are then used for classification. Closer to our work, BIBREF7 represent each document as a graph. All document graphs are derived from the same underlying structure. Only node features, corresponding to the entries of the documents' BoW vectors, vary. The underlying, shared structure is that of a $k$-NN graph where nodes are vocabulary terms and similarity is the cosine of the word embedding vectors. BIBREF7 then perform graph classification. However they found performance to be lower than that of a naive Bayes classifier.", "BIBREF56 use a GNN for hierarchical classification into a large taxonomy of topics. This task differs from traditional document classification. The authors represent documents as unweighted, undirected word co-occurrence networks with word embeddings as node features. They then use the spatial GNN of BIBREF15 to perform graph classification.", "The work closest to ours is probably that of BIBREF53. The authors adopt the semi-supervised node classification approach of BIBREF9. They build one single undirected graph from the entire dataset, with both word and document nodes. Document-word edges are weighted by TF-IDF and word-word edges are weighted by pointwise mutual information derived from co-occurrence within a sliding window. There are no document-document edges. The GNN is trained based on the cross-entropy loss computed only for the labeled nodes, that is, the documents in the training set. When the final node representations are obtained, one can use that of the test documents to classify them and evaluate prediction performance.", "There are significant differences between BIBREF53 and our work. First, our approach is inductive, not transductive. Indeed, while the node classification approach of BIBREF53 requires all test documents at training time, our graph classification model is able to perform inference on new, never-seen documents. The downside of representing documents as separate graphs, however, is that we lose the ability to capture corpus-level dependencies. Also, our directed graphs capture word ordering, which is ignored by BIBREF53. Finally, the approach of BIBREF53 requires computing the PMI for every word pair in the vocabulary, which may be prohibitive on datasets with very large vocabularies. On the other hand, the complexity of MPAD does not depend on vocabulary size." ], [ "We have proposed an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). Experiments conducted on 10 standard text classification datasets show that our architecture is competitive with the state-of-the-art. By processing weighted, directed word co-occurrence networks, MPAD is sensitive to word order and word-word relationship strength. To explicitly capture the hierarchical structure of documents, we also propose three hierarchical variants of MPAD, that we show bring improvements over the vanilla architecture." ], [ "We thank the NVidia corporation for the donation of a GPU as part of their GPU grant program." 
] ], "section_name": [ "Introduction", "Message Passing Neural Networks", "Message Passing Attention network for Document understanding (MPAD) ::: Word co-occurrence networks", "Message Passing Attention network for Document understanding (MPAD) ::: Message passing", "Message Passing Attention network for Document understanding (MPAD) ::: Readout", "Message Passing Attention network for Document understanding (MPAD) ::: Hierarchical variants of MPAD", "Experiments ::: Datasets", "Experiments ::: Baselines", "Experiments ::: Model configuration and training", "Results and ablations ::: Results", "Results and ablations ::: Ablation studies", "Related work", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "50da4230eb9c55d46901ff29300a0120dc4e817c" ], "answer": [ { "evidence": [ "Results and ablations ::: Ablation studies", "To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2).", "Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation." ], "extractive_spans": [], "free_form_answer": "Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets.", "highlighted_evidence": [ "Results and ablations ::: Ablation studies\nTo understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2).", "Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "22c8a53d34cbbe504b70eaece340fc99d3fbe988", "50eb56a9e5f2d7e4bf5451289d7f1c174cf1d191" ], "answer": [ { "evidence": [ "Results and ablations ::: Ablation studies", "To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.", "Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. 
More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2)." ], "extractive_spans": [], "free_form_answer": "Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 iterations", "highlighted_evidence": [ "Results and ablations ::: Ablation studies\nTo understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.\n\nNumber of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary.", "FLOAT SELECTED: Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document." ], "extractive_spans": [ "Removing the master node deteriorates performance across all datasets" ], "free_form_answer": "", "highlighted_evidence": [ "No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "b437b7810b84d2ef1546eb5de88e973f65177bea" ], "answer": [ { "evidence": [ "Experiments ::: Baselines", "We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants.", "doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier.", "CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions).", "DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax.", "Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees.", "DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees.", "LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations.", "C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM.", "SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents.", "WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used.", "S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance.", "Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space.", "LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector.", "HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level." ], "extractive_spans": [ "doc2vec ", "CNN", "DAN", "Tree-LSTM", "DRNN", "LSTMN", "C-LSTM", "SPGK", "WMD", "S-WMD", "Semantic-CNN", "LSTM-GRNN", "HN-ATT" ], "free_form_answer": "", "highlighted_evidence": [ "Experiments ::: Baselines\nWe evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants.\n\ndoc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier.\n\nCNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions).\n\nDAN BIBREF39. 
The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax.\n\nTree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees.\n\nDRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees.\n\nLSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations.\n\nC-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM.\n\nSPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents.\n\nWMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used.\n\nS-WMD BIBREF46 is a supervised extension of the Word Mover's Distance.\n\nSemantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space.\n\nLSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector.\n\nHN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "ab1d400e4b674a97342310487f7ed0a41b412136", "bd4feb32370781bd3cbe7b986abc7fbbca2a9f71" ], "answer": [ { "evidence": [ "We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.", "(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes).", "(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.", "(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.", "(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).", "(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.", "(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.", "(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.", "(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. 
They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).", "(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.", "(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge." ], "extractive_spans": [ "Reuters", " BBCSport", "Polarity", "Subjectivity", "MPQA", "IMDB", "TREC", "SST-1", "SST-2", "Yelp2013" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.\n\n(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes).\n\n(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.\n\n(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.\n\n(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).\n\n(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.\n\n(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.\n\n(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.\n\n(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).\n\n(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.\n\n(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.", "(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. 
We also removed documents belonging to more than one class and then classes left with no document (2 classes).", "(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.", "(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.", "(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).", "(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.", "(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.", "(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.", "(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).", "(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.", "(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge." ], "extractive_spans": [ " Reuters", "BBCSport BIBREF30", "Polarity BIBREF31", "Subjectivity BIBREF32", "MPQA BIBREF33", "IMDB BIBREF34", "TREC BIBREF35", "SST-1 BIBREF36", "SST-2 BIBREF36", "Yelp2013 BIBREF26" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21.\n\n(1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes).\n\n(2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles.\n\n(3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes.\n\n(4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences).\n\n(5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering.\n\n(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie.\n\n(7) TREC BIBREF35 consists of questions that are classified into 6 different categories.\n\n(8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. 
They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive).\n\n(9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative.\n\n(10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "e701afa35be0d8d633212ef831211ef56e85d080" ], "answer": [ { "evidence": [ "The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework.", "GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, a few studies only have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Evaluation on 10 document classification datasets show that our architectures learn representations that are competitive with the state-of-the-art. Furthermore, ablation experiments shed light on the impact of various architectural choices.", "The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14." ], "extractive_spans": [], "free_form_answer": "It is a framework used to describe algorithms for neural networks represented as graphs. Main idea is that that representation of each vertex is updated based on messages from its neighbors.", "highlighted_evidence": [ "The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework.\n\nGNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, a few studies only have focused on the application of the MP framework to representation learning on text. 
This paper proposes one such application.", "The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "two", "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "Which component is the least impactful?", "Which component has the greatest impact on performance?", "What is the state-of-the-art system?", "Which datasets are used?", "What is the message passing framework?" ], "question_id": [ "2858620e0498db2f2224bfbed5263432f0570832", "545e92833b0ad4ba32eac5997edecf97a366a244", "cb12c19f9d14bef7b2f778892d9071eea2d6c63d", "9193006f359c53eb937deff1248ee3317978e576", "bc67b91dd73acded2d52fd4fee732b7a9722ea8b" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Statistics of the datasets used in our experiments. CV indicates that cross-validation was used. # pretrained words refers to the number of words in the vocabulary having an entry in the Google News word vectors (except for Yelp2013).", "Table 2: Classification accuracy on the 10 datasets. Best performance per column in bold, *best MPAD variant.", "Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2)." ], "file": [ "5-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png" ] }
[ "Which component is the least impactful?", "Which component has the greatest impact on performance?", "What is the message passing framework?" ]
[ [ "1908.06267-Results and ablations ::: Ablation studies-2", "1908.06267-7-Table3-1.png", "1908.06267-Results and ablations ::: Ablation studies-0" ], [ "1908.06267-Results and ablations ::: Ablation studies-0", "1908.06267-7-Table3-1.png", "1908.06267-Results and ablations ::: Ablation studies-1", "1908.06267-Results and ablations ::: Ablation studies-3" ], [ "1908.06267-Introduction-1", "1908.06267-Introduction-2", "1908.06267-Introduction-0" ] ]
[ "Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets.", "Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 iterations", "It is a framework used to describe algorithms for neural networks represented as graphs. Main idea is that that representation of each vertex is updated based on messages from its neighbors." ]
278
1701.05574
Harnessing Cognitive Features for Sarcasm Detection
In this paper, we propose a novel mechanism for enriching the feature vector, for the task of sarcasm detection, with cognitive features extracted from eye-movement patterns of human readers. Sarcasm detection has been a challenging research problem, and its importance for NLP applications such as review summarization, dialog systems and sentiment analysis is well recognized. Sarcasm can often be traced to incongruity that becomes apparent as the full sentence unfolds. This presence of incongruity - implicit or explicit - affects the way readers' eyes move through the text. We observe the difference in the behaviour of the eye while reading sarcastic and non-sarcastic sentences. Motivated by this observation, we augment traditional linguistic and stylistic features for sarcasm detection with the cognitive features obtained from readers' eye-movement data. We perform statistical classification using the enhanced feature set so obtained. The augmented cognitive features improve sarcasm detection by 3.7% (in terms of F-score) over the performance of the best reported system.
{ "paragraphs": [ [ "Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule . Sarcasm, in speech, is multi-modal, involving tone, body-language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines.", "Sarcasm detection plays an indispensable role in applications like online review summarizers, dialog systems, recommendation systems and sentiment analyzers. This makes automatic detection of sarcasm an important problem. However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported by the survey from DBLP:journals/corr/JoshiBC16. The following discussion brings more insights into this.", "Consider a scenario where an online reviewer gives a negative opinion about a movie through sarcasm: “This is the kind of movie you see because the theater has air conditioning”. It is difficult for an automatic sentiment analyzer to assign a rating to the movie and, in the absence of any other information, such a system may not be able to comprehend that prioritizing the air-conditioning facilities of the theater over the movie experience indicates a negative sentiment towards the movie. This gives an intuition to why, for sarcasm detection, it is necessary to go beyond textual analysis.", "We aim to address this problem by exploiting the psycholinguistic side of sarcasm detection, using cognitive features extracted with the help of eye-tracking. A motivation to consider cognitive features comes from analyzing human eye-movement trajectories that supports the conjecture: Reading sarcastic texts induces distinctive eye movement patterns, compared to literal texts. The cognitive features, derived from human eye movement patterns observed during reading, include two primary feature types:", "The cognitive features, along with textual features used in best available sarcasm detectors, are used to train binary classifiers against given sarcasm labels. Our experiments show significant improvement in classification accuracy over the state of the art, by performing such augmentation." ], [ "Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works jorgensen1984test explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the word of clark1984pretense, sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. giora1995irony, on the other hand, define sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. 
ivanko2003context define sarcasm as a six tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition and study the cognitive aspects of sarcasm processing.", "Computational linguists have previously addressed this problem using rule based and statistical techniques, that make use of : (a) Unigrams and Pragmatic features BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 (b) Stylistic patterns BIBREF4 and patterns related to situational disparity BIBREF5 and (c) Hastag interpretations BIBREF6 , BIBREF7 .", "Most of the previously done work on sarcasm detection uses distant supervision based techniques (ex: leveraging hashtags) and stylistic/pragmatic features (emoticons, laughter expressions such as “lol” etc). But, detecting sarcasm in linguistically well-formed structures, in absence of explicit cues or information (like emoticons), proves to be hard using such linguistic/stylistic features alone.", "With the advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices, it has been possible to delve deep into the cognitive underpinnings of sarcasm understanding. Filik2014, using a series of eye-tracking and EEG experiments try to show that for unfamiliar ironies, the literal interpretation would be computed first. They also show that a mismatch with context would lead to a re-interpretation of the statement, as being ironic. Camblin2007103 show that in multi-sentence passages, discourse congruence has robust effects on eye movements. This also implies that disrupted processing occurs for discourse incongruent words, even though they are perfectly congruous at the sentence level. In our previous work BIBREF8 , we augment cognitive features, derived from eye-movement patterns of readers, with textual features to detect whether a human reader has realized the presence of sarcasm in text or not.", "The recent advancements in the literature discussed above, motivate us to explore gaze-based cognition for sarcasm detection. As far as we know, our work is the first of its kind." ], [ "Sarcasm often emanates from incongruity BIBREF9 , which enforces the brain to reanalyze it BIBREF10 . This, in turn, affects the way eyes move through the text. Hence, distinctive eye-movement patterns may be observed in the case of successful processing of sarcasm in text in contrast to literal texts. This hypothesis forms the crux of our method for sarcasm detection and we validate this using our previously released freely available sarcasm dataset BIBREF8 enriched with gaze information." ], [ "The database consists of 1,000 short texts, each having 10-40 words. Out of these, 350 are sarcastic and are collected as follows: (a) 103 sentences are from two popular sarcastic quote websites, (b) 76 sarcastic short movie reviews are manually extracted from the Amazon Movie Corpus BIBREF11 by two linguists. (c) 171 tweets are downloaded using the hashtag #sarcasm from Twitter. The 650 non-sarcastic texts are either downloaded from Twitter or extracted from the Amazon Movie Review corpus. The sentences do not contain words/phrases that are highly topic or culture specific. The tweets were normalized to make them linguistically well formed to avoid difficulty in interpreting social media lingo. Every sentence in our dataset carries positive or negative opinion about specific “aspects”. 
For example, the sentence “The movie is extremely well cast” has positive sentiment about the aspect “cast”.", "", "", "The annotators were seven graduate students with science and engineering background, and possess good English proficiency. They were given a set of instructions beforehand and are advised to seek clarifications before they proceed. The instructions mention the nature of the task, annotation input method, and necessity of head movement minimization during the experiment." ], [ "The task assigned to annotators was to read sentences one at a time and label them with with binary labels indicating the polarity (i.e., positive/negative). Note that, the participants were not instructed to annotate whether a sentence is sarcastic or not., to rule out the Priming Effect (i.e., if sarcasm is expected beforehand, processing incongruity becomes relatively easier BIBREF12 ). The setup ensures its “ecological validity” in two ways: (1) Readers are not given any clue that they have to treat sarcasm with special attention. This is done by setting the task to polarity annotation (instead of sarcasm detection). (2) Sarcastic sentences are mixed with non sarcastic text, which does not give prior knowledge about whether the forthcoming text will be sarcastic or not.", "The eye-tracking experiment is conducted by following the standard norms in eye-movement research BIBREF13 . At a time, one sentence is displayed to the reader along with the “aspect” with respect to which the annotation has to be provided. While reading, an SR-Research Eyelink-1000 eye-tracker (monocular remote mode, sampling rate 500Hz) records several eye-movement parameters like fixations (a long stay of gaze) and saccade (quick jumping of gaze between two positions of rest) and pupil size.", "The accuracy of polarity annotation varies between 72%-91% for sarcastic texts and 75%-91% for non-sarcastic text, showing the inherent difficulty of sentiment annotation, when sarcasm is present in the text under consideration. Annotation errors may be attributed to: (a) lack of patience/attention while reading, (b) issues related to text comprehension, and (c) confusion/indecisiveness caused due to lack of context.", "For our analysis, we do not discard the incorrect annotations present in the database. Since our system eventually aims to involve online readers for sarcasm detection, it will be hard to segregate readers who misinterpret the text. We make a rational assumption that, for a particular text, most of the readers, from a fairly large population, will be able to identify sarcasm. Under this assumption, the eye-movement parameters, averaged across all readers in our setting, may not be significantly distorted by a few readers who would have failed to identify sarcasm. This assumption is applicable for both regular and multi-instance based classifiers explained in section SECREF6 ." ], [ "We observe distinct behavior during sarcasm reading, by analyzing the “fixation duration on the text” (also referred to as “dwell time” in the literature) and “scanpaths” of the readers." ], [ "Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time BIBREF14 . Hence, fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one. 
We observe this for all participants in our dataset, with the average fixation duration per word for sarcastic texts being at least 1.5 times more than that of non-sarcastic texts. To test the statistical significance, we conduct a two-tailed t-test (assuming unequal variance) to compare the average fixation duration per word for sarcastic and non-sarcastic texts. The hypothesized mean difference is set to 0 and the error tolerance limit ( INLINEFORM0 ) is set to 0.05. The t-test analysis, presented in Table TABREF11 , shows that for all participants, a statistically significant difference exists between the average fixation duration per word for sarcasm (higher average fixation duration) and non-sarcasm (lower average fixation duration). This affirms that the presence of sarcasm affects the duration of fixation on words.", "It is important to note that longer fixations may also be caused by other linguistic subtleties (such as difficult words, ambiguity and syntactically complex structures) causing delay in comprehension, or occulomotor control problems forcing readers to spend time adjusting eye-muscles. So, an elevated average fixation duration per word may not sufficiently indicate the presence of sarcasm. But we would also like to share that, for our dataset, when we considered readability (Flesch readability ease-score BIBREF15 ), number of words in a sentence and average character per word along with the sarcasm label as the predictors of average fixation duration following a linear mixed effect model BIBREF16 , sarcasm label turned out to be the most significant predictor with a maximum slope. This indicates that average fixation duration per word has a strong connection with the text being sarcastic, at least in our dataset.", "We now analyze scanpaths to gain more insights into the sarcasm comprehension process." ], [ "Scanpaths are line-graphs that contain fixations as nodes and saccades as edges; the radii of the nodes represent the fixation duration. A scanpath corresponds to a participant's eye-movement pattern while reading a particular sentence. Figure FIGREF14 presents scanpaths of three participants for the sarcastic sentence S1 and the non-sarcastic sentence S2. The x-axis of the graph represents the sequence of words a reader reads, and the y-axis represents a temporal sequence in milliseconds.", "Consider a sarcastic text containing incongruous phrases A and B. Our qualitative scanpath-analysis reveals that scanpaths with respect to sarcasm processing have two typical characteristics. Often, a long regression - a saccade that goes to a previously visited segment - is observed when a reader starts reading B after skimming through A. In a few cases, the fixation duration on A and B are significantly higher than the average fixation duration per word. In sentence S1, we see long and multiple regressions from the two incongruous phrases “misconception” and “cherish”, and a few instances where phrases “always cherish” and “original misconception” are fixated longer than usual. Such eye-movement behaviors are not seen for S2.", "Though sarcasm induces distinctive scanpaths like the ones depicted in Figure FIGREF14 in the observed examples, presence of such patterns is not sufficient to guarantee sarcasm; such patterns may also possibly arise from literal texts. We believe that a combination of linguistic features, readability of text and features derived from scanpaths would help discriminative machine learning models learn sarcasm better." 
], [ "We describe the features used for sarcasm detection in Table . The features enlisted under lexical,implicit incongruity and explicit incongruity are borrowed from various literature (predominantly from joshi2015harnessing). These features are essential to separate sarcasm from other forms semantic incongruity in text (for example ambiguity arising from semantic ambiguity or from metaphors). Two additional textual features viz. readability and word count of the text are also taken under consideration. These features are used to reduce the effect of text hardness and text length on the eye-movement patterns." ], [ "Readers' eye-movement behavior, characterized by fixations, forward saccades, skips and regressions, can be directly quantified by simple statistical aggregation (i.e., either computing features for individual participants and then averaging or performing a multi-instance based learning as explained in section SECREF6 ). Since these eye-movement attributes relate to the cognitive process in reading BIBREF17 , we consider these as features in our model. Some of these features have been reported by sarcasmunderstandability for modeling sarcasm understandability of readers. However, as far as we know, these features are being introduced in NLP tasks like textual sarcasm detection for the first time. The values of these features are believed to increase with the increase in the degree of surprisal caused by incongruity in text (except skip count, which will decrease)." ], [ "For these features, we rely on a graph structure, namely “saliency graphs\", derived from eye-gaze information and word sequences in the text.", "For each reader and each sentence, we construct a “saliency graph”, representing the reader's attention characteristics. A saliency graph for a sentence INLINEFORM0 for a reader INLINEFORM1 , represented as INLINEFORM2 , is a graph with vertices ( INLINEFORM3 ) and edges ( INLINEFORM4 ) where each vertex INLINEFORM5 corresponds to a word in INLINEFORM6 (may not be unique) and there exists an edge INLINEFORM7 between vertices INLINEFORM8 and INLINEFORM9 if R performs at least one saccade between the words corresponding to INLINEFORM10 and INLINEFORM11 .", "Figure FIGREF15 shows an example of a saliency graph.A saliency graph may be weighted, but not necessarily connected, for a given text (as there may be words in the given text with no fixation on them). The “complex” gaze features derived from saliency graphs are also motivated by the theory of incongruity. For instance, Edge Density of a saliency graph increases with the number of distinct saccades, which could arise from the complexity caused by presence of sarcasm. Similarly, the highest weighted degree of a graph is expected to be higher, if the node corresponds to a phrase, incongruous to some other phrase in the text." ], [ "We interpret sarcasm detection as a binary classification problem. The training data constitutes 994 examples created using our eye-movement database for sarcasm detection. To check the effectiveness of our feature set, we observe the performance of multiple classification techniques on our dataset through a stratified 10-fold cross validation. We also compare the classification accuracy of our system and the best available systems proposed by riloff2013sarcasm and joshi2015harnessing on our dataset. 
Using Weka BIBREF18 and LibSVM BIBREF19 APIs, we implement the following classifiers:" ], [ "Table TABREF17 shows the classification results considering various feature combinations for different classifiers and other systems. These are:", "Unigram (with principal components of unigram feature vectors),", "Sarcasm (the feature-set reported by joshi2015harnessing subsuming unigram features and features from other reported systems)", "Gaze (the simple and complex cognitive features we introduce, along with readability and word count features), and", "Gaze+Sarcasm (the complete set of features).", "For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm related features. For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus, a multi instance “bag” of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi instance assumption to derive class-labels for each bag.", "For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall.", "To see if the improvement obtained is statistically significant over the state-of-the art system with textual sarcasm features alone, we perform McNemar test. The output of the SVM classifier using only linguistic features used for sarcasm detection by joshi2015harnessing and the output of the MILR classifier with the complete set of features are compared, setting threshold INLINEFORM0 . There was a significant difference in the classifier's accuracy with p(two-tailed) = 0.02 with an odds-ratio of 1.43, showing that the classification accuracy improvement is unlikely to be observed by chance in 95% confidence interval." ], [ "One may argue that, considering simple measures of reading effort like “reading time” as cognitive feature instead of the expensive eye-tracking features for sarcasm detection may be a cost-effective solution. To examine this, we repeated our experiments with “reading time” considered as the only cognitive feature, augmented with the textual features. The F-scores of all the classifiers turn out to be close to that of the classifiers considering sarcasm feature alone and the difference in the improvement is not statistically significant ( INLINEFORM0 ). One the other hand, F-scores with gaze features are superior to the F-scores when reading time is considered as a cognitive feature." ], [ "We examine the effectiveness of cognitive features on the classification accuracy by varying the input training data size. To examine this, we create a stratified (keeping the class ratio constant) random train-test split of 80%:20%. We train our classifier with 100%, 90%, 80% and 70% of the training data with our whole feature set, and the feature combination from joshi2015harnessing. 
The goodness of our system is demonstrated by improvements in F-score and Kappa statistics, shown in Figure FIGREF22 .", "We further analyze the importance of features by ranking the features based on (a) Chi squared test, and (b) Information Gain test, using Weka's attribute selection module. Figure FIGREF23 shows the top 20 ranked features produced by both the tests. For both the cases, we observe 16 out of top 20 features to be gaze features. Further, in each of the cases, Average Fixation Duration per Word and Largest Regression Position are seen to be the two most significant features." ], [ "Table TABREF21 shows a few example cases from the experiment with stratified 80%-20% train-test split.", "Example sentence 1 is sarcastic, and requires extra-linguistic knowledge (about poor living conditions at Manchester). Hence, the sarcasm detector relying only on textual features is unable to detect the underlying incongruity. However, our system predicts the label successfully, possibly helped by the gaze features.", "Similarly, for sentence 2, the false sense of presence of incongruity (due to phrases like “Helped me” and “Can't stop”) affects the system with only linguistic features. Our system, though, performs well in this case also.", "Sentence 3 presents a false-negative case where it was hard for even humans to get the sarcasm. This is why our gaze features (and subsequently the complete set of features) account for erroneous prediction.", "In sentence 4, gaze features alone false-indicate presence of incongruity, whereas the system predicts correctly when gaze and linguistic features are taken together.", "From these examples, it can be inferred that, only gaze features would not have sufficed to rule out the possibility of detecting other forms of incongruity that do not result in sarcasm." ], [ "Errors committed by our system arise from multiple factors, starting from limitations of the eye-tracker hardware to errors committed by linguistic tools and resources. Also, aggregating various eye-tracking parameters to extract the cognitive features may have caused information loss in the regular classification setting." ], [ "In the current work, we created a novel framework to detect sarcasm, that derives insights from human cognition, that manifests over eye movement patterns. We hypothesized that distinctive eye-movement patterns, associated with reading sarcastic text, enables improved detection of sarcasm. We augmented traditional linguistic features with cognitive features obtained from readers' eye-movement data in the form of simple gaze-based features and complex features derived from a graph structure. This extended feature-set improved the success rate of the sarcasm detector by 3.7%, over the best available system. Using cognitive features in an NLP Processing system like ours is the first proposal of its kind.", "Our general approach may be useful in other NLP sub-areas like sentiment and emotion analysis, text summarization and question answering, where considering textual clues alone does not prove to be sufficient. We propose to augment this work in future by exploring deeper graph and gaze features. We also propose to develop models for the purpose of learning complex gaze feature representation, that accounts for the power of individual eye movement patterns along with the aggregated patterns of eye movements." ], [ "We thank the members of CFILT Lab, especially Jaya Jha and Meghna Singh, and the students of IIT Bombay for their help and support. 
" ] ], "section_name": [ "Introduction", "Related Work", "Eye-tracking Database for Sarcasm Analysis", "Document Description", "Task Description", "Analysis of Eye-movement Data", "Variation in the Average Fixation Duration per Word", "Analysis of Scanpaths", "Features for Sarcasm Detection", "Simple Gaze Based Features", "Complex Gaze Based Features", "The Sarcasm Classifier", "Results", "Considering Reading Time as a Cognitive Feature along with Sarcasm Features", "How Effective are the Cognitive Features", "Example Cases", "Error Analysis", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "640333fb1b1eb32a4350f17bc6cd1f3f8e9bcfa6", "ea424a9f7ab9dd4e0e4d80ee95d27b91b4884504" ], "answer": [ { "evidence": [ "For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall." ], "extractive_spans": [ "F-score", "Kappa" ], "free_form_answer": "", "highlighted_evidence": [ "For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "432dc6e1dc599d08e59cd1dc4f6a45cb8dd27f8e", "bc3b4c47b0d45a447b6c1fbbd065a17111580482" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Classification results for different feature combinations. P→ Precision, R→Recall, F→ F˙score, Kappa→ Kappa statistics show agreement with the gold labels. Subscripts 1 and -1 correspond to sarcasm and non-sarcasm classes respectively." ], "extractive_spans": [], "free_form_answer": "Gaze Sarcasm using Multi Instance Logistic Regression.", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Classification results for different feature combinations. P→ Precision, R→Recall, F→ F˙score, Kappa→ Kappa statistics show agreement with the gold labels. Subscripts 1 and -1 correspond to sarcasm and non-sarcasm classes respectively." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm related features. For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus, a multi instance “bag” of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi instance assumption to derive class-labels for each bag.", "For all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall." 
], "extractive_spans": [ "the MILR classifier" ], "free_form_answer": "", "highlighted_evidence": [ "For all regular classifiers, the gaze features are averaged across participants and augmented with linguistic and sarcasm related features. For the MILR classifier, the gaze features derived from each participant are augmented with linguistic features and thus, a multi instance “bag” of features is formed for each sentence in the training data. This multi-instance dataset is given to an MILR classifier, which follows the standard multi instance assumption to derive class-labels for each bag.\n\nFor all the classifiers, our feature combination outperforms the baselines (considering only unigram features) as well as BIBREF3 , with the MILR classifier getting an F-score improvement of 3.7% and Kappa difference of 0.08. We also achieve an improvement of 2% over the baseline, using SVM classifier, when we employ our feature set. We also observe that the gaze features alone, also capture the differences between sarcasm and non-sarcasm classes with a high-precision but a low recall." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "fa29a62043bb436d79c7edd374d08e564cab2df8" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "d14ff6f305a54342fd7cb1c945450b737f2c699e" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "22d91f007f696304bce4bec44417779b0347625b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: The complete set of features used in our system." ], "extractive_spans": [], "free_form_answer": "Readability (RED), Number of Words (LEN), Avg. Fixation Duration (FDUR), Avg. Fixation Count (FC), Avg. Saccade Length (SL), Regression Count (REG), Skip count (SKIP), Count of regressions from second half\nto first half of the sentence (RSF), Largest Regression Position (LREG), Edge density of the saliency gaze\ngraph (ED), Fixation Duration at Left/Source\n(F1H, F1S), Fixation Duration at Right/Target\n(F2H, F2S), Forward Saccade Word Count of\nSource (PSH, PSS), Forward SaccadeWord Count of Destination\n(PSDH, PSDS), Regressive Saccade Word Count of\nSource (RSH, RSS), Regressive Saccade Word Count of\nDestination (RSDH, RSDS)", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The complete set of features used in our system." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "question": [ "What other evaluation metrics are looked at?", "What is the best reported system?", "What kind of stylistic features are obtained?", "What traditional linguistics features did they use?", "What cognitive features are used?" 
], "question_id": [ "49c32a2a64eb41381e5f12ccea4150cac9f3303d", "bbb77f2d6685c9257763ca38afaaef29044b4018", "22732cb9476e521452bf0538f3fdb94cf3867651", "4e748cb2b5e74d905d9b24b53be6cfdf326e8054", "74b338d5352fe1a6fd592e38269a4c81fe79b866" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Table 1: T-test statistics for average fixation duration time per word (in ms) for presence of sarcasm (represented by S) and its absence (NS) for participants P1-P7.", "Figure 1: Scanpaths of three participants for two negatively polar sentences sentence S1 and S2. Sentence S1 is sarcastic but S2 is not.", "Figure 2: Saliency graph of participant P1 for the sentence I will always cherish the original misconception I had of you.", "Table 2: The complete set of features used in our system.", "Table 3: Classification results for different feature combinations. P→ Precision, R→Recall, F→ F˙score, Kappa→ Kappa statistics show agreement with the gold labels. Subscripts 1 and -1 correspond to sarcasm and non-sarcasm classes respectively.", "Table 4: Example test-cases with S and NS representing labels for sarcastic and not-sarcastic respectively.", "Figure 3: Effect of training data size on classification in terms of (a) F-score and (b) Kappa statistics", "Figure 4: Significance of features observed by ranking the features using Attribute Evaluation based on Information Gain and Attribute Evaluation based on Chi-squared test. The length of the bar corresponds to the average merit of the feature. Features marked with * are gaze features." ], "file": [ "3-Table1-1.png", "4-Figure1-1.png", "5-Figure2-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "8-Figure3-1.png", "8-Figure4-1.png" ] }
[ "What is the best reported system?", "What cognitive features are used?" ]
[ [ "1701.05574-Results-5", "1701.05574-7-Table3-1.png", "1701.05574-Results-6" ], [ "1701.05574-6-Table2-1.png" ] ]
[ "Gaze Sarcasm using Multi Instance Logistic Regression.", "Readability (RED), Number of Words (LEN), Avg. Fixation Duration (FDUR), Avg. Fixation Count (FC), Avg. Saccade Length (SL), Regression Count (REG), Skip count (SKIP), Count of regressions from second half\nto first half of the sentence (RSF), Largest Regression Position (LREG), Edge density of the saliency gaze\ngraph (ED), Fixation Duration at Left/Source\n(F1H, F1S), Fixation Duration at Right/Target\n(F2H, F2S), Forward Saccade Word Count of\nSource (PSH, PSS), Forward SaccadeWord Count of Destination\n(PSDH, PSDS), Regressive Saccade Word Count of\nSource (RSH, RSS), Regressive Saccade Word Count of\nDestination (RSDH, RSDS)" ]
279
1907.01468
How we do things with words: Analyzing text as social and cultural data
In this article we describe our experiences with computational text analysis. We hope to achieve three primary goals. First, we aim to shed light on thorny issues not always at the forefront of discussions about computational text analysis methods. Second, we hope to provide a set of best practices for working with thick social and cultural concepts. Our guidance is based on our own experiences and is therefore inherently imperfect. Still, given our diversity of disciplinary backgrounds and research practices, we hope to capture a range of ideas and identify commonalities that will resonate for many. And this leads to our final goal: to help promote interdisciplinary collaborations. Interdisciplinary insights and partnerships are essential for realizing the full potential of any computational text analysis that involves social and cultural concepts, and the more we are able to bridge these divides, the more fruitful we believe our work will be.
{ "paragraphs": [ [ "In June 2015, the operators of the online discussion site Reddit banned several communities under new anti-harassment rules. BIBREF0 used this opportunity to combine rich online data with computational methods to study a current question: Does eliminating these “echo chambers” diminish the amount of hate speech overall? Exciting opportunities like these, at the intersection of “thick” cultural and societal questions on the one hand, and the computational analysis of rich textual data on larger-than-human scales on the other, are becoming increasingly common.", "Indeed, computational analysis is opening new possibilities for exploring challenging questions at the heart of some of the most pressing contemporary cultural and social issues. While a human reader is better equipped to make logical inferences, resolve ambiguities, and apply cultural knowledge than a computer, human time and attention are limited. Moreover, many patterns are not obvious in any specific context, but only stand out in the aggregate. For example, in a landmark study, BIBREF1 analyzed the authorship of The Federalist Papers using a statistical text analysis by focusing on style, based on the distribution of function words, rather than content. As another example, BIBREF2 studied what defines English haiku and showed how computational analysis and close reading can complement each other. Computational approaches are valuable precisely because they help us identify patterns that would not otherwise be discernible.", "Yet these approaches are not a panacea. Examining thick social and cultural questions using computational text analysis carries significant challenges. For one, texts are culturally and socially situated. They reflect the ideas, values and beliefs of both their authors and their target audiences, and such subtleties of meaning and interpretation are difficult to incorporate in computational approaches. For another, many of the social and cultural concepts we seek to examine are highly contested — hate speech is just one such example. Choices regarding how to operationalize and analyze these concepts can raise serious concerns about conceptual validity and may lead to shallow or obvious conclusions, rather than findings that reflect the depth of the questions we seek to address.", "These are just a small sample of the many opportunities and challenges faced in computational analyses of textual data. New possibilities and frustrating obstacles emerge at every stage of research, from identification of the research question to interpretation of the results. In this article, we take the reader through a typical research process that involves measuring social or cultural concepts using computational methods, discussing both the opportunities and complications that often arise. In the Reddit case, for example, hate speech is measured, however imperfectly, by the presence of particular words semi-automatically extracted from a machine learning algorithm. Operationalizations are never perfect translations, and are often refined over the course of an investigation, but they are crucial.", "We begin our exploration with the identification of research questions, proceed through data selection, conceptualization, and operationalization, and end with analysis and the interpretation of results. The research process sounds more or less linear this way, but each of these phases overlaps, and in some instances turns back upon itself. 
The analysis phase, for example, often feeds back into the original research questions, which may continue to evolve for much of the project. At each stage, our discussion is critically informed by insights from the humanities and social sciences, fields that have focused on, and worked to tackle, the challenges of textual analysis—albeit at smaller scales—since their inception.", "In describing our experiences with computational text analysis, we hope to achieve three primary goals. First, we aim to shed light on thorny issues not always at the forefront of discussions about computational text analysis methods. Second, we hope to provide a set of best practices for working with thick social and cultural concepts. Our guidance is based on our own experiences and is therefore inherently imperfect. Still, given our diversity of disciplinary backgrounds and research practices, we hope to capture a range of ideas and identify commonalities that will resonate for many. And this leads to our final goal: to help promote interdisciplinary collaborations. Interdisciplinary insights and partnerships are essential for realizing the full potential of any computational text analysis that involves social and cultural concepts, and the more we are able to bridge these divides, the more fruitful we believe our work will be." ], [ "We typically start by identifying the questions we wish to explore. Can text analysis provide a new perspective on a “big question” that has been attracting interest for years? Or can we raise new questions that have only recently emerged, for example about social media? For social scientists working in computational analysis, the questions are often grounded in theory, asking: How can we explain what we observe? These questions are also influenced by the availability and accessibility of data sources. For example, the choice to work with data from a particular social media platform may be partly determined by the fact that it is freely available, and this will in turn shape the kinds of questions that can be asked. A key output of this phase are the concepts to measure, for example: influence; copying and reproduction; the creation of patterns of language use; hate speech. Computational analysis of text motivated by these questions is insight driven: we aim to describe a phenomenon or explain how it came about. For example, what can we learn about how and why hate speech is used or how this changes over time? Is hate speech one thing, or does it comprise multiple forms of expression? Is there a clear boundary between hate speech and other types of speech, and what features make it more or less ambiguous? In these cases, it is critical to communicate high-level patterns in terms that are recognizable.", "This contrasts with much of the work in computational text analysis, which tends to focus on automating tasks that humans perform inefficiently. These tasks range from core linguistically motivated tasks that constitute the backbone of natural language processing, such as part-of-speech tagging and parsing, to filtering spam and detecting sentiment. Many tasks are motivated by applications, for example to automatically block online trolls. Success, then, is often measured by performance, and communicating why a certain prediction was made—for example, why a document was labeled as positive sentiment, or why a word was classified as a noun—is less important than the accuracy of the prediction itself. 
The approaches we use and what we mean by `success' are thus guided by our research questions.", "Domain experts and fellow researchers can provide feedback on questions and help with dynamically revising them. For example, they may say “we already think we know that”, “that's too naïve”, “that doesn't reflect social reality” (negative); “two major camps in the field would give different answers to that question” (neutral); “we tried to look at that back in the 1960s, but we didn't have the technology” (positive); and “that sounds like something that people who made that archive would love”, “that's a really fundamental question” (very positive).", "Sometimes we also hope to connect to multiple disciplines. For example, while focusing on the humanistic concerns of an archive, we could also ask social questions such as “is this archive more about collaborative processes, culture-building or norm creation?” or “how well does this archive reflect the society in which it is embedded?\" BIBREF3 used quantitative methods to tell a story about Darwin's intellectual development—an essential biographical question for a key figure in the history of science. At the same time, their methods connected Darwin's development to the changing landscape of Victorian scientific culture, allowing them to contrast Darwin's “foraging” in the scientific literature of his time to the ways in which that literature was itself produced. Finally, their methods provided a case study, and validation of technical approaches, for cognitive scientists who are interested in how people explore and exploit sources of knowledge.", "Questions about potential “dual use” may also arise. Returning to our introductory example, BIBREF0 started with a deceptively simple question: if an internet platform eliminates forums for hate speech, does this impact hate speech in other forums? The research was motivated by the belief that a rising tide of online hate speech was (and is) making the internet increasingly unfriendly for disempowered groups, including minorities, women, and LBGTQ individuals. Yet the possibility of dual use troubled the researchers from the onset. Could the methodology be adopted to target the speech of groups like Black Lives Matter? Could it be adopted by repressive governments to minimize online dissent? While these concerns remained, they concluded that hypothetical dual use scenarios did not outweigh the tangible contribution this research could offer towards making the online environment more equal and just." ], [ "The next step involves deciding on the data sources, collecting and compiling the dataset, and inspecting its metadata." ], [ "Many scholars in the humanities and the social sciences work with sources that are not available in digital form, and indeed may never be digitized. Others work with both analogue and digitized materials, and the increasing digitization of archives has opened opportunities to study these archives in new ways. We can go to the canonical archive or open up something that nobody has studied before. For example, we might focus on major historical moments (French Revolution, post-Milosevic Serbia) or critical epochs (Britain entering the Victorian era, the transition from Latin to proto-Romance). Or, we could look for records of how people conducted science, wrote and consumed literature, and worked out their philosophies.", "A growing number of researchers work with born-digital sources or data. 
Born-digital data, e.g., from social media, generally do not involve direct elicitation from participants and therefore enable unobtrusive measurements BIBREF5 , BIBREF6 . In contrast, methods like surveys sometimes elicit altered responses from participants, who might adapt their responses to what they think is expected. Moreover, born-digital data is often massive, enabling large-scale studies of language and behavior in a variety of social contexts.", "Still, many scholars in the social sciences and humanities work with multiple data sources. The variety of sources typically used means that more than one data collection method is often required. For example, a project examining coverage of a UK General Election, could draw data from traditional media, web archives, Twitter and Facebook, campaign manifestos, etc. and might combine textual analysis of these materials with surveys, laboratory experiments, or field observations offline. In contrast, many computational studies based on born-digital data have focused on one specific source, such as Twitter.", "The use of born-digital data raises ethical concerns. Although early studies often treated privacy as a binary construct, many now acknowledge its complexity BIBREF7 . Conversations on private matters can be posted online, visible for all, but social norms regarding what should be considered public information may differ from the data's explicit visibility settings. Often no informed consent has been obtained, raising concerns and challenges regarding publishing content and potentially harmful secondary uses BIBREF8 , BIBREF4 .", "Recently, concerns about potential harms stemming from secondary uses have led a number of digital service providers to restrict access to born-digital data. Facebook and Twitter, for example, have reduced or eliminated public access to their application programming interfaces (APIs) and expressed hesitation about allowing academic researchers to use data from their platforms to examine certain sensitive or controversial topics. Despite the seeming abundance of born-digital data, we therefore cannot take its availability for granted.", "Working with data that someone else has acquired presents additional problems related to provenance and contextualisation. It may not always be possible to determine the criteria applied during the creation process. For example, why were certain newspapers digitized but not others, and what does this say about the collection? Similar questions arise with the use of born-digital data. For instance, when using the Internet Archive’s Wayback Machine to gather data from archived web pages, we need to consider what pages were captured, which are likely missing, and why.", "We must often repurpose born-digital data (e.g., Twitter was not designed to measure public opinion), but data biases may lead to spurious results and limit justification for generalization. In particular, data collected via black box APIs designed for commercial, not research, purposes are likely to introduce biases into the inferences we draw, and the closed nature of these APIs means we rarely know what biases are introduced, let alone how severely they might impact our research BIBREF10 . These, however, are not new problems. Historians, for example, have always understood that their sources were produced within particular contexts and for particular purposes, which are not always apparent to us.", "Non-representative data can still be useful for making comparisons within a sample. 
In the introductory example on hate speech BIBREF0 , the Reddit forums do not present a comprehensive or balanced picture of hate speech: the writing is almost exclusively in English, the targets of hate speech are mainly restricted (e.g., to black people, or women), and the population of writers is shaped by Reddit's demographics, which skew towards young white men. These biases limit the generalizability of the findings, which cannot be extrapolated to other languages, other types of hate speech, and other demographic groups. However, because the findings are based on measurements on the same sort of hate speech and the same population of writers, as long as the collected data are representative of this specific population, these biases do not pose an intractable validity problem if claims are properly restricted.", "The size of many newly available datasets is one of their most appealing characteristics. Bigger datasets often make statistics more robust. The size needed for a computational text analysis depends on the research goal: When it involves studying rare events, bigger datasets are needed. However, larger is not always better. Some very large archives are “secretly” collections of multiple and distinct processes that no in-field scholar would consider related. For example, Google Books is frequently used to study cultural patterns, but the over-representation of scientific articles in Google books can be problematic BIBREF11 . Even very large born-digital datasets usually cover limited timespans compared to, e.g., the Gutenberg archive of British novels.", "This stage of the research also raises important questions about fairness. Are marginalized groups, for example, represented in the tweets we have collected? If not, what types of biases might result from analyses relying on those tweets?", "Local experts and “informants” can help navigate the data. They can help understand the role an archive plays in the time and place. They might tell us: Is this the central archive, or a peripheral one? What makes it unusual? Or they might tell us how certain underrepresented communities use a social media platform and advise us on strategies for ensuring our data collection includes their perspectives.", "However, when it is practically infeasible to navigate the data in this way—for instance, when we cannot determine what is missing from Twitter's Streaming API or what webpages are left out of the Internet Archive—we should be open about the limitations of our analyses, acknowledging the flaws in our data and drawing cautious and reasonable conclusions from them. In all cases, we should report the choices we have made when creating or re-using any dataset." ], [ "After identifying the data source(s), the next step is compiling the data. This step is fundamental: if the sources cannot support a convincing result, no result will be convincing. In many cases, this involves defining a “core\" set of documents and a “comparison\" set. We often have a specific set of documents in mind: an author's work, a particular journal, a time period. But if we want to say that this “core\" set has some distinctive property, we need a “comparison\" set. Expanding the collection beyond the documents that we would immediately think of has the beneficial effect of increasing our sample size. 
Having more sources increases the chance that we will notice something consistent across many individually varying contexts.", "Comparing sets of documents can sometimes support causal inference, presented as a contrast between a treatment group and a control. In BIBREF0 , the treatment consisted of the text written in the two forums that were eventually closed by Reddit. However, identifying a control group required a considerable amount of time and effort. Reddit is a diverse platform, with a wide variety of interactional and linguistic styles; it would be pointless to compare hate speech forums against forums dedicated to, say, pictures of wrecked bicycles. Chandrasekharan et al. used a matching design, populating the control group with forums that were as similar as possible to the treatment group, but were not banned from Reddit. The goal is to estimate the counterfactual scenario: in this case, what would have happened had the site not taken action against these specific forums? An ideal control would make it possible to distinguish the effect of the treatment — closing the forums — from other idiosyncratic properties of texts that were treated.", "We also look for categories of documents that might not be useful. We might remove documents that are meta-discourse, like introductions and notes, or documents that are in a language that is not the primary language of the collection, or duplicates when we are working with archived web pages. However, we need to carefully consider the potential consequences of information we remove. Does its removal alter the data, or the interpretation of the data, we are analyzing? Are we losing anything that might be valuable at a later stage?" ], [ "Sometimes all we have is documents, but often we want to look at documents in the context of some additional information, or metadata. This additional information could tell us about the creation of documents (date, author, forum), or about the reception of documents (flagged as hate speech, helpful review). Information about text segments can be extremely valuable, but it is also prone to errors, inconsistencies, bias, and missing information. Examining metadata is a good way to check a collection's balance and representativeness. Are sources disproportionately of one form? Is the collection missing a specific time window? This type of curation can be extremely time consuming as it may require expert labeling, but it often leads to the most compelling results. Sometimes metadata are also used as target labels to develop machine learning models. But using them as a “ground truth” requires caution. Labels sometimes mean something different than we expect. For example, a down vote for a social media post could indicate that the content is offensive, or that the voter simply disagreed with the expressed view." ], [ "A core step in many analyses is translating social and cultural concepts (such as hate speech, rumor, or conversion) into measurable quantities. Before we can develop measurements for these concepts (the operationalization step, or the “implementation” step as denoted by BIBREF12 ), we need to define them. In the conceptualization phase we often start with questions such as: who are the domain experts, and how have they approached the topic? We are looking for a definition of the concept that is flexible enough to apply on our dataset, yet formal enough for computational research. 
For example, our introductory study on hate speech BIBREF0 used a statement on hate speech produced by the European Union Court of Human Rights. The goal was not to implement this definition directly in software but to use it as a reference point to anchor subsequent analyses.", "If we want to move beyond the use of ad hoc definitions, it can be useful to distinguish between what political scientists Adcock and Collier call the “background concept” and the “systematized concept” BIBREF13 . The background concept comprises the full and diverse set of meanings that might be associated with a particular term. This involves delving into theoretical, conceptual, and empirical studies to assess how a concept has been defined by other scholars and, most importantly, to determine which definition is most appropriate for the particular research question and the theoretical framework in which it is situated. That definition, in turn, represents the systematized concept: the formulation that is adopted for the study.", "It is important to consider that for social and cultural concepts there is no absolute ground truth. There are often multiple valid definitions for a concept (the “background” concept in the terms of Adcock and Collier), and definitions might be contested over time. This may be uncomfortable for computer scientists, whose primary measure of success is often based on comparing a model's output against “ground truth” or a “gold standard”, e.g., by comparing a sentiment classifier's output against manual annotations. However, the notion of ground truth is uncommon in the humanities and the social sciences and it is often taken too far in machine learning. BIBREF14 notes that in literary criticism and the digital humanities more broadly “interpretation, ambiguity, and argumentation are prized far above ground truth and definitive conclusions\". BIBREF15 draw attention to the different attitudes of literary scholars and computational linguists towards ambiguity, stating that “In Computational Linguistics [..] ambiguity is almost uniformly treated as a problem to be solved; the focus is on disambiguation, with the assumption that one true, correct interpretation exists.\" The latter is probably true for tasks such as spam filtering, but in the social sciences and the humanities many relevant concepts are fundamentally unobservable, such as latent traits of political actors BIBREF16 or cultural fit in organizations BIBREF17 , leading to validation challenges. Moreover, when the ground truth comes from people, it may be influenced by ideological priors, priming, simple differences of opinion or perspective, and many other factors BIBREF18 . We return to this issue in our discussions on validation and analysis." ], [ "In this phase we develop measures (or, “operationalizations”, or “indicators”) for the concepts of interest, a process called “operationalization”. Regardless of whether we are working with computers, the output produced coincides with Adcock and Collier's “scores”—the concrete translation and output of the systematized concept into numbers or labels BIBREF13 . Choices made during this phase are always tied to the question “Are we measuring what we intend to measure?” Does our operationalization match our conceptual definition? To ensure validity we must recognize gaps between what is important and what is easy to measure. We first discuss modeling considerations. Next, we describe several frequently used computational approaches and their limitations and strengths." 
], [ "The variables (both predictors and outcomes) are rarely simply binary or categorical. For example, a study on language use and age could focus on chronological age (instead of, e.g., social age BIBREF19 ). However, even then, age can be modeled in different ways. Discretization can make the modeling easier and various NLP studies have modeled age as a categorical variable BIBREF20 . But any discretization raises questions: How many categories? Where to place the boundaries? Fine distinctions might not always be meaningful for the analysis we are interested in, but categories that are too broad can threaten validity. Other interesting variables include time, space, and even the social network position of the author. It is often preferable to keep the variable in its most precise form. For example, BIBREF21 perform exploration in the context of hypothesis testing by using latitude and longitude coordinates — the original metadata attached to geotagged social media such as tweets — rather than aggregating into administrative units such as counties or cities. This is necessary when such administrative units are unlikely to be related to the target concept, as is the case in their analysis of dialect differences. Focusing on precise geographical coordinates also makes it possible to recognize fine-grained effects, such as language variation across the geography of a city.", "Using a particular classification scheme means deciding which variations are visible, and which ones are hidden BIBREF22 . We are looking for a categorization scheme for which it is feasible to collect a large enough labeled document collection (e.g., to train supervised models), but which is also fine-grained enough for our purposes. Classification schemes rarely exhibit the ideal properties, i.e., that they are consistent, their categories are mutually exclusive, and that the system is complete BIBREF22 . Borderline cases are challenging, especially with social and cultural concepts, where the boundaries are often not clear-cut. The choice of scheme can also have ethical implications BIBREF22 . For example, gender is usually represented as a binary variable in NLP and computational models tend to learn gender-stereotypical patterns. The operationalization of gender in NLP has been challenged only recently BIBREF23 , BIBREF24 , BIBREF25 .", "Supervised and unsupervised learning are the most common approaches to learning from data. With supervised learning, a model learns from labeled data (e.g., social media messages labeled by sentiment) to infer (or predict) these labels from unlabeled texts. In contrast, unsupervised learning uses unlabeled data. Supervised approaches are especially suitable when we have a clear definition of the concept of interest and when labels are available (either annotated or native to the data). Unsupervised approaches, such as topic models, are especially useful for exploration. In this setting, conceptualization and operationalization may occur simultaneously, with theory emerging from the data BIBREF26 . Unsupervised approaches are also used when there is a clear way of measuring a concept, often based on strong assumptions. For example, BIBREF3 measure “surprise” in an analysis of Darwin's reading decisions based on the divergence between two probability distributions.", "From an analysis perspective, the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis. 
For example, if in a study on media frames in news stories, the theoretical framework and research question point toward frames at the story level (e.g., what is the overall causal analysis of the news article?), the story must be the unit of analysis. Yet it is often difficult to validly and reliably code a single frame at the story level. Multiple perspectives are likely to sit side-by-side in a story. Thus, an article on income inequality might point to multiple causes, such as globalization, education, and tax policies. Coding at the sentence level would detect each of these causal explanations individually, but this information would need to be somehow aggregated to determine the overall story-level frame. Sometimes scholars solve this problem by only examining headlines and lead paragraphs, arguing that based on journalistic convention, the most important information can be found at the beginning of a story. However, this leads to a return to a shorter, less nuanced analysis.", "From a computational perspective, the unit of text can also make a huge difference, especially when we are using bag-of-words models, where word order within a unit does not matter. Small segments, like tweets, sometimes do not have enough information to make their semantic context clear. In contrast, larger segments, like novels, have too much variation, making it difficult to train focused models. Finding a good segmentation sometimes means combining short documents and subdividing long documents. The word “document\" can therefore be misleading. But it is so ingrained in the common NLP lexicon that we use it anyway in this article.", "For insight-driven text analysis, it is often critical that high-level patterns can be communicated. Furthermore, interpretable models make it easier to find spurious features, to do error analysis, and to support interpretation of results. Some approaches are effective for prediction, but harder to interpret. The value we place on interpretability can therefore influence the approach we choose. There is an increasing interest in developing interpretable or transparent models in the NLP and machine learning communities." ], [ "Many studies involve human coders. Sometimes the goal is to fully code the data, but in a computational analysis we often use the labels (or annotations) to train machine learning models to automatically recognize them, and to identify language patterns that are associated with these labels. For example, for a project analyzing rumors online BIBREF27 , conversation threads were annotated along different dimensions, including rumor versus non-rumor and stance towards a rumor.", "The collection of annotation choices make up an annotation scheme (or “codebook”). Existing schemes and annotations can be useful as starting points. Usually settling on an annotation scheme requires several iterations, in which the guidelines are updated and annotation examples are added. For example, a political scientist could use a mixed deductive-inductive strategy for developing a codebook. She starts by laying out a set of theory-driven deductive coding rules, which means that the broad principles of the coding rules are laid out without examining examples first. These are then tested (and possibly adjusted) based on a sample of the data. In line with Adcock and Collier's notion of “content validity” BIBREF13 , the goal is to assess whether the codebook adequately captures the systematized concept. 
By looking at the data themselves, she gains a better sense of whether some things have been left out of the coding rules and whether anything is superfluous, misleading, or confusing. Adjustments are made and the process is repeated, often with another researcher involved.", "The final annotations can be collected using a crowdsourcing platform, a smaller number of highly-trained annotators, or a group of experts. Which type of annotator to use should be informed by the complexity and specificity of the concept. For more complex concepts, highly-trained or expert annotators tend to produce more reliable results. However, complex concepts can sometimes be broken down into micro-tasks that can be performed independently in parallel by crowdsourced annotators. Concepts from highly specialized domains may require expert annotators. In all cases, however, some training will be required, and the training phase should involve continual checks of inter-annotator agreement (i.e. intercoder reliability) or checks against a gold standard (e.g. quizzes in crowdsourcing platforms).", "We also need to decide how inter-annotator agreement will be measured and what an acceptable level of agreement would be. Krippendorff's alpha is frequently used in the social sciences, but the right measure depends on the type of data and task. For manual coding, we can continually check inter-annotator agreement and begin introducing checks of intra-annotator agreement, too. For most communication scholars using only manual content analysis, an acceptable rate of agreement is achieved when Krippendorff's alpha reaches 0.80 or above. When human-coded data are used to validate machine learning algorithms, the reliability of the human-coded data is even more important. Disagreement between annotators can signal weaknesses of the annotation scheme, or highlight the inherent ambiguity in what we are trying to measure. Disagreement itself can be meaningful and can be integrated in subsequent analyses BIBREF28 , BIBREF29 ." ], [ "Preparing the data can be a complex and time-consuming process, often involving working with partially or wholly unstructured data. The pre-processing steps have a big impact on the operationalizations, subsequent analyses and reproducibility efforts BIBREF30 , and they are usually tightly linked to what we intend to measure. Unfortunately, these steps tend to be underreported, but documenting the pre-processing choices made is essential and is analogous to recording the decisions taken during the production of a scholarly edition or protocols in biomedical research. Data may also vary enormously in quality, depending on how it has been generated. Many historians, for example, work with text produced from an analogue original using Optical Character Recognition (OCR). Often, there will be limited information available regarding the accuracy of the OCR, and the degree of accuracy may even vary within a single corpus (e.g. where digitized text has been produced over a period of years, and the software has gradually improved). The first step, then, is to try to correct for common OCR errors. These will vary depending on the type of text, the date at which the `original' was produced, and the nature of the font and typesetting.", "One step that almost everyone takes is to tokenize the original character sequence into the words and word-like units. Tokenization is a more subtle and more powerful process than people expect. 
It is often done using regular expressions or scripts that have been circulating within the NLP community. Tokenization heuristics, however, can be badly confused by emoticons, creative orthography (e.g., U$A, sh!t), and missing whitespace. Multi-word terms are also challenging. Treating them as a single unit can dramatically alter the patterns in text. Many words that are individually ambiguous have clear, unmistakable meanings as terms, like “black hole\" or “European Union\". However, deciding what constitutes a multi-word term is a difficult problem. In writing systems like Chinese, tokenization is a research problem in its own right.", "Beyond tokenization, common steps include lowercasing, removing punctuation, stemming (removing suffixes), lemmatization (converting inflections to a base lemma), and normalization, which has never been clearly defined, but often includes grouping abbreviations like “U.S.A.\" and “USA\", ordinals like “1st\" and “first\", and variant spellings like “noooooo\". The main goal of these steps is to improve the ratio of tokens (individual occurrences) to types (the distinct things in a corpus). Each step requires making additional assumptions about which distinctions are relevant: is “apple” different from “Apple”? Is “burnt” different from “burned”? Is “cool\" different from “coooool\"? Sometimes these steps can actively hide useful patterns, like social meaning BIBREF32 . Some of us therefore try to do as little modification as possible.", "From a multilingual perspective, English and Chinese have an unusually simple inflectional system, and so it is statistically reasonable to treat each inflection as a unique word type. Romance languages have considerably more inflections than English; many indigenous North American languages have still more. For these languages, unseen data is far more likely to include previously-unseen inflections, and therefore, dealing with inflections is more important. On the other hand, the resources for handling inflections vary greatly by language, with European languages dominating the attention of the computational linguistics community thus far.", "We sometimes also remove words that are not relevant to our goals, for example by calculating vocabulary frequencies. We construct a “stoplist” of words that we are not interested in. If we are looking for semantic themes we might remove function words like determiners and prepositions. If we are looking for author-specific styles, we might remove all words except function words. Some words are generally meaningful but too frequent to be useful within a specific collection. We sometimes also remove very infrequent words. Their occurrences are too low for robust patterns and removing them helps reduce the vocabulary size.", "The choice of processing steps can be guided by theory or knowledge about the domain as well as experimental investigation. When we have labels, predictive accuracy of a model is a way to assess the effect of the processing steps. In unsupervised settings, it is more challenging to understand the effects of different steps. Inferences drawn from unsupervised settings can be sensitive to pre-processing choices BIBREF33 . Stemming has been found to provide little measurable benefit for topic modeling and can sometimes even be harmful BIBREF34 . All in all, this again highlights the need to document these steps.", "Finally, we can also mark up the data, e.g., by identifying entities (people, places, organizations, etc.) or parts of speech. 
Although many NLP tools are available for such tasks, they are often challenged by linguistic variation, such as orthographic variation in historical texts BIBREF35 and social media BIBREF32 . Moreover, the performance of NLP tools often drops when applying them outside the training domain, such as applying tools developed on newswire texts to texts written by younger authors BIBREF36 . Problems (e.g., disambiguation in named entity recognition) are sometimes resolved using considerable manual intervention. This combination of the automated and the manual, however, becomes more difficult as the scale of the data increases, and the `certainty' brought by the latter may have to be abandoned." ], [ "Dictionaries are frequently used to code texts in content analyses BIBREF37 . Dictionaries consist of one or more categories (i.e. word lists). Sometimes the output is simply the number of category occurrences (e.g., positive sentiment), thus weighting words within a category equally. In some other cases, words are assigned continuous scores. The high transparency of dictionaries makes them sometimes more suitable than supervised machine learning models. However, dictionaries should only be used if the scores assigned to words match how the words are used in the data (see BIBREF38 for a detailed discussion on limitations). There are many off-the-shelf dictionaries available (e.g., LIWC BIBREF39 ). These are often well-validated, but applying them on a new domain may not be appropriate without additional validation. Corpus- or domain-specific dictionaries can overcome limitations of general-purpose dictionaries.", "The dictionaries are often manually compiled, but increasingly they are constructed semi-automatically (e.g., BIBREF40 ). When we semi-automatically create a word list, we use automation to identify an initial word list, and human insight to filter it. By automatically generating the initial words lists, words can be identified that human annotators might have difficulty intuiting. By manually filtering the lists, we use our theoretical understanding of the target concept to remove spurious features.", "In the introduction study, SAGE BIBREF41 was used to obtain a list of words that distinguished the text in the treatment group (subreddits that were closed by Reddit) from text in the control group (similar subreddits that were not closed). The researchers then returned to the hate speech definition provided by the European Court of Human Rights, and manually filtered the top SAGE words based on this definition. Not all identified words fitted the definition. The others included: the names of the subreddits themselves, names of related subreddits, community-specific jargon that was not directly related to hate speech, and terms such as IQ and welfare, which were frequently used in discourses of hate speech, but had significant other uses. The word lists provided the measurement instrument for their main result, which is that the use of hate speech throughout Reddit declined after the two treatment subreddits were closed." ], [ "Supervised learning is frequently used to scale up analyses. For example, BIBREF42 wanted to analyze the motivations of Movember campaign participants. By developing a classifier based on a small set of annotations, they were able to expand the analysis to over 90k participants.", "The choice of supervised learning model is often guided by the task definition and the label types. 
For example, to identify stance towards rumors based on sequential annotations, an algorithm for learning from sequential BIBREF43 or time series data BIBREF44 could be used. The features (sometimes called variables or predictors) are used by the model to make the predictions. They may vary from content-based features such as single words, sequences of words, or information about their syntactic structure, to meta-information such as user or network information. Deciding on the features requires experimentation and expert insight and is often called feature engineering. For insight-driven analysis, we are often interested in why a prediction has been made and features that can be interpreted by humans may be preferred. Recent neural network approaches often use simple features as input (such as word embeddings or character sequences), which requires less feature engineering but make interpretation more difficult.", "Supervised models are powerful, but they can latch on to spurious features of the dataset. This is particularly true for datasets that are not well-balanced, and for annotations that are noisy. In our introductory example on hate speech in Reddit BIBREF0 , the annotations are automatically derived from the forum in which each post appears, and indeed, many of the posts in the forums (subreddits) that were banned by Reddit would be perceived by many as hate speech. But even in banned subreddits, not all of the content is hate speech (e.g., some of the top features were self-referential like the name of the subreddit) but a classifier would learn a high weight for these features.", "Even when expert annotations are available on the level of individual posts, spurious features may remain. BIBREF45 produced expert annotations of hate speech on Twitter. They found that one of the strongest features for sexism is the name of an Australian TV show, because people like to post sexist comments about the contestants. If we are trying to make claims about what inhibits or encourages hate speech, we would not want those claims to be tied to the TV show's popularity. Such problems are inevitable when datasets are not well-balanced over time, across genres, topics, etc. Especially with social media data, we lack a clear and objective definition of `balance' at this time.", "The risk of supervised models latching on to spurious features reinforces the need for interpretability. Although the development of supervised models is usually performance driven, placing more emphasis on interpretability could increase the adoption of these models in insight-driven analyses. One way would be to only use models that are already somewhat interpretable, for example models that use a small number of human-interpretable features. Rather than imposing such restrictions, there is also work on generating post-hoc explanations for individual predictions (e.g., BIBREF46 ), even when the underlying model itself is very complex." ], [ "Topic models (e.g., LDA BIBREF47 ) are usually unsupervised and therefore less biased towards human-defined categories. They are especially suited for insight-driven analysis, because they are constrained in ways that make their output interpretable. Although there is no guarantee that a “topic” will correspond to a recognizable theme or event or discourse, they often do so in ways that other methods do not. Their easy applicability without supervision and ready interpretability make topic models good for exploration. 
Topic models are less successful for many performance-driven applications. Raw word features are almost always better than topics for search and document classification. LSTMs and other neural network models are better as language models. Continuous word embeddings have more expressive power to represent fine-grained semantic similarities between words.", "A topic model provides a different perspective on a collection. It creates a set of probability distributions over the vocabulary of the collection, which, when combined together in different proportions, best match the content of the collection. We can sort the words in each of these distributions in descending order by probability, take some arbitrary number of most-probable words, and get a sense of what (if anything) the topic is “about”. Each of the text segments also has its own distribution over the topics, and we can sort these segments by their probability within a given topic to get a sense of how that topic is used.", "One of the most common questions about topic models is how many topics to use, usually with the implicit assumption that there is a “right” number that is inherent in the collection. We prefer to think of this parameter as more like the scale of a map or the magnification of a microscope. The “right” number is determined by the needs of the user, not by the collection. If the analyst is looking for a broad overview, a relatively small number of topics may be best. If the analyst is looking for fine-grained phenomena, a larger number is better.", "After fitting the model, it may be necessary to circle back to an earlier phase. Topic models find consistent patterns. When authors repeatedly use a particular theme or discourse, that repetition creates a consistent pattern. But other factors can also create similar patterns, which look as good to the algorithm. We might notice a topic that has highest probability on French stopwords, indicating that we need to do a better job of filtering by language. We might notice a topic of word fragments, such as “ing”, “tion”, “inter”, indicating that we are not handling end-of-line hyphenation correctly. We may need to add to our stoplist or change how we curate multi-word terms." ], [ "The output of our measurement procedures (in the social sciences often called the “scores”) must now be assessed in terms of their reliability and validity with regard to the (systemized) concept. Reliability aims to capture repeatability, i.e. the extent to which a given tool provides consistent results.", "Validity assesses the extent to which a given measurement tool measures what it is supposed to measure. In NLP and machine learning, most models are primarily evaluated by comparing the machine-generated labels against an annotated sample. This approach presumes that the human output is the “gold standard\" against which performance should be tested. In contrast, when the reliability is measured based on the output of different annotators, no coder is taken as the standard and the likelihood of coders reaching agreement by chance (rather than because they are “correct\") is factored into the resulting statistic. 
Comparing against a “gold standard” suggests that the threshold for human inter- and intra-coder reliability should be particularly high.", "Accuracy, as well as other measures such as precision, recall and F-score, are sometimes presented as a measure of validity, but if we do not have a genuinely objective determination of what something is supposed to measure—as is often the case in text analysis—then accuracy is perhaps a better indication of reliability than of validity. In that case, validity needs to be assessed based on other techniques like those we discuss later in this section. It is also worth asking what level of accuracy is sufficient for our analysis and to what extent there may be an upper bound, especially when the labels are native to the data or when the notion of a “gold standard” is not appropriate.", "For some in the humanities, validation takes the form of close reading, not designed to confirm whether the model output is correct, but to present what BIBREF48 refers to as a form of “further discovery in two directions”. Model outputs tell us something about the texts, while a close reading of the texts alongside those outputs tells us something about the models that can be used for more effective model building. Applying this circular, iterative process to 450 18th-century novels written in three languages, Piper was able to uncover a new form of “conversional novel” that was not previously captured in “literary history's received critical categories” BIBREF48 .", "Along similar lines, we can subject both the machine-generated output and the human annotations to another round of content validation. That is, take a stratified random sample, selecting observations from the full range of scores, and ask: Do these make sense in light of the systematized concept? If not, what seems to be missing? Or is something extraneous being captured? This is primarily a qualitative process that requires returning to theory and interrogating the systematized concept, indicators, and scores together. This type of validation is rarely done in NLP, but it is especially important when it is difficult to assess what drives a given machine learning model. If there is a mismatch between the scores and systematized concept at this stage, the codebook may need to be adjusted, human coders retrained, more training data prepared, algorithms adjusted, or in some instances, even a new analytical method adopted.", "Other types of validation are also possible, such as comparing with other approaches that aim to capture the same concept, or comparing the output with external measures (e.g., public opinion polls, the occurrence of future events). We can also go beyond only evaluating the labels (or point estimates). BIBREF16 used human judgments to not only assess the positional estimates from a scaling method of latent political traits but also to assess uncertainty intervals. Using different types of validation can increase our confidence in the approach, especially when there is no clear notion of ground truth.", "Besides focusing on rather abstract evaluation measures, we could also assess the models in task-based settings using human experts. Furthermore, for insight-driven analyses, it can be more useful to focus on improving explanatory power than making small improvements in predictive performance." ], [ "In this phase, we use our models to explore or answer our research questions. For example, given a topic model we can look at the connection between topics and metadata elements. 
Tags such as “hate speech\" or metadata information imply a certain way of organizing the collection. Computational models provide another organization, which may differ in ways that provide more insight into how these categories manifest themselves, or fail to do so.", "Moreover, when using a supervised approach, the “errors”, i.e. disagreement between the system output and human-provided labels, can point towards interesting cases for closer analysis and help us reflect on our conceptualizations. In the words of BIBREF2 , they can be “opportunities for interpretation”. Other types of “failures” can be insightful as well. Sometimes there is a “dog that didn't bark” BIBREF49 –i.e., something that everyone thinks we should have found, but we did not. Or, sometimes the failures are telling us about the existence of something in the data that nobody noticed, or thought important, until then (e.g., the large number of travel journals in Darwin's reading lists).", "Computational text analysis is not a replacement for but rather an addition to the approaches one can take to analyze social and cultural phenomena using textual data. By moving back and forth between large-scale computational analyses and small-scale qualitative analyses, we can combine their strengths so that we can identify large-scale and long-term trends, but also tell individual stories. For example, the Reddit study on hate speech BIBREF0 raised various follow-up questions: Can we distinguish hate speech from people talking about hate speech? Did people find new ways to express hate speech? If so, did the total amount of online hate speech decrease after all? As possible next steps, a qualitative discourse analyst might examine a smaller corpus to investigate whether commenters were indeed expressing hate speech in new ways; a specialist in interview methodologies might reach out to commenters to better understand the role of online hate speech in their lives. Computational text analysis represents a step towards better understanding social and cultural phenomena, and it is in many cases better suited towards opening questions rather than closing them." ], [ "Insight-driven computational analysis of text is becoming increasingly common. It not only helps us see more broadly, it helps us see subtle patterns more clearly and allows us to explore radical new questions about culture and society. In this article we have consolidated our experiences, as scholars from very different disciplines, in analyzing text as social and cultural data and described how the research process often unfolds. Each of the steps in the process is time-consuming and labor-intensive. Each presents challenges. And especially when working across disciplines, the research often involves a fair amount of discussion—even negotiation—about what means of operationalization and approaches to analysis are appropriate and feasible. And yet, with a bit of perseverance and mutual understanding, conceptually sound and meaningful work results so that we can truly make use of the exciting opportunities rich textual data offers." ], [ "This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. Dong Nguyen is supported with an Alan Turing Institute Fellowship (TU/A/000006). Maria Liakata is a Turing fellow at 40%. We would also like to thank the participants of the “Bridging disciplines in analysing text as social and cultural data” workshop held at the Turing Institute (2017) for insightful discussions. 
The workshop was funded by a Turing Institute seed funding award to Nguyen and Liakata." ] ], "section_name": [ "Introduction", "Research questions", "Data", "Data acquisition", "Compiling data", "Labels and metadata", "Conceptualization", "Operationalization", "Modeling considerations", "Annotation", "Data pre-processing", "Dictionary-based approaches", "Supervised models", "Topic modeling", "Validation", "Analysis", "Conclusion", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "2460a38923f4bee3c2c7f75ea11ea4afb47e16da", "d9903c75b36a53efa231d3c8c0956a1d3cb4c4ef" ], "answer": [ { "evidence": [ "This contrasts with much of the work in computational text analysis, which tends to focus on automating tasks that humans perform inefficiently. These tasks range from core linguistically motivated tasks that constitute the backbone of natural language processing, such as part-of-speech tagging and parsing, to filtering spam and detecting sentiment. Many tasks are motivated by applications, for example to automatically block online trolls. Success, then, is often measured by performance, and communicating why a certain prediction was made—for example, why a document was labeled as positive sentiment, or why a word was classified as a noun—is less important than the accuracy of the prediction itself. The approaches we use and what we mean by `success' are thus guided by our research questions.", "Domain experts and fellow researchers can provide feedback on questions and help with dynamically revising them. For example, they may say “we already think we know that”, “that's too naïve”, “that doesn't reflect social reality” (negative); “two major camps in the field would give different answers to that question” (neutral); “we tried to look at that back in the 1960s, but we didn't have the technology” (positive); and “that sounds like something that people who made that archive would love”, “that's a really fundamental question” (very positive).", "Sometimes we also hope to connect to multiple disciplines. For example, while focusing on the humanistic concerns of an archive, we could also ask social questions such as “is this archive more about collaborative processes, culture-building or norm creation?” or “how well does this archive reflect the society in which it is embedded?\" BIBREF3 used quantitative methods to tell a story about Darwin's intellectual development—an essential biographical question for a key figure in the history of science. At the same time, their methods connected Darwin's development to the changing landscape of Victorian scientific culture, allowing them to contrast Darwin's “foraging” in the scientific literature of his time to the ways in which that literature was itself produced. Finally, their methods provided a case study, and validation of technical approaches, for cognitive scientists who are interested in how people explore and exploit sources of knowledge.", "Questions about potential “dual use” may also arise. Returning to our introductory example, BIBREF0 started with a deceptively simple question: if an internet platform eliminates forums for hate speech, does this impact hate speech in other forums? The research was motivated by the belief that a rising tide of online hate speech was (and is) making the internet increasingly unfriendly for disempowered groups, including minorities, women, and LBGTQ individuals. Yet the possibility of dual use troubled the researchers from the onset. Could the methodology be adopted to target the speech of groups like Black Lives Matter? Could it be adopted by repressive governments to minimize online dissent? While these concerns remained, they concluded that hypothetical dual use scenarios did not outweigh the tangible contribution this research could offer towards making the online environment more equal and just." 
], "extractive_spans": [ "Domain experts and fellow researchers can provide feedback on questions and help with dynamically revising them.", "connect to multiple disciplines", "dual use" ], "free_form_answer": "", "highlighted_evidence": [ "The approaches we use and what we mean by `success' are thus guided by our research questions.\n\nDomain experts and fellow researchers can provide feedback on questions and help with dynamically revising them.", "Sometimes we also hope to connect to multiple disciplines.", "Questions about potential “dual use” may also arise." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this phase we develop measures (or, “operationalizations”, or “indicators”) for the concepts of interest, a process called “operationalization”. Regardless of whether we are working with computers, the output produced coincides with Adcock and Collier's “scores”—the concrete translation and output of the systematized concept into numbers or labels BIBREF13 . Choices made during this phase are always tied to the question “Are we measuring what we intend to measure?” Does our operationalization match our conceptual definition? To ensure validity we must recognize gaps between what is important and what is easy to measure. We first discuss modeling considerations. Next, we describe several frequently used computational approaches and their limitations and strengths.", "Modeling considerations", "The variables (both predictors and outcomes) are rarely simply binary or categorical. For example, a study on language use and age could focus on chronological age (instead of, e.g., social age BIBREF19 ). However, even then, age can be modeled in different ways. Discretization can make the modeling easier and various NLP studies have modeled age as a categorical variable BIBREF20 . But any discretization raises questions: How many categories? Where to place the boundaries? Fine distinctions might not always be meaningful for the analysis we are interested in, but categories that are too broad can threaten validity. Other interesting variables include time, space, and even the social network position of the author. It is often preferable to keep the variable in its most precise form. For example, BIBREF21 perform exploration in the context of hypothesis testing by using latitude and longitude coordinates — the original metadata attached to geotagged social media such as tweets — rather than aggregating into administrative units such as counties or cities. This is necessary when such administrative units are unlikely to be related to the target concept, as is the case in their analysis of dialect differences. Focusing on precise geographical coordinates also makes it possible to recognize fine-grained effects, such as language variation across the geography of a city.", "Using a particular classification scheme means deciding which variations are visible, and which ones are hidden BIBREF22 . We are looking for a categorization scheme for which it is feasible to collect a large enough labeled document collection (e.g., to train supervised models), but which is also fine-grained enough for our purposes. Classification schemes rarely exhibit the ideal properties, i.e., that they are consistent, their categories are mutually exclusive, and that the system is complete BIBREF22 . Borderline cases are challenging, especially with social and cultural concepts, where the boundaries are often not clear-cut. The choice of scheme can also have ethical implications BIBREF22 . 
For example, gender is usually represented as a binary variable in NLP and computational models tend to learn gender-stereotypical patterns. The operationalization of gender in NLP has been challenged only recently BIBREF23 , BIBREF24 , BIBREF25 .", "Supervised and unsupervised learning are the most common approaches to learning from data. With supervised learning, a model learns from labeled data (e.g., social media messages labeled by sentiment) to infer (or predict) these labels from unlabeled texts. In contrast, unsupervised learning uses unlabeled data. Supervised approaches are especially suitable when we have a clear definition of the concept of interest and when labels are available (either annotated or native to the data). Unsupervised approaches, such as topic models, are especially useful for exploration. In this setting, conceptualization and operationalization may occur simultaneously, with theory emerging from the data BIBREF26 . Unsupervised approaches are also used when there is a clear way of measuring a concept, often based on strong assumptions. For example, BIBREF3 measure “surprise” in an analysis of Darwin's reading decisions based on the divergence between two probability distributions.", "From an analysis perspective, the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis. For example, if in a study on media frames in news stories, the theoretical framework and research question point toward frames at the story level (e.g., what is the overall causal analysis of the news article?), the story must be the unit of analysis. Yet it is often difficult to validly and reliably code a single frame at the story level. Multiple perspectives are likely to sit side-by-side in a story. Thus, an article on income inequality might point to multiple causes, such as globalization, education, and tax policies. Coding at the sentence level would detect each of these causal explanations individually, but this information would need to be somehow aggregated to determine the overall story-level frame. Sometimes scholars solve this problem by only examining headlines and lead paragraphs, arguing that based on journalistic convention, the most important information can be found at the beginning of a story. However, this leads to a return to a shorter, less nuanced analysis.", "From a computational perspective, the unit of text can also make a huge difference, especially when we are using bag-of-words models, where word order within a unit does not matter. Small segments, like tweets, sometimes do not have enough information to make their semantic context clear. In contrast, larger segments, like novels, have too much variation, making it difficult to train focused models. Finding a good segmentation sometimes means combining short documents and subdividing long documents. The word “document\" can therefore be misleading. But it is so ingrained in the common NLP lexicon that we use it anyway in this article.", "For insight-driven text analysis, it is often critical that high-level patterns can be communicated. Furthermore, interpretable models make it easier to find spurious features, to do error analysis, and to support interpretation of results. Some approaches are effective for prediction, but harder to interpret. The value we place on interpretability can therefore influence the approach we choose. 
There is an increasing interest in developing interpretable or transparent models in the NLP and machine learning communities." ], "extractive_spans": [], "free_form_answer": "Modeling considerations: the variables (both predictors and outcomes) are rarely simply binary or categorical; using a particular classification scheme means deciding which variations are visible,; Supervised and unsupervised learning are the most common approaches to learning from data; the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis.", "highlighted_evidence": [ "Next, we describe several frequently used computational approaches and their limitations and strengths.\n\nModeling considerations\nThe variables (both predictors and outcomes) are rarely simply binary or categorical. For example, a study on language use and age could focus on chronological age (instead of, e.g., social age BIBREF19 ). However, even then, age can be modeled in different ways. Discretization can make the modeling easier and various NLP studies have modeled age as a categorical variable BIBREF20 . But any discretization raises questions: How many categories? Where to place the boundaries? Fine distinctions might not always be meaningful for the analysis we are interested in, but categories that are too broad can threaten validity. Other interesting variables include time, space, and even the social network position of the author. It is often preferable to keep the variable in its most precise form. For example, BIBREF21 perform exploration in the context of hypothesis testing by using latitude and longitude coordinates — the original metadata attached to geotagged social media such as tweets — rather than aggregating into administrative units such as counties or cities. This is necessary when such administrative units are unlikely to be related to the target concept, as is the case in their analysis of dialect differences. Focusing on precise geographical coordinates also makes it possible to recognize fine-grained effects, such as language variation across the geography of a city.\n\nUsing a particular classification scheme means deciding which variations are visible, and which ones are hidden BIBREF22 . We are looking for a categorization scheme for which it is feasible to collect a large enough labeled document collection (e.g., to train supervised models), but which is also fine-grained enough for our purposes. Classification schemes rarely exhibit the ideal properties, i.e., that they are consistent, their categories are mutually exclusive, and that the system is complete BIBREF22 . Borderline cases are challenging, especially with social and cultural concepts, where the boundaries are often not clear-cut. The choice of scheme can also have ethical implications BIBREF22 . For example, gender is usually represented as a binary variable in NLP and computational models tend to learn gender-stereotypical patterns. The operationalization of gender in NLP has been challenged only recently BIBREF23 , BIBREF24 , BIBREF25 .\n\nSupervised and unsupervised learning are the most common approaches to learning from data. With supervised learning, a model learns from labeled data (e.g., social media messages labeled by sentiment) to infer (or predict) these labels from unlabeled texts. In contrast, unsupervised learning uses unlabeled data. 
Supervised approaches are especially suitable when we have a clear definition of the concept of interest and when labels are available (either annotated or native to the data). Unsupervised approaches, such as topic models, are especially useful for exploration. In this setting, conceptualization and operationalization may occur simultaneously, with theory emerging from the data BIBREF26 . Unsupervised approaches are also used when there is a clear way of measuring a concept, often based on strong assumptions. For example, BIBREF3 measure “surprise” in an analysis of Darwin's reading decisions based on the divergence between two probability distributions.\n\nFrom an analysis perspective, the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis. For example, if in a study on media frames in news stories, the theoretical framework and research question point toward frames at the story level (e.g., what is the overall causal analysis of the news article?), the story must be the unit of analysis. Yet it is often difficult to validly and reliably code a single frame at the story level. Multiple perspectives are likely to sit side-by-side in a story. Thus, an article on income inequality might point to multiple causes, such as globalization, education, and tax policies. Coding at the sentence level would detect each of these causal explanations individually, but this information would need to be somehow aggregated to determine the overall story-level frame. Sometimes scholars solve this problem by only examining headlines and lead paragraphs, arguing that based on journalistic convention, the most important information can be found at the beginning of a story. However, this leads to a return to a shorter, less nuanced analysis.\n\nFrom a computational perspective, the unit of text can also make a huge difference, especially when we are using bag-of-words models, where word order within a unit does not matter. Small segments, like tweets, sometimes do not have enough information to make their semantic context clear. In contrast, larger segments, like novels, have too much variation, making it difficult to train focused models. Finding a good segmentation sometimes means combining short documents and subdividing long documents. The word “document\" can therefore be misleading. But it is so ingrained in the common NLP lexicon that we use it anyway in this article.\n\nFor insight-driven text analysis, it is often critical that high-level patterns can be communicated. Furthermore, interpretable models make it easier to find spurious features, to do error analysis, and to support interpretation of results. Some approaches are effective for prediction, but harder to interpret. The value we place on interpretability can therefore influence the approach we choose. There is an increasing interest in developing interpretable or transparent models in the NLP and machine learning communities." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "bc43a2caeb0db1fb04a58774b7d9f4de8cd811aa" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "835c5bd1074662d8e035d4a971ffc54cfd442494", "8dddf0a9253a2635b607b197b1bd115db43fef46" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "3278d962c1094692098556b206b74a9de36aad4b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3b57b71b33f2cc8b0b430a3c3158c05bd8cac6e0" ], "answer": [ { "evidence": [ "We typically start by identifying the questions we wish to explore. Can text analysis provide a new perspective on a “big question” that has been attracting interest for years? Or can we raise new questions that have only recently emerged, for example about social media? For social scientists working in computational analysis, the questions are often grounded in theory, asking: How can we explain what we observe? These questions are also influenced by the availability and accessibility of data sources. For example, the choice to work with data from a particular social media platform may be partly determined by the fact that it is freely available, and this will in turn shape the kinds of questions that can be asked. A key output of this phase are the concepts to measure, for example: influence; copying and reproduction; the creation of patterns of language use; hate speech. Computational analysis of text motivated by these questions is insight driven: we aim to describe a phenomenon or explain how it came about. For example, what can we learn about how and why hate speech is used or how this changes over time? Is hate speech one thing, or does it comprise multiple forms of expression? Is there a clear boundary between hate speech and other types of speech, and what features make it more or less ambiguous? In these cases, it is critical to communicate high-level patterns in terms that are recognizable.", "Sometimes we also hope to connect to multiple disciplines. For example, while focusing on the humanistic concerns of an archive, we could also ask social questions such as “is this archive more about collaborative processes, culture-building or norm creation?” or “how well does this archive reflect the society in which it is embedded?\" BIBREF3 used quantitative methods to tell a story about Darwin's intellectual development—an essential biographical question for a key figure in the history of science. At the same time, their methods connected Darwin's development to the changing landscape of Victorian scientific culture, allowing them to contrast Darwin's “foraging” in the scientific literature of his time to the ways in which that literature was itself produced. 
Finally, their methods provided a case study, and validation of technical approaches, for cognitive scientists who are interested in how people explore and exploit sources of knowledge." ], "extractive_spans": [ "identifying the questions we wish to explore", "Can text analysis provide a new perspective on a “big question” that has been attracting interest for years?", "How can we explain what we observe?", "hope to connect to multiple disciplines" ], "free_form_answer": "", "highlighted_evidence": [ "We typically start by identifying the questions we wish to explore. Can text analysis provide a new perspective on a “big question” that has been attracting interest for years? Or can we raise new questions that have only recently emerged, for example about social media? For social scientists working in computational analysis, the questions are often grounded in theory, asking: How can we explain what we observe?", "Computational analysis of text motivated by these questions is insight driven: we aim to describe a phenomenon or explain how it came about. For example, what can we learn about how and why hate speech is used or how this changes over time? Is hate speech one thing, or does it comprise multiple forms of expression? Is there a clear boundary between hate speech and other types of speech, and what features make it more or less ambiguous?", "Sometimes we also hope to connect to multiple disciplines. For example, while focusing on the humanistic concerns of an archive, we could also ask social questions such as “is this archive more about collaborative processes, culture-building or norm creation?” or “how well does this archive reflect the society in which it is embedded?\" BIBREF3 used quantitative methods to tell a story about Darwin's intellectual development—an essential biographical question for a key figure in the history of science." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "What approaches do they use towards text analysis?", "What dataset do they use for analysis?", "Do they demonstrate why interdisciplinary insights are important?", "What background do they have?", "What kind of issues (that are not on the forefront of computational text analysis) do they tackle?" ], "question_id": [ "d6ea7a30b0b61ae126b00b59d2a14fff2ef887bf", "f903396d943541a8cc65edefb04ca37814ed30dd", "ba28ce9a2f7e8524243adf288cc3f11055e667bb", "975e60535724f4149c7488699a199ba2920a062c", "b970f48d30775d3468952795bc72976baab3438e" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
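The evidence quoted in the annotations above mentions measuring “surprise” in Darwin's reading decisions as the divergence between two probability distributions. As a rough illustration of that kind of unsupervised measure — not the cited study's actual implementation, and using KL divergence only as one common choice of divergence — the sketch below compares smoothed unigram distributions from two text samples; the word lists and add-one smoothing are invented for the example.

```python
import math
from collections import Counter

def unigram_dist(tokens, vocab, alpha=1.0):
    """Add-one-smoothed unigram distribution over a shared vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl_divergence(p, q):
    """KL(p || q): how surprising text described by q looks under expectations p."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

# Toy reading-history comparison; the token lists are placeholders.
earlier = "coral reefs geology voyage geology".split()
later = "barnacles species variation barnacles".split()
vocab = set(earlier) | set(later)
p, q = unigram_dist(earlier, vocab), unigram_dist(later, vocab)
print(f"surprise (KL) = {kl_divergence(p, q):.3f}")
```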
{ "caption": [], "file": [] }
[ "What approaches do they use towards text analysis?" ]
[ [ "1907.01468-Research questions-4", "1907.01468-Research questions-2", "1907.01468-Modeling considerations-3", "1907.01468-Modeling considerations-2", "1907.01468-Operationalization-0", "1907.01468-Modeling considerations-0", "1907.01468-Research questions-3", "1907.01468-Modeling considerations-1", "1907.01468-Modeling considerations-4", "1907.01468-Research questions-1", "1907.01468-Modeling considerations-5" ] ]
[ "Modeling considerations: the variables (both predictors and outcomes) are rarely simply binary or categorical; using a particular classification scheme means deciding which variations are visible,; Supervised and unsupervised learning are the most common approaches to learning from data; the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis." ]
280
1911.10742
End-to-End Trainable Non-Collaborative Dialog System
End-to-end task-oriented dialog models have achieved promising performance on collaborative tasks where users willingly coordinate with the system to complete a given task. In non-collaborative settings, however, such as negotiation and persuasion, users and systems do not share a common goal. As a result, compared to collaborative tasks, people use social content to build rapport and trust in these non-collaborative settings in order to advance their goals. To handle social content, we introduce a hierarchical intent annotation scheme, which can be generalized to different non-collaborative dialog tasks. Building upon TransferTransfo (Wolf et al. 2019), we propose an end-to-end neural network model to generate diverse coherent responses. Our model utilizes intent and semantic slots as the intermediate sentence representation to guide the generation process. In addition, we design a filter to select appropriate responses based on whether these intermediate representations fit the designed task and conversation constraints. Our non-collaborative dialog model guides users to complete the task while simultaneously keeping them engaged. We test our approach on our newly proposed ANTISCAM dataset and an existing PERSUASIONFORGOOD dataset. Both automatic and human evaluations suggest that our model outperforms multiple baselines in these two non-collaborative tasks.
{ "paragraphs": [ [ "Considerable progress has been made building end-to-end dialog systems for collaborative tasks in which users cooperate with the system to achieve a common goal. Examples of collaborative tasks include making restaurant reservations and retrieving bus time-table information. Since users typically have clear and explicit intentions in collaborative tasks, existing systems commonly classify user utterances into pre-defined intents. In contrast, non-collaborative tasks are those where the users and the system do not strive to achieve the same goal. Examples of such tasks include deceiving attackers, persuading users to donate to a cause BIBREF1, and negotiating a product price BIBREF2, BIBREF3. In these tasks, users often perform complex actions that are beyond a simple set of pre-defined intents. In order to reach a common state, the user and the system need to build rapport and trust, which naturally involves off-task content. Previous work did not model off-task content BIBREF2, which may have led to less optimal results. For example, in the persuasion task BIBREF1, users would ask the system “How do you feel about war?" An example of an on-task system response that the system could have made is “Do you want to make a donation?", which sticks to the task but neglects the user's question. However, a better response to such an off-task question is “War is destructive and pitiless, but you can donate to help child victims of war." This response is better, as it has been found that users are more likely to end the conversation if the system neglects their questions BIBREF4. Therefore, we need to design a system that handles both on-task and off-task information appropriately and in a way that leads back to the system's goal.", "To tackle the issue of incoherent system responses to off-task content, previous studies have built hybrid systems to interleave off-task and on-task content. BIBREF4 used a rule-based dialog manager for on-task content and a neural model for off-task content, and trained a reinforcement learning model to select between these two models based on the dialog context. However, such a method is difficult to train and struggles to generalize beyond the movie promotion task they considered. To tackle these problems, we propose a hierarchical intent annotation scheme that separates on-task and off-task information in order to provide detailed supervision. For on-task information, we directly use task-related intents for representation. Off-task information, on the other hand, is too general to categorize into specific intents, so we choose dialog acts that convey syntax information. These acts, such as “open question", are general to all tasks.", "Previous studies use template-based methods to maintain sentence coherence. However, rigid templates lead to limited diversity, causing users to lose engagement. On the other hand, language generation models can generate diverse responses but struggle to remain coherent. We propose the Multiple Intents and Semantic Slots Annotation Neural Network (MISSA) to combine the advantages of both template and generation models while also taking advantage of the hierarchical annotation. MISSA follows the TransferTransfo framework BIBREF0 with three modifications: (i) We first concurrently predict the user's and the system's intents and semantic slots; (ii) We then perform conditional generation to improve the coherence of generated responses. 
Specifically, we generate responses conditioned on the above intermediate representation (intents and slots); (iii) Finally, we generate multiple responses with the nucleus sampling strategy BIBREF5 and then apply a response filter, which contains a set of pre-defined constraints to select coherent responses. The constraints in the filter can be defined according to specific task requirements or general conversational rules.", "To enrich publicly available non-collaborative task datasets, we collect a new dataset AntiScam, where users defend themselves against attackers trying to collect personal information. As non-collaborative tasks are still relatively new to the study of dialog systems, there are insufficiently many meaningful datasets for evaluation and we hope this provides a valuable example. We evaluate MISSA on the newly collected AntiScam dataset and an existing PersuasionForGood dataset. Both automatic and human evaluations suggest that MISSA outperforms multiple competitive baselines.", "In summary, our contributions include: (i) We design a hierarchical intent annotation scheme and a semantic slot annotation scheme to annotate the non-collaborative dialog dataset, we also propose a carefully-designed AntiScam dataset to facilitate the research of non-collaborative dialog systems. (ii) We propose a model that can be applied to all non-collaborative tasks, outperforming other baselines on two different non-collaborative tasks. (iii) We develop an anti-scam dialog system to occupy attacker's attention and elicit their private information for social good. Furthermore, we also build a persuasion dialog system to persuade people to donate to charities. We release the code and data." ], [ "The interest in non-collaborative tasks has been increasing and there have already been several related datasets. For instance, BIBREF1 wang2019persuasion collected conversations where one participant persuades another to donate to a charity. BIBREF2 he2018decoupling collected negotiation dialogs where buyers and sellers bargain for items for sale on Craigslist. There are many other non-collaborative tasks, such as the turn-taking game BIBREF6, the multi-party game BIBREF7 and item splitting negotiation BIBREF8. Similar to the AntiScam dataset proposed in this paper, these datasets contain off-task content and can be used to train non-collaborative dialog systems. However, since they are not specifically collected and designed for non-collaborative tasks, it might be difficult to disentangle the on-task and off-task contents and measure the performance. Therefore, we propose the AntiScam dataset, which is designed to interleave the on-task and off-task contents in the conversation, and can serve as a benchmark dataset for similar non-collaborative tasks.", "To better understand user utterances and separate on-task and off-task content within a conversation, previous work has designed hierarchical annotation schemes for specific domains. BIBREF9 hardy2002multi followed the DAMSL schemeBIBREF10 and annotated a multilingual human-computer dialog corpus with a hierarchical dialog act annotation scheme. BIBREF11 gupta2018semantic used a hierarchical annotation scheme for semantic parsing. Inspired by these studies, our idea is to annotate the intent and semantic slot separately in non-collaborative tasks. We propose a hierarchical intent annotation scheme that can be adopted by all non-collaborative tasks. 
With this annotation scheme, MISSA is able to quickly build an end-to-end trainable dialog system for any non-collaborative task.", "Traditional task-oriented dialog systems BIBREF12 are usually composed of multiple independent modules, for example, natural language understanding, dialog state tracking BIBREF13, BIBREF14, dialog policy manager BIBREF15, and natural language generation BIBREF16. Conversational intent is adopted to capture the meaning of task content in these dialog systems BIBREF2, BIBREF17. In comparison to this work, we use a hierarchical intent scheme that includes off-task and on-task intents to capture utterance meaning. We also train the model in a multi-task fashion to predict decoupled intents and semantic slots. The major defect of a separately trained pipeline is the laborious dialog state design and annotation. In order to mitigate this problem, recent work has explored replacing independent modules with end-to-end neural networks BIBREF18, BIBREF19, BIBREF20. Our model also follows this end-to-end fashion.", "Over the last few years, we have witnessed a huge growth in non-task-oriented dialog systems BIBREF21, BIBREF22. Social chatbots such as Gunrock BIBREF23 were able to maintain a conversation for around ten minutes in an open domain. Recent improvements build on top of the transformer and pre-trained language models BIBREF24, BIBREF25, BIBREF26, obtained state-of-the-art results on the Persona-Chat dataset BIBREF0. Pre-trained language models are proposed to build task-oriented dialog systems to drive the progress on leveraging large amounts of available unannotated data. BIBREF27. Similarly, our approach is also built on top of the TransferTransfo framework BIBREF0. BIBREF27 budzianowski2019hello focused on collaborative tasks BIBREF28. We target non-collaborative tasks instead.", "Another line of work interleaves on-task and off-task content by building a hybrid dialog system that combines a task-oriented model and a non-task-oriented model BIBREF4, BIBREF29. In these studies, task-oriented systems and non-task-oriented systems are designed separately and both systems generate candidate responses. A selector is then designed to choose an appropriate output from the candidate responses BIBREF4 and a connector to combine two response candidates BIBREF30, BIBREF31. Compared with these works, MISSA is end-to-end trainable and thus easier to train and update." ], [ "To decouple syntactic and semantic information in utterances and provide detailed supervision, we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. The advantage of this hierarchical annotation scheme is apparent when starting a new non-collaborative task: we only need to focus on designing the on-task categories and semantic slots which are the same as traditional task-oriented dialog systems. Consequently, we don't have to worry about the off-task annotation design since the off-task category is universal.", "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. 
We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme. All these intents are related to donation actions, which are salient on-task intents in the persuasion task. The off-task intents are the same for both tasks, including six general intents and six additional social intents. General intents are more closely related to the syntactic meaning of the sentence (open_question, yes_no_question, positive_answer, negative_answer, responsive_statement, and nonresponsive_statement) while social intents are common social actions (greeting, closing, apology, thanking,respond_to_thank, and hold).", "For specific tasks, we also design a semantic slot annotation scheme for annotating sentences based on their semantic content. We identify 13 main semantic slots in the anti-scam task, for example, credit card numbers. We present a detailed semantic slot annotation in Table TABREF3. Following BIBREF1, we segment each conversation turn into single sentences and then annotate each sentence rather than turns." ], [ "We test our approach on two non-collaborative task datasets: the AntiScam dataset and the PersuasionForGood dataset BIBREF1. Both datasets are collected from the Amazon Mechanical Turk platform in the form of typing conversations and off-task dialog is interleaved in the dialog." ], [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], [ "The PersuasionForGood dataset BIBREF1 was collected from typing conversations on Amazon Mechanical Turk platform. Two workers were randomly paired, one was assigned the role of persuader, the other was persuadee. The goal of the persuader was to persuade the persuadee to donate a portion of task earning to a specific charity. The dataset consists of 1,017 dialogs, where 300 dialogs are annotated with dialog acts. The average conversation length is 10.43, the vocabulary size is 8,141. Since the original PersuasionForGood dataset is annotated with dialog acts, we select the on-task dialog acts as on-task intents shown in Table TABREF2, and categorize the other dialog acts into our pre-defined off-task intents." ], [ "The TransferTransfo framework was proposed to build open domain dialog systems. 
BIBREF0 wolf2019transfertransfo fine-tuned the generative pre-training model (GPT) BIBREF32 with the PERSONA-CHAT dataset BIBREF33 in a multi-task fashion, where the language model objective is combined with a next-utterance classification task. The language model's objective is to maximize the following likelihood for a given sequence of tokens, $X = \\lbrace x_1,\\dots ,x_n\\rbrace $:", "The authors also trained a classifier to distinguish the correct next-utterance appended to the input human utterances from a set of randomly selected utterance distractors. In addition, they introduced dialog state embeddings to indicate speaker role in the model. The model significantly outperformed previous baselines over both automatic evaluations and human evaluations in social conversations. Since the TransferTransfo framework performs well in open domain, we adapt it for non-collaborative settings. We keep all the embeddings in the framework and train the language model and next-utterance classification task in a multi-task fashion following TransferTransfo.", "We make two major changes: (1) To address the problem that TransferTransfo is originally designed for an open domain without explicit intents and regulations, we add two intent classifiers and two semantic slot classifiers to classify the intents and semantic slots for both human utterances and system responses as an effort to incorporate the proposed hierarchical intent and semantic slot annotation for non-collaborative tasks. (2) In dialog systems, multiple generated responses can be coherent under the current context. Generating diverse responses has proven to be an enduring challenge. To increase response diversity, we sample multiple generated responses and choose an appropriate one according to a set of pre-defined rules." ], [ "We train MISSA in a multi-task fashion. In addition to the language model task and the next-utterance prediction task, we also use separate classifiers to predict the intents and semantic slots of both human utterances and system responses. The intent classifier and semantic slot classifier for human utterances capture the semantic and syntactic meaning of human utterances, providing information to select the appropriate response among response candidates while the classifiers for the system intents and semantic slots are designed to help select an appropriate next-sentence. We describe response filtering in the corresponding subsection. Classifiers are designed as the following equation:", "where $L^i_{t}$ is the intent or semantic label of $i$-th sentence at turn $t$. $h^l_{t-1}$ is the hidden states at the end of last sentence in turn $t-1$, $h^i_{t}$ is the last hidden states at the end of $i$-th sentence in turn $t$. $W_{2h}$ are weights learned during training.", "MISSA is able to classify multiple intents and multiple semantic slots in a single utterance with these classifiers. Figure FIGREF6 shows how it works on the AntiScam dataset. Specifically, we set a special token $<$sep$>$ at the end of each sentence in an utterance (an utterance can consist of multiple sentences). Next, we pass the token's position information to the transformer architecture and obtain the representation of the position (represented as colored position at last layer in Figure FIGREF6). After that, we concatenate the embeddings at these position with the hidden states of last sentence. 
We pass these concatenated representations to the intent classifier and the slot classifier to obtain an intent and a semantic slot for each sentence in the utterance. As shown in Figure FIGREF6, the loss function ${\\mathcal {L}}$ for the model combines all the task losses: ${\\mathcal {L}} = \\lambda _{LM}{\\mathcal {L}_{LM}} + \\lambda _{I_h}{\\mathcal {L}_{I_h}} + \\lambda _{S_h}{\\mathcal {L}_{S_h}} + \\lambda _{I_s}{\\mathcal {L}_{I_s}} + \\lambda _{S_s}{\\mathcal {L}_{S_s}} + \\lambda _{nup}{\\mathcal {L}_{nup}}$", "where ${\\mathcal {L}_{LM}}$ is the language model loss, ${\\mathcal {L}_{I_h}}$, ${\\mathcal {L}_{S_h}}$, ${\\mathcal {L}_{I_s}}$, and ${\\mathcal {L}_{S_s}}$ are the losses of the intent and slot classifiers, and ${\\mathcal {L}_{nup}}$ is the next-utterance classification loss. $\\lambda _{LM}$, $\\lambda _{I_h}$, $\\lambda _{S_h}$, $\\lambda _{I_s}$, $\\lambda _{S_s}$, and $\\lambda _{nup}$ are the hyper-parameters that control the relative importance of every loss." ], [ "MISSA can generate multiple sentences in a single system turn. Therefore, we perform system generation conditioned on predicted system intents. More specifically, during the training phase, in addition to inserting a special $<$sep$>$ token at the end of each sentence, we also insert the intent of the system response as special tokens at the head of each sentence in the system response. For example, in Figure FIGREF6, we insert a $<$pos_ans$>$ token at the head of $S_t^1$, which is the system response in green. We then use a cross entropy loss function to calculate the loss between the predicted token and the ground truth intent token. During the testing phase, the model first generates a special intent token, then after being conditioned on this intent token, the model keeps generating a sentence until it generates a $<$sep$>$ token. After that, the model continues to generate another intent token and another sentence until it generates an $<$eos$>$ token." ], [ "Since we only perform conditional generation, a type of soft constraint on the predicted intent of the system response, the system can still generate samples that violate simple conversation regulations, such as eliciting information that has already been provided. These corner cases may lead to fatal results in high-risk tasks, for example, health care and education. To improve the robustness of MISSA and improve its ability to generalize to more tasks, we add a response filtering module after the generation. With the nucleus sampling strategy BIBREF5, MISSA is able to generate multiple diverse candidate responses with different intents and semantic slots. We then adopt a task-specific response filtering policy to choose the best candidate response as the final output. In our anti-scam scenario, we set up a few simple rules to filter out unreasonable candidates, for instance, responses that ask again for information that has already been provided. The filtering module is easily adaptable to different domains or specific requirements, which makes our dialog system more controllable." ], [ "We evaluate MISSA on two non-collaborative task datasets. AntiScam aims to build a dialog system that occupies the attacker's attention and elicits the attacker's information while PersuasionForGood BIBREF1 aims to build a dialog system that persuades people to donate to a charity. We use $80\\%$ of the data for training, $10\\%$ for validation, and $10\\%$ for testing. More training details are presented in the Appendix." ], [ "We compare MISSA mainly with two baseline models:", "TransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with the human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from the vanilla TransferTransfo baseline.", "In addition, we perform ablation studies on MISSA to show the effects of different components.", "MISSA-sel denotes MISSA without response filtering.", "MISSA-con denotes MISSA leaving out the intent token at the start of the response generation." ], [ "Perplexity The canonical measure of a good language model is perplexity, which indicates the error rate of the expected word, so we choose perplexity to evaluate model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. Since our baselines are geared towards social chat and cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves an accuracy of $84\\%$ and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. During the test stage, if the predicted intent matches the ground truth, we set the score to 1; otherwise, we set the score to $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. We then report the average value of these scores over turns as the final extended response-intent prediction result (a minimal sketch of this scoring follows the full text below)." ], [ "Automatic metrics only validate the system’s performance on a single dimension at a time. The ultimate holistic evaluation should be conducted by having the trained system interact with human users. Therefore we also conduct human evaluations for the dialog system built on AntiScam. We test our models and baselines with 15 college-student volunteers. Each of them is asked to pretend to be an attacker and interact with all the models at least three times to avoid randomness. We collect 225 dialogs in total. 
Each time, volunteers are required to use similar sentences and strategies to interact with all five models and score each model based on the metrics listed below at the end of the current round. Each model receives a total of 45 human ratings, and the average score is reported as the final human-evaluation score. In total, we design five different metrics to assess the models' conversational ability whilst interacting with humans. The results are shown in Table TABREF19.", "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], [ "Table TABREF19 presents the main experiment results on AntiScam dataset, for both automatic evaluation metrics and human evaluation metrics. The experiment results on PersuasionForGood are shown in Table TABREF23. We observe that MISSA outperforms two baseline models (TransferTransfo and hybrid model) on almost all the metrics on both datasets. For further analysis, examples of real dialogs from the human evaluation are presented in Table TABREF21.", "Compared to the first TransferTransfo baseline, MISSA outperforms the TransferTransfo baseline on the on-task contents. From Table TABREF19, we observe that MISSA maintains longer conversations (14.9 turns) compared with TransferTransfo (8.5 turns), which means MISSA is better at maintaining the attacker's engagement. MISSA also has a higher task success score (1.294) than TransferTransfo (1.025), which indicates that it elicits information more strategically. In the top two dialogs (A and B) that are shown in Table TABREF21, both attackers were eliciting a credit card number in their first turns. TransferTransfo directly gave away the information, while MISSA replied with a semantically-related question “why would you need my credit card number?\" Furthermore, in the next turn, TransferTransfo ignored the context and asked an irrelevant question “what is your name?” while MISSA was able to generate the response “why can't you use my address?”, which is consistent to the context. We suspect the improved performance of MISSA comes from our proposed annotation scheme: the semantic slot information enables MISSA to keep track of the current entities, and the intent information helps MISSA to maintain coherency and prolong conversations.", "Compared to the hybrid model baseline, MISSA performs better on off-task content. As shown in the bottom two dialogs in Table TABREF21, attackers in both dialogs introduced their names in their first utterances. MISSA recognized attacker's name, while the hybrid model did not. We suspect it is because the hybrid model does not have the built-in semantic slot predictor. 
In the second turn, both attackers were explaining the reason of requesting the billing address previously. With semantic slot information, MISSA can easily understand the attacker; but the hybrid model misunderstands that the attacker was talking about the order number, possibly because the token “order” appeared in the attacker's utterance. We suspect that the hybrid model's bad performance on the off-task content leads to its low coherence rating (2.76) and short dialog length (8.2).", "To explore the influence of the intent-based conditional response generation method and the designed response filter, we perform an ablation study. The results are shown in Table TABREF19. We find that MISSA has higher fluency score and coherence score than MISSA-con (4.18 vs 3.78 for fluency, and 3.75 vs 3.68 for coherence), which suggests that conditioning on the system intent to generate responses improves the quality of the generated sentences. Compared with MISSA-sel, MISSA achieves better performance on all the metrics. For example, the engagement score for MISSA is 3.69 while MISSA-sel only has 2.87. This is because the response filter removed all the incoherent responses, which makes the attacker more willing to keep chatting. The ablation study shows both the conditional language generation mechanism and the response filter are essential to MISSA's good performance.", "We also apply our method to the PersuasionForGood dataset. As shown in Table TABREF23, MISSA and its variants outperform the TransferTransfo and the hybrid models on all evaluation metrics. Such good performance indicates MISSA can be easily applied to a different non-collaborative task and achieve good performance. Particularly, MISSA achieves the lowest perplexity, which confirms that using conditional response generation leads to high quality responses. Compared with the result on AntiScam dataset, MISSA-con performs the best in terms of RIP and ERIP. We suspect the underlying reason is that there are more possible responses with the same intent in PersuasionForGood than in AntiScam. This also suggests that we should adjust the model structure according to the nature of the dataset." ], [ "We propose a general dialog system pipeline to build non-collaborative dialog systems, including a hierarchical annotation scheme and an end-to-end neural response generation model called MISSA. With the hierarchical annotation scheme, we can distinguish on-task and off-task intents. MISSA takes both on and off-task intents as supervision in its training and thus can deal with diverse user utterances in non-collaborative settings. Moreover, to validate MISSA's performance, we create a non-collaborate dialog dataset that focuses on deterring phone scammers. MISSA outperforms all baseline methods in terms of fluency, coherency, and user engagement on both the newly proposed anti-scam task and an existing persuasion task. However, MISSA still produces responses that are not consistent with their distant conversation history as GPT can only track a limited history span. In future work, we plan to address this issue by developing methods that can effectively track longer dialog context." ], [ "This work was supported by DARPA ASED Program HR001117S0050. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes not withstanding any copyright notation therein. 
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government." ], [ "We randomly pair two workers: one is assigned the role of the attacker to elicit user information, and the other one is assigned the role of an everyday user who aims to protect her/his information and potentially elicit the attacker's information. We give both workers specific personal data. Instructions are shown in Table TABREF24. The “attacker” additionally receives training on how to elicit information from people. Workers cannot see their partners' instructions.", "There are two tasks for the users: firstly, users are required to chat with their partners and determine if they are attackers or not, reporting their decisions at the end of the task. If users think their partners are attackers, they are instructed to prolong the conversation and elicit information from their partners. We give a bonus to users if they detect the attackers and elicit real information from the attackers, including the attacker's name, address and phone number. Since one worker can only participate once in the task, they do not know their partners are always attackers.", "We provide real user information including the user's name and the task background (user purchased a product on Amazon) . Attackers are well-trained to pretend to be an Amazon customer service agent. To simulate a real-world scam, we tell attackers some details about the user, such as the user's name to stop them from being too easily identified. We give a bonus to attackers if they elicit correct information from users, including the user's address, credit card number, CVS and expiration date. Each worker can only participate once to prevent workers from knowing their partner's information and goals in advance. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable.", "We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value. Table TABREF2 shows that there is a vast amount of off-task content in the dataset, which confirms the necessity of a hierarchical on-task/off-task annotation scheme. We observe that sentences from the attacker and user have different intent distributions. Compared to attackers, users produce more refusal (74 vs 19), because users are more likely to refuse to provide requested information if they have detected the attacker. Moreover, users also ask more open_questions (173 vs 54) and yes_no_questions (165 vs 117) for off-task content because they are instructed to prolong the conversation after detecting the attacker. Furthermore, attackers and users both have a massive amount of social content (292 in total and 252 in total), suggesting that it is important to have social intent sentences to maintain the conversation." ], [ "MISSA is based on the generative pre-trained transformer BIBREF32. We use an Adam optimizer with a learning rate of 6.25e-5 and $L2$ weight decay of $0.01$, we set the coefficient of language modeling loss to be 2, the coefficient of intent and slot classifiers to be 1, and the coefficient of next-utterance classifier to be 1. 
We first pre-train the model on the PERSONA-CHAT dataset. When fine-tuning on the AntiScam and the PersuasionForGood datasets, we use $80\\%$ of the data for training, $10\\%$ for validation, and $10\\%$ for testing. Since the original PersuasionForGood dataset is annotated with intents, we separate the original on-task and off-task intents, which are shown in Table TABREF2. To deal with out-of-vocabulary words, we conduct delexicalization to replace slot values with corresponding slot tokens during the training phase, and replace the slot tokens with pre-defined information during testing." ], [ "An example of a human-human chat from the AntiScam dataset is shown in Table TABREF25." ] ], "section_name": [ "Introduction", "Related Work", "Non-Collaborative Task Annotation Scheme", "Datasets", "Datasets ::: AntiScam Dataset", "Datasets ::: PersuasionForGood Dataset", "Model ::: Background", "Model ::: Intent and Semantic Slot Classifiers", "Model ::: Response Generation", "Model ::: Response Filtering", "Experiments", "Experiments ::: Baseline Models", "Experiments ::: Automatic Evaluation Metrics", "Experiments ::: Human Evaluation Metrics", "Results and Analysis", "Conclusion and Future Work", "Acknowledgements", "Appendix ::: Anti-Scam Collection Setting", "Appendix ::: Training details", "Appendix ::: Example Dialog" ] }
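The training details above (Appendix ::: Training details) specify a weighted multi-task objective: a language modeling loss with coefficient 2, intent and slot classification losses with coefficient 1 each, and a next-utterance classification loss with coefficient 1, optimized with Adam (learning rate 6.25e-5, L2 weight decay 0.01). The following is a minimal PyTorch sketch of how such a weighted objective could be assembled; the tensor shapes, class counts, and head names are illustrative assumptions rather than the authors' released implementation.

import torch
import torch.nn as nn

# Illustrative dimensions only; the paper reports the loss coefficients,
# learning rate, and weight decay, but not these shapes.
batch, seq_len, vocab = 4, 32, 50257
n_intents, n_slots = 15, 13

# Stand-ins for MISSA's four output heads; in practice these would be
# produced by the GPT-based model rather than random tensors.
lm_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
intent_logits = torch.randn(batch, n_intents, requires_grad=True)
slot_logits = torch.randn(batch, n_slots, requires_grad=True)
next_utt_logits = torch.randn(batch, 2, requires_grad=True)

lm_targets = torch.randint(0, vocab, (batch, seq_len))
intent_targets = torch.randint(0, n_intents, (batch,))
slot_targets = torch.randint(0, n_slots, (batch,))
next_utt_targets = torch.randint(0, 2, (batch,))

ce = nn.CrossEntropyLoss()

# Reported coefficients: 2 for language modeling, 1 for the intent and slot
# classifiers, 1 for the next-utterance classifier.
loss = (2.0 * ce(lm_logits.reshape(-1, vocab), lm_targets.reshape(-1))
        + 1.0 * ce(intent_logits, intent_targets)
        + 1.0 * ce(slot_logits, slot_targets)
        + 1.0 * ce(next_utt_logits, next_utt_targets))

# Adam with the reported learning rate and L2 weight decay; in a real setup
# the optimizer would receive model.parameters() instead of these tensors.
params = [lm_logits, intent_logits, slot_logits, next_utt_logits]
optimizer = torch.optim.Adam(params, lr=6.25e-5, weight_decay=0.01)

optimizer.zero_grad()
loss.backward()
optimizer.step()

Keeping the coefficients explicit in this way makes it straightforward to re-weight the objectives when adapting the model to another non-collaborative task such as PersuasionForGood.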
{ "answers": [ { "annotation_id": [ "02b0c6a33e3085050b5466b86495d599faed73a4", "0842e20969f9cc0e015b600b00aba111866213bf", "088800b75f0f40f97436cccebd33a7f55023f32f", "293218eb9afa8e4e5865ce8c7806821274c8ab71", "9834da998d57bbb3c84289c13cd888586a124fce", "cc82e44d4ec4b2ee68647b2eedb2cb8bf846d91d" ], "answer": [ { "evidence": [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "extractive_spans": [ " 3,044 sentences in 100 dialogs" ], "free_form_answer": "", "highlighted_evidence": [ "We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words", "We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "extractive_spans": [ "220 human-human dialogs" ], "free_form_answer": "", "highlighted_evidence": [ "We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. 
Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "extractive_spans": [ "220 human-human dialogs. ", "3,044 sentences in 100 dialogs" ], "free_form_answer": "", "highlighted_evidence": [ "We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "extractive_spans": [ "220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. " ], "free_form_answer": "", "highlighted_evidence": [ "We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "extractive_spans": [ "220 human-human dialogs" ], "free_form_answer": "", "highlighted_evidence": [ "We collected 220 human-human dialogs. 
The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "extractive_spans": [ "3,044 sentences in 100 dialogs" ], "free_form_answer": "", "highlighted_evidence": [ "We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "9cf96ca8b584b5de948019dc75e305c9e7707b92" ] }, { "annotation_id": [ "457dbb5fc71932d1c6ef2a76c81b080c105b6944", "72f1d379224d326ed9c9d8b2b2d01394c9e19a93", "8f7311eb77facc1713ccf726687c1286c19cbe9c", "b97dd07f95b227324770d572e28365c6896da132", "dc55b1cc54b35596559f262ca1469ac07a4b7cd6", "ed3c00af2c7e021671dc0d03eef3405952d3325f" ], "answer": [ { "evidence": [ "To enrich publicly available non-collaborative task datasets, we collect a new dataset AntiScam, where users defend themselves against attackers trying to collect personal information. As non-collaborative tasks are still relatively new to the study of dialog systems, there are insufficiently many meaningful datasets for evaluation and we hope this provides a valuable example. We evaluate MISSA on the newly collected AntiScam dataset and an existing PersuasionForGood dataset. Both automatic and human evaluations suggest that MISSA outperforms multiple competitive baselines.", "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. 
We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value.", "Datasets ::: PersuasionForGood Dataset", "The PersuasionForGood dataset BIBREF1 was collected from typing conversations on Amazon Mechanical Turk platform. Two workers were randomly paired, one was assigned the role of persuader, the other was persuadee. The goal of the persuader was to persuade the persuadee to donate a portion of task earning to a specific charity. The dataset consists of 1,017 dialogs, where 300 dialogs are annotated with dialog acts. The average conversation length is 10.43, the vocabulary size is 8,141. Since the original PersuasionForGood dataset is annotated with dialog acts, we select the on-task dialog acts as on-task intents shown in Table TABREF2, and categorize the other dialog acts into our pre-defined off-task intents." ], "extractive_spans": [], "free_form_answer": "using a role-playing task on the Amazon Mechanical Turk platform and collecting typed conversations", "highlighted_evidence": [ "dataset ", "To enrich available non-collaborative task datasets, we created a corpus of human-human anti-scam dialogs in order to learn human elicitation strategies. We chose a popular Amazon customer service scam scenario to collect dialogs between users and attackers who aim to collect users information. We posted a role-playing task on the Amazon Mechanical Turk platform and collected a typing conversation dataset named AntiScam. We collected 220 human-human dialogs. The average conversation length is 12.45 turns and the average utterance length is 11.13 words. Only 172 out of 220 users successfully identified their partner as an attacker, suggesting that the attackers are well trained and not too easily identifiable. We recruited two expert annotators who have linguistic training to annotate 3,044 sentences in 100 dialogs, achieving a 0.874 averaged weighted kappa value.\n\nDatasets ::: PersuasionForGood Dataset\nThe PersuasionForGood dataset BIBREF1 was collected from typing conversations on Amazon Mechanical Turk platform. Two workers were randomly paired, one was assigned the role of persuader, the other was persuadee. The goal of the persuader was to persuade the persuadee to donate a portion of task earning to a specific charity. The dataset consists of 1,017 dialogs, where 300 dialogs are annotated with dialog acts. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Hierarchical intent annotation scheme on both ANTISCAM dataset and PERSUASIONFORGOOD dataset. The On-task intents are task-specific while the Off-task intents are general for different non-collaborative tasks.", "To decouple syntactic and semantic information in utterances and provide detailed supervision, we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. 
On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. The advantage of this hierarchical annotation scheme is apparent when starting a new non-collaborative task: we only need to focus on designing the on-task categories and semantic slots which are the same as traditional task-oriented dialog systems. Consequently, we don't have to worry about the off-task annotation design since the off-task category is universal.", "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme. All these intents are related to donation actions, which are salient on-task intents in the persuasion task. The off-task intents are the same for both tasks, including six general intents and six additional social intents. General intents are more closely related to the syntactic meaning of the sentence (open_question, yes_no_question, positive_answer, negative_answer, responsive_statement, and nonresponsive_statement) while social intents are common social actions (greeting, closing, apology, thanking,respond_to_thank, and hold)." ], "extractive_spans": [], "free_form_answer": "Separate on-task and off task intents and annotate on task for data set specific intents, while annotating off task intents with a fixed set of general intents.", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Hierarchical intent annotation scheme on both ANTISCAM dataset and PERSUASIONFORGOOD dataset. The On-task intents are task-specific while the Off-task intents are general for different non-collaborative tasks.", "To decouple syntactic and semantic information in utterances and provide detailed supervision, we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. ", "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme. All these intents are related to donation actions, which are salient on-task intents in the persuasion task. The off-task intents are the same for both tasks, including six general intents and six additional social intents. 
General intents are more closely related to the syntactic meaning of the sentence (open_question, yes_no_question, positive_answer, negative_answer, responsive_statement, and nonresponsive_statement) while social intents are common social actions (greeting, closing, apology, thanking,respond_to_thank, and hold)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme. All these intents are related to donation actions, which are salient on-task intents in the persuasion task. The off-task intents are the same for both tasks, including six general intents and six additional social intents. General intents are more closely related to the syntactic meaning of the sentence (open_question, yes_no_question, positive_answer, negative_answer, responsive_statement, and nonresponsive_statement) while social intents are common social actions (greeting, closing, apology, thanking,respond_to_thank, and hold).", "The PersuasionForGood dataset BIBREF1 was collected from typing conversations on Amazon Mechanical Turk platform. Two workers were randomly paired, one was assigned the role of persuader, the other was persuadee. The goal of the persuader was to persuade the persuadee to donate a portion of task earning to a specific charity. The dataset consists of 1,017 dialogs, where 300 dialogs are annotated with dialog acts. The average conversation length is 10.43, the vocabulary size is 8,141. Since the original PersuasionForGood dataset is annotated with dialog acts, we select the on-task dialog acts as on-task intents shown in Table TABREF2, and categorize the other dialog acts into our pre-defined off-task intents." ], "extractive_spans": [], "free_form_answer": "On-task dialog are annotated as on-task intents , the other dialog are annotated as pre-defined off-task intents.", "highlighted_evidence": [ "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. ", "Since the original PersuasionForGood dataset is annotated with dialog acts, we select the on-task dialog acts as on-task intents shown in Table TABREF2, and categorize the other dialog acts into our pre-defined off-task intents." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To decouple syntactic and semantic information in utterances and provide detailed supervision, we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. 
The advantage of this hierarchical annotation scheme is apparent when starting a new non-collaborative task: we only need to focus on designing the on-task categories and semantic slots which are the same as traditional task-oriented dialog systems. Consequently, we don't have to worry about the off-task annotation design since the off-task category is universal." ], "extractive_spans": [ "separate on-task and off-task intents", "on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task", "off-task content is too general to design task-specific intents, we choose common dialog acts as the categories" ], "free_form_answer": "", "highlighted_evidence": [ "We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To decouple syntactic and semantic information in utterances and provide detailed supervision, we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. The advantage of this hierarchical annotation scheme is apparent when starting a new non-collaborative task: we only need to focus on designing the on-task categories and semantic slots which are the same as traditional task-oriented dialog systems. Consequently, we don't have to worry about the off-task annotation design since the off-task category is universal.", "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme. All these intents are related to donation actions, which are salient on-task intents in the persuasion task. The off-task intents are the same for both tasks, including six general intents and six additional social intents. General intents are more closely related to the syntactic meaning of the sentence (open_question, yes_no_question, positive_answer, negative_answer, responsive_statement, and nonresponsive_statement) while social intents are common social actions (greeting, closing, apology, thanking,respond_to_thank, and hold).", "For specific tasks, we also design a semantic slot annotation scheme for annotating sentences based on their semantic content. We identify 13 main semantic slots in the anti-scam task, for example, credit card numbers. We present a detailed semantic slot annotation in Table TABREF3. Following BIBREF1, we segment each conversation turn into single sentences and then annotate each sentence rather than turns." 
], "extractive_spans": [ "we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. ", "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme", "For specific tasks, we also design a semantic slot annotation scheme for annotating sentences based on their semantic content. We identify 13 main semantic slots in the anti-scam task, for example, credit card numbers. We present a detailed semantic slot annotation in Table TABREF3. Following BIBREF1, we segment each conversation turn into single sentences and then annotate each sentence rather than turns." ], "free_form_answer": "", "highlighted_evidence": [ "To decouple syntactic and semantic information in utterances and provide detailed supervision, we design a hierarchical intent annotation scheme for non-collaborative tasks. We first separate on-task and off-task intents. As on-task intents are key actions that can vary among different tasks, we need to specifically define on-task intents for each task. On the other hand, since off-task content is too general to design task-specific intents, we choose common dialog acts as the categories. The advantage of this hierarchical annotation scheme is apparent when starting a new non-collaborative task: we only need to focus on designing the on-task categories and semantic slots which are the same as traditional task-oriented dialog systems. Consequently, we don't have to worry about the off-task annotation design since the off-task category is universal.", "In the intent annotation scheme shown in Table TABREF2, we list the designed intent annotation scheme for the newly collected AntiScam dataset and the PersuasionForGood dataset. We first define on-task intents for the datasets, which are key actions in the task. Since our AntiScam focuses on understanding and reacting towards elicitations, we define elicitation, providing_information and refusal as on-task intents. In the PersuasionForGood dataset, we define nine on-task intents in Table TABREF2 based on the original PersuasionForGood dialog act annotation scheme.", "For specific tasks, we also design a semantic slot annotation scheme for annotating sentences based on their semantic content. We identify 13 main semantic slots in the anti-scam task, for example, credit card numbers. We present a detailed semantic slot annotation in Table TABREF3. Following BIBREF1, we segment each conversation turn into single sentences and then annotate each sentence rather than turns." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To tackle the issue of incoherent system responses to off-task content, previous studies have built hybrid systems to interleave off-task and on-task content. 
BIBREF4 used a rule-based dialog manager for on-task content and a neural model for off-task content, and trained a reinforcement learning model to select between these two models based on the dialog context. However, such a method is difficult to train and struggles to generalize beyond the movie promotion task they considered. To tackle these problems, we propose a hierarchical intent annotation scheme that separates on-task and off-task information in order to provide detailed supervision. For on-task information, we directly use task-related intents for representation. Off-task information, on the other hand, is too general to categorize into specific intents, so we choose dialog acts that convey syntax information. These acts, such as “open question\" are general to all tasks." ], "extractive_spans": [], "free_form_answer": "using a hierarchical scheme where on-task intents uses task-related intents for representation and off-task intents chooses dialog acts that convey the syntax information", "highlighted_evidence": [ "To tackle these problems, we propose a hierarchical intent annotation scheme that separates on-task and off-task information in order to provide detailed supervision. For on-task information, we directly use task-related intents for representation. Off-task information, on the other hand, is too general to categorize into specific intents, so we choose dialog acts that convey syntax information. These acts, such as “open question\" are general to all tasks." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "9cf96ca8b584b5de948019dc75e305c9e7707b92", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "1588240269c138154cc4a8fcd35765ebc15d11fc", "36acdc65d0abe2b1576dded1e32936a18dc603f3", "5897ad34cd50aa4b5f86a46ac916ada7c557f4f1", "92699f6d847e0e32018324b673fa0e156691f812", "b70575d948179d69cc2c987d3959b94cea0ccb7d", "d61d85bf014925e364deac9cf4d31101eb3bd722" ], "answer": [ { "evidence": [ "We compare MISSA mainly with two baseline models:", "TransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline.", "Table TABREF19 presents the main experiment results on AntiScam dataset, for both automatic evaluation metrics and human evaluation metrics. The experiment results on PersuasionForGood are shown in Table TABREF23. We observe that MISSA outperforms two baseline models (TransferTransfo and hybrid model) on almost all the metrics on both datasets. For further analysis, examples of real dialogs from the human evaluation are presented in Table TABREF21." 
], "extractive_spans": [], "free_form_answer": "TransferTransfo and Hybrid ", "highlighted_evidence": [ "We compare MISSA mainly with two baseline models:\n\nTransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.\n\nHybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline.", "We observe that MISSA outperforms two baseline models (TransferTransfo and hybrid model) on almost all the metrics on both datasets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF19 presents the main experiment results on AntiScam dataset, for both automatic evaluation metrics and human evaluation metrics. The experiment results on PersuasionForGood are shown in Table TABREF23. We observe that MISSA outperforms two baseline models (TransferTransfo and hybrid model) on almost all the metrics on both datasets. For further analysis, examples of real dialogs from the human evaluation are presented in Table TABREF21.", "Previous studies use template-based methods to maintain sentence coherence. However, rigid templates lead to limited diversity, causing the user losing engagement. On the other hand, language generation models can generate diverse responses but are bad at being coherent. We propose Multiple Intents and Semantic Slots Annotation Neural Network (MISSA) to combine the advantages of both template and generation models and takes advantage from the hierarchical annotation at the same time. MISSA follows the TransferTransfo framework BIBREF0 with three modifications: (i) We first concurrently predict user's, system's intents and semantic slots; (ii) We then perform conditional generation to improve generated response's coherence. Specifically, we generate responses conditioned on the above intermediate representation (intents and slots); (iii) Finally, we generate multiple responses with the nucleus sampling strategy BIBREF5 and then apply a response filter, which contains a set of pre-defined constraints to select coherent responses. The constraints in the filter can be defined according to specific task requirements or general conversational rules." ], "extractive_spans": [ "TransferTransfo", " hybrid model" ], "free_form_answer": "", "highlighted_evidence": [ "We observe that MISSA outperforms two baseline models (TransferTransfo and hybrid model) on almost all the metrics on both datasets.", "We propose Multiple Intents and Semantic Slots Annotation Neural Network (MISSA) to combine the advantages of both template and generation models and takes advantage from the hierarchical annotation at the same time. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare MISSA mainly with two baseline models:", "TransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. 
We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "extractive_spans": [ "TransferTransfo", "Hybrid" ], "free_form_answer": "", "highlighted_evidence": [ "We compare MISSA mainly with two baseline models:\n\nTransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.\n\nHybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare MISSA mainly with two baseline models:", "TransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "extractive_spans": [ "TransferTransfo", "Hybrid" ], "free_form_answer": "", "highlighted_evidence": [ "We compare MISSA mainly with two baseline models:\n\nTransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.\n\nHybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare MISSA mainly with two baseline models:", "TransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. 
If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "extractive_spans": [ "TransferTransfo The vanilla TransferTransfo framework", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA" ], "free_form_answer": "", "highlighted_evidence": [ "We compare MISSA mainly with two baseline models:\n\nTransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.\n\nHybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare MISSA mainly with two baseline models:", "TransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.", "Hybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "extractive_spans": [ "TransferTransfo", "Hybrid" ], "free_form_answer": "", "highlighted_evidence": [ "We compare MISSA mainly with two baseline models:\n\nTransferTransfo The vanilla TransferTransfo framework is compared with MISSA to show the impact and necessity of adding the intent and slot classifiers. We follow the original TransferTransfo design BIBREF0 and train with undelexicalized data.\n\nHybrid Following BIBREF4 yu2017learning, we also build a hybrid dialog system by combining vanilla TransferTransfo and MISSA. Specifically, we first determine if the human utterances are on-task or off-task with human intent classifier. If the classifier decides that the utterance is on-task, we choose the response from MISSA; otherwise, we choose the response from vanilla TransferTransfo baseline." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "9cf96ca8b584b5de948019dc75e305c9e7707b92", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "1232a3f717f53b574982e54dc112a085a5c93836", "49d130ce2f51e8000dd3e99a3b6ab1434a2f90cb", "5f973de8c8860e8e67c68139cb87e2f152b2e34f", "7e893bceed1c9ef117ea161bcf29dbcea9bc7ccf", "d560a0a5ecd3eeb78162b28852b3f050a14dfca9" ], "answer": [ { "evidence": [ "Experiments ::: Automatic Evaluation Metrics", "Perplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. 
We choose perplexity to evaluate the model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. Since our baselines are more suited for social chat as they cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves a $84\\%$ accuracy and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. During the test stage, if the predicted intent matches the ground truth, we set the score as 1, otherwise we set the score as $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. We then report the average value of those scores over turns as the final extended response-intent prediction result.", "Automatic metrics only validate the system’s performance on a single dimension at a time. The ultimate holistic evaluation should be conducted by having the trained system interact with human users. Therefore we also conduct human evaluations for the dialog system built on AntiScam. We test our models and baselines with 15 college-student volunteers. Each of them is asked to pretend to be an attacker and interact with all the models for at least three times to avoid randomness. We in total collect 225 number of dialogs. Each time, volunteers are required to use similar sentences and strategies to interact with all five models and score each model based on the metrics listed below at the end of the current round. Each model receives a total of 45 human ratings, and the average score is reported as the final human-evaluation score. In total, we design five different metrics to assess the models' conversational ability whilst interacting with humans. The results are shown in Table TABREF19.", "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. 
Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "extractive_spans": [ "Perplexity", "Response-Intent Prediction (RIP)", "Response-Slot Prediction (RSP)", "Extended Response-Intent Prediction (ERIP) ", "Extended Response-Slot Prediction (ERSP) ", "Fluency", "Coherence ", "Engagement", "Dialog length ", "Task Success Score (TaskSuc)" ], "free_form_answer": "", "highlighted_evidence": [ "Experiments ::: Automatic Evaluation Metrics\nPerplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) ", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) ", "Therefore we also conduct human evaluations for the dialog system built on AntiScam. ", "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\n", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\n", "Dialog length (Length) Engagement is a subjective metric. ", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Perplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. Since our baselines are more suited for social chat as they cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves a $84\\%$ accuracy and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. 
However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. During the test stage, if the predicted intent matches the ground truth, we set the score as 1, otherwise we set the score as $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. We then report the average value of those scores over turns as the final extended response-intent prediction result.", "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "extractive_spans": [ "Perplexity ", "Response-Intent Prediction (RIP)", "Response-Slot Prediction (RSP)", "Extended Response-Intent Prediction (ERIP)", "Extended Response-Slot Prediction (ERSP)", "Fluency ", "Coherence ", "Engagement ", "Dialog length (Length) ", "Task Success Score (TaskSuc)" ], "free_form_answer": "", "highlighted_evidence": [ "Perplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. ", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents.", "Fluency Fluency is used to explore different models' language generation quality.\n\n", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. 
So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "extractive_spans": [ "Fluency Fluency is used to explore different models' language generation quality.\n\nCoherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\nEngagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\nDialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.\n\nTask Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "free_form_answer": "", "highlighted_evidence": [ "Fluency Fluency is used to explore different models' language generation quality.\n\nCoherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\nEngagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\nDialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. 
So we count the dialog length as another metric to evaluate system performance.\n\nTask Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score.", "Fluency Fluency is used to explore different models' language generation quality.\n\nCoherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\nEngagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\nDialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.\n\nTask Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 3: Experiments results with both automatic and human evaluation on ANTISCAM dataset.", "Experiments ::: Automatic Evaluation Metrics", "Perplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. Since our baselines are more suited for social chat as they cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves a $84\\%$ accuracy and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. 
During the test stage, if the predicted intent matches the ground truth, we set the score as 1, otherwise we set the score as $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. We then report the average value of those scores over turns as the final extended response-intent prediction result.", "Experiments ::: Human Evaluation Metrics", "Automatic metrics only validate the system’s performance on a single dimension at a time. The ultimate holistic evaluation should be conducted by having the trained system interact with human users. Therefore we also conduct human evaluations for the dialog system built on AntiScam. We test our models and baselines with 15 college-student volunteers. Each of them is asked to pretend to be an attacker and interact with all the models for at least three times to avoid randomness. We in total collect 225 number of dialogs. Each time, volunteers are required to use similar sentences and strategies to interact with all five models and score each model based on the metrics listed below at the end of the current round. Each model receives a total of 45 human ratings, and the average score is reported as the final human-evaluation score. In total, we design five different metrics to assess the models' conversational ability whilst interacting with humans. The results are shown in Table TABREF19.", "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "extractive_spans": [], "free_form_answer": "Automatic evaluation metrics (Perplexity (PPl), Response-Intent Prediction (RIP), Response-Slot Prediction(RSP), Extended Response-Intent Prediction(ERIP), Extended Response-Slot Prediction (ERSP)) and Human Evaluation Metrics (Fluency, Coherence, Engagement, Lenhth, TaskSuc)", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Experiments results with both automatic and human evaluation on ANTISCAM dataset.", " Automatic Evaluation Metrics\nPerplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.\n\nResponse-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. 
Since our baselines are more suited for social chat as they cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves a $84\\%$ accuracy and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).\n\nExtended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. During the test stage, if the predicted intent matches the ground truth, we set the score as 1, otherwise we set the score as $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. We then report the average value of those scores over turns as the final extended response-intent prediction result.\n\nExperiments ::: Human Evaluation Metrics\nAutomatic metrics only validate the system’s performance on a single dimension at a time. The ultimate holistic evaluation should be conducted by having the trained system interact with human users. Therefore we also conduct human evaluations for the dialog system built on AntiScam. We test our models and baselines with 15 college-student volunteers. Each of them is asked to pretend to be an attacker and interact with all the models for at least three times to avoid randomness. We in total collect 225 number of dialogs. Each time, volunteers are required to use similar sentences and strategies to interact with all five models and score each model based on the metrics listed below at the end of the current round. Each model receives a total of 45 human ratings, and the average score is reported as the final human-evaluation score. In total, we design five different metrics to assess the models' conversational ability whilst interacting with humans. The results are shown in Table TABREF19.\n\nFluency Fluency is used to explore different models' language generation quality.\n\nCoherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\nEngagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\nDialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.\n\nTask Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. 
We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score.", " Automatic Evaluation Metrics\nPerplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.\n\nResponse-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. Since our baselines are more suited for social chat as they cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves a $84\\%$ accuracy and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).\n\nExtended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. During the test stage, if the predicted intent matches the ground truth, we set the score as 1, otherwise we set the score as $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. We then report the average value of those scores over turns as the final extended response-intent prediction result.\n\nExperiments ::: Human Evaluation Metrics\nAutomatic metrics only validate the system’s performance on a single dimension at a time. The ultimate holistic evaluation should be conducted by having the trained system interact with human users. Therefore we also conduct human evaluations for the dialog system built on AntiScam. We test our models and baselines with 15 college-student volunteers. Each of them is asked to pretend to be an attacker and interact with all the models for at least three times to avoid randomness. We in total collect 225 number of dialogs. Each time, volunteers are required to use similar sentences and strategies to interact with all five models and score each model based on the metrics listed below at the end of the current round. Each model receives a total of 45 human ratings, and the average score is reported as the final human-evaluation score. In total, we design five different metrics to assess the models' conversational ability whilst interacting with humans. 
The results are shown in Table TABREF19.\n\nFluency Fluency is used to explore different models' language generation quality.\n\nCoherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\nEngagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\nDialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.\n\nTask Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Experiments ::: Automatic Evaluation Metrics", "Perplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.", "Response-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents. For example, in the anti-scam task, if the attacker elicits information from the system, we need to know if the system refuses or agrees to provide the information. Therefore we care about intent prediction for the generated system response. Since our baselines are more suited for social chat as they cannot produce system intents, we use the system intent and slot classifiers trained in our model to predict their responses' intents and slots. The intent predictor achieves a $84\\%$ accuracy and the semantic slot predictor achieves $77\\%$ on the AntiScam dataset. Then we compare the predicted values with human-annotated ground truth in the dataset to compute the response-intent prediction (RIP) and response-slot prediction (RSP).", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog. However, the real mapping between human-intent and system-intent is much more complicated as there might be multiple acceptable system-intents for the same human-intent. Therefore, we also design a metric to evaluate if the predicted system-intent is in the set of acceptable intents. Specifically, we estimate the transition probability $p(I_i|I_j)$ by counting the frequency of all the bi-gram human-intent and system-intent pairs in the training data. During the test stage, if the predicted intent matches the ground truth, we set the score as 1, otherwise we set the score as $p(I_{predict}|I_i)$ where $I_i$ is the intent of the input human utterance. 
We then report the average value of those scores over turns as the final extended response-intent prediction result.", "Fluency Fluency is used to explore different models' language generation quality.", "Coherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.", "Engagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.", "Dialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.", "Task Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "extractive_spans": [], "free_form_answer": "Automatic metrics used: Perplexity, RIP, RSP, ERIP ERSP.\nHuman evaluation metrics used: Fluency, Coherence, Engagement, Dialog length and Task Success Score.", "highlighted_evidence": [ "Automatic Evaluation Metrics\nPerplexity Since the canonical measure of a good language model is perplexity, which indicates the error rate of the expected word. We choose perplexity to evaluate the model performance.\n\nResponse-Intent Prediction (RIP) $\\&$ Response-Slot Prediction (RSP) Different from open-domain dialog systems, we care about the intents of the system response in non-collaborative tasks as we hope to know if the system response satisfies user intents.", "Extended Response-Intent Prediction (ERIP) $\\&$ Extended Response-Slot Prediction (ERSP) With Response-Intent Prediction, we verify the predicted intents to evaluate the coherence of the dialog.", "Fluency Fluency is used to explore different models' language generation quality.\n\nCoherence Different from single sentence's fluency, coherence focuses more on the logical consistency between sentences in each turn.\n\nEngagement In the anti-scam scenario, one of our missions is to keep engaging with the attackers to waste their time. So we directly ask volunteers (attackers) to what extend they would like to continue chatting with the system.\n\nDialog length (Length) Engagement is a subjective metric. Anti-scam system's goal is to engage user in the conversation longer in order to limit their harm to other potential victims. So we count the dialog length as another metric to evaluate system performance.\n\nTask Success Score (TaskSuc) The other goal of the anti-scam system is to elicit attacker's personal information. We count the average type of information (name, address and phone number) that the system obtained from attackers as the task success score." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "9cf96ca8b584b5de948019dc75e305c9e7707b92", "a0b403873302db7cada39008f04d01155ef68f4f", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How big is the ANTISCAM dataset? 
", "How is intent annotated?", "What are the baselines outperformed by this work?", "What are the evaluation metrics and criteria used to evaluate the model performance?" ], "question_id": [ "397a1e851aab41c455c2b284f5e4947500d797f0", "cc8b4ed3985f9bfbe1b5d7761b31d9bd6a965444", "f7662b11e87c1e051e13799413f3db459ac3e19c", "b584739622d0c53830e60430b13fd3ae6ff43669" ], "question_writer": [ "f7c76ad7ff9c8b54e8c397850358fa59258c6672", "f7c76ad7ff9c8b54e8c397850358fa59258c6672", "f7c76ad7ff9c8b54e8c397850358fa59258c6672", "f7c76ad7ff9c8b54e8c397850358fa59258c6672" ], "search_query": [ "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Hierarchical intent annotation scheme on both ANTISCAM dataset and PERSUASIONFORGOOD dataset. The On-task intents are task-specific while the Off-task intents are general for different non-collaborative tasks.", "Table 2: ANTISCAM’s semantic slot annotation scheme.", "Figure 1: The training phase overview of MISSA on ANTISCAM dataset, the input consists of three parts: private information, dialog history, and an appended next utterance. We concatenate the last hidden states at <sep> tokens with the last hidden states at the end of the last utterance to predict intents and semantic slots for corresponding sentences. We can predict multiple intents and semantic slots for each human utterance and system response. During testing, the appended response and distractor are removed.", "Table 3: Experiments results with both automatic and human evaluation on ANTISCAM dataset.", "Table 4: Examples of human-system dialogs, where systems are trained on ANTISCAM dataset. System responses are bolded.", "Table 5: Automatic evaluation results on PERSUASIONFORGOOD dataset.", "Table 6: Instructions for attackers and users on Amazon Mechanical Turk.", "Table 7: An example human-human dialog in ANTISCAM dataset. All the slot values have been replaced with slot tokens." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "5-Figure1-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "8-Table6-1.png", "9-Table7-1.png" ] }
[ "How is intent annotated?", "What are the baselines outperformed by this work?", "What are the evaluation metrics and criteria used to evaluate the model performance?" ]
[ [ "1911.10742-Introduction-1", "1911.10742-Non-Collaborative Task Annotation Scheme-2", "1911.10742-Non-Collaborative Task Annotation Scheme-0", "1911.10742-Datasets ::: PersuasionForGood Dataset-0", "1911.10742-Non-Collaborative Task Annotation Scheme-1", "1911.10742-3-Table1-1.png", "1911.10742-Datasets ::: AntiScam Dataset-0", "1911.10742-Introduction-3" ], [ "1911.10742-Experiments ::: Baseline Models-2", "1911.10742-Results and Analysis-0", "1911.10742-Experiments ::: Baseline Models-1", "1911.10742-Introduction-2", "1911.10742-Experiments ::: Baseline Models-0" ], [ "1911.10742-Experiments ::: Automatic Evaluation Metrics-2", "1911.10742-Experiments ::: Human Evaluation Metrics-5", "1911.10742-Experiments ::: Human Evaluation Metrics-3", "1911.10742-Experiments ::: Human Evaluation Metrics-1", "1911.10742-Experiments ::: Human Evaluation Metrics-4", "1911.10742-Experiments ::: Human Evaluation Metrics-0", "1911.10742-6-Table3-1.png", "1911.10742-Experiments ::: Human Evaluation Metrics-2", "1911.10742-Experiments ::: Automatic Evaluation Metrics-1", "1911.10742-Experiments ::: Automatic Evaluation Metrics-0" ] ]
[ "using a hierarchical scheme where on-task intents uses task-related intents for representation and off-task intents chooses dialog acts that convey the syntax information", "TransferTransfo and Hybrid ", "Automatic metrics used: Perplexity, RIP, RSP, ERIP ERSP.\nHuman evaluation metrics used: Fluency, Coherence, Engagement, Dialog length and Task Success Score." ]
0
1904.09131
OpenTapioca: Lightweight Entity Linking for Wikidata
We propose a simple Named Entity Linking system that can be trained from Wikidata only. This demonstrates the strengths and weaknesses of this data source for this task and provides an easily reproducible baseline to compare other systems against. Our model is lightweight to train, to run and to keep synchronous with Wikidata in real time.
{ "paragraphs": [ [ "Named Entity Linking is the task of detecting mentions of entities from a knowledge base in free text, as illustrated in Figure 1 .", "Most of the entity linking literature focuses on target knowledge bases which are derived from Wikipedia, such as DBpedia BIBREF0 or YAGO BIBREF1 . These bases are curated automatically by harvesting information from the info-boxes and categories on each Wikipedia page and are therefore not editable directly.", "Wikidata BIBREF2 is an editable, multilingual knowledge base which has recently gained popularity as a target database for entity linking BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . As these new approaches to entity linking also introduce novel learning methods, it is hard to tell apart the benefits that come from the new models and those which come from the choice of knowledge graph and the quality of its data.", "We review the main differences between Wikidata and static knowledge bases extracted from Wikipedia, and analyze their implactions for entity linking. We illustrate these differences by building a simple entity linker, OpenTapioca, which only uses data from Wikidata, and show that it is competitive with other systems with access to larger data sources for some tasks. OpenTapioca can be trained easily from a Wikidata dump only, and can be efficiently kept up to date in real time as Wikidata evolves. We also propose tools to adapt existing entity linking datasets to Wikidata, and offer a new entity linking dataset, consisting of affiliation strings extracted from research articles." ], [ "Wikidata is a wiki itself, meaning that it can be edited by anyone, but differs from usual wikis by its data model: information about an entity can only be input as structured data, in a format that is similar to RDF.", "Wikidata stores information about the world in a collection of items, which are structured wiki pages. Items are identified by ther Q-id, such as Q40469, and they are made of several data fields. The label stores the preferred name for the entity. It is supported by a description, a short phrase describing the item to disambiguate it from namesakes, and aliases are alternate names for the entity. These three fields are stored separately for each language supported by Wikidata. Items also hold a collection of statements: these are RDF-style claims which have the item as subject. They can be backed by references and be made more precise with qualifiers, which all rely on a controlled vocabulary of properties (similar to RDF predicates). Finally, items can have site links, connecting them to the corresponding page for the entity in other Wikimedia projects (such as Wikipedia). Note that Wikidata items to not need to be associated with any Wikipedia page: in fact, Wikidata's policy on the notability of the subjects it covers is much more permissive than in Wikipedia. For a more detailed introduction to Wikidata's data model we refer the reader to BIBREF2 , BIBREF7 .", "Our goal is to evaluate the usefulness of this crowdsourced structured data for entity linking. We will therefore refrain from augmenting it with any external data (such as phrases and topical information extracted from Wikipedia pages), as is generally done when working with DBpedia or YAGO. By avoiding a complex mash-up of data coming from disparate sources, our entity linking system is also simpler and easier to reproduce. Finally, it is possible keep OpenTapioca in real-time synchronization with the live version of Wikidata, with a lag of a few seconds only. 
This means that users are able to fix or improve the knowledge graph, for instance by adding a missing alias on an item, and immediately see the benefits on their entity linking task. This constrasts with all other systems we are aware of, where the user either cannot directly intervene on the underlying data, or there is a significant delay in propagating these updates to the entity linking system." ], [ "We review the dominant architecture of entity linking heuristics following BIBREF8 , and assess its applicability to Wikidata.", "Entities in the knowledge base are associated with a set (or probability distribution) of possible surface forms. Given a text to annotate, candidate entities are generated by looking for occurrences of their surface forms in the text. Because of homonymy, many of these candidate occurrences turn out to be false matches, so a classifier is used to predict their correctness. We can group the features they tend to use in the following categories:" ], [ "These features compare the phrase to annotate with the known surface forms for the entity. Collecting such forms is often done by extracting mentions from Wikipedia BIBREF9 . Link labels, redirects, disambiguation pages and bold text in abstracts can all be useful to discover alternate names for an entity. It is also possible to crawl the web for Wikipedia links to improve the coverage, often at the expense of data quality BIBREF10 .", "Beyond collecting a set of possible surface forms, these approaches count the number of times an entity $e$ was mentioned by a phrase $w$ . This makes it possible to use a Bayesian methodology: the compatibility of a candidate entity $e$ with a given mention $w$ is $P(e | w) = \\frac{P(e,w)}{P(w)}$ , which can be estimated from the statistics collected.", "In Wikidata, items have labels and aliases in multiple languages. As this information is directly curated by editors, these phrases tend to be of high quality. However, they do not come with occurence counts. As items link to each other using their Wikidata identifiers only, it is not possible to compare the number of times USA was used to refer United States of America (Q30) or to United States Army (Q9212) inside Wikidata.", "Unlike Wikipedia's page titles which must be unique in a given language, two Wikidata items can have the same label in the same language. For instance Curry is the English label of both the item about the Curry programming language (Q2368856) and the item about the village in Alaska (Q5195194), and the description field is used to disambiguate them.", "Manual curation of surface forms implies a fairly narrow coverage, which can be an issue for general purpose entity linking. For instance, people are commonly refered to with their given or family name only, and these names are not systematically added as aliases: at the time of writing, Trump is an alias for Donald Trump (Q22686), but Cameron is not an alias for David Cameron (Q192). As a Wikidata editor, the main incentive to add aliases to an item is to make it easier to find the item with Wikidata's auto-suggest field, so that it can be edited or linked to more easily. Aliases are not designed to offer a complete set of possible surface forms found in text: for instance, adding common mispellings of a name is discouraged.", "Although Wikidata makes it impossible to count how often a particular label or alias is used to refer to an entity, these surface forms are carefully curated by the community. 
They are therefore fairly reliable.", "Given an entity $e$ and a phrase $d[s]$ , we need to compute $p(e|\nd[s])$ . Having no access to such a probability distribution, we choose to approximate this quantity by $\\frac{p(e)}{p(d[s])}$ , where $p(e)$ is the probability that $e$ is linked to, and $p(d[s])$ is the probability that $d[s]$ occurs in a text. In other words, we estimate the popularity of the entity and the commonness of the phrase separately.", "We estimate the popularity of an entity $e$ by a log-linear combination of its number of statements $n_e$ , site links $s_e$ and its PageRank $r(e)$ . The PageRank is computed on the entire Wikidata using statement values and qualifiers as edges.", "The probability $p(d[s])$ is estimated by a simple unigram language model that can be trained either on any large unannotated dataset.", "The local compatibility is therefore represented by a vector of features $F(e,w)$ and the local compatibility is computed as follows, where $\\lambda $ is a weights vector: $\nF(e,w) &= ( -\\log p(d[s]), \\log p(e) , n_e, s_e, 1 ) \\\\\np(e|d[s]) &\\propto e^{F(e,w) \\cdot \\lambda }\n$ " ], [ "The compatibility of the topic of a candidate entity with the rest of the document is traditionally estimated by similarity measures from information retrieval such as TFIDF BIBREF11 , BIBREF12 or keyword extraction BIBREF13 , BIBREF14 , BIBREF9 .", "Wikidata items only consist of structured data, except in their descriptions. This makes it difficult to compute topical information using the methods above. Vector-based representations of entities can be extracted from the knowledge graph alone BIBREF15 , BIBREF16 , but it is not clear how to compare them to topic representations for plain text, which would be computed differently. In more recent work, neural word embeddings were used to represent topical information for both text and entities BIBREF17 , BIBREF6 , BIBREF18 . This requires access to large amounts of text both to train the word vectors and to derive the entity vectors from them. These vectors have been shown to encode significant semantic information by themselves BIBREF19 , so we refrain from using them in this study." ], [ "Entities mentioned in the same context are often topically related, therefore it is useful not to treat linking decisions in isolation but rather to try to maximize topical coherence in the chosen items. This is the issue on which entity linking systems differ the most as it is harder to model.", "First, we need to estimate the topical coherence of a sequence of linking decisions. This is often done by first defining a pairwise relatedness score between the target entities. For instance, a popular metric introduced by BIBREF20 considers the set of wiki links $|a|, |b|$ made from or to two entities $a$ , $b$ and computes their relatedness: $ \\text{rel}(a,b) = 1 - \\frac{\\log (\\max (|a|,|b|)) - \\log (|a| \\cap |b|)}{\\log (|K|) - \\log (\\min (|a|,|b|))}$ ", "where $|K|$ is the number of entities in the knowledge base.", "When linking to Wikidata instead of Wikipedia, it is tempting to reuse these heuristics, replacing wikilinks by statements. However, Wikidata's linking structure is quite different from Wikipedia: statements are generally a lot sparser than links and they have a precise semantic meaning, as editors are restricted by the available properties when creating new statements. 
We propose in the next section a similarity measure that we find to perform well experimentally.", "Once a notion of semantic similarity is chosen, we need to integrate it in the inference process. Most approaches build a graph of candidate entities, where edges indicate semantic relatedness: the difference between the heuristics lie in the way this graph is used for the matching decisions. BIBREF21 use an approximate algorithm to find the densest subgraph of the semantic graph. This determines choices of entities for each mention. In other approaches, the initial evidence given by the local compatibility score is propagated along the edges of the semantic graph BIBREF14 , BIBREF22 or aggregated at a global level with a Conditional Random Field BIBREF17 ." ], [ "We propose a model that adapts previous approaches to Wikidata. Let $d$ be a document (a piece of text). A spot $s \\in d$ is a pair of start and end positions in $d$ . It defines a phrase $d[s]$ , and a set of candidate entities $E[s]$ : those are all Wikidata items for which $d[s]$ is a label or alias. Given two spots $s, s^{\\prime }$ we denote by $|s - s^{\\prime }|$ the number of characters between them. We build a binary classifier which predicts for each $s \\in d$ and $e \\in E[s]$ if $s \\in d$0 should be linked to $s \\in d$1 ." ], [ "The issue with the features above is that they ignore the context in which a mention in found. To make it context-sensitive, we adapt the approach of BIBREF22 to our setup. The general idea is to define a graph on the candidate entities, linking candidate entities which are semantically related, and then find a combination of candidate entities which have both high local compatibility and which are densely related in the graph.", "For each pair of entities $e, e^{\\prime }$ we define a similarity metric $s(e,e^{\\prime })$ . Let $l(e)$ be the set of items that $e$ links to in its statements. Consider a one-step random walks starting on $e$ , with probability $\\beta $ to stay on $e$ and probability $\\frac{1-\\beta }{|l(e)|}$ to reach one of the linked items. We define $s(e,e^{\\prime })$ as the probability that two such one-step random walks starting from $e$ and $s(e,e^{\\prime })$0 end up on the same item. This can be computed explicitly as $s(e,e^{\\prime })$1 ", "We then build a weighted graph $G_d$ whose vertices are pairs $(s \\in d, e \\in E[s])$ . In other words, we add a vertex for each candidate entity at a given spot. We fix a maximum distance $D$ for edges: vertices $(s,e)$ and $(s^{\\prime },e^{\\prime })$ can only be linked if $|s - s^{\\prime }| \\le D$ and $s \\ne s^{\\prime }$ . In this case, we define the weight of such an edge as $(\\eta + s(e,e^{\\prime }))\\frac{D - |s - s^{\\prime }|}{D}$ , where $\\eta $ is a smoothing parameter. In other words, the edge weight is proportional to the smoothed similarity between the entities, discounted by the distance between the mentions.", "The weighted graph $G_d$ can be represented as an adjacency matrix. We transform it into a column-stochastic matrix $M_d$ by normalizing its columns to sum to one. This defines a Markov chain on the candidate entities, that we will use to propagate the local evidence." ], [ " BIBREF22 first combine the local features into a local evidence score, and then spread this local evidence using the Markov chain: ", "$$ \nG(d) = (\\alpha I + (1 - \\alpha ) M_d)^k \\cdot LC(d)$$ (Eq. 
14) ", " We propose a variant of this approach, where each individual local compatibility feature is propagated independently along the Markov chain. Let $F$ be the matrix of all local features for each candidate entity: $F = (F(e_1,d[s_1]),\n\\dots , F(e_n, d[s_n]))$ . After $k$ iterations in the Markov chain, this defines features $M_d^k F$ . Rather than relying on these features for a fixed number of steps $k$ , we record the features at each step, which defines the vector $(F, M_d \\cdot F, M_d^2 \\cdot F, \\dots , M_d^k \\cdot F)$ ", "This alleviates the need for an $\\alpha $ parameter while keeping the number of features small. We train a linear support vector classifier on these features and this defines the final score of each candidate entity. For each spot, our system picks the highest-scoring candidate entity that the classifier predicts as a match, if any." ], [ "Most entity linking datasets are annotated against DBpedia or YAGO. Wikidata contains items which do not have any corresponding Wikipedia article (in any language), so these items do not have any DBpedia or YAGO URI either. Therefore, converting an entity linking dataset from DBpedia to Wikidata requires more effort than simply following owl:sameAs links: we also need to annotate mentions of Wikidata items which do not have a corresponding DBpedia URI.", "We used the RSS-500 dataset of news excerpts annotated against DBpedia and encoded in NIF format BIBREF23 . We first translated all DBpedia URIs to Wikidata items. Then, we used OpenRefine BIBREF24 to extract the entities marked not covered by DBpedia and matched them against Wikidata. After human review, this added 63 new links to the 524 converted from DBpedia (out of 476 out-of-KB entities).", "We also annotated a new dataset from scratch. The ISTEX dataset consists of one thousand author affiliation strings extracted from research articles and exposed by the ISTEX text and data mining service. In this dataset, only 64 of the 2,624 Wikidata mentions do not have a corresponding DBpedia URI.", "We use the Wikidata JSON dump of 2018-02-24 for our experiments, indexed with Solr (Lucene). We restrict the index to humans, organizations and locations, by selecting only items whose type was a subclass of (P279) human (Q5), organization (Q43229) or geographical object (Q618123). Labels and aliases in all languages are added to a case-sensitive FST index.", "We trained our classifier and its hyper-parameters by five-fold cross-validation on the training sets of the ISTEX and RSS datasets. We used GERBIL BIBREF23 to evaluate OpenTapioca against other approaches. We report the InKB micro and macro F1 scores on test sets, with GERBIL's weak annotation match method." ], [ "The surface forms curated by Wikidata editors are sufficient to reach honourable recall, without the need to expand them with mentions extracted from Wikipedia. Our restriction to people, locations and organizations probably helps in this regard and we anticipate worse performance for broader domains. Our approach works best for scientific affiliations, where spelling is more canonical than in newswire. The availability of Twitter identifiers directly in Wikidata helps us to reach acceptable performance in this domain. The accuracy degrades on longer texts which require relying more on the ambiant topical context. In future work, we would like to explore the use of entity embeddings to improve our approach in this regard." 
] ], "section_name": [ "Introduction", "Particularities of Wikidata", "Related work", "Local compatibility", "Topic similarity", "Mapping coherence", "OpenTapioca: an entity linking model for Wikidata", "Semantic similarity", "Classifying entities in context", "Experimental setup", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "1a89388a4b704ab749e5badb5807cb57296fe99d", "1fb12587ab4245514a6aebe36ff8ea23d7611f4c", "d59b61f92d5265c6a82493a72cf262e1be97af1a", "ee9fe2bfa13b9464f31a51762ca5cebe90704951", "1f8ae0b79a27b2a9d63b91cb3fec09ca996866b9" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Figure 2: F1 scores on test datasets" ], "extractive_spans": [], "free_form_answer": "The model improves the state of the art performance for the ISTEX dataset (F1 micro: 0.870, F1 macro: 0.858) and for the Microposts 2016 dataset (F1 micro: 0.087).", "highlighted_evidence": [ "FLOAT SELECTED: Figure 2: F1 scores on test datasets" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Figure 2: F1 scores on test datasets" ], "extractive_spans": [], "free_form_answer": "The micro and macro f1-scores of this model are 0.482 and 0.399 on the AIDA-CoNLL dataset, 0.087 and 0.515 on the Microposts 2016 dataset, 0.870 and 0.858 on the ISTEX-1000 dataset, 0.335 and 0.310 on the RSS-500 dataset", "highlighted_evidence": [ "FLOAT SELECTED: Figure 2: F1 scores on test datasets" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We review the main differences between Wikidata and static knowledge bases extracted from Wikipedia, and analyze their implactions for entity linking. We illustrate these differences by building a simple entity linker, OpenTapioca, which only uses data from Wikidata, and show that it is competitive with other systems with access to larger data sources for some tasks. OpenTapioca can be trained easily from a Wikidata dump only, and can be efficiently kept up to date in real time as Wikidata evolves. We also propose tools to adapt existing entity linking datasets to Wikidata, and offer a new entity linking dataset, consisting of affiliation strings extracted from research articles." ], "extractive_spans": [], "free_form_answer": "The accuracy ", "highlighted_evidence": [ "We illustrate these differences by building a simple entity linker, OpenTapioca, which only uses data from Wikidata, and show that it is competitive with other systems with access to larger data sources for some tasks. OpenTapioca can be trained easily from a Wikidata dump only, and can be efficiently kept up to date in real time as Wikidata evolves." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "db4f6f1ac73349bcebd4f6bf06de67906f18db9b", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "35491e1e579f6d147f4793edce4c1a80ab2410e7", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] } ], "nlp_background": [ "infinity" ], "paper_read": [ "no" ], "question": [ "What is the accuracy of this model compared to sota?" ], "question_id": [ "2849c2944c47cf1de62b539c5d3c396a3e8d283a" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "Entity linking" ], "topic_background": [ "familiar" ] }
{ "caption": [ "Figure 1: Example of an annotated sentence", "Figure 2: F1 scores on test datasets" ], "file": [ "1-Figure1-1.png", "5-Figure2-1.png" ] }
[ "What is the accuracy of this model compared to sota?" ]
[ [ "1904.09131-Introduction-3", "1904.09131-5-Figure2-1.png" ] ]
[ "The accuracy " ]
1
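Tying the OpenTapioca record above together: candidate vertices (spot, entity) at different spots within D characters of each other are connected with weight (η + s(e, e'))·(D − |s − s'|)/D, the adjacency matrix is column-normalised into the Markov matrix M_d, and the local feature matrix F is expanded into (F, M_d F, …, M_d^k F) before a linear classifier scores each candidate against a threshold. A NumPy sketch; D, η and k are placeholder values, not the paper's settings:

```python
import numpy as np

def markov_matrix(candidates, similarity, D=200, eta=0.2):
    """Column-stochastic M_d over candidate (spot_position, entity) vertices.
    Edges connect candidates at different spots at most D characters apart."""
    n = len(candidates)
    A = np.zeros((n, n))
    for i, (pos_i, e_i) in enumerate(candidates):
        for j, (pos_j, e_j) in enumerate(candidates):
            dist = abs(pos_i - pos_j)
            if pos_i != pos_j and dist <= D:
                A[i, j] = (eta + similarity(e_i, e_j)) * (D - dist) / D
    col = A.sum(axis=0)
    col[col == 0.0] = 1.0  # isolated vertices keep an all-zero column
    return A / col

def propagated_features(M, F, k=3):
    """Stack (F, M·F, M²·F, ..., M^k·F) column-wise for the final classifier."""
    blocks = [F]
    for _ in range(k):
        blocks.append(M @ blocks[-1])
    return np.hstack(blocks)
```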
1611.06322
Spotting Rumors via Novelty Detection
Rumour detection is hard because the most accurate systems operate retrospectively, only recognising rumours once they have collected repeated signals. By then the rumours might have already spread and caused harm. We introduce a new category of features based on novelty, tailored to detect rumours early on. To compensate for the absence of repeated signals, we make use of news wire as an additional data source. Unconfirmed (novel) information with respect to the news articles is considered as an indication of rumours. Additionally we introduce pseudo feedback, which assumes that documents that are similar to previous rumours, are more likely to also be a rumour. Comparison with other real-time approaches shows that novelty based features in conjunction with pseudo feedback perform significantly better, when detecting rumours instantly after their publication.
{ "paragraphs": [ [ "Social Media has evolved from friendship based networks to become a major source for the consumption of news (NIST, 2008). On social media, news is decentralised as it provides everyone the means to efficiently report and spread information. In contrast to traditional news wire, information on social media is spread without intensive investigation, fact and background checking. The combination of ease and fast pace of sharing information provides a fertile breeding ground for rumours, false- and disinformation. Social media users tend to share controversial information in-order to verify it, while asking about for the opinions of their followers (Zhao et. al, 2015). This further amplifies the pace of a rumour's spread and reach. Rumours and deliberate disinformation have already caused panic and influenced public opinion.", "The cases in Germany and Austria in 2016, show how misleading and false information about crimes committed by refugees negatively influenced the opinion of citizens.", "Detecting these rumours allows debunking them to prevent them from further spreading and causing harm. The further a rumour has spread, the more likely it is to be debunked by users or traditional media (Liu et. al, 2015). However, by then rumours might have already caused harm. This highlights the importance and necessity of recognizing rumours as early as possible - preferably instantaneously.", "Rumour detection on social media is challenging due to the short texts, creative lexical variations and high volume of the streams. The task becomes even harder if we attempt to perform rumour detection on-the-fly, without looking into the future. We provide an effective and highly scalable approach to detect rumours instantly after they were posted with zero delay. We introduce a new features category called novelty based features. Novelty based features compensate the absence of repeated information by consulting additional data sources - news wire articles. We hypothesize that information not confirmed by official news is an indication of rumours. Additionally we introduce pseudo feedback for classification. In a nutshell, documents that are similar to previously detected rumours are considered to be more likely to also be a rumour. The proposed features can be computed in constant time and space allowing us to process high-volume streams in real-time (Muthukrishnan, 2005). Our experiments reveal that novelty based features and pseudo feedback significantly increases detection performance for early rumour detection.", "The contributions of this paper include:", "Novelty based Features", "We introduced a new category of features for instant rumour detection that harnesses trusted resources. Unconfirmed (novel) information with respect to trusted resources is considered as an indication of rumours.", "Pseudo Feedback for Detection/Classification", "Pseudo feedback increases detection accuracy by harnessing repeated signals, without the need of retrospective operation." ], [ "Before rumour detection, scientists already studied the related problem of information credibility evaluation (Castillo et. al. 2011; Richardson et. al, 2003). Recently, automated rumour detection on social media evolved into a popular research field which also relies on assessing the credibility of messages and their sources. The most successful methods proposed focus on classification harnessing lexical, user-centric, propagation-based (Wu et. al, 2015) and cluster-based (Cai et. al, 2014; Liu et. 
al, 2015; Zhao et. al, 2015) features.", "Many of these context based features originate from a study by Castillo et. al (2011), which pioneered in engineering features for credibility assessment on Twitter (Liu et. al, 2015). They observed a significant correlation between the trustworthiness of a tweet with context-based characteristics including hashtags, punctuation characters and sentiment polarity. When assessing the credibility of a tweet, they also assessed the source of its information by constructing features based on provided URLs as well as user based features like the activeness of the user and social graph based features like the frequency of re-tweets. A comprehensive study by Castillo et. al (2011) of information credibility assessment widely influenced recent research on rumour detection, whose main focuses lies upon improving detection quality.", "While studying the trustworthiness of tweets during crises, Mendoza et. al (2010) found that the topology of a distrustful tweet's propagation pattern differs from those of news and normal tweets. These findings along with the fact that rumours tend to more likely be questioned by responses than news paved the way for future research examining propagation graphs and clustering methods (Cai et. al, 2014; Zhao et. al, 2015). The majority of current research focuses on improving the accuracy of classifiers through new features based on clustering (Cai et. al, 2014; Zhao et. al, 2015), sentiment analysis (Qazvinian et. al, 2011; Wu et. al, 2015) as well as propagation graphs (Kwon, et. al, 2013; Wang et. al, 2015).", "Recent research mainly focuses on further improving the quality of rumour detection while neglecting the increasing delay between the publication and detection of a rumour. The motivation for rumour detection lies in debunking them to prevent them from spreading and causing harm. Unfortunately, state-of-the-art systems operate in a retrospective manner, meaning they detect rumours long after they have spread. The most accurate systems rely on features based on propagation graphs and clustering techniques. These features can only detect rumours after the rumours have spread and already caused harm.", "Therefore, researchers like Liu et. al (2015), Wu et. al (2015), Zhao et. al (2015) and Zhou et. al (2015) focus on 'early rumour-detection' while allowing a delay up to 24 hours. Their focus on latency aware rumour detection makes their approaches conceptually related to ours. Zhao et. al (1015) found clustering tweets containing enquiry patterns as an indication of rumours. Also clustering tweets by keywords and subsequently judging rumours using an ensemble model that combine user, propagation and content-based features proved to be effective (Zhou et. al, 2015). Although the computation of their features is efficient, the need for repeated mentions in the form of response by other users results in increased latency between publication and detection. The approach with the lowest latency banks on the 'wisdom of the crowd' (Liu et. al, 2015). In addition to traditional context and user based features they also rely on clustering micro-blogs by their topicality to identify conflicting claims, which indicate increased likelihood of rumours. Although they claim to operate in real-time, they require a cluster of at least 5 messages to detect a rumour.", "In contrast, we introduce new features to detect rumours as early as possible - preferably instantly, allowing them to be debunked before they spread and cause harm." 
], [ "Rumour detection is a challenging task, as it requires determining the truth of information (Zhao et. al, 2015). The Cambridge dictionary, defines a rumour as information of doubtful or unconfirmed truth. We rely on classification using an SVM, which is the state-of-the-art approach for novelty detection. Numerous features have been proposed for rumour detection on social media, many of which originate from an original study on information credibility by Castillo et. al (2011). Unfortunately, the currently most successful features rely on information based on graph propagation and clustering, which can only be computed retrospectively. This renders them close to useless when detecting rumours early on. We introduce two new classes of features, one based on novelty, the other on pseudo feedback. Both feature categories improve detection accuracy early on, when information is limited." ], [ "We frame the Real-time Rumour Detection task as a classification problem that assesses a document's likelihood of becoming a future rumour at the time of its publication. Consequently, prediction takes place in real-time with a single pass over the data.", "More formally, we denote by $d_t$ the document that arrives from stream $S:\\lbrace d_0, d_1, . . . d_n\\rbrace $ at time $t$ . Upon arrival of document $d_t$ we compute its corresponding feature vector $f_{d,t}$ . Given $f_{d,t}$ and the previously obtained weigh vector $w$ we compute the rumour score $RS_{d,t} = w^T \\times f_{d,t}$ . The rumour prediction is based on a fixed thresholding strategy with respect to $\\theta $ . We predict that message $d_t$ is likely to become a rumour if its rumour score exceeds the detection threshold $S:\\lbrace d_0, d_1, . . . d_n\\rbrace $0 . The optimal parameter setting for weight vector $S:\\lbrace d_0, d_1, . . . d_n\\rbrace $1 and detection threshold $S:\\lbrace d_0, d_1, . . . d_n\\rbrace $2 are learned on a test to maximise prediction accuracy." ], [ "To increase instantaneous detection performance, we compensate for the absence of future information by consulting additional data sources. In particular, we make use of news wire articles, which are considered to be of high credibility. This is reasonable as according to Petrovic et. al (2013), in the majority of cases, news wires lead social media for reporting news. When a message arrives from a social media stream, we build features based on its novelty with respect to the confirmed information in the trusted sources. In a nutshell, the presence of information unconfirmed by the official media is construed as an indication of being a rumour. Note that this closely resembles the definition of what a rumour is." ], [ "High volume streams demand highly efficient feature computation. This applies in particular to novelty based features since they can be computationally expensive. We explore two approaches to novelty computation: one based on vector proximity, the other on kterm hashing.", "Computing novelty based on traditional vector proximity alone does not yield adequate performance due to the length discrepancy between news wire articles and social media messages. To make vector proximity applicable, we slide a term-level based window, whose length resembles the average social media message length, through each of the news articles. This results in sub-documents whose length resembles those of social media messages. Novelty is computed using term weighted tf-idf dot products between the social media message and all news sub-documents. 
The inverse of the minimum similarity to the nearest neighbour equates to the degree of novelty.", "The second approach to compute novelty relies on kterm hashing (Wurzer et. al, 2015), a recent advance in novelty detection that improved the efficiency by an order of magnitude without sacrificing effectiveness. Kterm hashing computes novelty non-comparatively. Instead of measuring similarity between documents, a single representation of previously seen information is constructed. For each document, all possible kterms are formed and hashed onto a Bloom Filter. Novelty is computed by the fraction of unseen kterms. Kterm hashing has the interesting characteristic of forming a collective 'memory', able to span all trusted resources. We exhaustively form kterm for all news articles and store their corresponding hash positions in a Bloom Filter. This filter then captures the combined information of all trusted resources. A single representation allows computing novelty with a single step, instead of comparing each social media message individually with all trusted resources.", "When kterm hashing was introduced by Wurzer et. al (2015) for novelty detection on English tweets, they weighted all kterm uniformly. We found that treating all kterms as equally important, does not unlock the full potential of kterm hashing. Therefore, we additionally extract the top 10 keywords ranked by $tf.idf$ and build a separate set of kterms solely based on them. This allows us to compute a dedicated weight for kterms based on these top 10 keywords. The distinction in weights between kterms based on all versus keyword yields superior rumour detection quality, as described in section \"Feature analysis\" . This leaves us with a total of 6 novelty based features for kterm hashing - kterms of length 1 to 3 for all words and keywords.", "Apart from novelty based features, we also apply a range of 51 context based features. The full list of features can be found in table 6 . The focus lies on features that can be computed instantly based only on the text of a message to keep the latency of our approach to a minimum. Most of these 51 features overlap with previous studies (Castillo et. al, 2011; Liu et. al, 2015; Qazvinian et. al, 2011; Yang et. al, 2012; Zhao et. al, 2015). This includes features based on the presence or number of URLs, hash-tags and user-names, POS tags, punctuation characters as well as 8 different categories of sentiment and emotions.", "On the arrival of a new message from a stream, all its features are computed and linearly combined using weights obtained from an SVM classifier, yielding the rumour score. We then judge rumours based on an optimal threshold strategy for the rumour score." ], [ "In addition to novelty based features we introduce another category of features - dubbed Pseudo-Feedback (PF) feature - to boost detection performance. The feature is conceptually related to pseudo relevance feedback found in retrieval and ranking tasks in IR. The concept builds upon the idea that documents, which reveal similar characteristics as previously detected rumours are also likely to be a rumour. During detection, feedback about which of the previous documents describes a rumour is not available. Therefore, we rely on 'pseudo' feedback and consider all documents whose rumour score exceeds a threshold as true rumours.", "The PF feature describes the maximum similarity between a new document and those documents previously considered as rumour. 
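The kterm-hashing novelty features described above can be sketched with an in-memory set standing in for the Bloom filter: every kterm (read here as an unordered combination of k distinct terms, k = 1..3) seen in the trusted news text goes into one collective memory, and a message's novelty for each k is the fraction of its kterms absent from that memory. For full-length articles the number of 3-term combinations explodes, so in practice the memory would be filled from short sub-documents and hashed into a Bloom filter; both simplifications here are ours:

```python
from itertools import combinations

def kterms(tokens, k):
    """All unordered combinations of k distinct terms from a short text."""
    return set(combinations(sorted(set(tokens)), k))

def build_memory(trusted_chunks, ks=(1, 2, 3)):
    """Collective memory of every kterm seen in the trusted news text.
    A plain set stands in for the Bloom filter used for constant-space hashing."""
    memory = {k: set() for k in ks}
    for chunk in trusted_chunks:  # short, message-length chunks of news text
        for k in ks:
            memory[k] |= kterms(chunk, k)
    return memory

def novelty_features(tokens, memory):
    """One feature per k: fraction of the message's kterms unseen in the memory."""
    feats = []
    for k in sorted(memory):
        terms = kterms(tokens, k)
        unseen = sum(1 for t in terms if t not in memory[k])
        feats.append(unseen / len(terms) if terms else 0.0)
    return feats
```

A second copy of these features computed only over a message's top-10 tf-idf keywords would give the keyword-based variants mentioned above, for six novelty features in total.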
Similarities are measured by vector proximity in term space. Conceptually, PF passes on evidence to repeated signals by increasing the rumour score of future documents if they are similar to a recently detected rumour. Note that this allows harnessing information from repeated signals without the need of operating retrospectively.", "Training Pseudo Feedback Features", "The training routine differs from the standard procedure because the computation of the PF feature requires two training rounds: we need a model of all other features to identify 'pseudo' rumours. In a first training round an SVM is used to compute weights for all features in the training set, except the PF feature. This provides a model for all but the PF feature. Then the training set is processed to compute rumour scores based on the model obtained from the initial training round. This time, we additionally compute the PF feature value by measuring the minimum distance in term space between the current document vector and those previous documents whose rumour score exceeds a previously defined threshold. Since we operate on a stream, the number of documents previously considered as rumours grows without bound. To keep operation constant in time and space, we only compare against the k most recent documents considered to be rumours. Once we have obtained the value for the PF feature, we compute its weight using the SVM. The combination of the weight for the PF feature with the weights for all other features, obtained in the initial training round, forms the final model." ], [ "The previous sections introduced two new categories of features for rumour detection. Now we test their performance and impact on detection effectiveness and efficiency. In a streaming setting, documents arrive on a continual basis, one at a time. We require our features to compute a rumour score instantaneously for each document in a single pass over the data. Messages with high rumour scores are considered likely to be rumours. The classification decision is based on an optimal thresholding strategy derived from the training set." ], [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second when applied to a high number of messages." ], [ "Rumour detection on social media is a novel research field without official data sets. Since licence agreements forbid redistribution of data, no data sets from previous publications are available. We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset.", "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs, as seen in table 1.
Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'.", "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours.", "non-rumours: We additionally gathered 202 non-rumours using the public Sina Weibo API. Three human annotators judged these weibos based on unanimous decision making to ensure that they do not contain rumours.", "Since we operate in a streaming environment, all weibos are sorted based on their publication time-stamp. Table 3 shows a list of examples of rumours found in our data set.", "We ordered the rumours and non-rumours chronologically and divided them in half, forming a training and a test set. We ensured that each of the sets consists of 50% rumours and 50% non-rumours. This is important when effectiveness is measured by accuracy. All training and optimization use the training set. Performance is then reported based on a single run on the test set." ], [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines, Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier, which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithms are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential.", "Table 2 compares the performance of our features with the two classifiers on the 101 rumours and 101 non-rumours of the test set, when detecting rumours instantly after their publication. The table reveals comparable accuracy for Yang and Liu at around 60%. Our observed performance of Yang matches that reported by Liu et. al (2015). Surprisingly, the algorithm Liu does not perform significantly better than Yang when applied to instantaneous rumour detection, although it is claimed to operate in real-time. Liu et. al (2015) report performance based on the first 5 messages, which clearly outperforms Yang for early rumour detection. However, we find that when reducing the set from 5 to 1, their superiority is only marginal.
In contrast, the combination of novelty and pseudo feedback based features performs significantly better (sign test with $p < 0.05$) than the baselines for instantaneous rumour detection. Novelty based features benefit from news articles as an external data source, which explains their superior performance. In particular for instantaneous rumour detection, where information can only be obtained from a single message, the use of external data proves superior. Note that accuracy is a single value metric describing performance at an optimal threshold. Figure 1 compares the effectiveness of the three algorithms for the full range of rumour scores for instantaneous detection. Different applications require a different balance between miss and false alarm. But the DET curve shows that Liu’s method would be preferable over Yang for any application. Similarly, the plot reveals that our approach dominates both baselines throughout all threshold settings, and in the high-recall region in particular.", "When increasing the detection delay to 12 and 24 hours, all three algorithms reach comparable performance with no statistically significant difference, as seen in table 4. For our approach, none of the features are computed retrospectively, which explains why the performance does not change when increasing the detection delay. The additional time allows Liu and Yang to collect repeated signals, which improves their detection accuracy. After 24 hours Liu performs best due to its retrospectively computed features. Note that after 24 hours rumours might have already spread far through social networks and potentially caused harm." ], [ "We group our 57 features into 7 categories shown in Table 6 and analyse their contribution using feature ablation, as seen in Table 5. Feature ablation illustrates the importance of a feature by measuring performance when removing it from the set of features. Novelty related features based on kterm hashing were found to be dominant for instantaneous rumour detection $(p < 0.05)$. 'Sentence char' features, which include punctuation, hashtags, user-symbols and URLs, contributed the most among the traditional features, followed by Part of Speech ('POS') and 'extreme word' features. Our experiments found 'sentiment' and 'emotion' based features to contribute the least. Since excluding both of them results in a considerable drop in performance, we conclude that they capture comparable information and therefore compensate for each other.", "Novelty based Features", "Novelty based features revealed the highest impact on detection performance. In particular, kterms formed from the top keywords contribute the most. This is interesting, as when kterm hashing was introduced (Wurzer et. al, 2015), all kterms were considered as equally important. We found that prioritising certain kterms yields increased performance.", "Interestingly, novelty based features computed by the vector similarity between weibos and news sub-documents perform slightly worse (-2% absolute). When stripping all but the top tf-idf weighted terms from the news sub-documents, the hit in performance can be reduced to -1% absolute. Kterm hashing constructs a combined memory of all information presented to it. Pulling all information into a single representation bridges the gap between documents and allows finding information matches within documents.
We hypothesize that this causes the increased detection performance.", "Pseudo Feedback", "Feature ablation revealed that pseudo feedback (PF) increased detection performance by 5.3% (relative). PF builds upon the output of the other features. High performance of the other features results in a higher positive impact of PF. In future studies, we want to further explore the behaviour of PF when the other features perform badly." ], [ "Previous approaches to rumour detection rely on repeated signals to form propagation graphs or clustering methods. Besides causing a detection delay, these methods are also blind to less popular rumours that do not go viral. In contrast, novelty based features require only a single message, enabling them to detect even the smallest rumours. Examples of such small rumours are shown in table 3." ], [ "To demonstrate the high efficiency of computing novelty and pseudo feedback features, we implement a rumour detection system and measure its throughput when applied to 100k weibos. We implement our system in C and run it using a single core on a 2.2GHz Intel Core i7-4702HQ. We measure the throughput on an idle machine and average the observed performance over 5 runs. Figure 2 presents performance when processing more and more weibos. The average throughput of our system is around 7,000 weibos per second, which clearly exceeds the average volume of the full Twitter (5,700 tweets/sec.) and Sina Weibo (1,200 weibos/sec.) streams. Since the number of news articles is relatively small, we find no difference in terms of efficiency between computing novelty features based on kterm hashing and vector similarity. Figure 2 also illustrates that our proposed features can be computed in constant time with respect to the number of messages processed. This is crucial to keep operation in a true streaming environment feasible. Approaches whose runtime depends on the number of documents processed become progressively slower, which is inapplicable when operating on data streams. Our experiments show that the proposed features perform effectively and their efficiency allows them to detect rumours instantly after their publication." ], [ "We introduced two new categories of features which significantly improve instantaneous rumour detection performance. Novelty based features consider the increased presence of information unconfirmed by trusted sources within a message as an indication of being a rumour. Pseudo feedback features consider messages that are similar to previously detected rumours as more likely to also be rumours. Pseudo feedback and its variant, recursive pseudo feedback, allow harnessing repeated signals without the need of operating retrospectively. Our evaluation showed that novelty and pseudo feedback based features perform significantly more effectively than other real-time and early detection baselines when detecting rumours instantly after their publication. This advantage vanishes when allowing an increased detection delay. We also showed that the proposed features can be computed efficiently enough to operate on the average Twitter and Sina Weibo stream while keeping time and space requirements constant." ] ], "section_name": [ "Introduction", "Related Work", "Rumour Detection", "Problem Statement", "Novelty-based Features", "Novelty Feature Construction", "Pseudo Feedback", "Experiments", "Evaluation metrics", "Data set", "Rumour detection effectiveness", "Feature analysis", "Detecting unpopular rumours", "Efficiency and Scalability", "Conclusion" ] }
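The problem statement in the full text above reduces to a linear scoring rule applied in a single pass over the stream: extract a feature vector for each incoming message, take its dot product with the learned weight vector, and flag the message when the score exceeds the threshold. A minimal sketch of that decision loop follows; the feature extractor, the weights and the threshold are placeholders, since the paper's actual 57 features and SVM training are not reproduced here.

```python
from typing import Callable, Iterable, Iterator, List, Tuple

def detect_rumours(
    stream: Iterable[str],
    extract_features: Callable[[str], List[float]],  # placeholder for the 57 features
    w: List[float],                                   # weight vector learned by the SVM
    theta: float,                                     # detection threshold tuned on the training set
) -> Iterator[Tuple[str, float, bool]]:
    """Single-pass, real-time scoring: RS = w^T * f, flag the message if RS > theta."""
    for message in stream:
        f = extract_features(message)
        rumour_score = sum(w_i * f_i for w_i, f_i in zip(w, f))
        yield message, rumour_score, rumour_score > theta
```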
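The kterm-hashing features described above build one collective memory over all trusted news articles and score a weibo by the fraction of its kterms that were never seen in that memory. The sketch below uses a plain Python set as a stand-in for the Bloom filter's hash positions and a whitespace tokenizer; the actual filter size, hash family and tokenization of Wurzer et al. (2015) are not specified here and are assumptions.

```python
from itertools import combinations

def kterms(text: str, k: int) -> set:
    """All unordered k-term combinations of the distinct tokens in a text."""
    tokens = sorted(set(text.lower().split()))
    return {" ".join(c) for c in combinations(tokens, k)}

class KtermMemory:
    """Set-based stand-in for the Bloom filter that stores the seen kterms of all trusted sources."""
    def __init__(self, ks=(1, 2, 3)):
        self.ks = ks
        self.seen = set()

    def add_document(self, text: str) -> None:
        for k in self.ks:
            self.seen.update(kterms(text, k))

    def novelty(self, text: str) -> float:
        """Fraction of the message's kterms not confirmed by the trusted sources."""
        all_kterms = set().union(*(kterms(text, k) for k in self.ks))
        if not all_kterms:
            return 0.0
        unseen = sum(1 for t in all_kterms if t not in self.seen)
        return unseen / len(all_kterms)

# memory = KtermMemory(); memory.add_document(news_article)   # repeat for every trusted article
# novelty_score = memory.novelty(weibo_text)
```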
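The pseudo-feedback feature compares each new message against the k most recent documents whose rumour score exceeded the threshold and takes the maximum similarity. The text only says "vector proximity in term space", so the cosine similarity over raw term counts and the choice of k in this sketch are assumptions.

```python
from collections import Counter, deque
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class PseudoFeedback:
    """Keeps the k most recent pseudo-rumours and scores new documents against them."""
    def __init__(self, k: int = 100):
        self.recent_rumours = deque(maxlen=k)  # bounded, so time and space stay constant

    def feature(self, text: str) -> float:
        """Maximum similarity to any recently detected (pseudo-)rumour."""
        vec = Counter(text.lower().split())
        return max((cosine(vec, r) for r in self.recent_rumours), default=0.0)

    def update(self, text: str, rumour_score: float, threshold: float) -> None:
        """Treat high-scoring documents as pseudo-rumours for future comparisons."""
        if rumour_score > threshold:
            self.recent_rumours.append(Counter(text.lower().split()))
```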
{ "answers": [ { "annotation_id": [ "4642688bf811e94d68ac4f3c0a3160e0081d0bef", "a328f0c8436977ec3fb62c8c33baf008a1211b24", "b5dfc09ac689f8a2c53791a95117aaecb894f15c", "ba585581c2bff1e009c144a5b4103ec83484ec12", "304c71a95a777c7eb87725a55abbb9e669bea830" ], "answer": [ { "evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithm are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential." ], "extractive_spans": [ "two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented.", "Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter." ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. 
The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithm are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential." ], "extractive_spans": [ "Liu et. al (2015)", "Yang et. al (2012)" ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithm are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential." ], "extractive_spans": [], "free_form_answer": "They compare against two other methods that apply message-,user-, topic- and propagation-based features and rely on an SVM classifier. One perform early rumor detection and operates with a delay of 24 hrs, while the other requires a cluster of 5 repeated messages to judge them for rumors.", "highlighted_evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. ", " Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. 
al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithm are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential." ], "extractive_spans": [ "Liu et. al (2015) ", "Yang et. al (2012)" ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithm are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential." ], "extractive_spans": [], "free_form_answer": "Liu et al. (2015) and Yang et al. (2012)", "highlighted_evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "f320efb1fbb744616e420aaf8da0f9622b75b2ed", "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "21bb7eb338559ca298aa555cd1aeac7475c605b6", "44bb13e82be45c85e456eee065aef04d4db8440e", "9d16d82470d31150440ef819a74bc60ea126c1b6", "d17478e6f53a4d43f5f23b727d95b0ac1be143a6", "757223378ab648f0b8bc722d61dd4b9116522470" ], "answer": [ { "evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "extractive_spans": [ "accuracy to evaluate effectiveness", "Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability", "throughput per second" ], "free_form_answer": "", "highlighted_evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "extractive_spans": [], "free_form_answer": "The metrics are accuracy, detection error trade-off curves and computing efficiency", "highlighted_evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. 
This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "extractive_spans": [ "accuracy ", "Detection Error Trade-off (DET) curves", "efficiency of computing the proposed features, measured by the throughput per second" ], "free_form_answer": "", "highlighted_evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "extractive_spans": [ "accuracy to evaluate effectiveness", "Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability", "throughput per second" ], "free_form_answer": "", "highlighted_evidence": [ "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015).", "Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. 
This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability.", "We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. We chose the algorithm by Yang et. al (2012), dubbed Yang, because they proposed a feature set for early detection tailored to Sina Weibo and were used as a state-of-the-art baseline before by Liu et. al (2015). The algorithm by Liu et. al (2015), dubbed Liu, is said to operate in real-time and outperformed Yang, when only considering features available on Twitter. Both apply various message-, user-, topic- and propagation-based features and rely on an SVM classifier which they also found to perform best. The approaches advertise themselves as suitable for early or real-time detection and performed rumour detection with the smallest latency across all published methods. Yang performs early rumour detection and operates with a delay of 24 hours. Liu is claimed to perform in real-time while, requiring a cluster of 5 repeated messages to judge them for rumours. Note that although these algorithm are state-of-the-art for detecting rumours as quickly as possible, they still require a certain delay to reach their full potential.", "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). Additionally we use the standard TDT evaluation procedure (Allan et. al, 2000; NIST, 2008) with the official TDT3 evaluation scripts (NIST, 2008) using standard settings. This procedure evaluates detection tasks using Detection Error Trade-off (DET) curves, which show the trade-off between miss and false alarm probability. By visualizing the full range of thresholds, DET plots provide a more comprehensive illustration of effectiveness than single value metrics (Allan et. al, 2000). We also evaluate the efficiency of computing the proposed features, measured by the throughput per second, when applied to a high number of messages." ], "extractive_spans": [], "free_form_answer": "Accuracy compared to two state-of-the-art baselines", "highlighted_evidence": [ "To evaluate our new features for rumour detection, we compare them with two state-of-the-art early rumour detection baselines Liu et. al (2015) and Yang et. al (2012), which we re-implemented. ", "We report accuracy to evaluate effectiveness, as is usual in the literature (Zhou et. al, 2015). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1d87720d0db14aa36d083b7dc3999984c4489389", "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "197290cb509b9a046b311719c6ce1ce408f3be8a", "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "e63cd6d774316ae13c2de0771b22b0772b8e5f87" ], "answer": [ { "evidence": [ "Rumour detection on social media is challenging due to the short texts, creative lexical variations and high volume of the streams. The task becomes even harder if we attempt to perform rumour detection on-the-fly, without looking into the future. We provide an effective and highly scalable approach to detect rumours instantly after they were posted with zero delay. 
We introduce a new features category called novelty based features. Novelty based features compensate the absence of repeated information by consulting additional data sources - news wire articles. We hypothesize that information not confirmed by official news is an indication of rumours. Additionally we introduce pseudo feedback for classification. In a nutshell, documents that are similar to previously detected rumours are considered to be more likely to also be a rumour. The proposed features can be computed in constant time and space allowing us to process high-volume streams in real-time (Muthukrishnan, 2005). Our experiments reveal that novelty based features and pseudo feedback significantly increases detection performance for early rumour detection." ], "extractive_spans": [], "free_form_answer": "No. They additionally use similarity to previously detected rumors to make the decision of whether a document is likely to be a rumor", "highlighted_evidence": [ "In a nutshell, documents that are similar to previously detected rumours are considered to be more likely to also be a rumour." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "04941eae5c74f8aaa4d6d9cbc0f8d75be8bd1fef", "2420b51cf1d056ad0a8fea91a25098f1503c3cc1", "6932f5798a91cb208bba211d0395fdad9da93a23", "d88818f5e8173a3f82e5ac17d6db100fcc19da9a", "15133c2bd9f411178e15d713d75efd220a149b58" ], "answer": [ { "evidence": [ "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period.", "Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Rumour detection on social media is a novel research field without official data sets. Since licences agreements forbid redistribution of data, no data sets from previous publications are available. We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset.", "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. 
To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'.", "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours.", "non-rumours: We additionally gathered 202 non-rumours using the public Sina Weibo API. Three human annotators judged these weibos based on unanimous decision making to ensure that they don't contain rumours.", "Since we operate in a streaming environment, all weibos are sorted based on their publication time-stamp. Table 3 shows a list of example for rumours found in our data set.", "We ordered the rumours and non-rumours chronologically and divided them in half, forming a training and test set. We ensured that each of the sets consists of 50% rumours and non-rumours. This is important when effectiveness is measured by accuracy. All training and optimization use the trainings set. Performance is then reported based on a single run on the test set." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Rumour detection on social media is a novel research field without official data sets. Since licences agreements forbid redistribution of data, no data sets from previous publications are available. We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset.\n\ntrusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.\n\nFor our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'.\n\nrumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours.\n\nnon-rumours: We additionally gathered 202 non-rumours using the public Sina Weibo API. Three human annotators judged these weibos based on unanimous decision making to ensure that they don't contain rumours.\n\nSince we operate in a streaming environment, all weibos are sorted based on their publication time-stamp. Table 3 shows a list of example for rumours found in our data set.\n\nWe ordered the rumours and non-rumours chronologically and divided them in half, forming a training and test set. We ensured that each of the sets consists of 50% rumours and non-rumours. 
" ], "unanswerable": false, "yes_no": true }, { "evidence": [ "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Rumour detection on social media is a novel research field without official data sets. Since licences agreements forbid redistribution of data, no data sets from previous publications are available. We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset.", "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours.", "non-rumours: We additionally gathered 202 non-rumours using the public Sina Weibo API. Three human annotators judged these weibos based on unanimous decision making to ensure that they don't contain rumours." ], "extractive_spans": [], "free_form_answer": "Yes, consisting of trusted resources, rumours and non-rumours", "highlighted_evidence": [ " We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset.", "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "rumours: Sina Weibo offers an official rumour debunking service, operated by trained human professionals. Following Yang et. al (2012) and Zhou et. al (2015), we use this service to obtain a high quality set of 202 confirmed rumours.", "non-rumours: We additionally gathered 202 non-rumours using the public Sina Weibo API. Three human annotators judged these weibos based on unanimous decision making to ensure that they don't contain rumours." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Rumour detection on social media is a novel research field without official data sets. 
Since licences agreements forbid redistribution of data, no data sets from previous publications are available. We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We therefore followed previous researchers like Liu et. al (2015) and Yang et. al (2012) and created our own dataset." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "7d22297974196b3b943301e63fae2d5007dbcd45", "ada64754e56c2af7b2a1e2044f5dbeb666c0bf4d", "c45ac46c96fa8ae8be384eb6894aa1ce7c93558f", "d54aaf2310a86043f0270d51a75083a051a61824", "736e2db04375618cb68a19f77faec8288fbe810b" ], "answer": [ { "evidence": [ "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'." ], "extractive_spans": [], "free_form_answer": "Chinese", "highlighted_evidence": [ "Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. ", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'." ], "extractive_spans": [], "free_form_answer": "Mandarin Chinese", "highlighted_evidence": [ "Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. 
These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'." ], "extractive_spans": [ "Chinese" ], "free_form_answer": "", "highlighted_evidence": [ "Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China.", "For our social media stream, we chose Sina Weibo, a Chinese social media service with more than 200 million active users. Micro-blogs from Sina Weibo are denoted as 'weibos'." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Since we operate in a streaming environment, all weibos are sorted based on their publication time-stamp. Table 3 shows a list of example for rumours found in our data set." ], "extractive_spans": [], "free_form_answer": "Mandarin Chinese (see table 3)", "highlighted_evidence": [ "Table 3 shows a list of example for rumours found in our data set." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "trusted resources: We randomly collected 200 news articles about broad topics commonly reported by news wires over our target time period. These range from news about celebrities and disasters to financial and political affairs as seen in table 1 . Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China. To ensure a fair evaluation, we collected the news articles before judging rumours, not knowing which rumours we would find later on. We also only consider news articles published before the timestamps of the social media messages." ], "extractive_spans": [ "Chinese" ], "free_form_answer": "", "highlighted_evidence": [ "Since we operate on Chinese social media, we gathered news articles from Xinhua News Agency, the leading news-wire in China." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "27236a289357136f6c2bbfe8f5837f97f28acad5", "5f9c56e7a629d74778086c4c5860cb7e08eda0f5", "c5ca29f2cf1302b95d999ccc8bca381cd7b92536", "f11b40c43eb44d0f46fa7df45eb6057328b2747f" ], "answer": [ { "evidence": [ "To increase instantaneous detection performance, we compensate for the absence of future information by consulting additional data sources. In particular, we make use of news wire articles, which are considered to be of high credibility. This is reasonable as according to Petrovic et. al (2013), in the majority of cases, news wires lead social media for reporting news. When a message arrives from a social media stream, we build features based on its novelty with respect to the confirmed information in the trusted sources. In a nutshell, the presence of information unconfirmed by the official media is construed as an indication of being a rumour. 
Note that this closely resembles the definition of what a rumour is." ], "extractive_spans": [ "the presence of information unconfirmed by the official media is construed as an indication of being a rumour. " ], "free_form_answer": "", "highlighted_evidence": [ "When a message arrives from a social media stream, we build features based on its novelty with respect to the confirmed information in the trusted sources. In a nutshell, the presence of information unconfirmed by the official media is construed as an indication of being a rumour." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Rumour detection is a challenging task, as it requires determining the truth of information (Zhao et. al, 2015). The Cambridge dictionary, defines a rumour as information of doubtful or unconfirmed truth. We rely on classification using an SVM, which is the state-of-the-art approach for novelty detection. Numerous features have been proposed for rumour detection on social media, many of which originate from an original study on information credibility by Castillo et. al (2011). Unfortunately, the currently most successful features rely on information based on graph propagation and clustering, which can only be computed retrospectively. This renders them close to useless when detecting rumours early on. We introduce two new classes of features, one based on novelty, the other on pseudo feedback. Both feature categories improve detection accuracy early on, when information is limited." ], "extractive_spans": [ "information of doubtful or unconfirmed truth" ], "free_form_answer": "", "highlighted_evidence": [ "The Cambridge dictionary, defines a rumour as information of doubtful or unconfirmed truth." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Social Media has evolved from friendship based networks to become a major source for the consumption of news (NIST, 2008). On social media, news is decentralised as it provides everyone the means to efficiently report and spread information. In contrast to traditional news wire, information on social media is spread without intensive investigation, fact and background checking. The combination of ease and fast pace of sharing information provides a fertile breeding ground for rumours, false- and disinformation. Social media users tend to share controversial information in-order to verify it, while asking about for the opinions of their followers (Zhao et. al, 2015). This further amplifies the pace of a rumour's spread and reach. Rumours and deliberate disinformation have already caused panic and influenced public opinion." ], "extractive_spans": [], "free_form_answer": "information that is not fact- and background-checked and thoroughly investigated for authenticity", "highlighted_evidence": [ "On social media, news is decentralised as it provides everyone the means to efficiently report and spread information. In contrast to traditional news wire, information on social media is spread without intensive investigation, fact and background checking. The combination of ease and fast pace of sharing information provides a fertile breeding ground for rumours, false- and disinformation. Social media users tend to share controversial information in-order to verify it, while asking about for the opinions of their followers (Zhao et. al, 2015). This further amplifies the pace of a rumour's spread and reach. Rumours and deliberate disinformation have already caused panic and influenced public opinion." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Rumour detection is a challenging task, as it requires determining the truth of information (Zhao et. al, 2015). The Cambridge dictionary, defines a rumour as information of doubtful or unconfirmed truth. We rely on classification using an SVM, which is the state-of-the-art approach for novelty detection. Numerous features have been proposed for rumour detection on social media, many of which originate from an original study on information credibility by Castillo et. al (2011). Unfortunately, the currently most successful features rely on information based on graph propagation and clustering, which can only be computed retrospectively. This renders them close to useless when detecting rumours early on. We introduce two new classes of features, one based on novelty, the other on pseudo feedback. Both feature categories improve detection accuracy early on, when information is limited." ], "extractive_spans": [], "free_form_answer": "Information of doubtful or unconfirmed truth", "highlighted_evidence": [ "The Cambridge dictionary, defines a rumour as information of doubtful or unconfirmed truth." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c7d4a630661cd719ea504dba56393f78278b296b" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "What previous methods do they compare against?", "What is their evaluation metric?", "Are their methods fully supervised?", "Do they build a dataset of rumors?", "What languages do they evaluate their methods on?", "How do they define rumors?" ], "question_id": [ "1a6156189297b2fe17f174ef55cbd20341bb7dbf", "3319d56556ae1597a86384057db0831e32774b90", "8cbe3fa4ec0f66071e3d6b829b09b6395b631c44", "85e417231a4bbb6691f7a89bd81710525f8fec4c", "57ee20f494d8ce3fae46028c3f3551d180dba3e0", "2974237446d04da33b78ce6d22a477cdf80877b7" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Excerpt of topics with synopsis of corresponding rumours", "Figure 1: DET plot, revealing superior effectiveness of our approach for instant rumour detection for the full range of thresholds", "Table 4: Detection accuracy at different levels of delay; Asterisk indicates significance (p < 0.05)", "Table 2: Effectiveness in comparison with two state-of-the-art baselines for instant rumour detection using optimal thresholds", "Figure 2: Throughput of our approach per second in comparison to the average Twitter (Firehose) and Sina Weibo stream", "Table 6: Description of features" ], "file": [ "5-Table1-1.png", "6-Figure1-1.png", "6-Table4-1.png", "6-Table2-1.png", "8-Figure2-1.png", "8-Table6-1.png" ] }
[ "What previous methods do they compare against?", "What is their evaluation metric?", "Are their methods fully supervised?", "Do they build a dataset of rumors?", "What languages do they evaluate their methods on?", "How do they define rumors?" ]
[ [ "1611.06322-Rumour detection effectiveness-0" ], [ "1611.06322-Rumour detection effectiveness-0", "1611.06322-Evaluation metrics-0" ], [ "1611.06322-Introduction-3" ], [ "1611.06322-Data set-6", "1611.06322-Data set-1", "1611.06322-Data set-4", "1611.06322-Data set-0", "1611.06322-Data set-2", "1611.06322-Data set-3", "1611.06322-Data set-5" ], [ "1611.06322-Data set-2", "1611.06322-Data set-1", "1611.06322-Data set-5" ], [ "1611.06322-Novelty-based Features-0", "1611.06322-Introduction-0", "1611.06322-Rumour Detection-0" ] ]
[ "Liu et al. (2015) and Yang et al. (2012)", "Accuracy compared to two state-of-the-art baselines", "No. They additionally use similarity to previously detected rumors to make the decision of whether a document is likely to be a rumor", "Yes, consisting of trusted resources, rumours and non-rumours", "Mandarin Chinese (see table 3)", "Information of doubtful or unconfirmed truth" ]
2
1911.04474
TENER: Adapting Transformer Encoder for Named Entity Recognition
The Bidirectional long short-term memory networks (BiLSTM) have been widely used as an encoder in models solving the named entity recognition (NER) task. Recently, the Transformer is broadly adopted in various Natural Language Processing (NLP) tasks owing to its parallelism and advantageous performance. Nevertheless, the performance of the Transformer in NER is not as good as it is in other NLP tasks. In this paper, we propose TENER, a NER architecture adopting adapted Transformer Encoder to model the character-level features and word-level features. By incorporating the direction and relative distance aware attention and the un-scaled attention, we prove the Transformer-like encoder is just as effective for NER as other NLP tasks.
{ "paragraphs": [ [ "The named entity recognition (NER) is the task of finding the start and end of an entity in a sentence and assigning a class for this entity. NER has been widely studied in the field of natural language processing (NLP) because of its potential assistance in question generation BIBREF0, relation extraction BIBREF1, and coreference resolution BIBREF2. Since BIBREF3, various neural models have been introduced to avoid hand-crafted features BIBREF4, BIBREF5, BIBREF6.", "NER is usually viewed as a sequence labeling task, the neural models usually contain three components: word embedding layer, context encoder layer, and decoder layer BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. The difference between various NER models mainly lies in the variance in these components.", "Recurrent Neural Networks (RNNs) are widely employed in NLP tasks due to its sequential characteristic, which is aligned well with language. Specifically, bidirectional long short-term memory networks (BiLSTM) BIBREF11 is one of the most widely used RNN structures. BIBREF4 was the first one to apply the BiLSTM and Conditional Random Fields (CRF) BIBREF12 to sequence labeling tasks. Owing to BiLSTM's high power to learn the contextual representation of words, it has been adopted by the majority of NER models as the encoder BIBREF5, BIBREF6, BIBREF9, BIBREF10.", "Recently, Transformer BIBREF13 began to prevail in various NLP tasks, like machine translation BIBREF13, language modeling BIBREF14, and pretraining models BIBREF15. The Transformer encoder adopts a fully-connected self-attention structure to model the long-range context, which is the weakness of RNNs. Moreover, Transformer has better parallelism ability than RNNs. However, in the NER task, Transformer encoder has been reported to perform poorly BIBREF16, our experiments also confirm this result. Therefore, it is intriguing to explore the reason why Transformer does not work well in NER task.", "In this paper, we analyze the properties of Transformer and propose two specific improvements for NER.", "The first is that the sinusoidal position embedding used in the vanilla Transformer is aware of distance but unaware of the directionality. In addition, this property will lose when used in the vanilla Transformer. However, both the direction and distance information are important in the NER task. For example in Fig FIGREF3, words after “in\" are more likely to be a location or time than words before it, and words before “Inc.\" are mostly likely to be of the entity type “ORG\". Besides, an entity is a continuous span of words. Therefore, the awareness of distance might help the word better recognizes its neighbor. To endow the Transformer with the ability of direction- and distance-awareness, we adopt the relative positional encoding BIBREF17, BIBREF18, BIBREF19. instead of the absolute position encoding. We propose a revised relative positional encoding that uses fewer parameters and performs better.", "The second is an empirical finding. The attention distribution of the vanilla Transformer is scaled and smooth. But for NER, a sparse attention is suitable since not all words are necessary to be attended. Given a current word, a few contextual words are enough to judge its label. The smooth attention could include some noisy information. 
Therefore, we abandon the scale factor of the dot-product attention and use an un-scaled and sharp attention.", "With the above improvements, we can greatly boost the performance of the Transformer encoder for NER.", "Besides using the Transformer to model the word-level context, we also apply it as a character encoder to model word representations with character-level information. Previous work has shown that a character encoder is necessary to capture character-level features and alleviate the out-of-vocabulary (OOV) problem BIBREF6, BIBREF5, BIBREF7, BIBREF20. In NER, CNN is commonly used as the character encoder. However, we argue that CNN is also not ideal for representing character-level information, because the receptive field of CNN is limited, and the kernel size of the CNN character encoder is usually 3, which means it cannot correctly recognize 2-gram or 4-gram patterns. Although different kernels can be deliberately designed, CNN still cannot capture patterns with discontinuous characters, such as “un..ily” in “unhappily\" and “unnecessarily\". In contrast, a Transformer-based character encoder not only makes full use of the parallelism of GPUs, but also has the potential to recognize different n-grams and even discontinuous patterns. Therefore, in this paper, we also try to use the Transformer as the character encoder, and we compare four kinds of character encoders.", "In summary, to improve the performance of the Transformer-based model in the NER task, we explicitly utilize the directional relative positional encoding, reduce the number of parameters and sharpen the attention distribution. After the adaptation, the performance improves substantially, making our model perform even better than BiLSTM-based models. Furthermore, on the six NER datasets, we achieve state-of-the-art performance among models that do not use pre-trained language models or designed features." ], [ "BIBREF3 utilized the Multi-Layer Perceptron (MLP) and CNN to avoid using task-specific features to tackle different sequence labeling tasks, such as chunking, part-of-speech (POS) tagging and NER. In BIBREF4, BiLSTM-CRF was introduced to solve sequence labeling tasks. Since then, the BiLSTM has been extensively used in the field of NER BIBREF7, BIBREF21, BIBREF22, BIBREF5.", "Despite BiLSTM's great success in the NER task, it has to compute token representations one by one, which massively hinders full exploitation of GPU parallelism. Therefore, CNNs have been proposed by BIBREF23, BIBREF24 to encode words concurrently. In order to enlarge the receptive field of CNNs, BIBREF23 used iterated dilated CNNs (ID-CNN).", "Since word shape information, such as capitalization and n-grams, is important in recognizing named entities, CNN and BiLSTM have been used to extract character-level information BIBREF7, BIBREF6, BIBREF5, BIBREF23, BIBREF8.", "Almost all neural-based NER models use pre-trained word embeddings, like Word2vec and Glove BIBREF25, BIBREF26. When contextual word embeddings are incorporated, the performance of NER models improves substantially BIBREF27, BIBREF28, BIBREF29. ELMo, introduced by BIBREF28, used a CNN character encoder and BiLSTM language models to obtain contextualized word representations. Apart from the BiLSTM-based pre-trained models, BERT is based on the Transformer BIBREF15." ], [ "The Transformer was introduced by BIBREF13 and is mainly based on self-attention. It achieved great success in various NLP tasks.
Since the self-attention mechanism used in the Transformer is unaware of positions, position embeddings were used to remedy this shortcoming BIBREF13, BIBREF15. Instead of using the sinusoidal position embedding BIBREF13 or a learned absolute position embedding, BIBREF17 argued that the distance between two tokens should be considered when calculating their attention score. BIBREF18 reduced the computational complexity of relative positional encoding from $O(l^2d)$ to $O(ld)$, where $l$ is the length of sequences and $d$ is the hidden size. BIBREF19 derived a new form of relative positional encodings, so that the relative relation could be better considered." ], [ "We first introduce the Transformer encoder proposed in BIBREF13. The Transformer encoder takes in a matrix $H \\in \\mathbb {R}^{l \\times d}$, where $l$ is the sequence length and $d$ is the input dimension. Then three learnable matrices $W_q$, $W_k$, $W_v$ are used to project $H$ into different spaces. Usually, the three matrices are all of size $\\mathbb {R}^{d \\times d_k}$, where $d_k$ is a hyper-parameter. After that, the scaled dot-product attention can be calculated by the following equations,", "$Q, K, V = HW_q, HW_k, HW_v, \\quad A_{t,j} = \\frac{Q_t^TK_j}{\\sqrt{d_k}}, \\quad \\mathrm {Attn}(Q, K, V) = \\mathrm {softmax}(A)V,$", "where $Q_t$ is the query vector of the $t$th token, $j$ is the index of a token the $t$th token attends to, and $K_j$ is the key vector representation of the $j$th token. The softmax is along the last dimension. Instead of using one group of $W_q$, $W_k$, $W_v$, using several groups will enhance the ability of self-attention. When several groups are used, it is called multi-head self-attention, and the calculation can be formulated as follows,", "$head^{(h)} = \\mathrm {Attn}(HW_q^{(h)}, HW_k^{(h)}, HW_v^{(h)}), \\quad \\mathrm {MultiHead}(H) = [head^{(1)}; ...; head^{(n)}]W_o,$", "where $n$ is the number of heads and the superscript $h$ represents the head index. $[head^{(1)}; ...; head^{(n)}]$ means concatenation in the last dimension. Usually $d_k \\times n = d$, which means the output of $[head^{(1)}; ...; head^{(n)}]$ will be of size $\\mathbb {R}^{l \\times d}$. $W_o$ is a learnable parameter, which is of size $\\mathbb {R}^{d \\times d}$.", "The output of the multi-head attention will be further processed by the position-wise feed-forward networks, which can be represented as follows,", "$\\mathrm {FFN}(x) = \\max (0, xW_1 + b_1)W_2 + b_2,$", "where $W_1$, $W_2$, $b_1$, $b_2$ are learnable parameters, and $W_1 \\in \\mathbb {R}^{d \\times d_{ff}}$, $W_2 \\in \\mathbb {R}^{d_{ff} \\times d}$, $b_1 \\in \\mathbb {R}^{d_{ff}}$, $b_2 \\in \\mathbb {R}^{d}$. $d_{ff}$ is a hyper-parameter. Other components of the Transformer encoder include layer normalization and residual connections; we use them in the same way as BIBREF13." ], [ "The self-attention is not aware of the positions of different tokens, making it unable to capture the sequential nature of language. In order to solve this problem, BIBREF13 suggested using position embeddings generated by sinusoids of varying frequency. The $t$th token's position embedding can be represented by the following equations", "$PE_{t, 2i} = \\sin (t/10000^{2i/d}), \\quad PE_{t, 2i+1} = \\cos (t/10000^{2i/d}),$", "where $i$ is in the range of $[0, \\frac{d}{2}]$ and $d$ is the input dimension. This sinusoid-based position embedding enables the Transformer to model the position of a token and the distance between any two tokens. For any fixed offset $k$, $PE_{t+k}$ can be represented by a linear transformation of $PE_{t}$ BIBREF13." ], [ "In this paper, we utilize the Transformer encoder to model the long-range and complicated interactions within a sentence for NER. The structure of the proposed model is shown in Fig FIGREF12. We detail each part in the following sections."
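To make the sinusoidal position embedding defined above concrete, here is a minimal NumPy sketch (our own illustration, not from the paper's released code; the function and variable names are ours). It builds the embeddings and checks that the dot product between two embeddings depends only on their offset and is identical for offsets of +k and -k, which is exactly the distance-awareness and direction-unawareness analyzed in the next section.

```python
import numpy as np

def sinusoid_position_embedding(max_len, d):
    # PE[t, 2i] = sin(t / 10000^(2i/d)), PE[t, 2i+1] = cos(t / 10000^(2i/d))
    t = np.arange(max_len)[:, None]                   # positions 0..max_len-1
    c = 1.0 / 10000 ** (2 * np.arange(d // 2) / d)    # frequencies c_i
    pe = np.zeros((max_len, d))
    pe[:, 0::2] = np.sin(t * c)
    pe[:, 1::2] = np.cos(t * c)
    return pe

pe = sinusoid_position_embedding(128, 64)
t, k = 40, 7
# distance-aware: the dot product depends only on the offset k, not on t
print(np.allclose(pe[t] @ pe[t + k], pe[10] @ pe[10 + k]))  # True
# direction-unaware: offsets +k and -k give exactly the same score
print(np.allclose(pe[t] @ pe[t + k], pe[t] @ pe[t - k]))    # True
```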
], [ "To alleviate the problems of data sparsity and out-of-vocabulary (OOV), most NER models adopted the CNN character encoder BIBREF5, BIBREF30, BIBREF8 to represent words. Compared to BiLSTM based character encoder BIBREF6, BIBREF31, CNN is more efficient. Since Transformer can also fully exploit the GPU's parallelism, it is interesting to use Transformer as the character encoder. A potential benefit of Transformer-based character encoder is to extract different n-grams and even uncontinuous character patterns, like “un..ily” in “unhappily” and “uneasily”. For the model's uniformity, we use the “adapted Transformer” to represent the Transformer introduced in next subsection.", "The final word embedding is the concatenation of the character features extracted by the character encoder and the pre-trained word embeddings." ], [ "Although Transformer encoder has potential advantage in modeling long-range context, it is not working well for NER task. In this paper, we propose an adapted Transformer for NER task with two improvements." ], [ "Inspired by the success of BiLSTM in NER tasks, we consider what properties the Transformer lacks compared to BiLSTM-based models. One observation is that BiLSTM can discriminatively collect the context information of a token from its left and right sides. But it is not easy for the Transformer to distinguish which side the context information comes from.", "Although the dot product between two sinusoidal position embeddings is able to reflect their distance, it lacks directionality and this property will be broken by the vanilla Transformer attention. To illustrate this, we first prove two properties of the sinusoidal position embeddings.", "Property 1 For an offset $k$ and a position $t$, $PE_{t+k}^TPE_{t}$ only depends on $k$, which means the dot product of two sinusoidal position embeddings can reflect the distance between two tokens.", "Based on the definitions of Eq.(DISPLAY_FORM11) and Eq.(), the position embedding of $t$-th token is PEt = [ c (c0t)", "(c0t)", "$\\vdots $", "(cd2-1t)", "(cd2-1t)", "], where $d$ is the dimension of the position embedding, $c_i$ is a constant decided by $i$, and its value is $1/10000^{2i/d}$.", "Therefore,", "", "where Eq.(DISPLAY_FORM17) to Eq.() is based on the equation $\\cos (x-y) = \\sin (x)\\sin (y) + \\cos (x)\\cos (y)$.", "Property 2 For an offset $k$ and a position $t$, $PE_{t}^TPE_{t-k}=PE_{t}^TPE_{t+k}$, which means the sinusoidal position embeddings is unware of directionality.", "Let $j=t-k$, according to property 1, we have", "", "", "The relation between $d$, $k$ and $PE_t^TPE_{t+k}$ is displayed in Fig FIGREF18. The sinusoidal position embeddings are distance-aware but lacks directionality.", "However, the property of distance-awareness also disappears when $PE_t$ is projected into the query and key space of self-attention. Since in vanilla Transformer the calculation between $PE_t$ and $PE_{t+k}$ is actually $PE_t^TW_q^TW_kPE_{t+k}$, where $W_q, W_k$ are parameters in Eq.(DISPLAY_FORM7). Mathematically, it can be viewed as $PE_t^TWPE_{t+k}$ with only one parameter $W$. 
The relation between $PE_t^TPE_{t+k}$ and $PE_t^TWPE_{t+k}$ is depicted in Fig FIGREF19.", "Therefore, to improve the Transformer with direction- and distance-aware characteristics, we calculate the attention scores using the equations below:", "$Q, V = HW_q, HW_v, \\quad K = H_{d_k}, \\quad R_{t-j} = [\\cdots \\sin (c_i(t-j)), \\cos (c_i(t-j)) \\cdots ]^T, \\quad A^{rel}_{t,j} = Q_t^TK_j + Q_t^TR_{t-j} + \\mathbf {u}^TK_j + \\mathbf {v}^TR_{t-j}, \\quad \\mathrm {Attn}(Q, K, V) = \\mathrm {softmax}(A^{rel})V,$", "where $t$ is the index of the target token, $j$ is the index of the context token, $Q_t, K_j$ are the query vector and key vector of tokens $t, j$ respectively, and $W_q, W_v \\in \\mathbb {R}^{d \\times d_k}$. To get $H_{d_k}\\in \\mathbb {R}^{l \\times d_k}$, we first split $H$ into $d/d_k$ partitions in the second dimension, then for each head we use one partition. $\\mathbf {u} \\in \\mathbb {R}^{d_k}$, $\\mathbf {v} \\in \\mathbb {R}^{d_k}$ are learnable parameters, $R_{t-j}$ is the relative positional encoding, and $R_{t-j} \\in \\mathbb {R}^{d_k}$, $i$ in Eq.() is in the range $[0, \\frac{d_k}{2}]$. $Q_t^TK_j$ in Eq.() is the attention score between two tokens; $Q_t^TR_{t-j}$ is the $t$th token's bias on certain relative distances; $u^TK_j$ is the bias on the $j$th token; $v^TR_{t-j}$ is the bias term for certain distance and direction.", "Based on Eq.(), we have", "$R_{j-t} = [\\cdots \\sin (-c_i(t-j)), \\cos (-c_i(t-j)) \\cdots ]^T = [\\cdots -\\sin (c_i(t-j)), \\cos (c_i(t-j)) \\cdots ]^T,$", "because $\\sin (-x)=-\\sin (x), \\cos (x)=\\cos (-x)$. This means for an offset $t$, the forward and backward relative positional encodings are the same with respect to the $\\cos (c_it)$ terms, but are opposite with respect to the $\\sin (c_it)$ terms. Therefore, by using $R_{t-j}$, the attention score can distinguish different directions and distances.", "The above improvement is based on the works BIBREF17, BIBREF19. Since the size of NER datasets is usually small, we avoid direct multiplication of two learnable parameters, because they can be represented by one learnable parameter. Therefore we do not use $W_k$ in Eq.(DISPLAY_FORM22). The multi-head version is the same as Eq.(DISPLAY_FORM8), but we discard $W_o$ since it is directly multiplied by $W_1$ in Eq.(DISPLAY_FORM9).", "The vanilla Transformer uses the scaled dot-product attention to smooth the output of the softmax function. In Eq.(), the dot product of the query and key matrices is divided by the scaling factor $\\sqrt{d_k}$.", "We empirically found that models perform better without the scaling factor $\\sqrt{d_k}$. We presume this is because without the scaling factor the attention will be sharper, and the sharper attention might be beneficial in the NER task since only a few words in the sentence are named entities.", "In order to take advantage of the dependencies between different tags, the Conditional Random Field (CRF) was used in all of our models. Given a sequence $\\mathbf {s}=[s_1, s_2, ..., s_T]$, the corresponding gold label sequence is $\\mathbf {y}=[y_1, y_2, ..., y_T]$, and $\\mathbf {Y}(\\mathbf {s})$ represents all valid label sequences. The probability of $\\mathbf {y}$ is calculated by the following equation", "$P(\\mathbf {y}|\\mathbf {s}) = \\frac{\\exp (\\sum _{t}f(\\mathbf {y}_{t-1}, \\mathbf {y}_t, \\mathbf {s}))}{\\sum _{\\mathbf {y}^{\\prime } \\in \\mathbf {Y}(\\mathbf {s})} \\exp (\\sum _{t}f(\\mathbf {y}^{\\prime }_{t-1}, \\mathbf {y}^{\\prime }_t, \\mathbf {s}))},$", "where $f(\\mathbf {y}_{t-1},\\mathbf {y}_t,\\mathbf {s})$ computes the transition score from $\\mathbf {y}_{t-1}$ to $\\mathbf {y}_t$ and the score for $\\mathbf {y}_t$. The optimization target is to maximize $P(\\mathbf {y}|\\mathbf {s})$. When decoding, the Viterbi algorithm is used to find the path that achieves the maximum probability.
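As a concrete illustration of the adapted attention just described, here is a simplified single-head NumPy sketch (our own code, not the released TENER implementation; variable names and the toy dimensions are ours, and for simplicity the head dimension equals the input dimension). It drops W_k, adds the relative-position term R_{t-j} together with the biases u and v, and applies the softmax without the sqrt(d_k) scaling.

```python
import numpy as np

def relative_encoding(offset, d_k):
    # R_{t-j}: sinusoid of the signed offset, so the sin terms flip with direction
    c = 1.0 / 10000 ** (2 * np.arange(d_k // 2) / d_k)
    r = np.empty(d_k)
    r[0::2] = np.sin(offset * c)
    r[1::2] = np.cos(offset * c)
    return r

def adapted_attention(H, W_q, W_v, u, v):
    """Single-head sketch: no W_k (keys are the raw inputs), a relative-position
    term, two learnable biases, and no 1/sqrt(d_k) scaling before the softmax."""
    l, d_k = H.shape
    Q, K, V = H @ W_q, H, H @ W_v
    A = np.empty((l, l))
    for t in range(l):
        for j in range(l):
            R = relative_encoding(t - j, d_k)
            A[t, j] = Q[t] @ K[j] + Q[t] @ R + u @ K[j] + v @ R
    A = np.exp(A - A.max(axis=-1, keepdims=True))  # un-scaled softmax
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
l, d_k = 5, 16
out = adapted_attention(rng.normal(size=(l, d_k)),
                        rng.normal(size=(d_k, d_k)) * 0.1,
                        rng.normal(size=(d_k, d_k)) * 0.1,
                        rng.normal(size=d_k) * 0.1,
                        rng.normal(size=d_k) * 0.1)
print(out.shape)  # (5, 16)
```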
], [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33.", "Their statistics are listed in Table TABREF28. For all datasets, we replace all digits with “0”, and use the BIOES tag schema. For English, we use the Glove 100d pre-trained embedding BIBREF25. For the character encoder, we use 30d randomly initialized character embeddings. More details on models' hyper-parameters can be found in the supplementary material. For Chinese, we used the character embedding and bigram embedding released by BIBREF33. All pre-trained embeddings are finetuned during training. In order to reduce the impact of randomness, we ran all of our experiments at least three times, and its average F1 score and standard deviation are reported.", "We used random-search to find the optimal hyper-parameters, hyper-parameters and their ranges are displayed in the supplemental material. We use SGD and 0.9 momentum to optimize the model. We run 100 epochs and each batch has 16 samples. During the optimization, we use the triangle learning rate BIBREF39 where the learning rate rises to the pre-set learning rate at the first 1% steps and decreases to 0 in the left 99% steps. The model achieves the highest development performance was used to evaluate the test set. The hyper-parameter search range and other settings can be found in the supplementary material. Codes are available at https://github.com/fastnlp/TENER." ], [ "We first present our results in the four Chinese NER datasets. Since Chinese NER is directly based on the characters, it is more straightforward to show the abilities of different models without considering the influence of word representation.", "As shown in Table TABREF29, the vanilla Transformer does not perform well and is worse than the BiLSTM and CNN based models. However, when relative positional encoding combined, the performance was enhanced greatly, resulting in better results than the BiLSTM and CNN in all datasets. The number of training examples of the Weibo dataset is tiny, therefore the performance of the Transformer is abysmal, which is as expected since the Transformer is data-hungry. Nevertheless, when enhanced with the relative positional encoding and unscaled attention, it can achieve even better performance than the BiLSTM-based model. The superior performance of the adapted Transformer in four datasets ranging from small datasets to big datasets depicts that the adapted Transformer is more robust to the number of training examples than the vanilla Transformer. As the last line of Table TABREF29 depicts, the scaled attention will deteriorate the performance." 
], [ "The comparison between different NER models on English NER datasets is shown in Table TABREF32. The poor performance of the Transformer in the NER datasets was also reported by BIBREF16. Although performance of the Transformer is higher than BIBREF16, it still lags behind the BiLSTM-based models BIBREF5. Nonetheless, the performance is massively enhanced by incorporating the relative positional encoding and unscaled attention into the Transformer. The adaptation not only makes the Transformer achieve superior performance than BiLSTM based models, but also unveil the new state-of-the-art performance in two NER datasets when only the Glove 100d embedding and CNN character embedding are used. The same deterioration of performance was observed when using the scaled attention. Besides, if ELMo was used BIBREF28, the performance of TENER can be further boosted as depicted in Table TABREF33." ], [ "The character-level encoder has been widely used in the English NER task to alleviate the data sparsity and OOV problem in word representation. In this section, we cross different character-level encoders (BiLSTM, CNN, Transformer encoder and our adapted Transformer encoder (AdaTrans for short) ) and different word-level encoders (BiLSTM, ID-CNN and AdaTrans) to implement the NER task. Results on CoNLL2003 and OntoNotes 5.0 are presented in Table TABREF34 and Table TABREF34, respectively.", "The ID-CNN encoder is from BIBREF23, and we re-implement their model in PyTorch. For different combinations, we use random search to find its best hyper-parameters. Hyper-parameters for character encoders were fixed. The details can be found in the supplementary material.", "For the results on CoNLL2003 dataset which is depicted in Table TABREF34, the AdaTrans performs as good as the BiLSTM in different character encoder scenario averagely. In addition, from Table TABREF34, we can find the pattern that the AdaTrans character encoder outpaces the BiLSTM and CNN character encoders when different word-level encoders being used. Moreover, no matter what character encoder being used or none being used, the AdaTrans word-level encoder gets the best performance. This implies that when the number of training examples increases, the AdaTrans character-level and word-level encoder can better realize their ability." ], [ "We compare the convergent speed of BiLSTM, ID-CNN, Transformer, and TENER in the development set of the OntoNotes 5.0. The curves are shown in Fig FIGREF37. TENER converges as fast as the BiLSTM model and outperforms the vanilla Transformer." ], [ "In this paper, we propose TENER, a model adopting Transformer Encoder with specific customizations for the NER task. Transformer Encoder has a powerful ability to capture the long-range context. In order to make the Transformer more suitable to the NER task, we introduce the direction-aware, distance-aware and un-scaled attention. Experiments in two English NER tasks and four Chinese NER tasks show that the performance can be massively increased. Under the same pre-trained embeddings and external knowledge, our proposed modification outperforms previous models in the six datasets. Meanwhile, we also found the adapted Transformer is suitable for being used as the English character encoder, because it has the potentiality to extract intricate patterns from characters. Experiments in two English NER datasets show that the adapted Transformer character encoder performs better than BiLSTM and CNN character encoders." 
], [ "We exploit four kinds of character encoders. For all character encoders, the randomly initialized character embeddings are 30d. The hidden size of BiLSTM used in the character encoder is 50d in each direction. The kernel size of CNN used in the character encoder is 3, and we used 30 kernels with stride 1. For Transformer and adapted Transformer, the number of heads is 3, and every head is 10d, the dropout rate is 0.15, the feed-forward dimension is 60. The Transformer used the sinusoid position embedding. The number of parameters for the character encoder (excluding character embedding) when using BiLSTM, CNN, Transformer and adapted Transformer are 35830, 3660, 8460 and 6600 respectively. For all experiments, the hyper-parameters of character encoders stay unchanged." ], [ "The hyper-parameters and search ranges for different encoders are presented in Table TABREF40, Table TABREF41 and Table TABREF42." ] ], "section_name": [ "Introduction", "Related Work ::: Neural Architecture for NER", "Related Work ::: Transformer", "Related Work ::: Transformer ::: Transformer Encoder Architecture", "Related Work ::: Transformer ::: Position Embedding", "Proposed Model", "Proposed Model ::: Embedding Layer", "Proposed Model ::: Encoding Layer with Adapted Transformer", "Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention", "Proposed Model ::: Encoding Layer with Adapted Transformer ::: Un-scaled Dot-Product Attention", "Proposed Model ::: CRF Layer", "Experiment ::: Data", "Experiment ::: Results on Chinese NER Datasets", "Experiment ::: Results on English NER datasets", "Experiment ::: Analysis of Different Character Encoders", "Experiment ::: Convergent Speed Comparison", "Conclusion", "Supplemental Material ::: Character Encoder", "Supplemental Material ::: Hyper-parameters" ] }
{ "answers": [ { "annotation_id": [ "2c7557d9a8329aa43c2563c30ec04ee220276ea1", "6b62b212b6701a3b74f40e52a388f302d72f2b3b", "aaf8c29a25b98d373ece9884dbe3e60dc118dd23", "e80e4c6bd8ad549a0145340cb91efe20bc20268b" ], "answer": [ { "evidence": [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33." ], "extractive_spans": [ "CoNLL2003", "OntoNotes 5.0", "OntoNotes 4.0.", "Chinese NER dataset MSRA", "Weibo NER", "Resume NER" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.\n\n(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.\n\n(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.\n\n(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.\n\n(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.\n\n(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.\n\n(6) Resume NER was annotated by BIBREF33." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33." 
], "extractive_spans": [ "CoNLL2003 ", "OntoNotes 5.0", "OntoNotes 4.0", "MSRA ", "Weibo", "Resume " ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.\n\n(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.\n\n(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.\n\n(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.\n\n(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.\n\n(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.\n\n(6) Resume NER was annotated by BIBREF33." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Details of Datasets.", "In summary, to improve the performance of the Transformer-based model in the NER task, we explicitly utilize the directional relative positional encoding, reduce the number of parameters and sharp the attention distribution. After the adaptation, the performance raises a lot, making our model even performs better than BiLSTM based models. Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features.", "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33." ], "extractive_spans": [ "CoNLL2003", "OntoNotes 5.0", "OntoNotes 4.0", "MSRA", "Weibo NER", "Resume NER" ], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Details of Datasets.", "In summary, to improve the performance of the Transformer-based model in the NER task, we explicitly utilize the directional relative positional encoding, reduce the number of parameters and sharp the attention distribution. After the adaptation, the performance raises a lot, making our model even performs better than BiLSTM based models. 
Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features.", "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33." ], "extractive_spans": [ "CoNLL2003", "OntoNotes 5.0", "BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part", "Chinese NER dataset MSRA", "Weibo NER", "Resume NER" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our model in two English NER datasets and four Chinese NER datasets.\n\n(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.\n\n(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.\n\n(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.\n\n(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.\n\n(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.\n\n(6) Resume NER was annotated by BIBREF33." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "de691e4e0e53a1a4a5b17cbc30252d6796766575", "9e498bdb41e0b481f0c6f80355403a1effbceb2f", "c34d65e6b82d62fd8bca4e597c67514d637638a8", "dcdf07ca93c7fb8c3129360e09b24f2a4c00dda1" ], "answer": [ { "evidence": [ "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:", "where $t$ is index of the target token, $j$ is the index of the context token, $Q_t, K_j$ is the query vector and key vector of token $t, j$ respectively, $W_q, W_v \\in \\mathbb {R}^{d \\times d_k}$. To get $H_{d_k}\\in \\mathbb {R}^{l \\times d_k}$, we first split $H$ into $d/d_k$ partitions in the second dimension, then for each head we use one partition. $\\mathbf {u} \\in \\mathbb {R}^{d_k}$, $\\mathbf {v} \\in \\mathbb {R}^{d_k}$ are learnable parameters, $R_{t-j}$ is the relative positional encoding, and $R_{t-j} \\in \\mathbb {R}^{d_k}$, $i$ in Eq.() is in the range $[0, \\frac{d_k}{2}]$. $Q_t^TK_j$ in Eq.() is the attention score between two tokens; $Q_t^TR_{t-j}$ is the $t$th token's bias on certain relative distance; $u^TK_j$ is the bias on the $j$th token; $v^TR_{t-j}$ is the bias term for certain distance and direction." ], "extractive_spans": [], "free_form_answer": "by using an relative sinusodial positional embedding and unscaled attention", "highlighted_evidence": [ "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:\n\nwhere $t$ is index of the target token, $j$ is the index of the context token, $Q_t, K_j$ is the query vector and key vector of token $t, j$ respectively, $W_q, W_v \\in \\mathbb {R}^{d \\times d_k}$. To get $H_{d_k}\\in \\mathbb {R}^{l \\times d_k}$, we first split $H$ into $d/d_k$ partitions in the second dimension, then for each head we use one partition. $\\mathbf {u} \\in \\mathbb {R}^{d_k}$, $\\mathbf {v} \\in \\mathbb {R}^{d_k}$ are learnable parameters, $R_{t-j}$ is the relative positional encoding, and $R_{t-j} \\in \\mathbb {R}^{d_k}$, $i$ in Eq.() is in the range $[0, \\frac{d_k}{2}]$. $Q_t^TK_j$ in Eq.() is the attention score between two tokens; $Q_t^TR_{t-j}$ is the $t$th token's bias on certain relative distance; $u^TK_j$ is the bias on the $j$th token; $v^TR_{t-j}$ is the bias term for certain distance and direction." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper, we propose TENER, a model adopting Transformer Encoder with specific customizations for the NER task. Transformer Encoder has a powerful ability to capture the long-range context. In order to make the Transformer more suitable to the NER task, we introduce the direction-aware, distance-aware and un-scaled attention. Experiments in two English NER tasks and four Chinese NER tasks show that the performance can be massively increased. Under the same pre-trained embeddings and external knowledge, our proposed modification outperforms previous models in the six datasets. Meanwhile, we also found the adapted Transformer is suitable for being used as the English character encoder, because it has the potentiality to extract intricate patterns from characters. 
Experiments in two English NER datasets show that the adapted Transformer character encoder performs better than BiLSTM and CNN character encoders." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In order to make the Transformer more suitable to the NER task, we introduce the direction-aware, distance-aware and un-scaled attention." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:", "where $t$ is index of the target token, $j$ is the index of the context token, $Q_t, K_j$ is the query vector and key vector of token $t, j$ respectively, $W_q, W_v \\in \\mathbb {R}^{d \\times d_k}$. To get $H_{d_k}\\in \\mathbb {R}^{l \\times d_k}$, we first split $H$ into $d/d_k$ partitions in the second dimension, then for each head we use one partition. $\\mathbf {u} \\in \\mathbb {R}^{d_k}$, $\\mathbf {v} \\in \\mathbb {R}^{d_k}$ are learnable parameters, $R_{t-j}$ is the relative positional encoding, and $R_{t-j} \\in \\mathbb {R}^{d_k}$, $i$ in Eq.() is in the range $[0, \\frac{d_k}{2}]$. $Q_t^TK_j$ in Eq.() is the attention score between two tokens; $Q_t^TR_{t-j}$ is the $t$th token's bias on certain relative distance; $u^TK_j$ is the bias on the $j$th token; $v^TR_{t-j}$ is the bias term for certain distance and direction.", "Based on Eq.(), we have", "because $\\sin (-x)=-\\sin (x), \\cos (x)=\\cos (-x)$. This means for an offset $t$, the forward and backward relative positional encoding are the same with respect to the $\\cos (c_it)$ terms, but is the opposite with respect to the $\\sin (c_it)$ terms. Therefore, by using $R_{t-j}$, the attention score can distinguish different directions and distances." ], "extractive_spans": [], "free_form_answer": "calculate the attention scores which can distinguish different directions and distances", "highlighted_evidence": [ "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:\n\nwhere $t$ is index of the target token, $j$ is the index of the context token, $Q_t, K_j$ is the query vector and key vector of token $t, j$ respectively, $W_q, W_v \\in \\mathbb {R}^{d \\times d_k}$. To get $H_{d_k}\\in \\mathbb {R}^{l \\times d_k}$, we first split $H$ into $d/d_k$ partitions in the second dimension, then for each head we use one partition. $\\mathbf {u} \\in \\mathbb {R}^{d_k}$, $\\mathbf {v} \\in \\mathbb {R}^{d_k}$ are learnable parameters, $R_{t-j}$ is the relative positional encoding, and $R_{t-j} \\in \\mathbb {R}^{d_k}$, $i$ in Eq.() is in the range $[0, \\frac{d_k}{2}]$. $Q_t^TK_j$ in Eq.() is the attention score between two tokens; $Q_t^TR_{t-j}$ is the $t$th token's bias on certain relative distance; $u^TK_j$ is the bias on the $j$th token; $v^TR_{t-j}$ is the bias term for certain distance and direction.\n\nBased on Eq.(), we have\n\nbecause $\\sin (-x)=-\\sin (x), \\cos (x)=\\cos (-x)$. This means for an offset $t$, the forward and backward relative positional encoding are the same with respect to the $\\cos (c_it)$ terms, but is the opposite with respect to the $\\sin (c_it)$ terms. Therefore, by using $R_{t-j}$, the attention score can distinguish different directions and distances." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Inspired by the success of BiLSTM in NER tasks, we consider what properties the Transformer lacks compared to BiLSTM-based models. One observation is that BiLSTM can discriminatively collect the context information of a token from its left and right sides. But it is not easy for the Transformer to distinguish which side the context information comes from.", "Although the dot product between two sinusoidal position embeddings is able to reflect their distance, it lacks directionality and this property will be broken by the vanilla Transformer attention. To illustrate this, we first prove two properties of the sinusoidal position embeddings.", "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:", "Although Transformer encoder has potential advantage in modeling long-range context, it is not working well for NER task. In this paper, we propose an adapted Transformer for NER task with two improvements.", "The above improvement is based on the work BIBREF17, BIBREF19. Since the size of NER datasets is usually small, we avoid direct multiplication of two learnable parameters, because they can be represented by one learnable parameter. Therefore we do not use $W_k$ in Eq.(DISPLAY_FORM22). The multi-head version is the same as Eq.(DISPLAY_FORM8), but we discard $W_o$ since it is directly multiplied by $W_1$ in Eq.(DISPLAY_FORM9)." ], "extractive_spans": [], "free_form_answer": "Self-attention mechanism is changed to allow for direction-aware calculations", "highlighted_evidence": [ "Inspired by the success of BiLSTM in NER tasks, we consider what properties the Transformer lacks compared to BiLSTM-based models. One observation is that BiLSTM can discriminatively collect the context information of a token from its left and right sides. But it is not easy for the Transformer to distinguish which side the context information comes from.", "Although the dot product between two sinusoidal position embeddings is able to reflect their distance, it lacks directionality and this property will be broken by the vanilla Transformer attention.", "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:", "Although Transformer encoder has potential advantage in modeling long-range context, it is not working well for NER task. In this paper, we propose an adapted Transformer for NER task with two improvements.", "Inspired by the success of BiLSTM in NER tasks, we consider what properties the Transformer lacks compared to BiLSTM-based models. One observation is that BiLSTM can discriminatively collect the context information of a token from its left and right sides. 
But it is not easy for the Transformer to distinguish which side the context information comes from.", "Therefore, to improve the Transformer with direction- and distance-aware characteristic, we calculate the attention scores using the equations below:", "The above improvement is based on the work BIBREF17, BIBREF19" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "0ba5b82b082afd44b9dfade23c2992ad2d064928", "8832773c682bf12c9f7e55038cd17c9659ac1d60", "ae62e6b9ae62a33ef2916f1c95dc69b685293b6f", "e82fa04239ecc62a06f5a14fd109841e55adf4f6" ], "answer": [ { "evidence": [ "Experiment ::: Data", "We evaluate our model in two English NER datasets and four Chinese NER datasets.", "(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.", "(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.", "(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.", "(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.", "(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.", "(6) Resume NER was annotated by BIBREF33.", "Experiment ::: Results on Chinese NER Datasets", "We first present our results in the four Chinese NER datasets. Since Chinese NER is directly based on the characters, it is more straightforward to show the abilities of different models without considering the influence of word representation.", "As shown in Table TABREF29, the vanilla Transformer does not perform well and is worse than the BiLSTM and CNN based models. However, when relative positional encoding combined, the performance was enhanced greatly, resulting in better results than the BiLSTM and CNN in all datasets. The number of training examples of the Weibo dataset is tiny, therefore the performance of the Transformer is abysmal, which is as expected since the Transformer is data-hungry. Nevertheless, when enhanced with the relative positional encoding and unscaled attention, it can achieve even better performance than the BiLSTM-based model. The superior performance of the adapted Transformer in four datasets ranging from small datasets to big datasets depicts that the adapted Transformer is more robust to the number of training examples than the vanilla Transformer. As the last line of Table TABREF29 depicts, the scaled attention will deteriorate the performance.", "FLOAT SELECTED: Table 2: The F1 scores on Chinese NER datasets. ♣,♠ are results reported in (Zhang and Yang, 2018) and (Gui et al., 2019a), respectively. “w/ scale” means TENER using the scaled attention in Eq.(19). ∗ their results are not directly comparable with ours, since they used 100d pre-trained character and bigram embeddings. Other models use the same embeddings.", "FLOAT SELECTED: Table 4: The F1 scores on English NER datasets. 
We only list results based on non-contextualized embeddings, and methods utilized pre-trained language models, pre-trained features, or higher dimension word vectors are excluded. TENER (Ours) uses the Transformer encoder both in the character-level and wordlevel. “w/ scale” means TENER using the scaled attention in Eq.(19). “w/ CNN-char” means TENER using CNN as character encoder instead of AdaTrans.", "In summary, to improve the performance of the Transformer-based model in the NER task, we explicitly utilize the directional relative positional encoding, reduce the number of parameters and sharp the attention distribution. After the adaptation, the performance raises a lot, making our model even performs better than BiLSTM based models. Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Experiment ::: Data\nWe evaluate our model in two English NER datasets and four Chinese NER datasets.\n\n(1) CoNLL2003 is one of the most evaluated English NER datasets, which contains four different named entities: PERSON, LOCATION, ORGANIZATION, and MISC BIBREF34.\n\n(2) OntoNotes 5.0 is an English NER dataset whose corpus comes from different domains, such as telephone conversation, newswire. We exclude the New Testaments portion since there is no named entity in it BIBREF8, BIBREF7. This dataset has eleven entity names and seven value types, like CARDINAL, MONEY, LOC.\n\n(3) BIBREF35 released OntoNotes 4.0. In this paper, we use the Chinese part. We adopted the same pre-process as BIBREF36.\n\n(4) The corpus of the Chinese NER dataset MSRA came from news domain BIBREF37.\n\n(5) Weibo NER was built based on text in Chinese social media Sina Weibo BIBREF38, and it contained 4 kinds of entities.\n\n(6) Resume NER was annotated by BIBREF33.", "Experiment ::: Results on Chinese NER Datasets\nWe first present our results in the four Chinese NER datasets. Since Chinese NER is directly based on the characters, it is more straightforward to show the abilities of different models without considering the influence of word representation.", "As shown in Table TABREF29, the vanilla Transformer does not perform well and is worse than the BiLSTM and CNN based models. However, when relative positional encoding combined, the performance was enhanced greatly, resulting in better results than the BiLSTM and CNN in all datasets. The number of training examples of the Weibo dataset is tiny, therefore the performance of the Transformer is abysmal, which is as expected since the Transformer is data-hungry. Nevertheless, when enhanced with the relative positional encoding and unscaled attention, it can achieve even better performance than the BiLSTM-based model. The superior performance of the adapted Transformer in four datasets ranging from small datasets to big datasets depicts that the adapted Transformer is more robust to the number of training examples than the vanilla Transformer. As the last line of Table TABREF29 depicts, the scaled attention will deteriorate the performance.", "FLOAT SELECTED: Table 2: The F1 scores on Chinese NER datasets. ♣,♠ are results reported in (Zhang and Yang, 2018) and (Gui et al., 2019a), respectively. “w/ scale” means TENER using the scaled attention in Eq.(19). ∗ their results are not directly comparable with ours, since they used 100d pre-trained character and bigram embeddings. 
Other models use the same embeddings.", "FLOAT SELECTED: Table 4: The F1 scores on English NER datasets. We only list results based on non-contextualized embeddings, and methods utilized pre-trained language models, pre-trained features, or higher dimension word vectors are excluded. TENER (Ours) uses the Transformer encoder both in the character-level and wordlevel. “w/ scale” means TENER using the scaled attention in Eq.(19). “w/ CNN-char” means TENER using CNN as character encoder instead of AdaTrans.", "Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "In this paper, we propose TENER, a model adopting Transformer Encoder with specific customizations for the NER task. Transformer Encoder has a powerful ability to capture the long-range context. In order to make the Transformer more suitable to the NER task, we introduce the direction-aware, distance-aware and un-scaled attention. Experiments in two English NER tasks and four Chinese NER tasks show that the performance can be massively increased. Under the same pre-trained embeddings and external knowledge, our proposed modification outperforms previous models in the six datasets. Meanwhile, we also found the adapted Transformer is suitable for being used as the English character encoder, because it has the potentiality to extract intricate patterns from characters. Experiments in two English NER datasets show that the adapted Transformer character encoder performs better than BiLSTM and CNN character encoders." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Under the same pre-trained embeddings and external knowledge, our proposed modification outperforms previous models in the six datasets. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "In summary, to improve the performance of the Transformer-based model in the NER task, we explicitly utilize the directional relative positional encoding, reduce the number of parameters and sharp the attention distribution. After the adaptation, the performance raises a lot, making our model even performs better than BiLSTM based models. Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "In summary, to improve the performance of the Transformer-based model in the NER task, we explicitly utilize the directional relative positional encoding, reduce the number of parameters and sharp the attention distribution. After the adaptation, the performance raises a lot, making our model even performs better than BiLSTM based models. Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features." 
], "extractive_spans": [ "we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features" ], "free_form_answer": "", "highlighted_evidence": [ "Furthermore, in the six NER datasets, we achieve state-of-the-art performance among models without considering the pre-trained language models or designed features." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Which NER dataset do they use?", "How do they incorporate direction and relative distance in attention?", "Do they outperform current NER state-of-the-art models?" ], "question_id": [ "6e040e80f2da69d50386a90a38ed6d2fa4f77bbd", "aebd1f0d728d0de5f76238844da044a44109f76f", "cb4086ad022197da79f28dc609d0de90108c4543" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: An example for NER. The relative direction is important in the NER task, because words before “Inc.” are mostly to be an organization, words after “in” are more likely to be time or location. Besides, the distance between words is also important, since only continuous words can form an entity, the former “Louis Vuitton” can not form an entity with the “Inc.”.", "Figure 2: Model structure of TENER for English NER tasks. In TENER, Transformer encoder is used not only to extract the word-level contextual information, but also to encode character-level information in a word.", "Figure 3: Dot product between two sinusoidal position embeddings whose distance is k. It is clear that the product is symmetrical, and with the increment of |k|, it has a trend to decrease, but this decrease is not monotonous.", "Figure 4: The upper line is the product between PETt PEt+k. The lower two lines are the products of PETt WPEt+k with two random W s. Although PETt PEt+k can reflect the distance, the PETt WPEt+k has no clear pattern.", "Table 1: Details of Datasets.", "Table 2: The F1 scores on Chinese NER datasets. ♣,♠ are results reported in (Zhang and Yang, 2018) and (Gui et al., 2019a), respectively. “w/ scale” means TENER using the scaled attention in Eq.(19). ∗ their results are not directly comparable with ours, since they used 100d pre-trained character and bigram embeddings. Other models use the same embeddings.", "Table 4: The F1 scores on English NER datasets. We only list results based on non-contextualized embeddings, and methods utilized pre-trained language models, pre-trained features, or higher dimension word vectors are excluded. TENER (Ours) uses the Transformer encoder both in the character-level and wordlevel. “w/ scale” means TENER using the scaled attention in Eq.(19). “w/ CNN-char” means TENER using CNN as character encoder instead of AdaTrans.", "Table 3: Performance of models with ELMo as their embeddings in English NER datasets. “BiLSTM” is our run. In the larger OntoNotes5.0, TENER achieves much better F1 score.", "Table 5: F1 scores in the CoNLL2003 and OntoNotes 5.0. “Char” means character-level encoder, and “Word” means word-level encoder. “AdaTrans” means our adapted Transformer encoder.", "Figure 5: Convergent speed in the development dataset of OntoNotes 5.0 for four kinds of models.", "Table 6: The hyper-parameters and hyper-parameter search ranges for BiLSTM.", "Table 7: The hyper-parameters and hyper-parameter search ranges for ID-CNN.", "Table 8: The hyper-parameters and hyper-parameter search ranges for Transformer and adapted Transformer in Chinese and English NER datasets." ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "5-Figure4-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table4-1.png", "7-Table3-1.png", "8-Table5-1.png", "8-Figure5-1.png", "10-Table6-1.png", "11-Table7-1.png", "11-Table8-1.png" ] }
[ "How do they incorporate direction and relative distance in attention?" ]
[ [ "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-18", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-24", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-20", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-23", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer-0", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-21", "1911.04474-Conclusion-0", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-1", "1911.04474-Proposed Model ::: Encoding Layer with Adapted Transformer ::: Direction- and Distance-Aware Attention-0" ] ]
[ "Self-attention mechanism is changed to allow for direction-aware calculations" ]
4
1905.00840
Knowledge Authoring and Question Answering with KALM
Knowledge representation and reasoning (KRR) is one of the key areas in the artificial intelligence (AI) field. It is intended to represent world knowledge in formal languages (e.g., Prolog, SPARQL) and thereby enable expert systems to perform querying and inference tasks. Currently, constructing large-scale knowledge bases (KBs) of high quality is hindered by the fact that the construction process requires many qualified knowledge engineers who not only understand the domain-specific knowledge but also have sufficient skills in knowledge representation. Unfortunately, qualified knowledge engineers are in short supply. Therefore, it would be very useful to build a tool that allows the user to construct and query the KB simply via text. Although a number of systems have been developed for knowledge extraction and question answering, they mainly fall short because they do not achieve high enough accuracy, whereas KRR is highly sensitive to erroneous data. In this thesis proposal, I will present Knowledge Authoring Logic Machine (KALM), a rule-based system which allows the user to author knowledge and query the KB in text. The experimental results show that KALM achieved superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems.
{ "paragraphs": [ [ "Knowledge representation and reasoning (KRR) is the process of representing the domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. KRR have been applied in many fields including financial regulations, medical diagnosis, laws, and so on. One major obstacle in KRR is the creation of large-scale knowledge bases with high quality. For one thing, this requires the knowledge engineers (KEs) not only to have the background knowledge in a certain domain but have enough skills in knowledge representation as well. Unfortunately, qualified KEs are also in short supply. Therefore, it would be useful to build a tool that allows the domain experts without any background in logic to construct and query the knowledge base simply from text.", "Controlled natural languages (CNLs) BIBREF0 were developed as a technology that achieves this goal. CNLs are designed based on natural languages (NLs) but with restricted syntax and interpretation rules that determine the unique meaning of the sentence. Representative CNLs include Attempto Controlled English BIBREF1 and PENG BIBREF2 . Each CNL is developed with a language parser which translates the English sentences into an intermediate structure, discourse representation structure (DRS) BIBREF3 . Based on the DRS structure, the language parsers further translate the DRS into the corresponding logical representations, e.g., Answer Set Programming (ASP) BIBREF4 programs. One main issue with the aforementioned CNLs is that the systems do not provide enough background knowledge to preserve semantic equivalences of sentences that represent the same meaning but are expressed via different linguistic structures. For instance, the sentences Mary buys a car and Mary makes a purchase of a car are translated into different logical representations by the current CNL parsers. As a result, if the user ask a question who is a buyer of a car, these systems will fail to find the answer.", "In this thesis proposal, I will present KALM BIBREF5 , BIBREF6 , a system for knowledge authoring and question answering. KALM is superior to the current CNL systems in that KALM has a complex frame-semantic parser which can standardize the semantics of the sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built based on FrameNet BIBREF7 and BabelNet BIBREF8 where FrameNet is used to capture the meaning of the sentence and BabelNet BIBREF8 is used to disambiguate the meaning of the extracted entities from the sentence. Experiment results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems.", "The rest parts are organized as follows: Section SECREF2 discusses the related works, Section SECREF3 presents the KALM architecture, Section SECREF4 presents KALM-QA, the question answering part of KALM, Section SECREF5 shows the evaluation results, Section SECREF6 shows the future work beyond the thesis, and Section SECREF7 concludes the paper." ], [ "As is described in Section SECREF1 , CNL systems were proposed as the technology for knowledge representation and reasoning. Related works also include knowledge extraction tools, e.g., OpenIE BIBREF9 , SEMEFOR BIBREF10 , SLING BIBREF11 , and Standford KBP system BIBREF12 . These knowledge extraction tools are designed to extract semantic relations from English sentences that capture the meaning. 
The limitations of these tools are two-fold: first, they lack sufficient accuracy to extract the correct semantic relations and entities while KRR is very sensitive to incorrect data; second, these systems are not able to map the semantic relations to logical forms and therefore not capable of doing KRR. Other related works include the question answering frameworks, e.g., Memory Network BIBREF13 , Variational Reasoning Network BIBREF14 , ATHENA BIBREF15 , PowerAqua BIBREF16 . The first two belong to end-to-end learning approaches based on machine learning models. The last two systems have implemented semantic parsers which translate natural language sentences into intermediate query languages and then query the knowledge base to get the answers. For the machine learning based approaches, the results are not explainable. Besides, their accuracy is not high enough to provide correct answers. For ATHENA and PowerAqua, these systems perform question answering based on a priori knowledge bases. Therefore, they do not support knowledge authoring while KALM is able to support both knowledge authoring and question answering." ], [ "Figure FIGREF1 shows the architecture of KALM which translates a CNL sentence to the corresponding logical representations, unique logical representations (ULR).", "Attempto Parsing Engine. The input sentences are CNL sentences based on ACE grammar. KALM starts with parsing the input sentence using ACE Parser and generates the DRS structure BIBREF17 which captures the syntactic information of the sentences.", "Frame Parser. KALM performs frame-based parsing based on the DRS and produces a set of frames that represent the semantic relations a sentence implies. A frame BIBREF18 represents a semantic relation of a set of entities where each plays a particular role in the frame relation. We have designed a frame ontology, called FrameOnt, which is based on the frames in FrameNet BIBREF7 and encoded as a Prolog fact. For instance, the Commerce_Buy frame is shown below:", "", " fp(Commerce_Buy,[", " role(Buyer,[bn:00014332n],[]),", " role(Seller,[bn:00053479n],[]),", " role(Goods,[bn:00006126n,bn:00021045n],[]),", " role(Recipient,[bn:00066495n],[]),", " role(Money,[bn:00017803n],[currency])]).", " In each role-term, the first argument is the name of the role and the second is a list of role meanings represented via BabelNet synset IDs BIBREF8 . The third argument of a role-term is a list of constraints on that role. For instance, the sentence Mary buys a car implies the Commerce_Buy frame where Mary is the Buyer and car is the Goods. To extract a frame instance from a given CNL sentence, KALM uses logical valence patterns (lvps) which are learned via structural learning. An example of the lvp is shown below:", "", " lvp(buy,v,Commerce_Buy, [", " pattern(Buyer,verb->subject,required),", " pattern(Goods,verb->object,required),", " pattern(Recipient,verb->pp(for)->dep,optnl),", " pattern(Money,verb->pp(for)->dep,optnl),", " pattern(Seller,verb->pp(from)->dep,optnl)]).", "", "The first three arguments of an lvp-fact identify the lexical unit, its part of speech, and the frame. The fourth argument is a set of pattern-terms, each having three parts: the name of a role, a grammatical pattern, and the required/optional flag. The grammatical pattern determines the grammatical context in which the lexical unit, a role, and a role-filler word can appear in that frame. 
Each grammatical pattern is captured by a parsing rule (a Prolog rule) that can be used to extract appropriate role-filler words based on the APE parses.", "Role-filler Disambiguation. Based on the extracted frame instance, the role-filler disambiguation module disambiguates the meaning of each role-filler word for the corresponding frame role a BabelNet Synset ID. A complex algorithm BIBREF5 was proposed to measure the semantic similarity between a candidate BabelNet synset that contains the role-filler word and the frame-role synset. The algorithm also has optimizations that improve the efficiency of the algorithm e.g., priority-based search, caching, and so on. In addition to disambiguating the meaning of the role-fillers, this module is also used to prune the extracted frame instances where the role-filler word and the frame role are semantically incompatible.", "Constructing ULR. The extracted frame instances are translated into the corresponding logical representations, unique logical representation (ULR). Examples can be found in reference BIBREF5 ." ], [ "Based on KALM, KALM-QA BIBREF6 is developed for question answering. KALM-QA shares the same components with KALM for syntactic parsing, frame-based parsing and role-filler disambiguation. Different from KALM, KALM-QA translates the questions to unique logical representation for queries (ULRQ), which are used to query the authored knowledge base." ], [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], [ "This section discusses the future work beyond the thesis: (1) enhancing KALM to author rules, and (2) supporting time reasoning.", "Authoring Rules from CNL. There are two research problems with rules. The first problem is the standardization of rules parses that express the same information but via different syntactic forms or using different expressions. Suppose the knowledge base contains sentences like: (1) if a person buys a car then the person owns the car, (2) every person who is a purchaser of a car is an owner of the car, (3) if a car is bought by a person then the person possesses the car. All the above sentences represent rules and express exactly the same meaning. However, KALM's current syntactic parser will represent them in different DRSs and therefore not being able to map them into the same logical form. The second problem involves the recognition and representation of different types of rules in logic. 
For instance, defeasible rules are very common in text. However, this type of rules cannot be handled by first order logic. We believe defeasible logic BIBREF19 is a good fit.", "Time Reasoning. Time-related information is a crucial part of human knowledge, but semantic parsing that takes the time into account is rather hard. However, we can develop a CNL that would incorporate enough time related idioms to be useful in a number of domains of discourse (e.g., tax law). Time can then be added to DRSs and incorporated into our frame based approach down to the very level of the logical facts into which sentences will be translated. This time information can be represented either via special time-aware relations among events (e.g., before, after, causality, triggering) or using a reserved argument to represent time in each fluent." ], [ "This thesis proposal provides an overview of KALM, a system for knowledge authoring. In addition, it introduces KALM-QA, the question answering part of KALM. Experimental results show that both KALM and KALM-QA achieve superior accuracy as compared to the state-of-the-art systems." ] ], "section_name": [ "Introduction", "Related Works", "The KALM Architecture", "KALM-QA for Question Answering", "Evaluations", "Future Work Beyond The Thesis", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "0f0d4b9c63cac67357c9c5b4559f264781a8ab36", "2e9db07683216a552aee5be2829ae69b64b00599", "80e1bcab469d90aea29a3e9acbdce5792a27a6f2", "f5574e023e0b39b27789b8fe54670c9ba6746a2d" ], "answer": [ { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "95.6% on knowledge authoring, 95% on the manually constructed QA dataset and 100% accuracy on the MetaQA dataset", "highlighted_evidence": [ "KALM achieves an accuracy of 95.6%—much higher than the other systems.\n\nFor KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." 
], "extractive_spans": [ "KALM achieves an accuracy of 95.6%", "KALM-QA achieves 100% accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "KALM achieves an accuracy of 95.6%—much higher than the other systems.\n\nFor KALM-QA, we eva", "KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [ "KALM-QA achieves an accuracy of 95% for parsing the queries", "The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "KALM achieves an accuracy of 95.6%, KALM-QA achieves 95% accuracy on the manually constructured general questions dataset based on the 50 logical frames and achieves 100% accuracy on MetaQA dataset", "highlighted_evidence": [ "KALM achieves an accuracy of 95.6%—much higher than the other systems.\n\nFor KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. 
The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "1bf204d1745b407be43c79b14b3fa2c9d05098d3", "5a8460e788baf8e0ee40e8ef27fe32dc705a6850", "88d3149ee97cf23a1c432315c5f0c85817cb9376", "a6a993f2269da8cfffb976ed3f4d7aaaa0132c78" ], "answer": [ { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "This thesis proposal provides an overview of KALM, a system for knowledge authoring. In addition, it introduces KALM-QA, the question answering part of KALM. Experimental results show that both KALM and KALM-QA achieve superior accuracy as compared to the state-of-the-art systems." ], "extractive_spans": [ "SEMAFOR", "SLING", "Stanford KBP " ], "free_form_answer": "", "highlighted_evidence": [ "Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "Experimental results show that both KALM and KALM-QA achieve superior accuracy as compared to the state-of-the-art systems." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems." ], "extractive_spans": [ "SEMAFOR", "SLING", "Stanford KBP system" ], "free_form_answer": "", "highlighted_evidence": [ "Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this thesis proposal, I will present KALM BIBREF5 , BIBREF6 , a system for knowledge authoring and question answering. 
KALM is superior to the current CNL systems in that KALM has a complex frame-semantic parser which can standardize the semantics of the sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built based on FrameNet BIBREF7 and BabelNet BIBREF8 where FrameNet is used to capture the meaning of the sentence and BabelNet BIBREF8 is used to disambiguate the meaning of the extracted entities from the sentence. Experiment results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems.", "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems." ], "extractive_spans": [ "SEMAFOR", "SLING", "Stanford KBP system" ], "free_form_answer": "", "highlighted_evidence": [ " Experiment results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems.", "Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [ "SEMAFOR, SLING, and Stanford KBP system", "BIBREF14" ], "free_form_answer": "", "highlighted_evidence": [ "Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. ", "The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. 
KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "74c996b8006cf1c2e3b85c8a414561edc769b732", "87c6d3a291bf4ba7ced152055c76eaf843cd48a9", "ca51604a0bb5d99931b9c59106c4b0899c04a7c0", "fb64b85928878be595ce747e60a11fe24e03c39e" ], "answer": [ { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "dataset consisting 250 sentences adapted from FrameNet exemplar sentences, dataset consisting general questions based on 50 logical framesderived from FrameNet, MetaQA dataset", "highlighted_evidence": [ "We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.\n\nFor KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." 
], "extractive_spans": [ "first dataset is manually constructed general questions based on the 50 logical frames", "second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions" ], "free_form_answer": "", "highlighted_evidence": [ "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.", "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "a manually created dataset of 50 logical frames mostly derived from FrameNet, a manually constructed general questions dataset based on the 50 logical frames and MetaQA dataset", "highlighted_evidence": [ "This section provides a summary of the evaluation of KALM and KALM-QA, where KALM is evaluated for knowledge authoring and KALM-QA is evaluated for question answering. We have created a total of 50 logical frames, mostly derived from FrameNet but also some that FrameNet is missing (like Restaurant, Human_Gender) for representing the meaning of English sentences. Based on the 50 frames, we have manually constructed 250 sentences that are adapted from FrameNet exemplar sentences and evaluate these sentences on KALM, SEMAFOR, SLING, and Stanford KBP system. KALM achieves an accuracy of 95.6%—much higher than the other systems.\n\nFor KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For KALM-QA, we evaluate it on two datasets. 
The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions. KALM-QA achieves 100% accuracy—much higher than the state-of-the-art machine learning approach BIBREF14 . Details of the evaluations can be found in BIBREF5 and BIBREF6 ." ], "extractive_spans": [ " manually constructed general questions based on the 50 logical frames", "MetaQA dataset" ], "free_form_answer": "", "highlighted_evidence": [ "For KALM-QA, we evaluate it on two datasets. The first dataset is manually constructed general questions based on the 50 logical frames. KALM-QA achieves an accuracy of 95% for parsing the queries. The second dataset we use is MetaQA dataset BIBREF14 , which contains contains almost 29,000 test questions and over 260,000 training questions." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "What was their accuracy score?", "What are the state-of-the-art systems?", "What dataset did they evaluate on?" ], "question_id": [ "756a8a9125e6984e0ca768b653c6c760efa3db66", "fe52b093735bb456d7e699aa9a2b806d2b498ba0", "7748c072e07d6c6db5a34be38b4a5e97ac6d7999" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: The KALM Architecture", "Figure 2: The KALM-QA Architecture" ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png" ] }
[ "What was their accuracy score?", "What dataset did they evaluate on?" ]
[ [ "1905.00840-Evaluations-0", "1905.00840-Evaluations-1" ], [ "1905.00840-Evaluations-0", "1905.00840-Evaluations-1" ] ]
[ "KALM achieves an accuracy of 95.6%, KALM-QA achieves 95% accuracy on the manually constructured general questions dataset based on the 50 logical frames and achieves 100% accuracy on MetaQA dataset", "a manually created dataset of 50 logical frames mostly derived from FrameNet, a manually constructed general questions dataset based on the 50 logical frames and MetaQA dataset" ]
5
1810.02229
Italian Event Detection Goes Deep Learning
This paper reports on a set of experiments with different word embeddings to initialize a state-of-the-art Bi-LSTM-CRF network for event detection and classification in Italian, following the EVENTI evaluation exercise. The network obtains a new state-of-the-art result, improving the F1 score for detection by 1.3 points and for classification by 6.5 points, using a single-step approach. The results also provide further evidence that embeddings have a major impact on the performance of such architectures.
{ "paragraphs": [ [ "Current societies are exposed to a continuous flow of information that results in a large production of data (e.g. news articles, micro-blogs, social media posts, among others), at different moments in time. In addition to this, the consumption of information has dramatically changed: more and more people directly access information through social media platforms (e.g. Facebook and Twitter), and are less and less exposed to a diversity of perspectives and opinions. The combination of these factors may easily result in information overload and impenetrable “filter bubbles”. Events, i.e. things that happen or hold as true in the world, are the basic components of such data stream. Being able to correctly identify and classify them plays a major role to develop robust solutions to deal with the current stream of data (e.g. the storyline framework BIBREF0 ), as well to improve the performance of many Natural Language Processing (NLP) applications such as automatic summarization and question answering (Q.A.).", "Event detection and classification has seen a growing interest in the NLP community thanks to the availability of annotated corpora BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 and evaluation campaigns BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . In the context of the 2014 EVALITA Workshop, the EVENTI evaluation exercise BIBREF11 was organized to promote research in Italian Temporal Processing, of which event detection and classification is a core subtask.", "Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available. ." ], [ "We follow the formulation of the task as specified in the EVENTI exercise: determine the extent and the class of event mentions in a text, according to the It-TimeML $<$ EVENT $>$ tag definition (Subtask B in EVENTI).", "In EVENTI, the tag $<$ EVENT $>$ is applied to every linguistic expression denoting a situation that happens or occurs, or a state in which something obtains or holds true, regardless of the specific parts-of-speech that may realize it. EVENTI distinguishes between single token and multi-tokens events, where the latter are restricted to specific cases of eventive multi-word expressions in lexicographic dictionaries (e.g. “fare le valigie” [to pack]), verbal periphrases (e.g. “(essere) in grado di” [(to be) able to]; “c'è” [there is]), and named events (e.g. “la strage di Beslan” [Beslan school siege]).", "Each event is further assigned to one of 7 possible classes, namely: OCCURRENCE, ASPECTUAL, PERCEPTION, REPORTING, I(NTESIONAL)_STATE, I(NTENSIONAL)_ACTION, and STATE. These classes are derived from the English TimeML Annotation Guidelines BIBREF12 . The TimeML event classes distinguishes with respect to other classifications, such as ACE BIBREF1 or FrameNet BIBREF13 , because they expresses relationships the target event participates in (such as factual, evidential, reported, intensional) rather than semantic categories denoting the meaning of the event. 
This means that the EVENT classes are assigned by taking into account both the semantic and the syntactic context of occurrence of the target event. Readers are referred to the EVENTI Annotation Guidelines for more details." ], [ "The EVENTI corpus consists of three datasets: the Main Task training data, the Main task test data, and the Pilot task test data. The Main Task data are on contemporary news articles, while the Pilot Task on historical news articles. For our experiments, we focused only on the Main Task. In addition to the training and test data, we have created also a Main Task development set by excluding from the training data all the articles that composed the test data of the Italian dataset at the SemEval 2010 TempEval-2 campaign BIBREF6 . The new partition of the corpus results in the following distribution of the $<$ EVENT $>$ tag: i) 17,528 events in the training data, of which 1,207 are multi-token mentions; ii.) 301 events in the development set, of which 13 are multi-token mentions; and finally, iii.) 3,798 events in the Main task test, of which 271 are multi-token mentions.", "Tables 1 and 1 report, respectively, the distribution of the events per token part-of speech (POS) and per event class. Not surprisingly, verbs are the largest annotated category, followed by nouns, adjectives, and prepositional phrases. Such a distribution reflects both a kind of “natural” distribution of the realization of events in an Indo-european language, and, at the same time, specific annotation choices. For instance, adjectives have been annotated only when in a predicative position and when introduced by a copula or a copular construction. As for the classes, OCCURRENCE and STATE represent the large majority of all events, followed by the intensional ones (I_STATE and I_ACTION), expressing some factual relationship between the target events and their arguments, and finally the others (REPORTING, ASPECTUAL, and PERCEPTION)." ], [ "We adapted a publicly available Bi-LSTM network with a CRF classifier as last layer BIBREF14 . BIBREF14 demonstrated that word embeddings, among other hyper-parameters, have a major impact on the performance of the network, regardless of the specific task. On the basis of these experimental observations, we decided to investigate the impact of different Italian word embeddings for the Subtask B Main Task of the EVENTI exercise. We thus selected 5 word embeddings for Italian to initialize the network, differentiating one with respect to each other either for the representation model used (word2vec vs. GloVe; CBOW vs. skip-gram), dimensionality (300 vs. 100), or corpora used for their generation (Italian Wikipedia vs. crawled web document vs. large textual corpora or archives):", "As for the other parameters, the network maintains the optimized configurations used for the event detection task for English BIBREF14 : two LSTM layers of 100 units each, Nadam optimizer, variational dropout (0.5, 0.5), with gradient normalization ( $\\tau $ = 1), and batch size of 8. Character-level embeddings, learned using a Convolutional Neural Network (CNN) BIBREF22 , are concatenated with the word embedding vector to feed into the LSTM network. Final layer of the network is a CRF classifier.", "Evaluation is conducted using the EVENTI evaluation framework. Standard Precision, Recall, and F1 apply for the event detection. Given that the extent of an event tag may be composed by more than one tokens, systems are evaluated both for strict match, i.e. 
one point only if all tokens which compose an $<$ EVENT $>$ tag are correctly identified, and relaxed match, i.e. one point for any correct overlap between the system output and the reference gold data. The classification aspect is evaluated using the F1-attribute score BIBREF7 , that captures how well a system identify both the entity (extent) and attribute (i.e. class) together.", "We approached the task in a single-step by detecting and classifying event mentions at once rather than in the standard two step approach, i.e. detection first and classification on top of the detected elements. The task is formulated as a seq2seq problem, by converting the original annotation format into an BIO scheme (Beginning, Inside, Outside), with the resulting alphabet being B-class_label, I-class_label and O. Example \"System and Experiments\" below illustrates a simplified version of the problem for a short sentence:", " input problem solution", " Marco (B-STATE $|$ I-STATE $|$ ... $|$ O) O", " pensa (B-STATE $|$ I-STATE $|$ ... $|$ O) B-ISTATE", " di (B-STATE $|$ I-STATE $|$ ... $|$ O) O", " andare (B-STATE $|$ I-STATE $|$ ... $|$ O) B-OCCUR", " a (B-STATE $|$ I-STATE $|$ ... $|$ O) O", " casa (B-STATE $|$ I-STATE $|$ ... $|$ O) O", " . (B-STATE $|$ I-STATE $|$ ... $|$ O) O" ], [ "Results for the experiments are illustrated in Table 2 . We also report the results of the best system that participated at EVENTI Subtask B, FBK-HLT BIBREF23 . FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features. Figure 1 plots charts comparing F1 scores of the network initialized with each of the five embeddings against the FBK-HLT system for the event detection and classification tasks, respectively.", "The results of the Bi-LSTM-CRF network are varied in both evaluation configurations. The differences are mainly due to the embeddings used to initialize the network. The best embedding configuration is Fastext-It that differentiate from all the others for the approach used for generating the embeddings. Embedding's dimensionality impacts on the performances supporting the findings in BIBREF14 , but it seems that the quantity (and variety) of data used to generate the embeddings can have a mitigating effect, as shown by the results of the DH-FBK-100 configuration (especially in the classification subtask, and in the Recall scores for the event extent subtask). Coverage of the embeddings (and consequenlty, tokenization of the dataset and the embeddings) is a further aspect to keep into account, but it seems to have a minor impact with respect to dimensionality. It turns out that BIBREF15 's embeddings are those suffering the most from out of vocabulary (OVV) tokens (2.14% and 1.06% in training, 2.77% and 1.84% in test for the word2vec model and GloVe, respectively) with respect to the others. However, they still outperform DH-FBK_100 and ILC-ItWack, whose OVV are much lower (0.73% in training and 1.12% in test for DH-FBK_100; 0.74% in training and 0.83% in test for ILC-ItWack).", "The network obtains the best F1 score, both for detection (F1 of 0.880 for strict evaluation and 0.903 for relaxed evaluation with Fastext-It embeddings) and for classification (F1-class of 0.756 for strict evaluation, and 0.751 for relaxed evaluation with Fastext-It embeddings). Although FBK-HLT suffers in the classification subtask, it qualifies as a highly competitive system for the detection subtask. 
By observing the strict F1 scores, FBK-HLT beats three configurations (DH-FBK-100, ILC-ItWack, Berardi2015_Glove) , almost equals one (Berardi2015_w2v) , and it is outperformed only by one (Fastext-It) . In the relaxed evaluation setting, DH-FBK-100 is the only configuration that does not beat FBK-HLT (although the difference is only 0.001 point). Nevertheless, it is remarkable to observe that FBK-HLT has a very high Precision (0.902, relaxed evaluation mode), that is overcome by only one embedding configuration, ILC-ItWack. The results also indicates that word embeddings have a major contribution on Recall, supporting observations that distributed representations have better generalization capabilities than discrete feature vectors. This is further supported by the fact that these results are obtained using a single step approach, where the network has to deal with a total of 15 possible different labels.", "We further compared the outputs of the best model, i.e. Fastext-It, against FBK-HLT. As for the event detection subtask, we have adopted an event-based analysis rather than a token based one, as this will provide better insights on errors concerning multi-token events and event parts-of-speech (see Table 1 for reference). By analyzing the True Positives, we observe that the Fastext-It model has better performances than FBK-HLT with nouns (77.78% vs. 65.64%, respectively) and prepositional phrases (28.00% vs. 16.00%, respectively). Performances are very close for verbs (88.04% vs. 88.49%, respectively) and adjectives (80.50% vs. 79.66%, respectively). These results, especially those for prepositional phrases, indicates that the Bi-LSTM-CRF network structure and embeddings are also much more robust at detecting multi-tokens instances of events, and difficult realizations of events, such as nouns.", "Concerning the classification, we focused on the mismatches between correctly identified events (extent layer) and class assignment. The Fastext-It model wrongly assigns the class to only 557 event tokens compared to the 729 cases for FBK-HLT. The distribution of the class errors, in terms of absolute numbers, is the same between the two systems, with the top three wrong classes being, in both cases, OCCURRENCE, I_ACTION and STATE. OCCURRENCE, not surprisingly, is the class that tends to be assigned more often by both systems, being also the most frequent. However, if FBK-HLT largely overgeneralizes OCCURRENCE (59.53% of all class errors), this corresponds to only one third of the errors (37.70%) in the Bi-LSTM-CRF network. Other notable differences concern I_ACTION (27.82% of errors for the Bi-LSTM-CRF vs. 17.28% for FBK-HLT), STATE (8.79% for the Bi-LSTM-CRF vs. 15.22% for FBK-HLT) and REPORTING (7.89% for the Bi-LSTM-CRF vs. 2.33% for FBK-HLT) classes." ], [ "This paper has investigated the application of different word embeddings for the initialization of a state-of-the-art Bi-LSTM-CRF network to solve the event detection and classification task in Italian, according to the EVENTI exercise. We obtained new state-of-the-art results using the Fastext-It embeddings, and improved the F1-class score of 6.5 points in strict evaluation mode. As for the event detection subtask, we observe a limited improvement (+1.3 points in strict F1), mainly due to gains in Recall. Such results are extremely positive as the task has been modeled in a single step approach, i.e. detection and classification at once, for the first time in Italian. 
Further support that embeddings have a major impact on the performance of neural architectures is provided, as the variations in performance of the Bi-LSTM-CRF models show. This is due to a combination of factors such as dimensionality, (raw) data, and the method used for generating the embeddings.", "Future work should focus on the development of embeddings that move away from the basic word level, integrating extra layers of linguistic analysis (e.g. syntactic dependencies) BIBREF24 , which have proven to be very powerful for the same task in English." ], [ "The author wants to thank all researchers and research groups who made available their word embeddings and their code. Sharing is caring." ] ], "section_name": [ "Introduction", "Task Description", "Dataset", "System and Experiments", "Results and Discussion", "Conclusion and Future Work", "Acknowledgments" ] }
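The System and Experiments paragraph in the full text above casts detection and classification as a single sequence-labelling pass over B-class/I-class/O tags. Below is a small, self-contained Python sketch of that conversion and its inverse (not the paper's code; the span format is assumed, and the class names are abbreviated as in the record's own "Marco pensa di andare a casa" example):

    def spans_to_bio(tokens, events):
        # events: list of (start, end_exclusive, class_label) over token indices.
        tags = ["O"] * len(tokens)
        for start, end, label in events:
            tags[start] = "B-" + label
            for i in range(start + 1, end):
                tags[i] = "I-" + label           # multi-token events get I- continuations
        return tags

    def bio_to_spans(tags):
        # Inverse mapping, useful e.g. when scoring strict (full-extent) matches.
        spans, start = [], None
        for i, tag in enumerate(list(tags) + ["O"]):     # sentinel closes a trailing event
            if (tag == "O" or tag.startswith("B-")) and start is not None:
                spans.append((start, i, tags[start][2:]))
                start = None
            if tag.startswith("B-"):
                start = i
        return spans

    tokens = ["Marco", "pensa", "di", "andare", "a", "casa", "."]
    events = [(1, 2, "ISTATE"), (3, 4, "OCCUR")]         # hypothetical gold annotation
    tags = spans_to_bio(tokens, events)
    print(list(zip(tokens, tags)))   # pensa -> B-ISTATE, andare -> B-OCCUR, rest O
    assert bio_to_spans(tags) == events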
{ "answers": [ { "annotation_id": [ "16f6108476331587c6806eefac2b5e508e34cba4", "81d04ccb4f3adbf3bd15a26e00bc16a7ab86cfa0", "f1ee85d890987279b7c3c06062489d405c099989", "b8955f181ff1451f2cbe6d8cf48cf6c10a261edb" ], "answer": [ { "evidence": [ "Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available. ." ], "extractive_spans": [ "adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach", "investigation on the quality of existing Italian word embeddings for this task", "a comparison against a state-of-the-art discrete classifier" ], "free_form_answer": "", "highlighted_evidence": [ "The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available. ." ], "extractive_spans": [], "free_form_answer": "(1) Using seq2seq for event detection and classification in Italian (2) Investigating quality of Italian word embeddings for this task (3) Comparison to state-of-the-art discrete classifier", "highlighted_evidence": [ "The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) 
an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available. ." ], "extractive_spans": [ "the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach", "an investigation on the quality of existing Italian word embeddings for this task", "a comparison against a state-of-the-art discrete classifier", "pre-trained models and scripts running the system" ], "free_form_answer": "", "highlighted_evidence": [ "The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. ", "The pre-trained models and scripts running the system (or re-train it) are publicly available." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and scripts running the system (or re-train it) are publicly available. ." ], "extractive_spans": [], "free_form_answer": "Adapting a seq2seq neural system to event detection and classification for Italian, investigating the quality of existing embeddings for the task, and comparing against a state-of-the-art discrete classifier.", "highlighted_evidence": [ "The contributions of this paper are the followings: i.) the adaptation of a state-of-the-art sequence to sequence (seq2seq) neural system to event detection and classification for Italian in a single step approach; ii.) an investigation on the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "416daf57ea25409ce3ae47c8b21992cd60cc07cf", "c7d4a630661cd719ea504dba56393f78278b296b", "35491e1e579f6d147f4793edce4c1a80ab2410e7", "ea17aa5ea17e7838a55c252484390079b16c31ae" ] }, { "annotation_id": [ "3af7d9fb05de1e51ea7259fc7a77ac909146b840", "644e65651273e1ecfa015dd1837b22067e4eacc7", "fd6cf0ab146cb7db91456a585d7a92a359d99634", "c69205165ece54582fe6fa3a21d5a3a4254bd40c" ], "answer": [ { "evidence": [ "Results for the experiments are illustrated in Table 2 . We also report the results of the best system that participated at EVENTI Subtask B, FBK-HLT BIBREF23 . FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features. Figure 1 plots charts comparing F1 scores of the network initialized with each of the five embeddings against the FBK-HLT system for the event detection and classification tasks, respectively." 
], "extractive_spans": [ " cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features" ], "free_form_answer": "", "highlighted_evidence": [ "FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Results for the experiments are illustrated in Table 2 . We also report the results of the best system that participated at EVENTI Subtask B, FBK-HLT BIBREF23 . FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features. Figure 1 plots charts comparing F1 scores of the network initialized with each of the five embeddings against the FBK-HLT system for the event detection and classification tasks, respectively." ], "extractive_spans": [], "free_form_answer": "FBK-HLT - a cascade of two SVM classifiers (one for detection and one for classification)", "highlighted_evidence": [ "We also report the results of the best system that participated at EVENTI Subtask B, FBK-HLT BIBREF23 . FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Results for the experiments are illustrated in Table 2 . We also report the results of the best system that participated at EVENTI Subtask B, FBK-HLT BIBREF23 . FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features. Figure 1 plots charts comparing F1 scores of the network initialized with each of the five embeddings against the FBK-HLT system for the event detection and classification tasks, respectively." ], "extractive_spans": [ "FBK-HLT BIBREF23" ], "free_form_answer": "", "highlighted_evidence": [ "We also report the results of the best system that participated at EVENTI Subtask B, FBK-HLT BIBREF23 . FBK-HLT is a cascade of two SVM classifiers (one for detection and one for classification) based on rich linguistic features. 
" ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "35491e1e579f6d147f4793edce4c1a80ab2410e7", "c7d4a630661cd719ea504dba56393f78278b296b", "416daf57ea25409ce3ae47c8b21992cd60cc07cf", "ea17aa5ea17e7838a55c252484390079b16c31ae" ] }, { "annotation_id": [ "646cff94468a022f491ba6911647fb147d7d7be6", "95148262a2a42f2845f1d18485c978f0f59b4ea1", "c7a3cd782ecd799b24cb02f1e2d67c186e773926", "9dd6279f70638d17c9c16ede2dab38d4116ddd0b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b", "35491e1e579f6d147f4793edce4c1a80ab2410e7", "416daf57ea25409ce3ae47c8b21992cd60cc07cf", "ea17aa5ea17e7838a55c252484390079b16c31ae" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "What are the contributions of this paper?", "What are the baselines this paper uses?", "Can the model be extended to other languages?" ], "question_id": [ "c97306c1be5d59cf27b1054adfa8f1da47d292ce", "e42916924b69cab1df25d3b4e6072feaa0ba8084", "079ca5810060e1cdc12b5935d8c248492f0478b9" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "" ], "topic_background": [ "research", "research", "research" ] }
{ "caption": [ "Table 1: Distribution of the event mentions per POS per token in all datasets of the EVENTI corpus.", "Table 2: Distribution of the event mentions per class in all datasets of the EVENTI corpus.", "Table 3: Results for Subtask B Main Task - Event detection and classification.", "Figure 1: Plots of F1 scores of the Bi-LSTM-CRF systems against the FBK-HLT system for Event Extent (left side) and Event Class (right side). F1 scores refer to the" ], "file": [ "4-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Figure1-1.png" ] }
[ "What are the contributions of this paper?", "What are the baselines this paper uses?" ]
[ [ "1810.02229-Introduction-2" ], [ "1810.02229-Results and Discussion-0" ] ]
[ "Adapting a seq2seq neural system to event detection and classification for Italian, investigating the quality of existing embeddings for the task, and comparing against a state-of-the-art discrete classifier.", "FBK-HLT - a cascade of two SVM classifiers (one for detection and one for classification)" ]
6
1909.00091
Automatically Inferring Gender Associations from Language
In this paper, we pose the question: do people talk about women and men in different ways? We introduce two datasets and a novel integration of approaches for automatically inferring gender associations from language, discovering coherent word clusters, and labeling the clusters for the semantic concepts they represent. The datasets allow us to compare how people write about women and men in two different settings - one set draws from celebrity news and the other from student reviews of computer science professors. We demonstrate that there are large-scale differences in the ways that people talk about women and men and that these differences vary across domains. Human evaluations show that our methods significantly outperform strong baselines.
{ "paragraphs": [ [ "It is well-established that gender bias exists in language – for example, we see evidence of this given the prevalence of sexism in abusive language datasets BIBREF0, BIBREF1. However, these are extreme cases of gender norms in language, and only encompass a small proportion of speakers or texts.", "Less studied in NLP is how gender norms manifest in everyday language – do people talk about women and men in different ways? These types of differences are far subtler than abusive language, but they can provide valuable insight into the roots of more extreme acts of discrimination. Subtle differences are difficult to observe because each case on its own could be attributed to circumstance, a passing comment or an accidental word. However, at the level of hundreds of thousands of data points, these patterns, if they do exist, become undeniable. Thus, in this work, we introduce new datasets and methods so that we can study subtle gender associations in language at the large-scale.", "Our contributions include:", "Two datasets for studying language and gender, each consisting of over 300K sentences.", "Methods to infer gender-associated words and labeled clusters in any domain.", "Novel findings that demonstrate in both domains that people do talk about women and men in different ways.", "Each contribution brings us closer to modeling how gender associations appear in everyday language. In the remainder of the paper, we present related work, our data collection, methods and findings, and human evaluations of our system." ], [ "The study of gender and language has a rich history in social science. Its roots are often attributed to Robin Lakoff, who argued that language is fundamental to gender inequality, “reflected in both the ways women are expected to speak, and the ways in which women are spoken of” BIBREF2. Prominent scholars following Lakoff have included Deborah Tannen BIBREF3, Mary Bucholtz and Kira Hall BIBREF4, Janet Holmes BIBREF5, Penelope Eckert BIBREF6, and Deborah Cameron BIBREF7, along with many others.", "In recent decades, the study of gender and language has also attracted computational researchers. Echoing Lakoff's original claim, a popular strand of computational work focuses on differences in how women and men talk, analyzing key lexical traits BIBREF8, BIBREF9, BIBREF10 and predicting a person's gender from some text they have written BIBREF11, BIBREF12. There is also research studying how people talk to women and men BIBREF13, as well as how people talk about women and men, typically in specific domains such as sports journalism BIBREF14, fiction writing BIBREF15, movie scripts BIBREF16, and Wikipedia biographies BIBREF17, BIBREF18. Our work builds on this body by diving into two novel domains: celebrity news, which explores gender in pop culture, and student reviews of CS professors, which examines gender in academia and, particularly, the historically male-dominated field of CS. Furthermore, many of these works rely on manually constructed lexicons or topics to pinpoint gendered language, but our methods automatically infer gender-associated words and labeled clusters, thus reducing supervision and increasing the potential to discover subtleties in the data.", "Modeling gender associations in language could also be instrumental to other NLP tasks. Abusive language is often founded in sexism BIBREF0, BIBREF1, so models of gender associations could help to improve detection in those cases. 
Gender bias also manifests in NLP pipelines: prior research has found that word embeddings preserve gender biases BIBREF19, BIBREF20, BIBREF21, and some have developed methods to reduce this bias BIBREF22, BIBREF23. Yet, the problem is far from solved; for example, BIBREF24 showed that it is still possible to recover gender bias from “de-biased” embeddings. These findings further motivate our research, since before we can fully reduce gender bias in embeddings, we need to develop a deeper understanding of how gender permeates through language in the first place.", "We also build on methods to cluster words in word embedding space and automatically label clusters. Clustering word embeddings has proven useful for discovering salient patterns in text corpora BIBREF25, BIBREF26. Once clusters are derived, we would like them to be interpretable. Much research simply considers the top-n words from each cluster, but this method can be subjective and time-consuming to interpret. Thus, there are efforts to design methods of automatic cluster labeling BIBREF27. We take a similar approach to BIBREF28, who leverage word embeddings and WordNet during labeling, and we extend their method with additional techniques and evaluations." ], [ "Our first dataset contains articles from celebrity magazines People, UsWeekly, and E!News. We labeled each article for whether it was reporting on men, women, or neither/unknown. To do this, we first extracted the article's topic tags. Some of these tags referred to people, but others to non-people entities, such as “Gift Ideas” or “Health.” To distinguish between these types of tags, we queried each tag on Wikipedia and checked whether the top page result contained a “Born” entry in its infobox – if so, we concluded that the tag referred to a person.", "Then, from the person's Wikipedia page, we determined their gender by checking whether the introductory paragraphs of the page contained more male or female pronouns. This method was simple but effective, since pronouns in the introduction almost always resolve to the subject of that page. In fact, on a sample of 80 tags that we manually annotated, we found that comparing pronoun counts predicted gender with perfect accuracy. Finally, if an article tagged at least one woman and did not tag any men, we labeled the article as Female; in the opposite case, we labeled it as Male.", "Our second dataset contains reviews from RateMyProfessors (RMP), an online platform where students can review their professors. We included all 5,604 U.S. schools on RMP, and collected all reviews for CS professors at those schools. We labeled each review with the gender of the professor whom it was about, which we determined by comparing the count of male versus female pronouns over all reviews for that professor. This method was again effective, because the reviews are expressly written about a certain professor, so the pronouns typically resolve to that professor.", "In addition to extracting the text of the articles or reviews, for each dataset we also collected various useful metadata. For the celebrity dataset, we recorded each article's timestamp and the name of the author, if available. Storing author names creates the potential to examine the relationship between the gender of the author and the gender of the subject, such as asking if there are differences between how women write about men and how men write about men. 
In this work, we did not yet pursue this direction because we wanted to begin with a simpler question of how gender is discussed: regardless of the gender of the authors, what is the content being put forth and consumed? Furthermore, we were unable to extract author gender in the professor dataset since the RMP reviews are anonymous. However, in future work, we may explore the influence of author gender in the celebrity dataset.", "For the professor dataset, we captured metadata such as each review's rating, which indicates how the student feels about the professor on a scale of AWFUL to AWESOME. This additional variable in our data creates the option in future work to factor in sentiment; for example, we could study whether there are differences in language used when criticizing a female versus a male professor." ], [ "Our first goal was to discover words that are significantly associated with men or women in a given domain. We employed an approach used by BIBREF10 in their work to analyze differences in how men and women write on Twitter." ], [ "First, to operationalize, we say that term $i$ is associated with gender $j$ if, when discussing individuals of gender $j$, $i$ is used with unusual frequency – which we can check with statistical hypothesis tests. Let $f_i$ represent the likelihood of $i$ appearing when discussing women or men. $f_i$ is unknown, but we can model the distribution of all possible $f_i$ using the corpus of texts that we have from the domain. We construct a gender-balanced version of the corpus by randomly undersampling the more prevalent gender until the proportions of each gender are equal. Assuming a non-informative prior distribution on $f_i$, the posterior distribution is Beta($k_i$, $N - k_i$), where $k_i$ is the count of $i$ in the gender-balanced corpus and $N$ is the total count of words in that corpus.", "As BIBREF10 discuss, “the distribution of the gender-specific counts can be described by an integral over all possible $f_i$. This integral defines the Beta-Binomial distribution BIBREF29, and has a closed form solution.” We say that term $i$ is significantly associated with gender $j$ if the cumulative distribution at $k_{ij}$ (the count of $i$ in the $j$ portion of the gender-balanced corpus) is $p \\le 0.05$. As in the original work, we apply the Bonferroni correction BIBREF30 for multiple comparisons because we are computing statistical tests for thousands of hypotheses." ], [ "We applied this method to discover gender-associated words in both domains. In Table TABREF9, we present a sample of the most gender-associated nouns from the celebrity domain. Several themes emerge: for example, female celebrities seem to be more associated with appearance (“gown,” “photo,” “hair,” “look”), while male celebrities are more associated with creating content (“movie,” “film,” “host,” “director”). This echoes real-world trends: for instance, on the red carpet, actresses tend to be asked more questions about their appearance –- what brands they are wearing, how long it took to get ready, etc. –- while actors are asked questions about their careers and creative processes (as an example, see BIBREF31).", "Table TABREF9 also includes some of the most gender-associated verbs and adjectives from the professor domain. 
Female CS professors seem to be praised for being communicative and personal with students (“respond,” “communicate,” “kind,” “caring”), while male CS professors are recognized for being knowledgeable and challenging the students (“teach,”, “challenge,” “brilliant,” “practical”). These trends are well-supported by social science literature, which has found that female teachers are praised for “personalizing” instruction and interacting extensively with students, while male teachers are praised for using “teacher as expert” styles that showcase mastery of material BIBREF32.", "These findings establish that there are clear differences in how people talk about women and men – even with Bonferroni correction, there are still over 500 significantly gender-associated nouns, verbs, and adjectives in the celebrity domain and over 200 in the professor domain. Furthermore, the results in both domains align with prior studies and real world trends, which validates that our methods can capture meaningful patterns and innovatively provide evidence at the large-scale. This analysis also hints that it can be helpful to abstract from words to topics to recognize higher-level patterns of gender associations, which motivates our next section on clustering." ], [ "With word-level associations in hand, our next goals were to discover coherent clusters among the words and to automatically label those clusters." ], [ "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors.", "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:", "Sense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.", "Candidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.", "Candidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.", "In steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well." ], [ "Table TABREF11 displays a sample of our results – we find that the clusters are coherent in context and the labels seem reasonable. 
In the next section, we discuss human evaluations that we conducted to more rigorously evaluate the output, but first we discuss the value of these methods toward analysis.", "At the word-level, we hypothesized that in the celebrity domain, women were more associated with appearance and men with creating content. Now, we can validate those hypotheses against labeled clusters – indeed, there is a cluster labeled clothing that is 100% female (i.e. 100% words are female-associated), and a 80% male cluster labeled movie. Likewise, in the professor domain, we had guessed that women are associated with communication and men with knowledge, and there is a 100% female cluster labeled communication and a 89% male cluster labeled cognition. Thus, cluster labeling proves to be very effective at pulling out the patterns that we believed we saw at the word-level, but could not formally validate.", "The clusters we mentioned so far all lean heavily toward one gender association or the other, but some clusters are interesting precisely because they do not lean heavily – this allows us to see where semantic groupings do not align exactly with gender association. For example, in the celebrity domain, there is a cluster labeled lover that has a mix of female-associated words (“boyfriend,” “beau,” “hubby”) and male-associated words (“wife,” “girlfriend”). Jointly leveraging cluster labels and gender associations allows us to see that in the semantic context of having a lover, women are typically associated with male figures and men with female figures, which reflects heteronormativity in society." ], [ "To test our clusters, we employed the Word Intrusion task BIBREF35. We present the annotator with five words – four drawn from one cluster and one drawn randomly from the domain vocabulary – and we ask them to pick out the intruder. The intuition is that if the cluster is coherent, then an observer should be able to identify the out-of-cluster word as the intruder. For both domains, we report results on all clusters and on the top 8, ranked by ascending normalized sum of squared errors, which can be seen as a prediction of coherence. In the celebrity domain, annotators identified the out-of-cluster word 73% of the time in the top-8 and 53% overall. In the professor domain, annotators identified it 60% of the time in the top-8 and 49% overall. As expected, top-8 performance in both domains does considerably better than overall, but at all levels the precision is significantly above the random baseline of 20%.", "To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. The centroid achieved an in-cluster rate of .60 and out-of-cluster rate of .18 (difference of .42). Our best performing predicted label achieved an in-cluster rate of .65 and an out-of-cluster rate of .04 (difference of .61), thus outperforming the centroid on both rates and increasing the gap between rates by nearly 20 points. In the Appendix, we include more detailed results on both tasks." 
], [ "We have presented two substantial datasets and a novel integration of methods to automatically infer gender associations in language. We have demonstrated that in both datasets, there are clear differences in how people talk about women and men. Furthermore, we have shown that clustering and cluster labeling are effective at identifying higher-level patterns of gender associations, and that our methods outperform strong baselines in human evaluations. In future work, we hope to use our findings to improve performance on tasks such as abusive language detection. We also hope to delve into finer-grained analyses, exploring how language around gender interacts with other variables, such as sexual orientation or profession (e.g. actresses versus female athletes). Finally, we plan to continue widening the scope of our study – for example, expanding our methods to include non-binary gender identities, evaluating changes in gender norms over time, and spreading to more domains, such as the political sphere." ] ], "section_name": [ "Introduction", "Related Work", "Data Collection", "Inferring Word-Level Associations", "Inferring Word-Level Associations ::: Methods", "Inferring Word-Level Associations ::: Findings", "Clustering & Cluster Labeling", "Clustering & Cluster Labeling ::: Methods", "Clustering & Cluster Labeling ::: Findings", "Human Evaluations", "Conclusion" ] }
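The data-collection procedure in the record above determines a person's gender by comparing counts of male and female pronouns, either in the introductory paragraphs of their Wikipedia page or across their reviews. A minimal sketch of that heuristic follows, assuming simple lowercase tokenization and the usual English pronoun lists (not necessarily the authors' exact lists).

```python
import re

MALE = {"he", "him", "his", "himself"}
FEMALE = {"she", "her", "hers", "herself"}

def infer_gender(text):
    """Return 'Male', 'Female', or None by comparing pronoun counts."""
    tokens = re.findall(r"[a-z]+", text.lower())
    m = sum(t in MALE for t in tokens)
    f = sum(t in FEMALE for t in tokens)
    if m > f:
        return "Male"
    if f > m:
        return "Female"
    return None  # tie or no pronouns: leave unlabeled

print(infer_gender("She is an American singer. Her debut album ..."))  # Female
```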
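The word-level association method in the record above places a Beta($k_i$, $N - k_i$) posterior on each term's frequency in a gender-balanced corpus and tests whether the term's count in one gender's half is extreme under the resulting Beta-Binomial distribution, applying a Bonferroni correction. The sketch below is one plausible reading of that description using SciPy; the upper-tail direction of the test, the input format, and the counting scheme are assumptions rather than the authors' exact implementation.

```python
# Sketch of the Beta-Binomial test for gender-associated terms.
# Assumption: `balanced` is an iterable of (gender, tokens) documents drawn from
# the gender-balanced corpus; the upper tail (over-use by `gender`) is tested.
from collections import Counter
from scipy.stats import betabinom

def gender_associated_terms(balanced, gender="Female", alpha=0.05):
    total = Counter()       # k_i: term counts over the whole balanced corpus
    in_gender = Counter()   # k_ij: term counts in the gender-j half
    n_j = 0                 # number of word tokens in the gender-j half
    for g, tokens in balanced:
        total.update(tokens)
        if g == gender:
            in_gender.update(tokens)
            n_j += len(tokens)
    N = sum(total.values())
    threshold = alpha / len(total)      # Bonferroni correction over the vocabulary
    associated = {}
    for term, k_i in total.items():
        if k_i == N:                    # degenerate case: skip
            continue
        k_ij = in_gender[term]
        # Beta(k_i, N - k_i) posterior on f_i  =>  k_ij ~ BetaBinomial(n_j, k_i, N - k_i)
        p = betabinom.sf(k_ij - 1, n_j, k_i, N - k_i)   # P(count >= k_ij)
        if p <= threshold:
            associated[term] = p
    return associated

# Hypothetical usage:
# female_terms = gender_associated_terms(balanced_corpus, gender="Female")
```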
{ "answers": [ { "annotation_id": [ "225012fe4d6090ae49d9f264e5516a6e31ac75fc", "3c118916ae957037bac6ab0feea1b5a98a342376", "3e87bfef881da4186bc8a831f8505444cf118f39", "919a5832117b79e307b09e00dacabcfa4f5c16b6" ], "answer": [ { "evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:", "Sense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.", "Candidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.", "Candidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.", "In steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well." ], "extractive_spans": [ "Given a cluster, our algorithm proceeds with the following three steps:\n\nSense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.\n\nCandidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.\n\nCandidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.\n\nIn steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well." ], "free_form_answer": "", "highlighted_evidence": [ "Given a cluster, our algorithm proceeds with the following three steps:\n\nSense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.\n\nCandidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.\n\nCandidate label ranking: Here, we rank the synsets in $L$. 
We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.\n\nIn steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:", "Sense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.", "Candidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.", "Candidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.", "In steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well." ], "extractive_spans": [ "Candidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance." ], "free_form_answer": "", "highlighted_evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:\n\nSense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.\n\nCandidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.\n\nCandidate label ranking: Here, we rank the synsets in $L$. 
We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.\n\nIn steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:" ], "extractive_spans": [], "free_form_answer": "They automatically label the cluster using WordNet and context-sensitive strengths of domain-specific word embeddings", "highlighted_evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings.", "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:" ], "extractive_spans": [ "Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering" ], "free_form_answer": "", "highlighted_evidence": [ "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3175ef31ccea31e822d8b7f0f8c4607af4ef2aa6", "36de7fe28ff79e225386c79c34f5bf7ba77f2b6f", "5479d6caa95791b2f148e8f6fc21c29b2d1ffdda", "e2d50e395695067658b8017d7bd1ed9d375edb01" ], "answer": [ { "evidence": [ "With word-level associations in hand, our next goals were to discover coherent clusters among the words and to automatically label those clusters.", "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors.", "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. 
Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:", "Sense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets." ], "extractive_spans": [ "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors." ], "free_form_answer": "", "highlighted_evidence": [ "With word-level associations in hand, our next goals were to discover coherent clusters among the words and to automatically label those clusters.", "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors.", "To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:\n\nSense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors." ], "extractive_spans": [ "First, we trained domain-specific word embeddings", "Then, we used k-means clustering to cluster the embeddings of the gender-associated words" ], "free_form_answer": "", "highlighted_evidence": [ "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "With word-level associations in hand, our next goals were to discover coherent clusters among the words and to automatically label those clusters.", "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). 
Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors." ], "extractive_spans": [], "free_form_answer": "First, they trained domain-specific word embeddings using the Word2Vec model, then used k-means clustering to cluster the embeddings of the gender-associated words.", "highlighted_evidence": [ "With word-level associations in hand, our next goals were to discover coherent clusters among the words and to automatically label those clusters.", "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Inferring Word-Level Associations", "Our first goal was to discover words that are significantly associated with men or women in a given domain. We employed an approach used by BIBREF10 in their work to analyze differences in how men and women write on Twitter.", "Inferring Word-Level Associations ::: Methods", "First, to operationalize, we say that term $i$ is associated with gender $j$ if, when discussing individuals of gender $j$, $i$ is used with unusual frequency – which we can check with statistical hypothesis tests. Let $f_i$ represent the likelihood of $i$ appearing when discussing women or men. $f_i$ is unknown, but we can model the distribution of all possible $f_i$ using the corpus of texts that we have from the domain. We construct a gender-balanced version of the corpus by randomly undersampling the more prevalent gender until the proportions of each gender are equal. Assuming a non-informative prior distribution on $f_i$, the posterior distribution is Beta($k_i$, $N - k_i$), where $k_i$ is the count of $i$ in the gender-balanced corpus and $N$ is the total count of words in that corpus.", "As BIBREF10 discuss, “the distribution of the gender-specific counts can be described by an integral over all possible $f_i$. This integral defines the Beta-Binomial distribution BIBREF29, and has a closed form solution.” We say that term $i$ is significantly associated with gender $j$ if the cumulative distribution at $k_{ij}$ (the count of $i$ in the $j$ portion of the gender-balanced corpus) is $p \\le 0.05$. As in the original work, we apply the Bonferroni correction BIBREF30 for multiple comparisons because we are computing statistical tests for thousands of hypotheses.", "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors." ], "extractive_spans": [], "free_form_answer": "The authors first generated a set of words which are associated with each gender, then built domain-specific word embeddings and used k-means clustering to cluster the gendered word associations together. ", "highlighted_evidence": [ "Inferring Word-Level Associations\nOur first goal was to discover words that are significantly associated with men or women in a given domain. 
We employed an approach used by BIBREF10 in their work to analyze differences in how men and women write on Twitter.\n\nInferring Word-Level Associations ::: Methods\nFirst, to operationalize, we say that term $i$ is associated with gender $j$ if, when discussing individuals of gender $j$, $i$ is used with unusual frequency – which we can check with statistical hypothesis tests. Let $f_i$ represent the likelihood of $i$ appearing when discussing women or men. $f_i$ is unknown, but we can model the distribution of all possible $f_i$ using the corpus of texts that we have from the domain. We construct a gender-balanced version of the corpus by randomly undersampling the more prevalent gender until the proportions of each gender are equal. Assuming a non-informative prior distribution on $f_i$, the posterior distribution is Beta($k_i$, $N - k_i$), where $k_i$ is the count of $i$ in the gender-balanced corpus and $N$ is the total count of words in that corpus.\n\nAs BIBREF10 discuss, “the distribution of the gender-specific counts can be described by an integral over all possible $f_i$. This integral defines the Beta-Binomial distribution BIBREF29, and has a closed form solution.” We say that term $i$ is significantly associated with gender $j$ if the cumulative distribution at $k_{ij}$ (the count of $i$ in the $j$ portion of the gender-balanced corpus) is $p \\le 0.05$. As in the original work, we apply the Bonferroni correction BIBREF30 for multiple comparisons because we are computing statistical tests for thousands of hypotheses.", "First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \\in R^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "annotation_id": [ "2fda400ba50aaacc6df4b6a9f65c7461794e9ab1", "a8e00a1500e44f49faa4d0e7d62ec7dc0c714508", "c82c2ead95c0ce98e4098b121dd264965e886af3", "dfba841d7bc37ee2eaf608ad5124951c50dc6e3f" ], "answer": [ { "evidence": [ "Two datasets for studying language and gender, each consisting of over 300K sentences." ], "extractive_spans": [], "free_form_answer": "300K sentences in each dataset", "highlighted_evidence": [ "Two datasets for studying language and gender, each consisting of over 300K sentences." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our contributions include:", "Two datasets for studying language and gender, each consisting of over 300K sentences." ], "extractive_spans": [ "each consisting of over 300K sentences" ], "free_form_answer": "", "highlighted_evidence": [ "Our contributions include:\n\nTwo datasets for studying language and gender, each consisting of over 300K sentences." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Summary statistics of our datasets." ], "extractive_spans": [], "free_form_answer": "Celeb dataset: 15917 texts and 342645 sentences\nProfessor dataset: 283973 texts and 976677 sentences", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Summary statistics of our datasets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Summary statistics of our datasets." 
], "extractive_spans": [], "free_form_answer": "Celebrity Dataset has 15,917 texts, 342,645 sentences, and the Female Male Proportions are 0.67/ 0.33. \nProfessor Dataset has 283,973 texts, 976, 667 sentences, and the Femal Male Proportions are 0.28./ 0,72", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Summary statistics of our datasets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "435ee111d12b9acc7125f8efb2aa67af4c362dcb", "545917d4595ff2f2c85008f1121d994924b51c68", "885d4e44a1e9478f1c008f01cf5fa4b81b5dd9d4", "a1fc4edc971c582a6dcc6a7f2313b455d35ade45" ], "answer": [ { "evidence": [ "Human Evaluations", "To test our clusters, we employed the Word Intrusion task BIBREF35. We present the annotator with five words – four drawn from one cluster and one drawn randomly from the domain vocabulary – and we ask them to pick out the intruder. The intuition is that if the cluster is coherent, then an observer should be able to identify the out-of-cluster word as the intruder. For both domains, we report results on all clusters and on the top 8, ranked by ascending normalized sum of squared errors, which can be seen as a prediction of coherence. In the celebrity domain, annotators identified the out-of-cluster word 73% of the time in the top-8 and 53% overall. In the professor domain, annotators identified it 60% of the time in the top-8 and 49% overall. As expected, top-8 performance in both domains does considerably better than overall, but at all levels the precision is significantly above the random baseline of 20%.", "To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. The centroid achieved an in-cluster rate of .60 and out-of-cluster rate of .18 (difference of .42). Our best performing predicted label achieved an in-cluster rate of .65 and an out-of-cluster rate of .04 (difference of .61), thus outperforming the centroid on both rates and increasing the gap between rates by nearly 20 points. In the Appendix, we include more detailed results on both tasks." ], "extractive_spans": [], "free_form_answer": "The authors contrasted human evaluations against a random baseline, and used the centroid of the cluster as a strong baseline.", "highlighted_evidence": [ "Human Evaluations\nTo test our clusters, we employed the Word Intrusion task BIBREF35. We present the annotator with five words – four drawn from one cluster and one drawn randomly from the domain vocabulary – and we ask them to pick out the intruder. The intuition is that if the cluster is coherent, then an observer should be able to identify the out-of-cluster word as the intruder. For both domains, we report results on all clusters and on the top 8, ranked by ascending normalized sum of squared errors, which can be seen as a prediction of coherence. 
In the celebrity domain, annotators identified the out-of-cluster word 73% of the time in the top-8 and 53% overall. In the professor domain, annotators identified it 60% of the time in the top-8 and 49% overall. As expected, top-8 performance in both domains does considerably better than overall, but at all levels the precision is significantly above the random baseline of 20%.\n\nTo test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. The centroid achieved an in-cluster rate of .60 and out-of-cluster rate of .18 (difference of .42). Our best performing predicted label achieved an in-cluster rate of .65 and an out-of-cluster rate of .04 (difference of .61), thus outperforming the centroid on both rates and increasing the gap between rates by nearly 20 points. In the Appendix, we include more detailed results on both tasks." ], "extractive_spans": [ "the top 4 predicted labels and the centroid of the cluster" ], "free_form_answer": "", "highlighted_evidence": [ "In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. The centroid achieved an in-cluster rate of .60 and out-of-cluster rate of .18 (difference of .42). Our best performing predicted label achieved an in-cluster rate of .65 and an out-of-cluster rate of .04 (difference of .61), thus outperforming the centroid on both rates and increasing the gap between rates by nearly 20 points. In the Appendix, we include more detailed results on both tasks." 
], "extractive_spans": [ "the top 4 predicted labels and the centroid of the cluster as a strong baseline label" ], "free_form_answer": "", "highlighted_evidence": [ "To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "How do they decide what is the semantic concept label of particular cluster?", "How do they discover coherent word clusters?", "How big are two introduced datasets?", "What are strong baselines authors used?" ], "question_id": [ "a3e7d7389228a197c8c44e0c504a791b60f2c80d", "8b4bd0a962241ea548752212ebac145e2ced7452", "d39059340a79bdc0ebab80ad3308e3037d7d5773", "31d4b0204702907dc0cd0f394cf9c984649e1fbf" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "computer vision", "computer vision", "computer vision", "computer vision" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Summary statistics of our datasets.", "Table 2: Top: Sample from the top-25 most genderassociated nouns in the celebrity domain. Middle: professor domain, sample from top-25 verbs. Bottom: professor domain, sample from top-25 adjectives. All associations listed are p ≤ 0.05, with Bonferroni correction. See Appendix for all top-25 nouns, verbs, and adjectives for both genders in both domains.", "Table 3: Sample of our clusters and predicted cluster labels. We include in the Appendix a more comprehensive table of our results. F:M refers to the ratio of female-associated to male-associated words in the cluster.", "Table 4: Results for Word Intrusion task. All results significantly outperform the random baseline of .20 (p ≤ 0.0001).", "Table 5: Top 25 most gender-associated nouns, verbs, and adjectives in the celebrity domain. Words are listed in order of decreasing significance, but all words fall under p ≤ 0.05, with Bonferroni correction.", "Table 6: Top 25 most gender-associated nouns, verbs, and adjectives in the professor domain. Words are listed in order of decreasing significance, but all words fall under p ≤ 0.05, with Bonferroni correction, aside from the last three terms listed for Female-Associated Verbs and the last four terms listed for Male-Associated Verbs.", "Table 7: Top 12 clusters out of 45 overall in the celebrity domain. Predicted labels are included if applicable – we were only able to predict labels for clusters that contained nouns, since our clustering labeling algorithm relied on the noun taxonomy in WordNet. In the Sample Words in Cluster column, italics indicate female-associated terms, and non-italics indicate male-associated. F:M refers to the ratio of female-associated to male-associated words in the cluster.", "Table 8: Top 6 clusters out of 16 overall in the professor domain. Same details as Table 7 apply.", "Table 9: Results for cluster labeling task. The 3rd predicted label has a significantly lower out-of-cluster rate than the centroid and all the other predicted labels (p ≤ 0.02). The same label also slightly outperforms the centroid on the in-cluster rate, thus producing a much larger gap between rates than the centroid." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "5-Table3-1.png", "7-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png", "9-Table7-1.png", "9-Table8-1.png", "10-Table9-1.png" ] }
[ "How do they decide what is the semantic concept label of particular cluster?", "How do they discover coherent word clusters?", "How big are two introduced datasets?", "What are strong baselines authors used?" ]
[ [ "1909.00091-Clustering & Cluster Labeling ::: Methods-4", "1909.00091-Clustering & Cluster Labeling ::: Methods-3", "1909.00091-Clustering & Cluster Labeling ::: Methods-2", "1909.00091-Clustering & Cluster Labeling ::: Methods-5", "1909.00091-Clustering & Cluster Labeling ::: Methods-1" ], [ "1909.00091-Clustering & Cluster Labeling ::: Methods-2", "1909.00091-Inferring Word-Level Associations-0", "1909.00091-Clustering & Cluster Labeling ::: Methods-1", "1909.00091-Inferring Word-Level Associations ::: Methods-1", "1909.00091-Inferring Word-Level Associations ::: Methods-0", "1909.00091-Clustering & Cluster Labeling-0", "1909.00091-Clustering & Cluster Labeling ::: Methods-0" ], [ "1909.00091-Introduction-3", "1909.00091-2-Table1-1.png", "1909.00091-Introduction-2" ], [ "1909.00091-Human Evaluations-0", "1909.00091-Human Evaluations-1" ] ]
[ "They automatically label the cluster using WordNet and context-sensitive strengths of domain-specific word embeddings", "The authors first generated a set of words which are associated with each gender, then built domain-specific word embeddings and used k-means clustering to cluster the gendered word associations together. ", "Celebrity Dataset has 15,917 texts, 342,645 sentences, and the Female Male Proportions are 0.67/ 0.33. \nProfessor Dataset has 283,973 texts, 976, 667 sentences, and the Femal Male Proportions are 0.28./ 0,72", "The authors contrasted human evaluations against a random baseline, and used the centroid of the cluster as a strong baseline." ]
7
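A note on the cluster-label evaluation quoted in the answers above: the reported in-cluster and out-of-cluster rates are simple acceptance frequencies over annotator judgments. A minimal sketch of how such rates could be computed is given below; the data layout and all names are illustrative assumptions, not the authors' code.

```python
from collections import defaultdict

def label_rates(judgments):
    """Compute in-cluster / out-of-cluster acceptance rates per candidate label.

    `judgments` is assumed to be an iterable of (label, origin, accepted)
    tuples, where origin is "in_cluster" or "random" and accepted is the
    annotator's yes/no decision. This mirrors the evaluation described in
    the evidence above, but the exact data layout is a guess.
    """
    counts = defaultdict(lambda: {"in_cluster": [0, 0], "random": [0, 0]})
    for label, origin, accepted in judgments:
        counts[label][origin][0] += int(accepted)
        counts[label][origin][1] += 1
    rates = {}
    for label, c in counts.items():
        in_rate = c["in_cluster"][0] / c["in_cluster"][1] if c["in_cluster"][1] else 0.0
        out_rate = c["random"][0] / c["random"][1] if c["random"][1] else 0.0
        rates[label] = {"in_rate": in_rate, "out_rate": out_rate, "gap": in_rate - out_rate}
    return rates
```

A label is considered good when its gap is large; in the evidence above the best predicted label reaches .65 in-cluster versus .04 out-of-cluster (a gap of .61), beating the centroid baseline.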
1909.04387
A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents
How should conversational agents respond to verbal abuse through the user? To answer this question, we conduct a large-scale crowd-sourced evaluation of abuse response strategies employed by current state-of-the-art systems. Our results show that some strategies, such as "polite refusal" score highly across the board, while for other strategies demographic factors, such as age, as well as the severity of the preceding abuse influence the user's perception of which response is appropriate. In addition, we find that most data-driven models lag behind rule-based or commercial systems in terms of their perceived appropriateness.
{ "paragraphs": [ [ "Ethical challenges related to dialogue systems and conversational agents raise novel research questions, such as learning from biased data sets BIBREF0, and how to handle verbal abuse from the user's side BIBREF1, BIBREF2, BIBREF3, BIBREF4. As highlighted by a recent UNESCO report BIBREF5, appropriate responses to abusive queries are vital to prevent harmful gender biases: the often submissive and flirty responses by the female-gendered systems reinforce ideas of women as subservient. In this paper, we investigate the appropriateness of possible strategies by gathering responses from current state-of-the-art systems and ask crowd-workers to rate them." ], [ "We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical\" utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:", "[noitemsep]", "Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”", "Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”", "Sexualised Insults, e.g. “Stupid bitch.”, “Whore”", "Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”", "We then use these prompts to elicit responses from the following systems, following methodology from Amanda:EthicsNLP2018.", "[leftmargin=5mm, noitemsep]", "4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.", "4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.", "4 Data-driven approaches:", "Cleverbot BIBREF12;", "NeuralConvo BIBREF13, a re-implementation of BIBREF14;", "an implementation of BIBREF15's Information Retrieval approach;", "a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.", "Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.", "We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse. If this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Amanda:EthicsNLP2018 - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\\kappa =0.66$)." ], [ "In order to assess the perceived appropriateness of system responses we conduct a human study using crowd-sourcing on the FigureEight platform. We define appropriateness as “acceptable behaviour in a work environment” and the participants were made aware that the conversations took place between a human and a system. Ungrammatical (1a) and incoherent (1b) responses are excluded from this study. We collect appropriateness ratings given a stimulus (the prompt) and four randomly sampled responses from our corpus that the worker is to label following the methodology described in BIBREF21, where each utterance is rated relatively to a reference on a user-defined scale. 
Ratings are then normalised on a scale from [0-1]. This methodology was shown to produce more reliable user ratings than commonly used Likert Scales. In addition, we collect demographic information, including gender and age group. In total we collected 9960 HITs from 472 crowd workers. In order to identify spammers and unsuitable ratings, we use the responses from the adult-only bots as test questions: We remove users who give high ratings to sexual bot responses the majority (more than 55%) of the time. 18,826 scores remain - resulting in an average of 7.7 ratings per individual system reply and 1568.8 ratings per response type as listed in Table TABREF14. Due to missing demographic data - and after removing malicious crowdworkers - we only consider a subset of 190 raters for our demographic study. The group is composed of 130 men and 60 women. Most raters (62.6%) are under the age of 44, with similar proportions across age groups for men and women. This is in-line with our target population: 57% of users of smart speakers are male and the majority are under 44 BIBREF22." ], [ "The ranks and mean scores of response categories can be seen in Table TABREF29. Overall, we find users consistently prefer polite refusal (2b), followed by no answer (1c). Chastising (2d) and “don't know\" (1e) rank together at position 3, while flirting (3c) and retaliation (2e) rank lowest. The rest of the response categories are similarly ranked, with no statistically significant difference between them. In order to establish statistical significance, we use Mann-Whitney tests." ], [ "Previous research has shown gender to be the most important factor in predicting a person's definition of sexual harassment BIBREF23. However, we find small and not statistically significant differences in the overall rank given by users of different gender (see tab:ageresults).", "Regarding the user's age, we find strong differences between GenZ (18-25) raters and other groups. Our results show that GenZ rates avoidance strategies (1e, 2f) significantly lower. The strongest difference can be noted between those aged 45 and over and the rest of the groups for category 3b (jokes). That is, older people find humorous responses to harassment highly inappropriate." ], [ "Here, we explore the hypothesis that users perceive different responses as appropriate, dependent on the type and gravity of harassment, see Section SECREF2. The results in Table TABREF33 indeed show that perceived appropriateness varies significantly between prompt contexts. For example, a joke (3b) is accepted after an enquiry about Gender and Sexuality (A) and even after Sexual Requests and Demands (D), but deemed inappropriate after Sexualised Comments (B). Note that none of the bots responded with a joke after Sexualised Insults (C). Avoidance (2f) is considered most appropriate in the context of Sexualised Demands. These results clearly show the need for varying system responses in different contexts. However, the corpus study from Amanda:EthicsNLP2018 shows that current state-of-the-art systems do not adapt their responses sufficiently." ], [ "Finally, we consider appropriateness per system. Following related work by BIBREF21, BIBREF24, we use Trueskill BIBREF25 to cluster systems into equivalently rated groups according to their partial relative rankings. The results in Table TABREF36 show that the highest rated system is Alley, a purpose-built bot for online language learning. Alley produces “polite refusal” (2b) - the top ranked strategy - 31% of the time.
Comparatively, commercial systems politely refuse only between 17% (Cortana) and 2% (Alexa). Most of the time commercial systems tend to “play along” (3a), joke (3b) or don't know how to answer (1e) which tend to receive lower ratings, see Figure FIGREF38. Rule-based systems most often politely refuse to answer (2b), but also use medium ranked strategies, such as deflect (2c) or chastise (2d). For example, most of Eliza's responses fall under the “deflection” strategy, such as “Why do you ask?”. Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuse (16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], [ "Crowdsourced user studies are widely used for related tasks, such as evaluating dialogue strategies, e.g. BIBREF26, and for eliciting a moral stance from a population BIBREF27. Our crowdsourced setup is similar to an “overhearer experiment” as e.g. conducted by Ma:2019:handlingChall where study participants were asked to rate the system's emotional competence after watching videos of challenging user behaviour. However, we believe that the ultimate measure for abuse mitigation should come from users interacting with the system. chin2019should make a first step into this direction by investigating different response styles (Avoidance, Empathy, Counterattacking) to verbal abuse, and recording the user's emotional reaction – hoping that eliciting certain emotions, such as guilt, will eventually stop the abuse. While we agree that stopping the abuse should be the ultimate goal, BIBREF28's study is limited in that participants were not genuine (ab)users, but instructed to abuse the system in a certain way. BIBREF29 report that a pilot using a similar setup led to unnatural interactions, which limits the conclusions we can draw about the effectiveness of abuse mitigation strategies. Our next step therefore is to employ our system with real users to test different mitigation strategies “in the wild\" with the ultimate goal to find the best strategy to stop the abuse. The results of this current paper suggest that the strategy should be adaptive to user type/ age, as well as to the severity of abuse." ], [ "This paper presents the first user study on perceived appropriateness of system responses after verbal abuse. We put strategies used by state-of-the-art systems to the test in a large-scale, crowd-sourced evaluation. The full annotated corpus contains 2441 system replies, categorised into 14 response types, which were evaluated by 472 raters - resulting in 7.7 ratings per reply.", "Our results show that: (1) The user's age has a significant effect on the ratings. For example, older users find jokes as a response to harassment highly inappropriate. (2) Perceived appropriateness also depends on the type of previous abuse. For example, avoidance is most appropriate after sexual demands.
(3) All systems were rated significantly higher than our negative adult-only baselines - except two data-driven systems, one of which is a Seq2Seq model trained on “clean\" data where all utterances containing abusive words were removed BIBREF1. This leads us to believe that data-driven response generation needs more effective control mechanisms BIBREF30." ], [ "We would like to thank our colleagues Ruth Aylett and Arash Eshghi for their comments. This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrIgAL (EP/N017536/1)." ] ], "section_name": [ "Introduction", "Data Collection", "Human Evaluation", "Results", "Results ::: Demographic Factors", "Results ::: Prompt context", "Results ::: Systems", "Related and Future Work", "Conclusion", "Acknowledgements" ] }
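The Human Evaluation paragraphs above describe a two-step cleanup of the crowd ratings: per-worker normalisation of the relative scores to [0-1], and removal of raters who score the adult-only test bots highly more than 55% of the time. A rough sketch is given below; field names, the 0.5 cut-off for a "high" rating and the exact ordering of the two steps are assumptions, not the authors' implementation.

```python
from collections import defaultdict

def clean_ratings(ratings, adult_systems, high=0.5, spam_threshold=0.55):
    """Min-max normalise each worker's scores to [0, 1], then drop suspected spammers."""
    # 1. Normalise per worker so magnitude-estimation scores are comparable.
    by_worker = defaultdict(list)
    for r in ratings:
        by_worker[r["worker"]].append(r["score"])
    normalised = []
    for r in ratings:
        lo, hi = min(by_worker[r["worker"]]), max(by_worker[r["worker"]])
        span = (hi - lo) or 1.0
        normalised.append({**r, "score": (r["score"] - lo) / span})

    # 2. Flag workers who rate adult-bot replies highly most of the time.
    hits = defaultdict(lambda: [0, 0])  # worker -> [high adult ratings, total adult ratings]
    for r in normalised:
        if r["system"] in adult_systems:
            hits[r["worker"]][1] += 1
            hits[r["worker"]][0] += int(r["score"] >= high)
    spammers = {w for w, (n_high, n_total) in hits.items()
                if n_total and n_high / n_total > spam_threshold}
    return [r for r in normalised if r["worker"] not in spammers], spammers
```

System-level rankings are then obtained from the cleaned scores, in the paper's case by clustering systems with TrueSkill rather than by raw averages.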
{ "answers": [ { "annotation_id": [ "2d55c4f4d4900dc5815eaba12882b2317609b7fc", "3017bd558bda9dc69bfeb99e16b6f5b13b90f349", "60d8caadfc21f53f35168668671d080f0f303445", "d41a35aa743c17fc5e5dac324995094115a46b35" ], "answer": [ { "evidence": [ "4 Data-driven approaches:", "Cleverbot BIBREF12;", "NeuralConvo BIBREF13, a re-implementation of BIBREF14;", "an implementation of BIBREF15's Information Retrieval approach;", "a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.", "Finally, we consider appropriateness per system. Following related work by BIBREF21, BIBREF24, we use Trueskill BIBREF25 to cluster systems into equivalently rated groups according to their partial relative rankings. The results in Table TABREF36 show that the highest rated systen is Alley, a purpose build bot for online language learning. Alley produces “polite refusal” (2b) - the top ranked strategy - 31% of the time. Comparatively, commercial systems politely refuse only between 17% (Cortana) and 2% (Alexa). Most of the time commercial systems tend to “play along” (3a), joke (3b) or don't know how to answer (1e) which tend to receive lower ratings, see Figure FIGREF38. Rule-based systems most often politely refuse to answer (2b), but also use medium ranked strategies, such as deflect (2c) or chastise (2d). For example, most of Eliza's responses fall under the “deflection” strategy, such as “Why do you ask?”. Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], "extractive_spans": [], "free_form_answer": "either by refusing politely, or, with flirtatious responses, or, by retaliating", "highlighted_evidence": [ "4 Data-driven approaches:\n\nCleverbot BIBREF12;\n\nNeuralConvo BIBREF13, a re-implementation of BIBREF14;\n\nan implementation of BIBREF15's Information Retrieval approach;\n\na vanilla Seq2Seq model trained on clean Reddit data BIBREF1.", "Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Finally, we consider appropriateness per system. Following related work by BIBREF21, BIBREF24, we use Trueskill BIBREF25 to cluster systems into equivalently rated groups according to their partial relative rankings. 
The results in Table TABREF36 show that the highest rated systen is Alley, a purpose build bot for online language learning. Alley produces “polite refusal” (2b) - the top ranked strategy - 31% of the time. Comparatively, commercial systems politely refuse only between 17% (Cortana) and 2% (Alexa). Most of the time commercial systems tend to “play along” (3a), joke (3b) or don't know how to answer (1e) which tend to receive lower ratings, see Figure FIGREF38. Rule-based systems most often politely refuse to answer (2b), but also use medium ranked strategies, such as deflect (2c) or chastise (2d). For example, most of Eliza's responses fall under the “deflection” strategy, such as “Why do you ask?”. Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], "extractive_spans": [ "Data-driven systems rank low in general" ], "free_form_answer": "", "highlighted_evidence": [ "Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Finally, we consider appropriateness per system. Following related work by BIBREF21, BIBREF24, we use Trueskill BIBREF25 to cluster systems into equivalently rated groups according to their partial relative rankings. The results in Table TABREF36 show that the highest rated systen is Alley, a purpose build bot for online language learning. Alley produces “polite refusal” (2b) - the top ranked strategy - 31% of the time. Comparatively, commercial systems politely refuse only between 17% (Cortana) and 2% (Alexa). Most of the time commercial systems tend to “play along” (3a), joke (3b) or don't know how to answer (1e) which tend to receive lower ratings, see Figure FIGREF38. Rule-based systems most often politely refuse to answer (2b), but also use medium ranked strategies, such as deflect (2c) or chastise (2d). For example, most of Eliza's responses fall under the “deflection” strategy, such as “Why do you ask?”. Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. 
Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], "extractive_spans": [ "politely refuse", "politely refuses", "flirtatious responses" ], "free_form_answer": "", "highlighted_evidence": [ "Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "4 Data-driven approaches:", "Cleverbot BIBREF12;", "NeuralConvo BIBREF13, a re-implementation of BIBREF14;", "an implementation of BIBREF15's Information Retrieval approach;", "a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.", "Finally, we consider appropriateness per system. Following related work by BIBREF21, BIBREF24, we use Trueskill BIBREF25 to cluster systems into equivalently rated groups according to their partial relative rankings. The results in Table TABREF36 show that the highest rated systen is Alley, a purpose build bot for online language learning. Alley produces “polite refusal” (2b) - the top ranked strategy - 31% of the time. Comparatively, commercial systems politely refuse only between 17% (Cortana) and 2% (Alexa). Most of the time commercial systems tend to “play along” (3a), joke (3b) or don't know how to answer (1e) which tend to receive lower ratings, see Figure FIGREF38. Rule-based systems most often politely refuse to answer (2b), but also use medium ranked strategies, such as deflect (2c) or chastise (2d). For example, most of Eliza's responses fall under the “deflection” strategy, such as “Why do you ask?”. Data-driven systems rank low in general. Neuralconvo and Cleverbot are the only ones that ever politely refuse and we attribute their improved ratings to this. In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses. Finally, Dr Love and Sophia69 produce almost exclusively flirtatious responses which are consistently ranked low by users." ], "extractive_spans": [], "free_form_answer": "flirt; retaliation", "highlighted_evidence": [ "4 Data-driven approaches:\n\nCleverbot BIBREF12;\n\nNeuralConvo BIBREF13, a re-implementation of BIBREF14;\n\nan implementation of BIBREF15's Information Retrieval approach;\n\na vanilla Seq2Seq model trained on clean Reddit data BIBREF1.", "In turn, the “clean” seq2seq often produces responses which can be interpreted as flirtatious (44%), and ranks similarly to Annabelle Lee and Laurel Sweet, the only adult bots that politely refuses ( 16% of the time). 
Ritter:2010:UMT:1857999.1858019's IR approach is rated similarly to Capt Howdy and both produce a majority of retaliatory (2e) responses - 38% and 58% respectively - followed by flirtatious responses." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "36273e31e56ecb18de98b7c5d628fba387497197", "3ba3c6b38e006cc630de4b6812d89985cb4b21e2", "b8067b5dfbbe2d8f93fcb651e8997c0ffbbd787d", "c774f70cde5d8919265db43c3f95f19f63188c95" ], "answer": [ { "evidence": [ "We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical\" utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:" ], "extractive_spans": [ "600K" ], "free_form_answer": "", "highlighted_evidence": [ "We first gather abusive utterances from 600K conversations with US-based customers." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In order to assess the perceived appropriateness of system responses we conduct a human study using crowd-sourcing on the FigureEight platform. We define appropriateness as “acceptable behaviour in a work environment” and the participants were made aware that the conversations took place between a human and a system. Ungrammatical (1a) and incoherent (1b) responses are excluded from this study. We collect appropriateness ratings given a stimulus (the prompt) and four randomly sampled responses from our corpus that the worker is to label following the methodology described in BIBREF21, where each utterance is rated relatively to a reference on a user-defined scale. Ratings are then normalised on a scale from [0-1]. This methodology was shown to produce more reliable user ratings than commonly used Likert Scales. In addition, we collect demographic information, including gender and age group. In total we collected 9960 HITs from 472 crowd workers. In order to identify spammers and unsuitable ratings, we use the responses from the adult-only bots as test questions: We remove users who give high ratings to sexual bot responses the majority (more than 55%) of the time.18,826 scores remain - resulting in an average of 7.7 ratings per individual system reply and 1568.8 ratings per response type as listed in Table TABREF14.Due to missing demographic data - and after removing malicious crowdworkers - we only consider a subset of 190 raters for our demographic study. The group is composed of 130 men and 60 women. Most raters (62.6%) are under the age of 44, with similar proportions across age groups for men and women. This is in-line with our target population: 57% of users of smart speakers are male and the majority are under 44 BIBREF22." ], "extractive_spans": [ "9960" ], "free_form_answer": "", "highlighted_evidence": [ " In total we collected 9960 HITs from 472 crowd workers. 
" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In order to assess the perceived appropriateness of system responses we conduct a human study using crowd-sourcing on the FigureEight platform. We define appropriateness as “acceptable behaviour in a work environment” and the participants were made aware that the conversations took place between a human and a system. Ungrammatical (1a) and incoherent (1b) responses are excluded from this study. We collect appropriateness ratings given a stimulus (the prompt) and four randomly sampled responses from our corpus that the worker is to label following the methodology described in BIBREF21, where each utterance is rated relatively to a reference on a user-defined scale. Ratings are then normalised on a scale from [0-1]. This methodology was shown to produce more reliable user ratings than commonly used Likert Scales. In addition, we collect demographic information, including gender and age group. In total we collected 9960 HITs from 472 crowd workers. In order to identify spammers and unsuitable ratings, we use the responses from the adult-only bots as test questions: We remove users who give high ratings to sexual bot responses the majority (more than 55%) of the time.18,826 scores remain - resulting in an average of 7.7 ratings per individual system reply and 1568.8 ratings per response type as listed in Table TABREF14.Due to missing demographic data - and after removing malicious crowdworkers - we only consider a subset of 190 raters for our demographic study. The group is composed of 130 men and 60 women. Most raters (62.6%) are under the age of 44, with similar proportions across age groups for men and women. This is in-line with our target population: 57% of users of smart speakers are male and the majority are under 44 BIBREF22." ], "extractive_spans": [ "9960 HITs from 472 crowd workers" ], "free_form_answer": "", "highlighted_evidence": [ " In total we collected 9960 HITs from 472 crowd workers. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In order to assess the perceived appropriateness of system responses we conduct a human study using crowd-sourcing on the FigureEight platform. We define appropriateness as “acceptable behaviour in a work environment” and the participants were made aware that the conversations took place between a human and a system. Ungrammatical (1a) and incoherent (1b) responses are excluded from this study. We collect appropriateness ratings given a stimulus (the prompt) and four randomly sampled responses from our corpus that the worker is to label following the methodology described in BIBREF21, where each utterance is rated relatively to a reference on a user-defined scale. Ratings are then normalised on a scale from [0-1]. This methodology was shown to produce more reliable user ratings than commonly used Likert Scales. In addition, we collect demographic information, including gender and age group. In total we collected 9960 HITs from 472 crowd workers. In order to identify spammers and unsuitable ratings, we use the responses from the adult-only bots as test questions: We remove users who give high ratings to sexual bot responses the majority (more than 55%) of the time.18,826 scores remain - resulting in an average of 7.7 ratings per individual system reply and 1568.8 ratings per response type as listed in Table TABREF14.Due to missing demographic data - and after removing malicious crowdworkers - we only consider a subset of 190 raters for our demographic study. 
The group is composed of 130 men and 60 women. Most raters (62.6%) are under the age of 44, with similar proportions across age groups for men and women. This is in-line with our target population: 57% of users of smart speakers are male and the majority are under 44 BIBREF22." ], "extractive_spans": [ "9960 HITs" ], "free_form_answer": "", "highlighted_evidence": [ "In total we collected 9960 HITs from 472 crowd workers." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "78c0555896a14774cce937c5d4ae18f993386931", "94323631be8a6379e55badc7dcee043753395968", "a846e7f6dcd729dac62368c8aaf8876a9165d22a", "ed7c2d20669b43af527245af77d27b15849b08f0" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Full annotation scheme for system response types after user abuse. Categories (1a) and (1b) are excluded from this study." ], "extractive_spans": [], "free_form_answer": "14", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Full annotation scheme for system response types after user abuse. Categories (1a) and (1b) are excluded from this study." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Full annotation scheme for system response types after user abuse. Categories (1a) and (1b) are excluded from this study." ], "extractive_spans": [], "free_form_answer": "12", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Full annotation scheme for system response types after user abuse. Categories (1a) and (1b) are excluded from this study." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This paper presents the first user study on perceived appropriateness of system responses after verbal abuse. We put strategies used by state-of-the-art systems to the test in a large-scale, crowd-sourced evaluation. The full annotated corpus contains 2441 system replies, categorised into 14 response types, which were evaluated by 472 raters - resulting in 7.7 ratings per reply." ], "extractive_spans": [ "14" ], "free_form_answer": "", "highlighted_evidence": [ "We put strategies used by state-of-the-art systems to the test in a large-scale, crowd-sourced evaluation. The full annotated corpus contains 2441 system replies, categorised into 14 response types, which were evaluated by 472 raters - resulting in 7.7 ratings per reply." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "How do data-driven models usually respond to abuse?", "How much data did they gather from crowdsourcing?", "How many different strategies were evaluated?" 
], "question_id": [ "371433bd3fb5042bacec4dfad3cfff66147c14f0", "f64449a21c452bc5395a0f0a49fb49825e6385f4", "3aeb25e334c8129b376f11c7077bcb2dd54f7e0e" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Full annotation scheme for system response types after user abuse. Categories (1a) and (1b) are excluded from this study.", "Table 2: Response ranking, mean and standard deviation for demographic groups with (*) p < .05, (**) p < .01 wrt. other groups.", "Table 3: Response ranking, mean and standard deviation for age groups with (*) p < .05, (**) p < .01 wrt. other groups.", "Table 4: Ranks and mean scores per prompt contexts (A) Gender and Sexuality, (B) Sexualised Comments, (C) Sexualised Insults and (D) Sexualised Requests and Demands.", "Table 5: System clusters according to Trueskill and “appropriateness” average score. Note that systems within a cluster are not significantly different.", "Figure 1: Response type breakdown per system. Systems ordered according to average user ratings." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png", "5-Figure1-1.png" ] }
[ "How do data-driven models usually respond to abuse?", "How many different strategies were evaluated?" ]
[ [ "1909.04387-Results ::: Systems-0", "1909.04387-Data Collection-12", "1909.04387-Data Collection-14", "1909.04387-Data Collection-11", "1909.04387-Data Collection-10", "1909.04387-Data Collection-13" ], [ "1909.04387-3-Table1-1.png", "1909.04387-Conclusion-0" ] ]
[ "flirt; retaliation", "12" ]
8
1805.11937
Character-Level Models versus Morphology in Semantic Role Labeling
Character-level models have become a popular approach specially for their accessibility and ability to handle unseen data. However, little is known on their ability to reveal the underlying morphological structure of a word, which is a crucial skill for high-level semantic analysis tasks, such as semantic role labeling (SRL). In this work, we train various types of SRL models that use word, character and morphology level information and analyze how performance of characters compare to words and morphology for several languages. We conduct an in-depth error analysis for each morphological typology and analyze the strengths and limitations of character-level models that relate to out-of-domain data, training data size, long range dependencies and model complexity. Our exhaustive analyses shed light on important characteristics of character-level models and their semantic capability.
{ "paragraphs": [ [ "Encoding of words is perhaps the most important step towards a successful end-to-end natural language processing application. Although word embeddings have been shown to provide benefit to such models, they commonly treat words as the smallest meaning bearing unit and assume that each word type has its own vector representation. This assumption has two major shortcomings especially for languages with rich morphology: (1) inability to handle unseen or out-of-vocabulary (OOV) word-forms (2) inability to exploit the regularities among word parts. The limitations of word embeddings are particularly pronounced in sentence-level semantic tasks, especially in languages where word parts play a crucial role. Consider the Turkish sentences “Köy+lü-ler (villagers) şehr+e (to town) geldi (came)” and “Sendika+lı-lar (union members) meclis+e (to council) geldi (came)”. Here the stems köy (village) and sendika (union) function similarly in semantic terms with respect to the verb come (as the origin of the agents of the verb), where şehir (town) and meclis (council) both function as the end point. These semantic similarities are determined by the common word parts shown in bold. However ortographic similarity does not always correspond to semantic similarity. For instance the ortographically similar words knight and night have large semantic differences. Therefore, for a successful semantic application, the model should be able to capture both the regularities, i.e, morphological tags and the irregularities, i.e, lemmas of the word.", "Morphological analysis already provides the aforementioned information about the words. However access to useful morphological features may be problematic due to software licensing issues, lack of robust morphological analyzers and high ambiguity among analyses. Character-level models (CLM), being a cheaper and accessible alternative to morphology, have been reported as performing competitively on various NLP tasks BIBREF0 , BIBREF1 , BIBREF2 . However the extent to which these tasks depend on morphology is small; and their relation to semantics is weak. Hence, little is known on their true ability to reveal the underlying morphological structure of a word and their semantic capabilities. Furthermore, their behaviour across languages from different families; and their limitations and strengths such as handling of long-range dependencies, reaction to model complexity or performance on out-of-domain data are unknown. Analyzing such issues is a key to fully understanding the character-level models.", "To achieve this, we perform a case study on semantic role labeling (SRL), a sentence-level semantic analysis task that aims to identify predicate-argument structures and assign meaningful labels to them as follows:", " $[$ Villagers $]$ comers came $[$ to town $]$ end point", "We use a simple method based on bidirectional LSTMs to train three types of base semantic role labelers that employ (1) words (2) characters and character sequences and (3) gold morphological analysis. The gold morphology serves as the upper bound for us to compare and analyze the performances of character-level models on languages of varying morphological typologies. We carry out an exhaustive error analysis for each language type and analyze the strengths and limitations of character-level models compared to morphology. 
In regard to the diversity hypothesis which states that diversity of systems in ensembles lead to further improvement, we combine character and morphology-level models and measure the performance of the ensemble to better understand how similar they are.", "We experiment with several languages with varying degrees of morphological richness and typology: Turkish, Finnish, Czech, German, Spanish, Catalan and English. Our experiments and analysis reveal insights such as:" ], [ "Formally, we generate a label sequence $\\vec{l}$ for each sentence and predicate pair: $(s,p)$ . Each $l_t\\in \\vec{l}$ is chosen from $\\mathcal {L}=\\lbrace \\mathit {roles \\cup nonrole}\\rbrace $ , where $roles$ are language-specific semantic roles (mostly consistent with PropBank) and $nonrole$ is a symbol to present tokens that are not arguments. Given $\\theta $ as model parameters and $g_t$ as gold label for $t_{th}$ token, we find the parameters that minimize the negative log likelihood of the sequence: ", "$$\\hat{\\theta }=\\underset{\\theta }{\\arg \\min } \\left( -\\sum _{t=1}^n log (p(g_t|\\theta ,s,p)) \\right)$$ (Eq. 7) ", "Label probabilities, $p(l_t|\\theta ,s,p)$ , are calculated with equations given below. First, the word encoding layer splits tokens into subwords via $\\rho $ function. ", "$$\\rho (w) = {s_0,s_1,..,s_n}$$ (Eq. 8) ", "As proposed by BIBREF0 , we treat words as a sequence of subword units. Then, the sequence is fed to a simple bi-LSTM network BIBREF15 , BIBREF16 and hidden states from each direction are weighted with a set of parameters which are also learned during training. Finally, the weighted vector is used as the word embedding given in Eq. 9 . ", "$$hs_f, hs_b = \\text{bi-LSTM}({s_0,s_1,..,s_n}) \\\\\n\\vec{w} = W_f \\cdot hs_f + W_b \\cdot hs_b + b$$ (Eq. 9) ", "There may be more than one predicate in the sentence so it is crucial to inform the network of which arguments we aim to label. In order to mark the predicate of interest, we concatenate a predicate flag $pf_t$ to the word embedding vector. ", "$$\\vec{x_{t}} = [\\vec{w};pf_t]$$ (Eq. 10) ", "Final vector, $\\vec{x_t}$ serves as an input to another bi-LSTM unit. ", "$$\\vec{h_{f}, h_{b}} = \\text{bi-LSTM}(x_{t})$$ (Eq. 11) ", "Finally, the label distribution is calculated via softmax function over the concatenated hidden states from both directions. ", "$$\\vec{p(l_t|s,p)} = softmax(W_{l}\\cdot [\\vec{h_{f}};\\vec{h_{b}}]+\\vec{b_{l}})$$ (Eq. 12) ", "For simplicity, we assign the label with the highest probability to the input token. ." ], [ "We use three types of units: (1) words (2) characters and character sequences and (3) outputs of morphological analysis. Words serve as a lower bound; while morphology is used as an upper bound for comparison. Table 1 shows sample outputs of various $\\rho $ functions.", "Here, char function simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width $n=3$ over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. 
Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies BIBREF13 , we excluded these units." ], [ "We use the datasets distributed by LDC for Catalan (CAT), Spanish (SPA), German (DEU), Czech (CZE) and English (ENG) BIBREF17 , BIBREF18 ; and datasets made available by BIBREF19 , BIBREF20 for Finnish (FIN) and Turkish (TUR) respectively . Datasets are provided with syntactic dependency annotations and semantic roles of verbal predicates. In addition, English supplies nominal predicates annotated with semantic roles and does not provide any morphological feature.", "Statistics for the training split for all languages are given in Table 2 . Here, #pred is number of predicates, and #role refers to number distinct semantic roles that occur more than 10 times. More detailed statistics about the datasets can be found in BIBREF27 , BIBREF19 , BIBREF20 ." ], [ "To fit the requirements of the SRL task and of our model, we performed the following:", "Multiword expressions (MWE) are represented as a single token, (e.g., Confederación_Francesa_del_Trabajo), that causes notably long character sequences which are hard to handle by LSTMs. For the sake of memory efficiency and performance, we used an abbreviation (e.g., CFdT) for each MWE during training and testing.", "Original dataset defines its own format of semantic annotation, such as 17:PBArgM_mod $\\mid $ 19:PBArgM_mod meaning the node is an argument of $17_{th}$ and $19_{th}$ tokens with ArgM-mod (temporary modifier) semantic role. They have been converted into CoNLL-09 tabular format, where each predicate's arguments are given in a specific column.", "Words are splitted from derivational boundaries in the original dataset, where each inflectional group is represented as a separate token. We first merge boundaries of the same word, i.e, tokens of the word, then we use our own $\\rho $ function to split words into subwords.", "We lowercase all tokens beforehand and place special start and end of the token characters. For all experiments, we initialized weight parameters orthogonally and used one layer bi-LSTMs both for subword composition and argument labeling with hidden size of 200. Subword embedding size is chosen as 200. We used gradient clipping and early stopping to prevent overfitting. Stochastic gradient descent is used as the optimizer. The initial learning rate is set to 1 and reduced by half if scores on development set do not improve after 3 epochs. We use the provided splits and evaluate the results with the official evaluation script provided by CoNLL-09 shared task. In this work (and in most of the recent SRL works), only the scores for argument labeling are reported, which may cause confusions for the readers while comparing with older SRL studies. Most of the early SRL work report combined scores (argument labeling with predicate sense disambiguation (PSD)). However, PSD is considered a simpler task with higher F1 scores . Therefore, we believe omitting PSD helps us gain more useful insights on character level models." ], [ "Our main results on test and development sets for models that use words, characters (char), character trigrams (char3) and morphological analyses (morph) are given in Table 3 . We calculate improvement over word (IOW) for each subword model and improvement over the best character model (IOC) for the morph. 
IOW and IOC values are calculated on the test set.", "The biggest improvement over the word baseline is achieved by the models that have access to morphology for all languages (except for English) as expected. Character trigrams consistently outperformed characters by a small margin. Same pattern is observed on the results of the development set. IOW has the values between 0% to 38% while IOC values range between 2%-10% dependending on the properties of the language and the dataset. We analyze the results separately for agglutinative and fusional languages and reveal the links between certain linguistic phenomena and the IOC, IOW values." ], [ "One way to infer similarity is to measure diversity. Consider a set of baseline models that are not diverse, i.e., making similar errors with similar inputs. In such a case, combination of these models would not be able to overcome the biases of the learners, hence the combination would not achieve a better result. In order to test if character and morphological models are similar, we combine them and measure the performance of the ensemble. Suppose that a prediction $p_{i}$ is generated for each token by a model $m_i$ , $i \\in n$ , then the final prediction is calculated from these predictions by: ", "$$p_{final} = f(p_0, p_1,..,p_n|\\phi )$$ (Eq. 36) ", "where $f$ is the combining function with parameter $\\phi $ . The simplest global approach is averaging (AVG), where $f$ is simply the mean function and $p_i$ s are the log probabilities. Mean function combines model outputs linearly, therefore ignores the nonlinear relation between base models/units. In order to exploit nonlinear connections, we learn the parameters $\\phi $ of $f$ via a simple linear layer followed by sigmoid activation. In other words, we train a new model that learns how to best combine the predictions from subword models. This ensemble technique is generally referred to as stacking or stacked generalization (SG). ", "Although not guaranteed, diverse models can be achieved by altering the input representation, the learning algorithm, training data or the hyperparameters. To ensure that the only factor contributing to the diversity of the learners is the input representation, all parameters, training data and model settings are left unchanged.", "Our results are given in Table 4 . IOB shows the improvement over the best of the baseline models in the ensemble. Averaging and stacking methods gave similar results, meaning that there is no immediate nonlinear relations between units. We observe two language clusters: (1) Czech and agglutinative languages (2) Spanish, Catalan, German and English. The common property of that separate clusters are (1) high OOV% and (2) relatively low OOV%. Amongst the first set, we observe that the improvement gained by character-morphology ensembles is higher (shown with green) than ensembles between characters and character trigrams (shown with red), whereas the opposite is true for the second set of languages. It can be interpreted as character level models being more similar to the morphology level models for the first cluster, i.e., languages with high OOV%, and characters and morphology being more diverse for the second cluster." ], [ "To expand our understanding and reveal the limitations and strengths of the models, we analyze their ability to handle long range dependencies, their relation with training data and model size; and measure their performances on out of domain data." 
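The AVG and SG combiners sketched around Eq. 36 in the similarity analysis above can be made concrete as follows. This is a hedged reconstruction: tensor shapes, the absence of a training loop and all names are assumptions rather than the paper's released code.

```python
import torch
import torch.nn as nn

def average_ensemble(log_probs):
    """AVG combiner: mean of the base models' log label probabilities.

    `log_probs` is assumed to have shape (n_models, n_tokens, n_labels);
    the predicted label per token is the argmax of the averaged scores.
    """
    return log_probs.mean(dim=0).argmax(dim=-1)

class StackedCombiner(nn.Module):
    """SG combiner: a single linear layer followed by a sigmoid over the
    concatenated base-model label distributions, as described above."""

    def __init__(self, n_models, n_labels):
        super().__init__()
        self.out = nn.Linear(n_models * n_labels, n_labels)

    def forward(self, log_probs):
        # (n_models, n_tokens, n_labels) -> (n_tokens, n_models * n_labels)
        n_models, n_tokens, n_labels = log_probs.shape
        stacked = log_probs.permute(1, 0, 2).reshape(n_tokens, n_models * n_labels)
        return torch.sigmoid(self.out(stacked))
```

Because the combiner only sees the base models' outputs, a gain from the ensemble over the best single model is evidence that the two subword views make different errors, which is how the similarity analysis above interprets the numbers.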
], [ "Long range dependency is considered as an important linguistic issue that is hard to solve. Therefore the ability to handle it is a strong performance indicator. To gain insights on this issue, we measure how models perform as the distance between the predicate and the argument increases. The unit of measure is number of tokens between the two; and argument is defined as the head of the argument phrase in accordance with dependency-based SRL task. For that purpose, we created bins of [0-4], [5-9], [10-14] and [15-19] distances. Then, we have calculate F1 scores for arguments in each bin. Due to low number of predicate-argument pairs in buckets, we could not analyze German and Turkish; and also the bin [15-19] is only used for Czech. Our results are shown in Fig. 3 . We observe that either char or char3 closely follows the oracle for all languages. The gap between the two does not increase with the distance, suggesting that the performance gap is not related to long range dependencies. In other words, both characters and the oracle handle long range dependencies equally well." ], [ "We analyzed how char3 and oracle models perform with respect to the training data size. For that purpose, we trained them on chunks of increasing size and evaluate on the provided test split. We used units of 2000 sentences for German and Czech; and 400 for Turkish. Results are shown in Fig. 4 . Apparently as the data size increases, the performances of both models logarithmically increase - with a varying speed. To speak in statistical terms, we fit a logarithmic curve to the observed F1 scores (shown with transparent lines) and check the x coefficients, where x refers to the number of sentences. This coefficient can be considered as an approximation to the speed of growth with data size. We observe that the coefficient is higher for char3 than oracle for all languages. It can be interpreted as: in the presence of more training data, char3 may surpass the oracle; i.e., char3 relies on data more than the oracle." ], [ "As part of the CoNLL09 shared task BIBREF27 , out of domain test sets are provided for three languages: Czech, German and English. We test our models trained on regular training dataset on these OOD data. The results are given in Table 5 . Here, we clearly see that the best model has shifted from oracle to character based models. The dramatic drop in German oracle model is due to the high lemma OOV rate which is a consequence of keeping compounds as a single lemma. Czech oracle model performs reasonably however is unable to beat the generalization power of the char3 model. Furthermore, the scores of the character models in Table 5 are higher than the best OOD scores reported in the shared task BIBREF27 ; even though our main results on evaluation set are not (except for Czech). This shows that character-level models have increased robustness to out-of-domain data due to their ability to learn regularities among data." ], [ "Throughout this paper, our aim was to gain insights on how models perform on different languages rather than scoring the highest F1. For this reason, we used a model that can be considered small when compared to recent neural SRL models and avoided parameter search. However, we wonder how the models behave when given a larger network. To answer this question, we trained char3 and oracle models with more layers for two fusional languages (Spanish, Catalan), and two agglutinative languages (Finnish, Turkish). 
The results given in Table 6 clearly show that model complexity provides relatively more benefit to morphological models. This indicates that morphological signals help to extract more complex linguistic features that have semantic clues." ], [ "Although models with access to gold morphological tags achieve better F1 scores than character models, they can be less useful in a real-life scenario since they require gold tags at test time. To predict the performance of morphology-level models in such a scenario, we train the same models with the same parameters, but with predicted morphological features. Predicted tags were only available for German, Spanish, Catalan and Czech. Our results, given in Fig. 5, show that (except for Czech) predicted morphological tags are not as useful as characters alone." ], [ "Character-level neural models are becoming the de facto standard for NLP problems due to their accessibility and ability to handle unseen data. In this work, we investigated how they compare to models with access to gold morphological analysis, on a sentence-level semantic task. We evaluated their quality on semantic role labeling in a number of agglutinative and fusional languages. Our results lead to the following conclusions:" ], [ "Gözde Gül Şahin was a PhD student at Istanbul Technical University and a visiting research student at the University of Edinburgh during this study. She was funded by a Tübitak (The Scientific and Technological Research Council of Turkey) 2214-A scholarship during her visit to the University of Edinburgh. She was granted access to the CoNLL-09 Semantic Role Labeling Shared Task data by the Linguistic Data Consortium (LDC). This work was supported by ERC H2020 Advanced Fellowship GA 742137 SEMANTAX and a Google Faculty award to Mark Steedman. We would like to thank Adam Lopez for fruitful discussions, guidance and support during the first author's visit." ] ], "section_name": [ "Introduction", "Method", "Subword Units", "Experiments", "Experimental Setup", "Results and Analysis", "Similarity between models", "Limitations and Strengths", "Long Range Dependencies", "Training Data Size", "Out-of-Domain (OOD) Data", "Model Size", "Predicted Morphological Tags", "Conclusion", "Acknowledgements" ] }
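The growth-coefficient analysis in the "Training Data Size" paragraph above fits a logarithmic curve to F1 scores as a function of the number of training sentences and compares the fitted coefficients across models. A minimal sketch of that fit is shown below; the numeric values are placeholders, not figures from the paper.

```python
# Sketch only: fit F1 = a * log(n_sentences) + b and compare the coefficient a.
import numpy as np

def log_growth_coefficient(n_sentences, f1_scores):
    """Least-squares fit of f1 = a*log(n) + b; returns the growth coefficient a."""
    a, _b = np.polyfit(np.log(np.asarray(n_sentences, dtype=float)), f1_scores, deg=1)
    return a

# Hypothetical curves: a higher coefficient means faster growth with more data.
sizes = [2000, 4000, 6000, 8000]
print(log_growth_coefficient(sizes, [60.1, 64.0, 66.2, 67.8]))  # e.g. char3
print(log_growth_coefficient(sizes, [66.0, 68.1, 69.3, 70.1]))  # e.g. oracle
```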
{ "answers": [ { "annotation_id": [ "eca9eb2f2c3d88388688f62a21e1731a7f1f8374", "4db3dd7ca826532c2faa0d02ee720de184673202", "836cb06c5203ba20b6103627be397b82477a55e8", "c5d1339ade353ccb68278beb8d98cfaadd0fdecf" ], "answer": [ { "evidence": [ "The biggest improvement over the word baseline is achieved by the models that have access to morphology for all languages (except for English) as expected. Character trigrams consistently outperformed characters by a small margin. Same pattern is observed on the results of the development set. IOW has the values between 0% to 38% while IOC values range between 2%-10% dependending on the properties of the language and the dataset. We analyze the results separately for agglutinative and fusional languages and reveal the links between certain linguistic phenomena and the IOC, IOW values." ], "extractive_spans": [ "agglutinative and fusional languages" ], "free_form_answer": "", "highlighted_evidence": [ "We analyze the results separately for agglutinative and fusional languages and reveal the links between certain linguistic phenomena and the IOC, IOW values." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Throughout this paper, our aim was to gain insights on how models perform on different languages rather than scoring the highest F1. For this reason, we used a model that can be considered small when compared to recent neural SRL models and avoided parameter search. However, we wonder how the models behave when given a larger network. To answer this question, we trained char3 and oracle models with more layers for two fusional languages (Spanish, Catalan), and two agglutinative languages (Finnish, Turkish). The results given in Table 6 clearly shows that model complexity provides relatively more benefit to morphological models. This indicates that morphological signals help to extract more complex linguistic features that have semantic clues." ], "extractive_spans": [], "free_form_answer": "agglutinative and fusional", "highlighted_evidence": [ "Throughout this paper, our aim was to gain insights on how models perform on different languages rather than scoring the highest F1. For this reason, we used a model that can be considered small when compared to recent neural SRL models and avoided parameter search. However, we wonder how the models behave when given a larger network. To answer this question, we trained char3 and oracle models with more layers for two fusional languages (Spanish, Catalan), and two agglutinative languages (Finnish, Turkish). " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We experiment with several languages with varying degrees of morphological richness and typology: Turkish, Finnish, Czech, German, Spanish, Catalan and English. Our experiments and analysis reveal insights such as:" ], "extractive_spans": [ "Turkish, Finnish, Czech, German, Spanish, Catalan and English" ], "free_form_answer": "", "highlighted_evidence": [ "We experiment with several languages with varying degrees of morphological richness and typology: Turkish, Finnish, Czech, German, Spanish, Catalan and English." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The biggest improvement over the word baseline is achieved by the models that have access to morphology for all languages (except for English) as expected. Character trigrams consistently outperformed characters by a small margin. Same pattern is observed on the results of the development set. 
IOW has the values between 0% to 38% while IOC values range between 2%-10% dependending on the properties of the language and the dataset. We analyze the results separately for agglutinative and fusional languages and reveal the links between certain linguistic phenomena and the IOC, IOW values." ], "extractive_spans": [ "agglutinative and fusional languages" ], "free_form_answer": "", "highlighted_evidence": [ "We analyze the results separately for agglutinative and fusional languages and reveal the links between certain linguistic phenomena and the IOC, IOW values." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "01cb6148c645822f9a870d3ac20d496c05b6b217", "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "35491e1e579f6d147f4793edce4c1a80ab2410e7", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "643764271898dc825c46a20177211c013ac9a0c8", "796ef13660faf7b0b4dec87edc5a1f012f883935", "9632a52e663a7ec15f8ef916f412d539f6677ab6", "b8b70ea3bae258a4851f3bef04c9913d62131b44" ], "answer": [ { "evidence": [ "Words are splitted from derivational boundaries in the original dataset, where each inflectional group is represented as a separate token. We first merge boundaries of the same word, i.e, tokens of the word, then we use our own $\\rho $ function to split words into subwords." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Words are splitted from derivational boundaries in the original dataset, where each inflectional group is represented as a separate token" ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": true }, { "evidence": [ "Words are splitted from derivational boundaries in the original dataset, where each inflectional group is represented as a separate token. We first merge boundaries of the same word, i.e, tokens of the word, then we use our own $\\rho $ function to split words into subwords." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Words are splitted from derivational boundaries in the original dataset, where each inflectional group is represented as a separate token." ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": true } ], "worker_id": [ "01cb6148c645822f9a870d3ac20d496c05b6b217", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "35491e1e579f6d147f4793edce4c1a80ab2410e7", "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "annotation_id": [ "7134ef2d69cacc7cedec5aede69d39667e7e487a", "744e7f48f2ed030d7eb8a5d1729d5de78e892999", "9c0407a5b74c9cd154d1e8406f6929e2f9a510bc", "b22a2023e8f1d0de13fdd19d3608dbb56c7d8d55" ], "answer": [ { "evidence": [ "We use three types of units: (1) words (2) characters and character sequences and (3) outputs of morphological analysis. Words serve as a lower bound; while morphology is used as an upper bound for comparison. Table 1 shows sample outputs of various $\\rho $ functions.", "Here, char function simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width $n=3$ over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. 
For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies BIBREF13 , we excluded these units." ], "extractive_spans": [ "char3 slides a character window of width $n=3$ over the token", "lemma of the token", "additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units.", "characters", "character sequences" ], "free_form_answer": "", "highlighted_evidence": [ "We use three types of units: (1) words (2) characters and character sequences and (3) outputs of morphological analysis. Words serve as a lower bound; while morphology is used as an upper bound for comparison. Table 1 shows sample outputs of various $\\rho $ functions.\n\nHere, char function simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width $n=3$ over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies BIBREF13 , we excluded these units." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Here, char function simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width $n=3$ over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies BIBREF13 , we excluded these units." ], "extractive_spans": [ "For all languages, morph outputs the lemma of the token followed by language specific morphological tags", "additional information for some languages, such as parts-of-speech tags for Turkish" ], "free_form_answer": "", "highlighted_evidence": [ "Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. 
As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Here, char function simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width $n=3$ over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies BIBREF13 , we excluded these units." ], "extractive_spans": [ "language specific morphological tags" ], "free_form_answer": "", "highlighted_evidence": [ "Finally, gold morphological features are used as outputs of morph-language.", "For all languages, morph outputs the lemma of the token followed by language specific morphological tags." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Here, char function simply splits the token into its characters. Similar to n-gram language models, char3 slides a character window of width $n=3$ over the token. Finally, gold morphological features are used as outputs of morph-language. Throughout this paper, we use morph and oracle interchangably, i.e., morphology-level models (MLM) have access to gold tags unless otherwise is stated. For all languages, morph outputs the lemma of the token followed by language specific morphological tags. As an exception, it outputs additional information for some languages, such as parts-of-speech tags for Turkish. Word segmenters such as Morfessor and Byte Pair Encoding (BPE) are other commonly used subword units. Due to low scores obtained from our preliminary experiments and unsatisfactory results from previous studies BIBREF13 , we excluded these units.", "We use the datasets distributed by LDC for Catalan (CAT), Spanish (SPA), German (DEU), Czech (CZE) and English (ENG) BIBREF17 , BIBREF18 ; and datasets made available by BIBREF19 , BIBREF20 for Finnish (FIN) and Turkish (TUR) respectively . Datasets are provided with syntactic dependency annotations and semantic roles of verbal predicates. In addition, English supplies nominal predicates annotated with semantic roles and does not provide any morphological feature." ], "extractive_spans": [ "morph outputs the lemma of the token followed by language specific morphological tags", "semantic roles of verbal predicates" ], "free_form_answer": "", "highlighted_evidence": [ "For all languages, morph outputs the lemma of the token followed by language specific morphological tags.", "In addition, English supplies nominal predicates annotated with semantic roles and does not provide any morphological feature.", "Datasets are provided with syntactic dependency annotations and semantic roles of verbal predicates." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a", "2910ac50801742c7b608b6289a49dffb14737474", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "35491e1e579f6d147f4793edce4c1a80ab2410e7" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "What morphological typologies are considered?", "Does the model consider both derivational and inflectional morphology?", "What type of morphological features are used?" ], "question_id": [ "230ff86b7b90b87c33c53014bb1e9c582dfc107f", "dc23006d67f20f430f1483398de4a89c0be4efe2", "887d7f3edf37ccc6bf2e755dae418b04d2309686" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "morphology", "morphology", "morphology" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Sample outputs of different ρ functions", "Table 2: Training data statistics. A: Agglutinative, F: Fusional", "Figure 1: Differences in model performances on agglutinative languages", "Table 3: F1 scores of word, character, character trigram and morphology models for argument labeling. Best F1 for each language is shown in bold. First row: results on test, Second row: results on development.", "Figure 2: x axis: Number of morphological features; y axis: Targeted F1 scores", "Table 4: Results of ensembling via averaging (Avg) and stack generalization (SG). IOB: Improvement Over Best of baseline models", "Figure 3: X axis: Distance between the predicate and the argument, Y axis: F1 scores on argument labels", "Figure 4: Performance of units w.r.t training data size. X axis: Number of sentences, Y axis: F1 score", "Table 6: Effect of layer size on model performances. I: Improvement over model with one layer.", "Table 5: F1 scores on out of domain data. Best scores are shown with bold.", "Figure 5: F1 scores for best-char (best of the CLMs) and model with predicted (predictedmorph) and gold morphological tags (goldmorph)." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "6-Figure1-1.png", "6-Table3-1.png", "7-Figure2-1.png", "8-Table4-1.png", "9-Figure3-1.png", "9-Figure4-1.png", "9-Table6-1.png", "9-Table5-1.png", "10-Figure5-1.png" ] }
[ "What morphological typologies are considered?" ]
[ [ "1805.11937-Introduction-5", "1805.11937-Results and Analysis-1", "1805.11937-Model Size-0" ] ]
[ "agglutinative and fusional" ]
10
1909.09070
Look, Read and Enrich - Learning from Scientific Figures and their Captions
Compared to natural images, understanding scientific figures is particularly hard for machines. However, there is a valuable source of information in scientific literature that until now has remained untapped: the correspondence between a figure and its caption. In this paper we investigate what can be learnt by looking at a large number of figures and reading their captions, and introduce a figure-caption correspondence learning task that makes use of our observations. Training visual and language networks without supervision other than pairs of unconstrained figures and captions is shown to successfully solve this task. We also show that transferring lexical and semantic knowledge from a knowledge graph significantly enriches the resulting features. Finally, we demonstrate the positive impact of such features in other tasks involving scientific text and figures, like multi-modal classification and machine comprehension for question answering, outperforming supervised baselines and ad-hoc approaches.
{ "paragraphs": [ [ "Scientific knowledge is heterogeneous and can present itself in many forms, including text, mathematical equations, figures and tables. Like many other manifestations of human thought, the scientific discourse usually adopts the form of a narrative, a scientific publication where related knowledge is presented in mutually supportive ways over different modalities. In the case of scientific figures, like charts, images and diagrams, these are usually accompanied by a text paragraph, a caption, that elaborates on the analysis otherwise visually represented.", "In this paper, we make use of this observation and tap on the potential of learning from the enormous source of free supervision available in the scientific literature, with millions of figures and their captions. We build models that learn from the scientific discourse both visually and textually by simply looking at the figures and reading their explanatory captions, inspired in how humans learn by reading a scientific publication. To this purpose, we explore how multi-modal scientific knowledge can be learnt from the correspondence between figures and captions.", "The main contributions of this paper are the following:", "An unsupervised Figure-Caption Correspondence task (FCC) that jointly learns text and visual features useful to address a range of tasks involving scientific text and figures.", "A method to enrich such features with semantic knowledge transferred from structured knowledge graphs (KG).", "A study of the complexity of figure-caption correspondence compared to classical image-sentence matching.", "A qualitative and quantitative analysis of the learnt text and visual features through transfer learning tasks.", "A corpus of scientific figures and captions extracted from SN SciGraph and AI2 Semantic Scholar.", "We present the FCC task in section SECREF3, including the network architecture, training protocol, and how adding pre-trained word and semantic embeddings can enrich the resulting text and visual features. In section SECREF4, we first introduce our datasets and evaluate the performance of our method in the task it was trained to solve, the correspondence between scientific figures and captions. Then, we relate our work to the state of the art in image-sentence matching and evaluate our approach in two challenging transfer learning tasks: caption and figure classification and multi-modal machine comprehension. In section SECREF5 we perform a qualitative study that illustrates how the FCC task leads to detailed textual and visual discrimination. Finally, in section SECREF6 we conclude the paper and advance future work." ], [ "Understanding natural images has been a major area of research in computer vision, with well established datasets like ImageNet BIBREF0, Flickr8K BIBREF1, Flickr30K BIBREF2 and COCO BIBREF3. However, reasoning with other visual representations like scientific figures and diagrams has not received the same attention yet and entails additional challenges: Scientific figures are more abstract and symbolic, their captions tend to be significantly longer and use specialized lexicon, and the relation between a scientific figure and its caption is unique, i.e. in a scientific publication there is only one caption that corresponds with one figure and vice versa.", "The FCC task presented herein is a form of co-training BIBREF4 where there are two views of the data and each view provides complementary information. 
Similar two-branch neural architectures focus on image-sentence BIBREF5, BIBREF6 and audio-video BIBREF7 matching. Others like BIBREF8 learn common embeddings from images and text. However, in such cases one or both networks are typically pre-trained.", "Focused on geometry, BIBREF9 maximize the agreement between text and visual data. In BIBREF10, the authors apply machine vision and natural language processing to extract data from figures and their associated text in bio-curation tasks. In BIBREF11, they parse diagram components and connectors as a Diagram Parse Graph (DPG), semantically interpret the DPG and use the model to answer diagram questions. While we rely on the correspondence between figures and captions, they train a specific classifier for each component and connector type and yet another model to ground the semantics of the DPG in each domain, like food webs or water cycles.", "Knowledge fusion approaches like BIBREF12 investigate the potential of complementing KG embeddings with text and natural images by integrating information across the three modalities in a single latent representation. They assume pre-trained entity representations exist in each individual modality, e.g. the visual features encoding the image of a ball, the word embeddings associated to the token \"ball\", and the KG embeddings related to the ball entity, which are then stitched together. In contrast, FCC co-trains text and visual features from figures and their captions and supports the enrichment of such features with lexical and semantic knowledge transferred from a KG during the training of the FCC task." ], [ "The main idea of our approach is to learn a correspondence task between scientific figures and their captions as they appear in a scientific publication. The information captured in the caption explains the corresponding figure in natural language, providing guidance to identify the key features of the figure and vice versa. By seeing a figure and reading the textual description in its caption we ultimately aim to learn representations that capture e.g. what it means that two plots are similar or what gravity looks like.", "We leverage this observation to learn a figure-caption correspondence task. In essence, FCC is a binary classification task that receives a figure and a caption and determines whether they correspond or not. For training, the positive pairs are actual figures and their captions from a collection of scientific publications. Negative pairs are extracted from combinations of figures and any other randomly selected captions. The network is then made to learn text and visual features from scratch, without additional labelled data." ], [ "We propose a 2-branch neural architecture (figure FIGREF7) that has three main parts: the vision and language subnetworks, respectively extracting visual and text features, and a fusion subnetwork that takes the resulting features from the visual and text blocks and uses them to evaluate figure-caption correspondence.", "The vision subnetwork follows a VGG-style BIBREF13 design, with 3x3 convolutional filters, 2x2 max-pooling layers with stride 2 and no padding. It contains 4 blocks of conv+conv+pool layers, where inside each block the two convolutional layers have the same number of filters, while consecutive blocks have doubling number of filters (64, 128, 256, 512). The input layer receives 224x224x3 images. The final layer produces a 512-D vector after 28x28 max-pooling. 
Each convolutional layer is followed by batch normalization BIBREF14 and ReLU layers. Based on BIBREF15, the language subnetwork has 3 convolutional blocks, each with 512 filters and a 5-element window size with ReLU activation. Each convolutional layer is followed by a 5-max pooling layer, except for the final layer, which produces a 512-D vector after 35-max pooling. The language subnetwork has a 300-D embeddings layer at the input, with a maximum sequence length of 1,000 tokens. The fusion subnetwork calculates the element-wise product of the 512-D visual and text feature vectors into a single vector $r$ to produce a 2-way classification output (correspond or not). It has two fully connected layers, with ReLU and an intermediate feature size of 128-D. The probability of each choice is the softmax of $r$, i.e. $\\hat{y} = softmax(r) \\in \\mathbb {R}^{2}$. During training, we minimize the negative log probability of the correct choice.", "This architecture enables the FCC task to learn visual and text features from scratch in a completely unsupervised manner, just by observing the correspondence of figures and captions. Next, we extend it to enable the transfer of additional pre-trained information. Here, we focus on adding pre-trained embeddings on the language branch, and then back-propagate to the visual features during FCC training. Adding pre-trained visual features is also possible and indeed we also evaluate its impact in the FCC task in section SECREF14.", "Let $V$ be a vocabulary of words from a collection of documents $D$. Also, let $L$ be their lemmas, i.e. base forms without morphological or conjugational variations, and $C$ the concepts (or senses) in a KG. Each word $w_k$ in $V$, e.g. made, has one lemma $l_k$ (make) and may be linked to one or more concepts $c_k$ in $C$ (create or produce something).", "For each word $w_k$, the FCC task learns a d-D embedding $\\vec{w}_k$, which can be combined with pre-trained word ($\\vec{w^{\\prime }}_k$), lemma ($\\vec{l}_k$) and concept ($\\vec{c}_k$) embeddings to produce a single vector $\\vec{t}_k$. If no pre-trained knowledge is transferred from an external source, then $\\vec{t}_k=\\vec{w}_k$. Note that we previously lemmatize and disambiguate $D$ against the KG in order to select the right pre-trained lemma and concept embeddings for each particular occurrence of $w_k$. Equation DISPLAY_FORM8 shows the different combinations of learnt and pre-trained embeddings we consider: (a) learnt word embeddings only, (b) learnt and pre-trained word embeddings and (c) learnt word embeddings and pre-trained semantic embeddings, including both lemmas and concepts, in line with our recent findings presented in BIBREF16.", "In our experiments, concatenation proved optimal to combine the embeddings learnt by the network and the pre-trained embeddings, compared to other methods like summation, multiplication, average or learning a task-specific weighting of the different representations as in BIBREF17. Since some words may not have associated pre-trained word, lemma or concept embeddings, we pad these sequences with $\\varnothing _W$, $\\varnothing _L$ and $\\varnothing _C$, which are never included in the vocabulary. The dimensionality of $\\vec{t}_k$ is fixed to 300, i.e. the size of each sub-vector in configurations $(a)$, $(b)$ and $(c)$ is 300, 150 and 100, respectively. In doing so, we aimed at limiting the number of trainable parameters and balance the contribution of each information source.", "In its most basic form, i.e. 
configuration $(a)$, the FCC network has over 32M trainable parameters (28M in the language subnetwork, 4M in the vision subnetwork and 135K in the fusion subnetwork) and takes 12 hours to train on a single GPU Nvidia GeForce RTX 2080 Ti for a relatively small corpus (SN SciGraph, see section SECREF12). We used 10-fold cross validation, Adam optimization BIBREF18 with learning rate $10^{-4}$ and weight decay $10^{-5}$. The network was implemented in Keras and TensorFlow, with batch size 32. The number of positive and negative cases is balanced within the batches." ], [ "We use HolE BIBREF19 and Vecsigrafo BIBREF16 to learn semantic embeddings. The latter extends the Swivel algorithm BIBREF20 to jointly learn word, lemma and concept embeddings on a corpus disambiguated against the KG, outperforming the previous state of the art in word and word-sense embeddings by co-training word, lemma and concept embeddings as opposed to training each individually. In contrast to Vecsigrafo, which requires both a text corpus and a KG, HolE follows a graph-based approach where embeddings are learnt exclusively from the KG. As section SECREF14 will show, this gives Vecsigrafo a certain advantage in the FCC task. Following up with the work presented in BIBREF16, our experiments focus on Sensigrafo, the KG underlying Expert System's Cogito NLP proprietary platform. Similar to WordNet, on which Vecsigrafo has also been successfully trained, Sensigrafo is a general-purpose KG with lexical and semantic information that contains over 300K concepts, 400K lemmas and 80 types of relations rendering 3M links. We use Cogito to disambiguate the text corpora prior to training Vecsigrafo. All the semantic (lemma and concept) embeddings produced with HolE or Vecsigrafo are 100-D." ], [ "In this section, first we evaluate the actual FCC task against two supervised baselines. Then, we situate our work in the more general image-sentence matching problem, showing empirical evidence of the additional complexity associated to the scientific domain and the figure-caption case compared to natural images. Next, we test the visual and text features learnt in the FCC task in two different transfer learning settings: classification of scientific figures and captions and multi-modal machine comprehension for question answering given a context of text, figures and images." ], [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. 
Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], [ "We evaluate our method in the task it was trained to solve: determining whether a figure and a caption correspond. We also compare the performance of the FCC task against two supervised baselines, training them on a classification task against the SciGraph taxonomy. For such baselines we first train the vision and language networks independently and then combine them. The feature extraction parts of both networks are the same as described in section SECREF6. On top of them, we attach a fully connected layer with 128 neurons and ReLU activation and a softmax layer, with as many neurons as target classes.", "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks. If it exceeds a threshold, which we heuristically fixed on 0.325, the result is positive. The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers. While direct combination provides a notion of the agreement between the two branches, supervised pre-training is the most similar supervised approach to our method.", "Table TABREF15 shows the results of the FCC task and the supervised baselines. $FCC_k$ denotes the corpus and word representation used to train the FCC task. Acc$_{vgg}$ shows the accuracy after replacing our visual branch with pre-trained VGG16 features learnt on ImageNet. This provides an estimate of how specific of the scientific domain scientific figures and therefore the resulting visual features can be, compared to natural images. As the table shows, the results obtained using pre-trained visual features are clearly worse in general (only slightly better in $FCC_3$), suggesting that the visual information contained in scientific figures indeed differs from natural images.", "We trained the FCC network on two different scientific corpora: SciGraph ($FCC_{1-5}$) and SemScholar ($FCC_{6-7}$). Both $FCC_1$ and $FCC_6$ learnt their own word representations without transfer of any pre-trained knowledge. Even in its most basic form our approach substantially improves over the supervised baselines, confirming that the visual and language branches learn from each other and also that figure-caption correspondence is an effective source of free supervision.", "Adding pre-trained knowledge at the input layer of the language subnetwork provides an additional boost, particularly with lemma and concept embeddings from Vecsigrafo ($FCC_5$). Vecsigrafo clearly outperformed HolE ($FCC_3$), which was also beaten by pre-trained fastText BIBREF24 word embeddings ($FCC_2$) trained on SemScholar.", "Since graph-based KG embedding approaches like HolE only generate embeddings of the artifacts explicitly contained in the KG, this may indicate that Sensigrafo, the KG used in this task, provides a partial coverage of the scientific domain, as could be expected since we are using an off-the-shelf version. 
Deeper inspection shows that HolE only covers 20% of the lemmas in the SciGraph vocabulary. On the other hand, Vecsigrafo, trained on the same KG, also captures lexical information from the text corpora it is trained on, Wikipedia or SemScholar, raising lemma coverage to 42% and 47%, respectively.", "Although the size of Wikipedia is almost triple of our SemScholar corpus, training Vecsigrafo on the latter resulted in better FCC accuracy ($FCC_4$ vs. $FCC_5$), suggesting that domain relevance is more significant than sheer volume, in line with our previous findings in BIBREF25. Training FCC on SemScholar, much larger than SciGraph, further improves accuracy, as shown in $FCC_6$ and $FCC_7$." ], [ "We put our FCC task in the context of the more general problem of image-sentence matching through a bidirectional retrieval task where images are sought given a text query and vice versa. While table TABREF20 focuses on natural images datasets (Flickr30K and COCO), table TABREF21 shows results on scientific datasets (SciGraph and SemScholar) rich in scientific figures and diagrams. The selected baselines (Embedding network, 2WayNet, VSE++ and DSVE-loc) report results obtained on the Flickr30K and COCO datasets, which we also include in table TABREF20. Performance is measured in recall at k ($Rk$), with k={1,5,10}. From the baselines, we successfully reproduced DSVE-loc, using the code made available by the authors, and trained it on SciGraph and SemScholar.", "We trained the FCC task on all the datasets, both in a totally unsupervised way and with pre-trained semantic embeddings (indicated with subscript $vec$), and executed the bidirectional retrieval task using the resulting text and visual features. We also experimented with pre-trained VGG16 visual features extracted from ImageNet (subscript $vgg$), with more than 14 million hand-annotated images. Following common practice in image-sentence matching, our splits are 1,000 samples for test and the rest for training.", "We can see a marked division between the results obtained on natural images datasets (table TABREF20) and those focused on scientific figures (table TABREF21). In the former case, VSE++ and DSVE-loc clearly beat all the other approaches. In contrast, our model performs poorly on such datasets although results are ameliorated when we use pre-trained visual features from ImageNet (\"Oursvgg\" and \"Oursvgg-vec\"). Interestingly, the situation reverts with the scientific datasets. While the recall of DSVE-loc drops dramatically in SciGraph, and even more in SemScholar, our approach shows the opposite behavior in both figure and caption retrieval. Using visual features enriched with pre-trained semantic embeddings from Vecsigrafo during training of the FCC task further improves recall in the bidirectional retrieval task. Compared to natural images, the additional complexity of scientific figures and their caption texts, which in addition are considerably longer (see table TABREF19), seems to have a clear impact in this regard.", "Unlike in Flickr30K and COCO, replacing the FCC visual features with pre-trained ones from ImageNet brings us little benefit in SciGraph and even less in SemScholar, where the combination of FCC and Vecsigrafo (\"Oursvec\") obtains the best results across the board. This and the extremely poor performance of the best image-sentence matching baseline (DSVE-loc) in the scientific datasets shows evidence that dealing with scientific figures is considerably more complex than natural images. 
Indeed, the best results in figure-caption correspondence (\"Oursvec\" in SemScholar) are still far from the SoA in image-sentence matching (DSVE-loc in COCO)." ], [ "We evaluate the language and visual representations emerging from FCC in the context of two classification tasks that aim to identify the scientific field an arbitrary text fragment (a caption) or a figure belong to, according to the SciGraph taxonomy. The latter is a particularly hard task due to the whimsical nature of the figures that appear in our corpus: figure and diagram layout is arbitrary; charts, e.g. bar and pie charts, are used to showcase data in any field from health to engineering; figures and natural images appear indistinctly, etc. Also, note that we only rely on the actual figure, not the text fragment where it is mentioned in the paper.", "We pick the text and visual features that produced the best FCC results with and without pre-trained semantic embeddings (table TABREF15, $FCC_7$ and $FCC_6$, respectively) and use the language and vision subnetworks presented in section SECREF6 to train our classifiers on SciGraph in two different scenarios. First, we only fine tune the fully connected and softmax layers, freezing the text and visual weights (non-trainable in the table). Second, we fine tune all the parameters in both networks (trainable). In both cases, we compare against a baseline using the same networks initialized with random weights, without FCC training. In doing so, through the first, non-trainable scenario, we seek to quantify the information contributed by the FCC features, while training from scratch on the target corpus should provide an upper bound for figure and caption classification. Additionally, for figure classification, we select a baseline of frozen VGG16 weights trained on ImageNet. We train using 10-fold cross validation and Adam. For the caption classification task, we select learning rate $10^{-3}$ and batch size 128. In figure classification, we use learning rate $10^{-4}$, weight decay $10^{-5}$ and batch size 32.", "The results in table TABREF23 show that our approach amply beats the baselines, including the upper bound (training from scratch on SciGraph). The delta is particularly noticeable in the non trainable case for both caption and figure classification and is considerably increased in \"Ours $FCC_7$\", which uses pre-trained semantic embeddings. This includes both the random and VGG baselines and illustrates again the additional complexity of analyzing scientific figures compared to natural images, even if the latter is trained on a considerably larger corpus like ImageNet. Fine tuning the whole networks on SciGraph further improves accuracies. In this case, \"Ours $FCC_6$\", which uses FCC features without additional pre-trained embeddings, slightly outperforms \"Ours $FCC_7$\", suggesting a larger margin to learn from the task-specific corpus. Note that both $FCC_6$ and $FCC_7$ were trained on SemScholar." ], [ "We leverage the TQA dataset and the baselines in BIBREF23 to evaluate the features learnt by the FCC task in a multi-modal machine comprehension scenario. We study how our model, which was not originally trained for this task, performs against state of the art models specifically trained for diagram question answering and textual reading comprehension in a very challenging dataset. 
We also study how pre-trained semantic embeddings impact in the TQA task: first, by enriching the visual features learnt in the FCC task as shown in section SECREF6 and then by using pre-trained semantic embeddings to enrich word representations in the TQA corpus.", "We focus on multiple-choice questions, 73% of the dataset. Table TABREF24 shows the performance of our model against the results reported in BIBREF23 for five TQA baselines: random, BiDAF (focused on text machine comprehension), text only ($TQA_1$, based on MemoryNet), text+image ($TQA_2$, VQA), and text+diagrams ($TQA_3$, DSDP-NET). We successfully reproduced the $TQA_1$ and $TQA_2$ architectures and adapted the latter. Then, we replaced the visual features in $TQA_2$ with those learnt by the FCC visual subnetwork both in a completely unsupervised way ($FCC_6$ in table TABREF15) and with pre-trained semantic embeddings ($FCC_7$), resulting in $TQA_4$ and $TQA_5$, respectively.", "While $TQA_{1-5}$ used no pre-trained embeddings at all, $TQA_{6-10}$ were trained including pre-trained Vecsigrafo semantic embeddings. Unlike FCC, where we used concatenation to combine pre-trained lemma and concept embeddings with the word embeddings learnt by the task, element-wise addition worked best in the case of TQA.", "Following the recommendations in BIBREF23, we pre-processed the TQA corpus to i) consider knowledge from previous lessons in the textbook in addition to the lesson of the question at hand and ii) address challenges like long question contexts with a large lexicon. In both text and diagram MC, applying the Pareto principle to reduce the maximum token sequence length in the text of each question, their answers and context improved accuracy considerably. This optimization allowed reducing the amount of text to consider for each question, improving the signal to noise ratio. Finally, we obtained the most relevant paragraphs for each question through tf-idf and trained the models using 10-fold cross validation, Adam, learning rate $10^{-2}$ and batch size 128. In text MC we also used 0.5 dropout and recurrent dropout in the LSTM layers.", "Fitting multi-modal sources into a single memory, the use of visual FCC features clearly outperforms all the TQA baselines in diagram MC. Enhancing word representation with pre-trained semantic embeddings during training of the TQA task provides an additional boost that results in the highest accuracies for both text MC and diagram MC. These are significantly good results since, according to the TQA authors BIBREF23, most diagram questions in the TQA corpus would normally require a specific rich diagram parse, which we did not aim to provide." ], [ "We inspect the features learnt by our FCC task to gain a deeper understanding of the syntactic and semantic patterns captured for figure and caption representation. The findings reported herein are qualitatively consistent for all the FCC variations in table TABREF15.", "Vision features. The analysis was carried out on an unconstrained variety of charts, diagrams and natural images from SciGraph, without filtering by figure type or scientific field. To obtain a representative sample of what the FCC network learns, we focus on the 512-D vector resulting from the last convolutional block before the fusion subnetwork. We pick the features with the most significant activation over the whole dataset and select the figures that activate them most. 
To this purpose, we prioritize those with higher maximum activation against the average activation.", "Figure FIGREF27 shows a selection of 6 visual features with the 4 figures that activate each feature more significantly and their activation heatmaps. Only figures are used as input, no text. As can be seen, the vision subnetwork has automatically learnt, without explicit supervision, to recognize different types of diagrams, charts and content, such as (from left to right) whisker plots, western blots (a technique used to identify proteins in a tissue sample), multi-image comparison diagrams, multi-modal data visualization charts (e.g. western plots vs. bar charts), line plots, and text within the figures. Furthermore, as shown by the heatmaps, our model discriminates the key elements associated to the figures that most activate each feature: the actual whiskers, the blots, the borders of each image under comparison, the blots and their complementary bar charts, as well as the line plots and the correspondence between them and the values in the x and y axes. Also, see (right-most column) how a feature discriminates text inserted in the figure, regardless of the remaining elements that may appear and the connections between them. This shows evidence of how the visual features learnt by the FCC task support the parsing of complex scientific diagrams.", "We also estimated a notion of semantic specificity based on the concepts of a KG. For each visual feature, we aggregated the captions of the figures that most activate it and used Cogito to disambiguate the Sensigrafo concepts that appear in them. Then, we estimated how important each concept is to each feature by calculating its tf-idf. Finally, we averaged the resulting values to obtain a consolidated semantic specificity score per feature.", "The scores of the features in figure FIGREF27 range between 0.42 and 0.65, which is consistently higher than average (0.4). This seems to indicate a correlation between activation and the semantic specificity of each visual feature. For example, the heatmaps of the figures related to the feature with the lowest tf-idf (left-most column) highlights a particular visual pattern, i.e. the whiskers, that may spread over many, possibly unrelated domains. On the other hand, the feature with the highest score (second column) focuses on a type of diagrams, western blots, almost exclusive of protein and genetic studies. Others, like the feature illustrated by the figures in the fifth column, capture the semantics of a specific type of 2D charts relating two magnitudes x and y. Analyzing their captions with Cogito, we see that concepts like e.g. isochronal and exponential functions are mentioned. If we look at the second and four top-most figures in the column, we can see that such concepts are also visually depicted in the figures, suggesting that the FCC task has learnt to recognize them both from the text and visually.", "Text features. Similar to the visual case, we selected the features from the last block of the language subnetwork with the highest activation. For visualization purposes, we picked the figures corresponding to the captions in SciGraph that most activate such features (figure FIGREF28). No visual information is used.", "Several distinct patterns emerge from the text. The text feature in the first column seems to focus on genetics and histochemistry, including terms like western blots or immunostaining and variations like immunoblot-s/ted/ting. 
Interestingly, it also seems to have learnt some type of is-a relations (western blot is a type of immunoblot). The second feature focuses on variations of the term radiograph, e.g. radiograph-y/s. The third feature specializes in text related to curve plots involving several statistic analysis, e.g. Real-time PCR, one-way ANOVA or Gaussian distribution. Sometimes (fourth figure from top) the caption does not mention the plot directly, but focuses on the analysis instead, e.g. \"the data presented here are mean values of duplicate experiments\", indicating transfer of knowledge from the visual part during training. The fourth feature extracts citations and models named after prominent scientists, e.g. Evans function (first and fourth figure), Manley (1992) (second), and Aliev-Panfilov model (third). The fifth feature extracts chromatography terminology, e.g. 3D surface plot, photomicrograph or color map and, finally, the right-most feature focuses on different types of named diagrams, like flow charts and state diagrams, e.g. phylogenetic trees.", "All the captions show a strong semantic correspondence with their associated figures. Figure FIGREF29 shows the activation heatmaps for two sample captions, calculated on the embeddings layer of the language subnetwork. The upper one corresponds to the fourth column left-right and third figure top-down in figure FIGREF28. Its caption reads: \"The Aliev-Panfilov model with $\\alpha =0.01$...The phase portrait depicts trajectories for distinct initial values $\\varphi _0$ and $r_0$...\". Below, (first column, fourth figure in figure FIGREF28): \"Relative protein levels of ubiquitin-protein conjugates in M. quadriceps...A representative immunoblot specific to ubiquitin...\". Consistently with our analysis, activation focuses on the most relevant tokens for each text feature: \"Aliev-Panfilov model\" and \"immunoblot\", respectively." ], [ "There is a wealth of knowledge in scientific literature and only a fraction of it is text. However, understanding scientific figures is a challenging task for machines, which is beyond their ability to process natural images. In this paper, we provide empirical evidence of this and show that co-training text and visual features from a large corpus of scientific figures and their captions in a correspondence task (FCC) is an effective, flexible and elegant unsupervised means towards overcoming such complexity. We show how such features can be significantly improved by enriching them with additional knowledge sources and, particularly, structured KGs. We prove the benefits of our approach against supervised baselines and in different transfer learning tasks, including text and visual classification and multi-modal machine comprehension applied to question answering, with results generally beyond the state of the art. In the future, it will be interesting to further the study of the interplay between the semantic concepts explicitly represented in different KGs, contextualized embeddings e.g. from SciBERT BIBREF31, and the text and visual features learnt in the FCC task. We also plan to continue to charter the knowledge captured in such features and to pursue the optimization and practical application of our approach." ], [ "The research reported in this paper is supported by the EU Horizon 2020 programme, under grants European Language Grid-825627 and Co-inform-770302." 
] ], "section_name": [ "Introduction", "Related work", "Figure-Caption Correspondence", "Figure-Caption Correspondence ::: FCC Architecture and Model", "Figure-Caption Correspondence ::: Semantic Embeddings", "Results and Discussion", "Results and Discussion ::: Datasets", "Results and Discussion ::: Figure-Caption Correspondence", "Results and Discussion ::: Image-Sentence Matching", "Results and Discussion ::: Caption and Figure Classification", "Results and Discussion ::: Textbook Question Answering (TQA) for Multi-Modal Machine Comprehension", "Qualitative Analysis", "Conclusions", "Acknowledgments" ] }
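The figure-caption correspondence model described in the record above fuses a 512-D visual vector and a 512-D text vector by element-wise product, followed by two fully connected layers (128-D intermediate, ReLU) and a 2-way softmax. A minimal Keras sketch of this fusion head is given below; the branch encoders are reduced to feature inputs, and the layer names and compile settings are assumptions rather than the authors' implementation.

```python
# Sketch only: fusion head of a figure-caption correspondence classifier.
import tensorflow as tf

def build_fcc_head(feature_dim=512):
    fig_feat = tf.keras.Input(shape=(feature_dim,), name="figure_features")
    txt_feat = tf.keras.Input(shape=(feature_dim,), name="caption_features")
    fused = tf.keras.layers.Multiply()([fig_feat, txt_feat])        # element-wise product
    hidden = tf.keras.layers.Dense(128, activation="relu")(fused)
    output = tf.keras.layers.Dense(2, activation="softmax")(hidden)  # correspond / not
    model = tf.keras.Model([fig_feat, txt_feat], output)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```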
{ "answers": [ { "annotation_id": [ "22a3e7b8380676e7cacd99c88b376a67ac80eb6a", "7cc48ed9d2382421618d54f6af51d08ee0d3afcc", "8dff34b25becb8fdfe34f05f0951c6852caeba0e", "a8aeaf521629afc2a2572eb48f67ddcc429899c6" ], "answer": [ { "evidence": [ "Results and Discussion ::: Datasets", "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "extractive_spans": [ "The Semantic Scholar corpus ", "Springer Nature's SciGraph", "The Textbook Question Answering corpus", "Wikipedia", "Flickr30K and COCO" ], "free_form_answer": "", "highlighted_evidence": [ "Results and Discussion ::: Datasets", "We have used the following datasets for training and evaluation:\n\nThe Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.\n\nSpringer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).\n\nThe Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.\n\nWikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. 
As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.\n\nFlickr30K and COCO, as image-sentence matching benchmarks." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "extractive_spans": [ "The Semantic Scholar corpus", "Springer Nature's SciGraph", "The Textbook Question Answering corpus", "January 2018 English Wikipedia dataset", "Flickr30K", "COCO" ], "free_form_answer": "", "highlighted_evidence": [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories.", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula.", "We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. 
Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "extractive_spans": [ "The Semantic Scholar corpus", "Springer Nature's SciGraph", "The Textbook Question Answering corpus", "Wikipedia", "Flickr30K", "COCO" ], "free_form_answer": "", "highlighted_evidence": [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories.", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. ", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.", "Flickr30K and COCO, as image-sentence matching benchmarks." 
], "extractive_spans": [ "Semantic Scholar corpus BIBREF21 (SemScholar)", "Springer Nature's SciGraph", "Textbook Question Answering corpus BIBREF23", "Wikipedia", "Flickr30K", "COCO" ], "free_form_answer": "", "highlighted_evidence": [ "We have used the following datasets for training and evaluation:\n\nThe Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories.", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "1ad33622b77ca124d072ad0e130a266aff8cd83a", "982a6b0a8a02c90721854f81184962dddd1486fd", "d87bb3554b4b03ea56e8c3c85a647f497a7a74a2", "e50d4ab380843cdf68652dfb5c201b5ba27b33d0" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 2: Selected visual features and activation heatmaps. The top row labels the dominant pattern for each feature.", "FLOAT SELECTED: Figure 3: Selected text features. Top row labels the dominant pattern for each text feature." ], "extractive_spans": [], "free_form_answer": "English", "highlighted_evidence": [ "FLOAT SELECTED: Figure 2: Selected visual features and activation heatmaps. The top row labels the dominant pattern for each feature.", "FLOAT SELECTED: Figure 3: Selected text features. Top row labels the dominant pattern for each text feature." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "We use HolE BIBREF19 and Vecsigrafo BIBREF16 to learn semantic embeddings. The latter extends the Swivel algorithm BIBREF20 to jointly learn word, lemma and concept embeddings on a corpus disambiguated against the KG, outperforming the previous state of the art in word and word-sense embeddings by co-training word, lemma and concept embeddings as opposed to training each individually. In contrast to Vecsigrafo, which requires both a text corpus and a KG, HolE follows a graph-based approach where embeddings are learnt exclusively from the KG. As section SECREF14 will show, this gives Vecsigrafo a certain advantage in the FCC task. Following up with the work presented in BIBREF16, our experiments focus on Sensigrafo, the KG underlying Expert System's Cogito NLP proprietary platform. Similar to WordNet, on which Vecsigrafo has also been successfully trained, Sensigrafo is a general-purpose KG with lexical and semantic information that contains over 300K concepts, 400K lemmas and 80 types of relations rendering 3M links. We use Cogito to disambiguate the text corpora prior to training Vecsigrafo. All the semantic (lemma and concept) embeddings produced with HolE or Vecsigrafo are 100-D.", "Wikipedia. 
We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information." ], "extractive_spans": [ "English" ], "free_form_answer": "", "highlighted_evidence": [ "We use HolE BIBREF19 and Vecsigrafo BIBREF16 to learn semantic embeddings.", "We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "228dbeae40b90cf8291e7cdff9f33849ea03679b", "8431777fa28a3662ac21a11f8c3cf6a26d029852", "ad5cd7c53b12c72387faa45008fc6cc10d901b4f" ], "answer": [ { "evidence": [ "Since graph-based KG embedding approaches like HolE only generate embeddings of the artifacts explicitly contained in the KG, this may indicate that Sensigrafo, the KG used in this task, provides a partial coverage of the scientific domain, as could be expected since we are using an off-the-shelf version. Deeper inspection shows that HolE only covers 20% of the lemmas in the SciGraph vocabulary. On the other hand, Vecsigrafo, trained on the same KG, also captures lexical information from the text corpora it is trained on, Wikipedia or SemScholar, raising lemma coverage to 42% and 47%, respectively." ], "extractive_spans": [ "HolE", "Vecsigrafo" ], "free_form_answer": "", "highlighted_evidence": [ "Since graph-based KG embedding approaches like HolE only generate embeddings of the artifacts explicitly contained in the KG, this may indicate that Sensigrafo, the KG used in this task, provides a partial coverage of the scientific domain, as could be expected since we are using an off-the-shelf version. Deeper inspection shows that HolE only covers 20% of the lemmas in the SciGraph vocabulary. On the other hand, Vecsigrafo, trained on the same KG, also captures lexical information from the text corpora it is trained on, Wikipedia or SemScholar, raising lemma coverage to 42% and 47%, respectively." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We put our FCC task in the context of the more general problem of image-sentence matching through a bidirectional retrieval task where images are sought given a text query and vice versa. While table TABREF20 focuses on natural images datasets (Flickr30K and COCO), table TABREF21 shows results on scientific datasets (SciGraph and SemScholar) rich in scientific figures and diagrams. The selected baselines (Embedding network, 2WayNet, VSE++ and DSVE-loc) report results obtained on the Flickr30K and COCO datasets, which we also include in table TABREF20. Performance is measured in recall at k ($Rk$), with k={1,5,10}. From the baselines, we successfully reproduced DSVE-loc, using the code made available by the authors, and trained it on SciGraph and SemScholar." ], "extractive_spans": [ "Embedding network", "2WayNet", "VSE++", "DSVE-loc)" ], "free_form_answer": "", "highlighted_evidence": [ "The selected baselines (Embedding network, 2WayNet, VSE++ and DSVE-loc) report results obtained on the Flickr30K and COCO datasets, which we also include in table TABREF20." 
], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "bb3fed3d773a9ce39cd48d9a743c5786bfe93744", "cea220502edf4a45bf267a5206b9b7f98b2a8866", "d404dc0d9fdbe207b4516f44eb55e82ee8f2e424", "f34aef4c46230f6e34e8e7c87a10e05d81405bf6" ], "answer": [ { "evidence": [ "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks. If it exceeds a threshold, which we heuristically fixed on 0.325, the result is positive. The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers. While direct combination provides a notion of the agreement between the two branches, supervised pre-training is the most similar supervised approach to our method." ], "extractive_spans": [ "direct combination", "supervised pre-training" ], "free_form_answer": "", "highlighted_evidence": [ "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks.", "The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our method in the task it was trained to solve: determining whether a figure and a caption correspond. We also compare the performance of the FCC task against two supervised baselines, training them on a classification task against the SciGraph taxonomy. For such baselines we first train the vision and language networks independently and then combine them. The feature extraction parts of both networks are the same as described in section SECREF6. On top of them, we attach a fully connected layer with 128 neurons and ReLU activation and a softmax layer, with as many neurons as target classes.", "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks. If it exceeds a threshold, which we heuristically fixed on 0.325, the result is positive. The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers. While direct combination provides a notion of the agreement between the two branches, supervised pre-training is the most similar supervised approach to our method." ], "extractive_spans": [ "direct combination baseline", "supervised pre-training baseline" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our method in the task it was trained to solve: determining whether a figure and a caption correspond. 
We also compare the performance of the FCC task against two supervised baselines, training them on a classification task against the SciGraph taxonomy.", "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks.", "The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks. If it exceeds a threshold, which we heuristically fixed on 0.325, the result is positive. The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers. While direct combination provides a notion of the agreement between the two branches, supervised pre-training is the most similar supervised approach to our method." ], "extractive_spans": [ "The direct combination baseline ", "The supervised pre-training baseline" ], "free_form_answer": "", "highlighted_evidence": [ "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks. If it exceeds a threshold, which we heuristically fixed on 0.325, the result is positive. The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks. If it exceeds a threshold, which we heuristically fixed on 0.325, the result is positive. The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers. While direct combination provides a notion of the agreement between the two branches, supervised pre-training is the most similar supervised approach to our method." ], "extractive_spans": [ "direct combination baseline", "supervised pre-training baseline" ], "free_form_answer": "", "highlighted_evidence": [ "The direct combination baseline computes the figure-caption correspondence through the scalar product between the softmax outputs of both networks.", "The supervised pre-training baseline freezes the weights of the feature extraction trunks from the two trained networks, assembles them in the FCC architecture as shown in section SECREF6, and trains the FCC task on the fully connected layers." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3010fd9ddcaf7c8ba7d145147efedb9e0e36cab5", "41c0f6c00d0e3b6c9ebde5d6e9ce2540ba06ed3d", "649d67e68b8b9a48025d118b75dc3d9b105cc9d0", "d700045d1c5bda3de3edeb36e9dedb5aecd3e7a0" ], "answer": [ { "evidence": [ "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories.", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula.", "We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "We have used the following datasets for training and evaluation:", "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. 
Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.", "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.", "Flickr30K and COCO, as image-sentence matching benchmarks." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We have used the following datasets for training and evaluation:\n\nThe Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.\n\nSpringer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).\n\nThe Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset.\n\nWikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information.\n\nFlickr30K and COCO, as image-sentence matching benchmarks." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Wikipedia. We used the January 2018 English Wikipedia dataset as one of the corpora on which to train Vecsigrafo. As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "As opposed to SciGraph or SemScholar, specific of the scientific domain, Wikipedia is a source of general-purpose information." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "In this paper, we make use of this observation and tap on the potential of learning from the enormous source of free supervision available in the scientific literature, with millions of figures and their captions. We build models that learn from the scientific discourse both visually and textually by simply looking at the figures and reading their explanatory captions, inspired in how humans learn by reading a scientific publication. 
To this purpose, we explore how multi-modal scientific knowledge can be learnt from the correspondence between figures and captions." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "To this purpose, we explore how multi-modal scientific knowledge can be learnt from the correspondence between figures and captions." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "004e55522e72c85db1b11dc18a70ce17bc96ecd4", "65fdba1ea2117425d94cff944fd4bc615111fe9b", "6df81b43420d1bd8958e189d49d4e794f2b43881", "f870fc4eedd2a205fdd562311df02e79a7cb3872" ], "answer": [ { "evidence": [ "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14)." ], "extractive_spans": [ "The Semantic Scholar corpus", "Springer Nature's SciGraph" ], "free_form_answer": "", "highlighted_evidence": [ "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22. We randomly selected 500K papers to train the FCC task on their figures and captions and another 500K to train Vecsigrafo on the text of their titles and abstracts.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. 
Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset." ], "extractive_spans": [ "scientific publications", "middle school science curricula" ], "free_form_answer": "", "highlighted_evidence": [ "The Semantic Scholar corpus BIBREF21 (SemScholar) is a large dataset of scientific publications made available by AI2. From its 39M articles, we downloaded 3,3M PDFs (the rest were behind paywalls, did not have a link or it was broken) and extracted 12.5M figures and captions through PDFFigures2 BIBREF22.", "Springer Nature's SciGraph contains 7M scientific publications organized in 22 scientific fields or categories. Since SciGraph does not provide a link to the PDF of the publication, we selected the intersection with SemScholar, producing a smaller corpus of 80K papers (in addition to the 1M papers from SemScholar mentioned above) and 82K figures that we used for training certain FCC configurations and supervised baselines (section SECREF14).", "The Textbook Question Answering corpus BIBREF23 includes 1,076 lessons and 26,260 multi-modal test questions from middle school science curricula. Its complexity and scope make it a challenging textual and visual question answering dataset." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper, we make use of this observation and tap on the potential of learning from the enormous source of free supervision available in the scientific literature, with millions of figures and their captions. We build models that learn from the scientific discourse both visually and textually by simply looking at the figures and reading their explanatory captions, inspired in how humans learn by reading a scientific publication. To this purpose, we explore how multi-modal scientific knowledge can be learnt from the correspondence between figures and captions." ], "extractive_spans": [ "scientific literature" ], "free_form_answer": "", "highlighted_evidence": [ "In this paper, we make use of this observation and tap on the potential of learning from the enormous source of free supervision available in the scientific literature, with millions of figures and their captions." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "A corpus of scientific figures and captions extracted from SN SciGraph and AI2 Semantic Scholar." ], "extractive_spans": [ "SN SciGraph and AI2 Semantic Scholar" ], "free_form_answer": "", "highlighted_evidence": [ "A corpus of scientific figures and captions extracted from SN SciGraph and AI2 Semantic Scholar." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "" ], "question": [ "What datasets are used in this paper?", "What language are the captions in?", "What ad-hoc approaches are explored?", "What supervised baselines did they compare with?", "Is the data specific to a domain?", "Where do their figure and captions come from?" ], "question_id": [ "b8a3ab219be6c1e6893fe80e1fbf14f0c0c3c97c", "780c7993d446cd63907bb38992a60bbac9cb42b1", "3da4606a884593f7702d098277b9a6ce207c080b", "91336f12ab94a844b66b607f8621eb8bbd209f32", "c5221bb28e58a4f13cf2eccce0e1b1bec7dd3c13", "42a4ab4607a9eec42c427a817b7e898230d26444" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "" ] }
{ "caption": [ "Figure 1: Proposed 2-branch architecture of the FCC task.", "Table 1: FCC and supervised baselines results (% accuracy).", "Table 2: Caption length: natural images vs scientific datasets.", "Table 3: Bidirectional retrieval. FCC vs. image-sentence matching baselines (%recall@k). Natural images datasets.", "Table 4: Bidirectional retrieval. FCC vs. image-sentence matching baselines (%recall@k). Scientific datasets.", "Table 5: Caption and figure classification (%accuracy)", "Table 6: TQA results (% accuracy). FCC vs. random, BiDAF, MemoryNet, VQA and DSDP-NET baselines.", "Figure 2: Selected visual features and activation heatmaps. The top row labels the dominant pattern for each feature.", "Figure 3: Selected text features. Top row labels the dominant pattern for each text feature.", "Figure 4: Sample caption activation heatmaps. Darker means higher activation." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "8-Figure2-1.png", "9-Figure3-1.png", "10-Figure4-1.png" ] }
[ "What language are the captions in?" ]
[ [ "1909.09070-8-Figure2-1.png", "1909.09070-Figure-Caption Correspondence ::: Semantic Embeddings-0", "1909.09070-9-Figure3-1.png", "1909.09070-Results and Discussion ::: Datasets-4" ] ]
[ "English" ]
11
1708.05521
EmoAtt at EmoInt-2017: Inner attention sentence embedding for Emotion Intensity
In this paper we describe a deep learning system that has been designed and built for the WASSA 2017 Emotion Intensity Shared Task. We introduce a representation learning approach based on inner attention on top of an RNN. Results show that our model offers good capabilities and is able to successfully identify emotion-bearing words to predict intensity without leveraging lexicons, obtaining the 13th place among 22 shared task competitors.
{ "paragraphs": [ [ "Twitter is a huge micro-blogging service with more than 500 million tweets per day from different locations in the world and in different languages. This large, continuous, and dynamically updated content is considered a valuable resource for researchers. In particular, many of these messages contain emotional charge, conveying affect—emotions, feelings and attitudes, which can be studied to understand the expression of emotion in text, as well as the social phenomena associated.", "While studying emotion in text it is commonly useful to characterize the emotional charge of a passage based on its words. Some words have affect as a core part of their meaning. For example, dejected and wistful denote some amount of sadness, and are thus associated with sadness. On the other hand, some words are associated with affect even though they do not denote affect. For example, failure and death describe concepts that are usually accompanied by sadness and thus they denote some amount of sadness.", "While analyzing the emotional content in text, mosts tasks are almost always framed as classification tasks, where the intention is to identify one emotion among many for a sentence or passage. However, it is often useful for applications to know the degree to which an emotion is expressed in text. To this end, the WASSA-2017 Shared Task on Emotion Intensity BIBREF0 represents the first task where systems have to automatically determine the intensity of emotions in tweets. Concretely, the objective is to given a tweet containing the emotion of joy, sadness, fear or anger, determine the intensity or degree of the emotion felt by the speaker as a real-valued score between zero and one.", "The task is specially challenging since tweets contain informal language, spelling errors and text referring to external content. Given the 140 character limit of tweets, it is also possible to find some phenomena such as the intensive usage of emoticons and of other special Twitter features, such as hashtags and usernames mentions —used to call or notify other users. In this paper we describe our system designed for the WASSA-2017 Shared Task on Emotion Intensity, which we tackle based on the premise of representation learning without the usage of external information, such as lexicons. In particular, we use a Bi-LSTM model with intra-sentence attention on top of word embeddings to generate a tweet representation that is suitable for emotion intensity. Our results show that our proposed model offers interesting capabilities compared to approaches that do rely on external information sources." ], [ "Our work is related to deep learning techniques for emotion recognition in images BIBREF1 and videos BIBREF2 , as well as and emotion classification BIBREF3 . Our work is also related to liuattention-based2016, who introduced an attention RNN for slot filling in Natural Language Understanding. Since in the task the input-output alignment is explicit, they investigated how the alignment can be best utilized in encoder-decoder models concluding that the attention mechanisms are helpful.", "EmoAtt is based on a bidirectional RNN that receives an embedded input sequence INLINEFORM0 and returns a list of hidden vectors that capture the context each input token INLINEFORM1 . 
To improve the capabilities of the RNN to capture short-term temporal dependencies BIBREF4 , we define the following: DISPLAYFORM0 ", "Where INLINEFORM0 can be regarded as a context window of ordered word embedding vectors around position INLINEFORM1 , with a total size of INLINEFORM2 . To further complement the context-aware token representations, we concatenate each hidden vector to a vector of binary features INLINEFORM3 , extracted from each tweet token, defining an augmented hidden state INLINEFORM4 . Finally, we combine our INLINEFORM5 augmented hidden states, compressing them into a single vector, using a global intra-sentence attentional component in a fashion similar to Vinyals et al. (2015). Formally, DISPLAYFORM0 ", "Where INLINEFORM0 is the vector that compresses the input sentence INLINEFORM1 , focusing on the relevant parts to estimate emotion intensity. We input this compressed sentence representation into a feed-forward neural network, INLINEFORM2 , where INLINEFORM3 is the final predicted emotion intensity. As a loss function we use the mini-batch negative Pearson correlation with the gold-standard." ], [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. These were annotated using Best-Worst Scaling (BWS) to obtain very reliable scores BIBREF6 .", "We experimented with GloVe BIBREF7 as pre-trained word embedding vectors, for sizes 25, 50 and 100. These are vectors trained on a dataset of 2B tweets, with a total vocabulary of 1.2 M. To pre-process the data, we used Twokenizer BIBREF8 , which basically provides a set of curated rules to split the tweets into tokens. We also use Tweeboparser BIBREF9 to get the POS-tags for each tweet.", "Table TABREF3 summarizes the average, maximum and minimum sentence lengths for each dataset after we processed them with Twokenizer. We can see the four corpora offer similar characteristics in terms of length, with a cross dataset maximum length of 41 tokens. We also see there is an important vocabulary gap between the dataset and GloVe, with an average coverage of only 64.3 %. To tackle this issue, we used a set of binary features derived from POS tags to capture some of the semantics of the words that are not covered by the GloVe embeddings. We also include features for member mentions and hashtags as well as a feature to capture word elongation, based on regular expressions. Word elongation is very common in tweets, and is usually associated with strong sentiment. The following are the POS tag-derived rules we used to generate our binary features.", "While the structure of our introduced model allows us to easily include more linguistic features that could potentially improve our predictive power, such as lexicons, since our focus is to study sentence representation for emotion intensity, we do not experiment adding any additional sources of information as input.", "In this paper we also only report results for LSTMs, which outperformed regular RNNs as well as GRUs and a batch normalized version of the LSTM in preliminary experiments. The hidden size of the attentional component is set to match the size of the augmented hidden vectors in each case. Given this setting, we explored different hyper-parameter configurations, including context window sizes of 1, 3 and 5 as well as RNN hidden state sizes of 100, 200 and 300.
We experimented with unidirectional and bidirectional versions of the RNNs.", "To avoid over-fitting, we used dropout regularization, experimenting with keep probabilities of INLINEFORM0 and INLINEFORM1 . We also added a weighted L2 regularization term to our loss function. We experimented with different values for weight INLINEFORM2 , with a minimum value of 0.01 and a maximum of 0.2.", "To evaluate our model, we wrapped the provided scripts for the shared task and calculated the Pearson correlation coefficient and the Spearman rank coefficient with the gold standard in the validation set, as well as the same values over a subset of the same data formed by taking every instance with a gold emotion intensity score greater than or equal to 0.5.", "For training, we used mini-batch stochastic gradient descent with a batch size of 16 and padded sequences to a maximum size of 50 tokens, given the nature of the data. We used exponential decay of ratio INLINEFORM0 and early stopping on the validation set when there was no improvement after 1000 steps. Our code is available for download on GitHub ." ], [ "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings.", "To validate the usefulness of our binary features, we performed an ablation experiment and trained our best models for each corpus without them. Table TABREF15 summarizes our results in terms of Pearson correlation on the development portion of the datasets. As seen, performance decreases in all cases, which shows that indeed these features are critical for performance, allowing the model to better capture the semantics of words missing in GloVe. In this sense, we think the usage of additional features, such as the ones derived from emotion or sentiment lexicons, could indeed boost our model capabilities. This is proposed for future work.", "On the other hand, our model also offers us very interesting insights on how the learning is performed, since we can inspect the attention weights that the neural network is assigning to each specific token when predicting the emotion intensity. By visualizing these weights we can have a clear notion about the parts of the sentence that the model considers more important. As Figure FIGREF16 shows, we see the model seems to have learned to attend the words that naturally bear emotion or sentiment. This is especially patent for the examples extracted from the Joy dataset, where positive words are generally identified. However, we also see some examples where the lack of semantic information about the input words, especially for hashtags or user mentions, makes the model unable to identify some of the most salient words to predict emotion intensity. Several pre-processing techniques can be implemented to alleviate this problem, which we intend to explore in the future."
], [ "For the anger dataset, our experiments showed that GloVe embeddings of dimension 50 outperformed others, obtaining an average gain of 0.066 correlation over embeddings of size 25 and of 0.021 for embeddings of size 100. However on ly the first of these values was significant, with a p-value of INLINEFORM0 . Regarding the hidden size of the RNN, we could not find statistical difference across the tested sizes. Dropout also had inconsistent effects, but was generally useful." ], [ "In the joy dataset, our experiments showed us that GloVe vectors of dimension 50 again outperformed others, in this case obtaining an average correlation gain of 0.052 ( INLINEFORM0 ) over embeddings of size 100, and of 0.062 ( INLINEFORM1 ) for size 25. Regarding the hidden size of the RNN, we observed that 100 hidden units offered better performance in our experiments, with an average absolute gain of 0.052 ( INLINEFORM2 ) over 50 hidden units. Compared to the models with 200 hidden units, the performance difference was statistically not significant." ], [ "On the fear dataset, again we observed that embeddings of size 50 provided the best results, offering average gains of 0.12 ( INLINEFORM0 ) and 0.11 ( INLINEFORM1 ) for sizes 25 and 100, respectively. When it comes to the size of the RNN hidden state, our experiments showed that using 100 hidden units offered the best results, with average absolute gains of 0.117 ( INLINEFORM2 ) and 0.108 ( INLINEFORM3 ) over sizes 50 and 200." ], [ "Finally, on the sadness datasets again we experimentally observed that using embeddings of 50 offered the best results, with a statistically significant average gain of 0.092 correlation points INLINEFORM0 over size 25. Results were statistically equivalent for size 100. We also observed that using 50 or 100 hidden units for the RNN offered statistically equivalent results, while both of these offered better performance than when using a hidden size of 200." ], [ "In this paper we introduced an intra-sentence attention RNN for the of emotion intensity, which we developed for the WASSA-2017 Shared Task on Emotion Intensity. Our model does not make use of external information except for pre-trained embeddings and is able to outperform the Weka baseline for the development set, but not in the test set. In the shared task, it obtained the 13th place among 22 competitors." ] ], "section_name": [ "Introduction", "Proposed Approach", "Experimental Setup", "Results and Discussion", "Anger Dataset", "Joy Dataset", "Fear Dataset", "Sadness Dataset", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "94e57f0ec54937b4c481b2400091f573e26d0bba", "980231104caf9183f234dc6f3b857e714288fe2f", "d033b5131b4b6ddfe9fcd331c476caa3f01db685", "f9575a1eeae3d932eb14289df57d0ae65c94c7b4" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "07cb4b1e8605cbb3da8050121d3530e8e3359910", "6aff278c28a00ae3ebb2ee107452577ee99f9c14", "d093e16ca2b1d469a4d5c9a792959079f60b032b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "The task is specially challenging since tweets contain informal language, spelling errors and text referring to external content. Given the 140 character limit of tweets, it is also possible to find some phenomena such as the intensive usage of emoticons and of other special Twitter features, such as hashtags and usernames mentions —used to call or notify other users. In this paper we describe our system designed for the WASSA-2017 Shared Task on Emotion Intensity, which we tackle based on the premise of representation learning without the usage of external information, such as lexicons. In particular, we use a Bi-LSTM model with intra-sentence attention on top of word embeddings to generate a tweet representation that is suitable for emotion intensity. Our results show that our proposed model offers interesting capabilities compared to approaches that do rely on external information sources.", "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In this paper we describe our system designed for the WASSA-2017 Shared Task on Emotion Intensity, which we tackle based on the premise of representation learning without the usage of external information, such as lexicons.", "In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. 
" ], "unanswerable": false, "yes_no": false }, { "evidence": [ "While the structure of our introduced model allows us to easily include more linguistic features that could potentially improve our predictive power, such as lexicons, since our focus is to study sentence representation for emotion intensity, we do not experiment adding any additional sources of information as input." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "While the structure of our introduced model allows us to easily include more linguistic features that could potentially improve our predictive power, such as lexicons, since our focus is to study sentence representation for emotion intensity, we do not experiment adding any additional sources of information as input." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "18a00f94a95fb4236b9aa426dc74975072a57804", "7ec2b03cedac934ba06054434c299cf932c18fc9", "941a7801f500cdc8b0a227c741d1f5dd81b8de07", "d763fe4235b2acc28508afbd35d5224120203b5c" ], "answer": [ { "evidence": [ "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings." ], "extractive_spans": [ "Weka baseline BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings." ], "extractive_spans": [ "Weka baseline BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. 
Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings." ], "extractive_spans": [ "Weka" ], "free_form_answer": "", "highlighted_evidence": [ "In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings." ], "extractive_spans": [ " Weka baseline BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "1dfb198b7ea3e66c46bc3e19d52da177bd756a96", "bfb08d4ef3c43e6acf6361d0afdd2699d6f70d75", "fa7d2790979dd93b96373f0608049e9de4c031c3", "fb4a61e09b112143bfa258e9471b09cae382e11f" ], "answer": [ { "evidence": [ "To evaluate our model, we wrapped the provided scripts for the shared task and calculated the Pearson correlation coefficient and the Spearman rank coefficient with the gold standard in the validation set, as well as the same values over a subset of the same data formed by taking every instance with a gold emotion intensity score greater than or equal to 0.5.", "FLOAT SELECTED: Table 2: Summary of the best results.", "In this section we report the results of the experiments we performed to test our proposed model. In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. Moreover, the model manages to do so without any additional resources, except pre-trained word embeddings. These results are, however, reversed for the test dataset, where our model performs worse than the baseline. This shows that the model is not able to generalize well, which we think is related to the missing semantic information due to the vocabulary gap we observed between the datasets and the GloVe embeddings." 
], "extractive_spans": [], "free_form_answer": "Pearson correlation on sadness test data is 0.52, on joy test data is .537, on anger test data is 0.47, on fear data is 0.561.", "highlighted_evidence": [ "To evaluate our model, we wrapped the provided scripts for the shared task and calculated the Pearson correlation coefficient and the Spearman rank coefficient with the gold standard in the validation set, as well as the same values over a subset of the same data formed by taking every instance with a gold emotion intensity score greater than or equal to 0.5.", "FLOAT SELECTED: Table 2: Summary of the best results.", "In general, as Table TABREF13 shows, our intra-sentence attention RNN was able to outperform the Weka baseline BIBREF5 on the development dataset by a solid margin. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: Summary of the best results." ], "extractive_spans": [], "free_form_answer": "0.689 on development and 0.522 on test set", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Summary of the best results." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For the anger dataset, our experiments showed that GloVe embeddings of dimension 50 outperformed others, obtaining an average gain of 0.066 correlation over embeddings of size 25 and of 0.021 for embeddings of size 100. However on ly the first of these values was significant, with a p-value of INLINEFORM0 . Regarding the hidden size of the RNN, we could not find statistical difference across the tested sizes. Dropout also had inconsistent effects, but was generally useful.", "Anger Dataset", "Joy Dataset", "In the joy dataset, our experiments showed us that GloVe vectors of dimension 50 again outperformed others, in this case obtaining an average correlation gain of 0.052 ( INLINEFORM0 ) over embeddings of size 100, and of 0.062 ( INLINEFORM1 ) for size 25. Regarding the hidden size of the RNN, we observed that 100 hidden units offered better performance in our experiments, with an average absolute gain of 0.052 ( INLINEFORM2 ) over 50 hidden units. Compared to the models with 200 hidden units, the performance difference was statistically not significant.", "Fear Dataset", "On the fear dataset, again we observed that embeddings of size 50 provided the best results, offering average gains of 0.12 ( INLINEFORM0 ) and 0.11 ( INLINEFORM1 ) for sizes 25 and 100, respectively. When it comes to the size of the RNN hidden state, our experiments showed that using 100 hidden units offered the best results, with average absolute gains of 0.117 ( INLINEFORM2 ) and 0.108 ( INLINEFORM3 ) over sizes 50 and 200.", "Sadness Dataset", "Finally, on the sadness datasets again we experimentally observed that using embeddings of 50 offered the best results, with a statistically significant average gain of 0.092 correlation points INLINEFORM0 over size 25. Results were statistically equivalent for size 100. We also observed that using 50 or 100 hidden units for the RNN offered statistically equivalent results, while both of these offered better performance than when using a hidden size of 200." 
], "extractive_spans": [ "For the anger dataset, our experiments showed that GloVe embeddings of dimension 50 outperformed others, obtaining an average gain of 0.066 correlation over embeddings of size 25 and of 0.021 for embeddings of size 100.", "In the joy dataset, our experiments showed us that GloVe vectors of dimension 50 again outperformed others, in this case obtaining an average correlation gain of 0.052 ( INLINEFORM0 ) over embeddings of size 100, and of 0.062 ( INLINEFORM1 ) for size 25.", "On the fear dataset, again we observed that embeddings of size 50 provided the best results, offering average gains of 0.12 ( INLINEFORM0 ) and 0.11 ( INLINEFORM1 ) for sizes 25 and 100, respectively.", "on the sadness datasets again we experimentally observed that using embeddings of 50 offered the best results, with a statistically significant average gain of 0.092 correlation points INLINEFORM0 over size 25" ], "free_form_answer": "", "highlighted_evidence": [ "For the anger dataset, our experiments showed that GloVe embeddings of dimension 50 outperformed others, obtaining an average gain of 0.066 correlation over embeddings of size 25 and of 0.021 for embeddings of size 100.", "Anger Dataset\nFor the anger dataset, our experiments showed that GloVe embeddings of dimension 50 outperformed others, obtaining an average gain of 0.066 correlation over embeddings of size 25 and of 0.021 for embeddings of size 100. However on ly the first of these values was significant, with a p-value of INLINEFORM0 . Regarding the hidden size of the RNN, we could not find statistical difference across the tested sizes. Dropout also had inconsistent effects, but was generally useful.\n\nJoy Dataset\nIn the joy dataset, our experiments showed us that GloVe vectors of dimension 50 again outperformed others, in this case obtaining an average correlation gain of 0.052 ( INLINEFORM0 ) over embeddings of size 100, and of 0.062 ( INLINEFORM1 ) for size 25. Regarding the hidden size of the RNN, we observed that 100 hidden units offered better performance in our experiments, with an average absolute gain of 0.052 ( INLINEFORM2 ) over 50 hidden units. Compared to the models with 200 hidden units, the performance difference was statistically not significant.\n\nFear Dataset\nOn the fear dataset, again we observed that embeddings of size 50 provided the best results, offering average gains of 0.12 ( INLINEFORM0 ) and 0.11 ( INLINEFORM1 ) for sizes 25 and 100, respectively. When it comes to the size of the RNN hidden state, our experiments showed that using 100 hidden units offered the best results, with average absolute gains of 0.117 ( INLINEFORM2 ) and 0.108 ( INLINEFORM3 ) over sizes 50 and 200.\n\nSadness Dataset\nFinally, on the sadness datasets again we experimentally observed that using embeddings of 50 offered the best results, with a statistically significant average gain of 0.092 correlation points INLINEFORM0 over size 25. Results were statistically equivalent for size 100. We also observed that using 50 or 100 hidden units for the RNN offered statistically equivalent results, while both of these offered better performance than when using a hidden size of 200." 
], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "a43c73cf5f8358676b671eccc4650489141999c9", "d20abb8f5e3b0c4c4e96efcd3cc45b0371756b3b", "d33eae6f15c7340c99baad013df9c5c362ee84a5", "f5c5c3a692c8d503cdb258b6c45cd2dcaeb5077a" ], "answer": [ { "evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. These were annotated using Best-Worst Scaling (BWS) to obtain very reliable scores BIBREF6 ." ], "extractive_spans": [ " training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger" ], "free_form_answer": "", "highlighted_evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. These were annotated using Best-Worst Scaling (BWS) to obtain very reliable scores BIBREF6 ." ], "extractive_spans": [ "datasets provided for the shared task BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. These were annotated using Best-Worst Scaling (BWS) to obtain very reliable scores BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "Dataset of tweets provided for the shared task.", "highlighted_evidence": [ "o test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. These were annotated using Best-Worst Scaling (BWS) to obtain very reliable scores BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "Dataset from shared task BIBREF5", "highlighted_evidence": [ "To test our model, we experiment using the training, validation and test datasets provided for the shared task BIBREF5 , which include tweets for four emotions: joy, sadness, fear, and anger. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "question": [ "did the top teams experiment with lexicons?", "did they experiment with lexicons?", "what was the baseline?", "what was their result?", "what dataset was used?" ], "question_id": [ "622efbecd9350a0f4487bdff2b8b362ef2541f3c", "f54e19f7ecece1bb0ef3171403ae322ad572ff00", "4137a82d7752be7a6c142ceb48ce784fd475fb06", "6c50871294562e4886ede804574e6acfa8d1a5f9", "0ac6fbd81e2dd95b800283dc7e59ce969d45fc02" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Table 1: Data summary.", "Table 2: Summary of the best results.", "Table 3: Impact of adding our binary features.", "Figure 1: Example of attention weights for the Joy dataset. White denotes more weight." ], "file": [ "2-Table1-1.png", "4-Table2-1.png", "4-Table3-1.png", "4-Figure1-1.png" ] }
[ "what was their result?", "what dataset was used?" ]
[ [ "1708.05521-Joy Dataset-0", "1708.05521-Results and Discussion-0", "1708.05521-4-Table2-1.png", "1708.05521-Fear Dataset-0", "1708.05521-Anger Dataset-0", "1708.05521-Sadness Dataset-0", "1708.05521-Experimental Setup-6" ], [ "1708.05521-Experimental Setup-0" ] ]
[ "0.689 on development and 0.522 on test set", "Dataset from shared task BIBREF5" ]
12
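The record above describes the evaluation used for the emotion-intensity paper: Pearson and Spearman correlation against the gold intensity scores, plus the same two metrics computed only over instances whose gold score is at least 0.5. The snippet below is a small illustrative sketch of that evaluation, not the shared task's official script; the function name `evaluate_intensity` and the toy numbers are invented for illustration.

```python
# Hedged sketch of the evaluation described in the record above; assumes scipy is available.
from scipy.stats import pearsonr, spearmanr

def evaluate_intensity(pred, gold, threshold=0.5):
    """Correlation with gold scores, plus the same metrics on the gold >= 0.5 subset."""
    pearson, _ = pearsonr(pred, gold)
    spearman, _ = spearmanr(pred, gold)
    keep = [i for i, g in enumerate(gold) if g >= threshold]
    sub_pred = [pred[i] for i in keep]
    sub_gold = [gold[i] for i in keep]
    sub_pearson, _ = pearsonr(sub_pred, sub_gold)
    sub_spearman, _ = spearmanr(sub_pred, sub_gold)
    return {"pearson": pearson, "spearman": spearman,
            "pearson_ge_0.5": sub_pearson, "spearman_ge_0.5": sub_spearman}

# Toy usage with made-up predictions and gold intensities.
print(evaluate_intensity([0.3, 0.7, 0.9, 0.4], [0.2, 0.8, 0.85, 0.5]))
```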
1908.11049
Multilingual and Multi-Aspect Hate Speech Analysis
Current research on hate speech analysis is typically oriented towards monolingual and single classification tasks. In this paper, we present a new multilingual multi-aspect hate speech analysis dataset and use it to test the current state-of-the-art multilingual multitask learning approaches. We evaluate our dataset in various classification settings, then we discuss how to leverage our annotations in order to improve hate speech detection and classification in general.
{ "paragraphs": [ [ "With the expanding amount of text data generated on different social media platforms, current filters are insufficient to prevent the spread of hate speech. Most internet users involved in a study conducted by the Pew Research Center report having been subjected to offensive name calling online or witnessed someone being physically threatened or harassed online. Additionally, Amnesty International within Element AI have lately reported that many women politicians and journalists are assaulted every 30 seconds on Twitter. This is despite the Twitter policy condemning the promotion of violence against people on the basis of race, ethnicity, national origin, sexual orientation, gender identity, religious affiliation, age, disability, or serious disease. Hate speech may not represent the general opinion, yet it promotes the dehumanization of people who are typically from minority groups BIBREF0, BIBREF1 and can incite hate crime BIBREF2.", "Moreover, although people of various linguistic backgrounds are exposed to hate speech BIBREF3, BIBREF2, English is still at the center of existing work on toxic language analysis. Recently, some research studies have been conducted on languages such as German BIBREF4, Arabic BIBREF5, and Italian BIBREF6. However, such studies usually use monolingual corpora and do not contrast, or examine the correlations between online hate speech in different languages. On the other hand, tasks involving more than one language such as the hatEval task, which covers English and Spanish, include only separate classification tasks, namely (a) women and immigrants as target groups, (b) individual or generic hate and, (c) aggressive or non-aggressive hate speech.", "Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it and, how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. To the best of our knowledge there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema would provide a valuable insight into several linguistic and cultural differences and bias in hate speech.", "We use Amazon Mechanical Turk to label around 13,000 potentially derogatory tweets in English, French, and Arabic based on the above mentioned aspects and, regard each aspect as a prediction task. Since in natural language processing, there is a peculiar interest in multitask learning, where different tasks can be used to help each other BIBREF7, BIBREF8, BIBREF9, we use a unified model to handle the annotated data in all three languages and five tasks. 
We adopt BIBREF8 as a learning algorithm adapted to loosely related tasks such as our five annotated aspects and, use the Babylon cross-lingual embeddings BIBREF10 to align the three languages. We compare the multilingual multitask learning settings with monolingual multitask, multilingual single-task, and monolingual single-task learning settings respectively. Then, we report the performance results of the different settings and discuss how each task affects the remaining ones. We release our dataset and code to the community to extend research work on multilingual hate speech detection and classification." ], [ "There is little consensus on the difference between profanity and hate speech and, how to define the latter BIBREF17. As shown in Figure FIGREF11, slurs are not an unequivocal indicator of hate speech and can be part of a non-aggressive conversation, while some of the most offensive comments may come in the form of subtle metaphors or sarcasm BIBREF18. Consequently, there is no existing human annotated vocabulary that explicitly reveals the presence of hate speech, which makes the available hate speech corpora sparse and noisy BIBREF19.", "Given the subjectivity and the complexity of such data, annotation schemes have rarely been made fine-grained. Table TABREF10 compares different labelsets that exist in the literature. For instance, BIBREF12 use racist, sexist, and normal as labels; BIBREF13 label their data as hateful, offensive (but not hateful), and neither, while BIBREF16 present an English dataset that records the target category based on which hate speech discriminates against people, such as ethnicity, gender, or sexual orientation and ask human annotators to classify the tweets as hate and non hate. BIBREF15 label their data as offensive, abusive, hateful, aggressive, cyberbullying, spam, and normal. On the other hand, BIBREF20 have chosen to detect ideologies of hate speech counting 40 different hate ideologies among 13 extremist hate groups.", "The detection of hate speech targets is yet another challenging aspect of the annotation. BIBREF21 report the bias that exists in the current datasets towards identity words, such as women, which may later cause false predictions. They propose to debias gender identity word embeddings with additional data for training and tuning their binary classifier. We address this false positive bias problem and the common ambiguity of target detection by asking the annotators to label target attributes such as origin, gender, or religious affiliation within 16 named target groups such as refugees, or immigrants.", "Furthermore, BIBREF22 have reproduced the experiment of BIBREF12 in order to study how hate speech affects the popularity of a tweet, but discovered that some tweets have been deleted. For replication purposes, we provide the community with anonymized tweet texts rather than IDs.", "Non-English hate speech datasets include Italian, German, Dutch, and Arabic corpora. BIBREF6 present a dataset of Italian tweets, in which the annotations capture the degree of intensity of offensive and aggressive tweets, in addition to whether the tweets are ironic and contain stereotypes or not. BIBREF2 have collected more than 500 German tweets against refugees, and annotated them as hateful and not hateful. BIBREF23 detect bullies and victims among youngsters in Dutch comments on AskFM, and classify cyberbullying comments as insults or threats. 
Moreover, BIBREF5 provide a corpus of Arabic sectarian speech.", "Another predominant phenomenon in hate speech corpora is code switching. BIBREF24 present a dataset of code mixed Hindi-English tweets, while BIBREF25 report the presence of Hindi tokens in English data and use multilingual word embeddings to deal with this issue when detecting toxicity. Similarly, we use such embeddings to take advantage of the multilinguality and comparability of our corpora during the classification.", "Our dataset is the first trilingual dataset comprising English, French, and Arabic tweets that encompasses various targets and hostility types. Additionally, to the best of our knowledge, this is the first work that examines how annotators react to hate speech comments.", "To fully exploit the collected annotations, we tested multitask learning on our dataset. Multitask learning BIBREF7 allows neural networks to share parameters with one another and, thus, learn from related tasks. It has been used in different NLP tasks such as parsing BIBREF9, dependency parsing BIBREF26, neural machine translation BIBREF27, sentiment analysis BIBREF28, and other tasks. Multitask learning architectures tackle challenges that include sharing the label space and the question of private and shared space for loosely related tasks BIBREF8, for which techniques may involve a massive space of potential parameter sharing architectures." ], [ "In this section, we present our data collection methodology and annotation process." ], [ "Considering the cultural differences and commonly debated topics in the main geographic regions where English, French, and Arabic are spoken, searching for equivalent terms in the three languages led to different results at first. Therefore, after looking for 1,000 tweets per 15 more or less equivalent phrases in the three languages, we revised our search words three times by questioning the results, adding phrases, and taking off unlikely ones in each of the languages. In fact, we started our data collection by searching for common slurs and demeaning expressions such as “go back to where you come from”. Then, we observed that discussions about controversial topics, such as feminism in general, illegal immigrants in English, Islamo-gauchisme (“Islamic leftism\") in French, or Iran in Arabic were more likely to provoke disputes, comments filled with toxicity and thus, notable insult patterns that we looked for in subsequent search rounds." ], [ "All of the annotated tweets include original tweets only, whose content has been processed by (1) deleting unarguably detectable spam tweets, (2) removing unreadable characters and emojis, and (3) masking the names of mentioned users using @user and potentially enclosed URLs using @url. As a result, annotators had to face the lack of context generated by this normalization process.", "Furthermore, we perceived code-switching in English where Hindi, Spanish, and French tokens appear in the tweets. Some French tweets also contain Romanized dialectal Arabic tokens generated by, most likely, bilingual North African Twitter users. Hence, although we eliminated most of these tweets in order to avoid misleading the annotators, the possibly remaining ones still added noise to the data.", "One more challenge that the annotators and ourselves had to tackle, consisted of Arabic diglossia and switching between different Arabic dialects and Modern Standard Arabic (MSA). 
While MSA represents the standardized and literary variety of Arabic, there are several Arabic dialects spoken in North Africa and the Middle East in use on Twitter. Therefore, we searched for derogatory terms adapted to different circumstances, and acquired an Arabic corpus that combines tweets written in MSA and Arabic dialects. For instance, the tweet shown in Figure FIGREF5 contains a dialectal slur that means “maiden.”" ], [ "We rely on the general public opinion and common linguistic knowledge to assess how people view and react to hate speech. Given the subjectivity and difficulty of the task, we reminded the annotators not to let their personal opinions about the topics being discussed in the tweets influence their annotation decisions.", "Our annotation guidelines explained the fact that offensive comments and hate do not necessarily come in the form of profanity. Since different degrees of discrimination work on the dehumanization of individuals or groups of people in distinct ways, we chose not to annotate the tweets within two or three classes. For instance, a sexist comment can be disrespectful, hateful, or offensive towards women. Our initial labelset was established in conformity with the prevalent anti-social behaviors people tend to deal with. We also chose to address the problem of false positives caused by the misleading use of identity words by asking the annotators to label both the target attributes and groups." ], [ "To prevent scams, we also prepared three annotation guideline forms and three aligned labelsets written in English, French, and Modern Standard Arabic with respect to the language of the tweets to be annotated.", "We requested native speakers to annotate the data and chose annotators with good reputation scores (more than 0.90). We informed the annotator in the guidelines, that in case of noticeable patterns of random labeling on a substantial number of tweets, their work will be rejected and we may have to block them. Since the rejection affects the reputation of the annotators and their chances to get new tasks on Amazon Mechanical Turk, well-reputed annotators are usually reliable. We have divided our corpora into smaller batches on Amazon Mechanical Turk in order to facilitate the analysis of the annotations of the workers and, fairly identify any incoherence patterns possibly caused by the use of an automatic translation system on the tweets, or the repetition of the same annotation schema. If we reject the work of a scam, we notify them, then reassign the tasks to other annotators." ], [ "We initially put samples of 100 tweets in each of the three languages on Amazon Mechanical Turk. We showed the annotators the tweet along with lists of labels describing (a) whether it is direct or indirect hate speech; (b) if the tweet is dangerous, offensive, hateful, disrespectful, confident or supported by some URL, fearful out of ignorance, or other; (c) the target attribute based on which it discriminates against people, specifically, race, ethnicity, nationality, gender, gender identity, sexual orientation, religious affiliation, disability, and other (“other” could refer to political ideologies or social classes.); (d) the name of its target group, and (e) whether the annotators feel anger, sadness, fear or nothing about the tweets.", "Each tweet has been labeled by three annotators. 
We have provided them with additional text fields to fill in with labels or adjectives that would (1) better describe the tweet, (2) describe how they feel about it more accurately, and (3) name the group of people the tweet shows bias against. We kept the most commonly used labels from our initial labelset, took off some of the initial class names and added frequently introduced labels, especially the emotions of the annotators when reading the tweets and the names of the target groups. For instance, after this step, we have ended up merging race, ethnicity, nationality into one label origin given common confusions we noticed and; added disgust and shock to the emotion labelset; and introduced socialists as a target group label since many annotators have suggested these labels." ], [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. Each of the annotated aspects represents a classification task of its own, that could either be evaluated independently, or, as intended in this paper, tested on how it impacts other tasks. The different labels are designed to facilitate the study of the correlations between the explicitness of the tweet, the type of hostility it conveys, its target attribute, the group it dehumanizes, how different people react to it, and the performance of multitask learning on the five tasks. We assigned each tweet to five annotators, then applied majority voting to each of the labeling tasks. Given the numbers of annotators and labels in each annotation sub-task, we allowed multilabel annotations in the most subjective classification tasks, namely the hostility type and the annotator's sentiment labels, in order to keep the right human-like approximations. If there are two annotators agreeing on two labels respectively, we add both labels to the annotation.", "The average Krippendorff scores for inter-annotator agreement (IAA) are 0.153, 0.244, and 0.202 for English, French, and Arabic respectively, which are comparable to existing complex annotations BIBREF6 given the nature of the labeling tasks and the number of labels.", "We present the labelset the annotators refer to, and statistics of our annotated data in the following." ], [ "Annotators determine the explicitness of the tweet by labeling it as direct or indirect speech. This should be based on whether the target is explicitly named, or less easily discernible, especially if the tweet contains humor, metaphor, or figurative speech. Table TABREF20 shows that even when partly using equivalent keywords to search for candidate tweets, there are still significant differences in the resulting data." ], [ "To identify the hostility type of the tweet, we stick to the following conventions: (1) if the tweet sounds dangerous, it should be labeled as abusive; (2) according to the degree to which it spreads hate and the tone its author uses, it can be hateful, offensive or disrespectful; (3) if the tweet expresses or spreads fear out of ignorance against a group of individuals, it should be labeled as fearful; (4) otherwise it should be annotated as normal. We define this task to be multilabel. Table TABREF20 shows that hostility types are relatively consistent across different languages and offensive is the most frequent label." ], [ "After annotating the pilot dataset, we noticed common misconceptions regarding race, ethnicity, and nationality, therefore we merged these attributes into one label origin. 
Then, we asked the annotators to determine whether the tweet insults or discriminates against people based on their (1) origin, (2) religious affiliation, (3) gender, (4) sexual orientation, (5) special needs or (6) other. Table TABREF20 shows there are fewer tweets targeting disability in Arabic compared to English and French and no tweets insulting people based on their sexual orientation which may be due to the fact that the labels of gender, gender identity, and sexual orientation use almost the same wording. On the other hand, French contains a small number of tweets targeting people based on their gender in comparison to English and Arabic. We have observed significant differences in terms of target attributes in the three languages. More data may help us examine the problems affecting targets of different linguistic backgrounds." ], [ "We determined 16 common target groups tagged by the annotators after the first annotation step. The annotators had to decide on whether the tweet is aimed at women, people of African descent, Hispanics, gay people, Asians, Arabs, immigrants in general, refugees; people of different religious affiliations such as Hindu, Christian, Jewish people, and Muslims; or from political ideologies socialists, and others. We also provided the annotators with a category to cover hate directed towards one individual, which cannot be generalized. In case the tweet targets more than one group of people, the annotators should choose the group which would be the most affected by it according to them. Table TABREF10 shows the counts of the five categories out of 16 that commonly occur in the three languages. In fact, most of the tweets target individuals or fall into the “other” category. In the latter case, they may target people with different political views such as liberals or conservatives in English and French, or specific ethnic groups such as Kurdish people in Arabic. English tweets tend to have more tweets targeting people with special needs, due to common language-specific demeaning terms used in conversations where people insult one another. Arabic tweets contain more hateful comments towards women for the same reason. On the other hand, the French corpus contains more tweets that are offensive towards African people, due to hateful comments generated by debates about immigrants." ], [ "We claim that the choice of a suitable emotion representation model is key to this sub-task, given the subjective nature and social ground of the annotator's sentiment analysis. After collecting the annotation results of the pilot dataset regarding how people feel about the tweets, and observing the added categories, we adopted a range of sentiments that are in the negative and neutral scales of the hourglass of emotions introduced by BIBREF29. This model includes sentiments that are connected to objectively assessed natural language opinions, and excludes what is known as self-conscious or moral emotions such as shame and guilt. Our labels include shock, sadness, disgust, anger, fear, confusion in case of ambivalence, and indifference. This is the second multilabel task of our model.", "Table TABREF20 shows more tweets making the annotators feel disgusted and angry in English, while annotators show more indifference in both French and Arabic. A relatively more frequent label in both French and Arabic is shock, therefore reflecting what some of the annotators were feeling during the labeling process." 
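The dataset-construction paragraphs above state that each tweet was labeled by five annotators and aggregated by majority voting, and that for the two multilabel tasks (hostility type and annotator's sentiment) any label on which at least two annotators agree is kept. A minimal sketch of that aggregation rule follows; it is an assumption-based illustration (the helper name `aggregate_labels` and the example annotations are hypothetical), not the authors' released code.

```python
# Hedged sketch of the label-aggregation rule described above.
from collections import Counter

def aggregate_labels(annotations, multilabel=False, min_votes=2):
    """Aggregate one tweet's labels from five annotators.

    annotations: list of label lists, one per annotator.
    multilabel: for hostility type / annotator sentiment, keep every label
    with at least `min_votes` votes; otherwise take the majority label.
    """
    counts = Counter(label for ann in annotations for label in ann)
    if multilabel:
        return sorted(label for label, c in counts.items() if c >= min_votes)
    # Majority vote; a real implementation would need an explicit tie-breaking rule.
    label, _ = counts.most_common(1)[0]
    return [label]

# Example: hypothetical hostility-type annotations from five workers.
print(aggregate_labels([["offensive"], ["offensive", "hateful"], ["hateful"],
                        ["offensive"], ["normal"]], multilabel=True))
# -> ['hateful', 'offensive']
```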
], [ "We report and discuss the results of five classification tasks: (1) the directness of the speech, (2) the hostility type of the tweet, (3) the discriminating target attribute, (4) the target group, and (5) the annotator's sentiment." ], [ "We compare both traditional baselines using bag-of-words (BOW) as features on Logistic regression (LR), and deep learning based methods.", "For deep learning based models, we run bidirectional LSTM (biLSTM) models with one hidden layer on each of the classification tasks. Deeper BiLSTM models performed poorly due to the size of the tweets. We chose to use Sluice networks BIBREF8 since they are suitable for loosely related tasks such as the annotated aspects of our corpora.", "We test different models, namely single task single language (STSL), single task multilingual (STML), and multitask multilingual models (MTML) on our dataset. In multilingual settings, we tested Babylon multilingual word embeddings BIBREF10 and MUSE BIBREF30 on the different tasks. We use Babylon embeddings since they appear to outperform MUSE on our data.", "Sluice networks BIBREF8 learn the weights of the neural networks sharing parameters (sluices) jointly with the rest of the model and share an embedding layer, Babylon embeddings in our case, that associates the elements of an input sequence. We use a standard 1-layer BiLSTM partitioned into two subspaces, a shared subspace and a private one, forced to be orthogonal through a regularization penalty term in the loss function in order to enable the multitask network to learn both task-specific and shared representations. The hidden layer has a dimension of 200, the learning rate is initially set to 0.1 with a learning rate decay, and we use the DyNet BIBREF31 automatic minibatch function to speed-up the computation. We initialize the cross-stitch unit to imbalanced, set the standard deviation of the Gaussian noise to 2, and use simple stochastic gradient descent (SGD) as the optimizer.", "All compared methods use the same split as train:dev:test=8:1:1 and the reported results are based on the test set. We use the dev set to tune the threshold for each binary classification problem in the multilabel classification settings of each task." ], [ "We report both the micro and macro-F1 scores of the different classification tasks in Tables TABREF27 and TABREF28. Majority refers to labeling based on the majority label, LR to logistic regression, STSL to single task single language models, STML to single task multilingual models, and MTML to multitask multilingual models." ], [ "STSL performs the best among all models on the directness classification, and it is also consistent in both micro and macro-F1 scores. This is due to the fact that the directness has only two labels and multilabeling is not allowed in this task. Tasks involving imbalanced data, multiclass and multilabel annotations harm the performance of the directness in multitask settings.", "Since macro-F1 is the average of all F1 scores of individual labels, all deep learning models have high macro-F1 scores in English which indicates that they are particularly good at classifying the direct class. STSL is also comparable or better than traditional BOW feature-based classifiers when performed on other tasks in terms of micro-F1 and for most of the macro-F1 scores. This shows the power of the deep learning approach." ], [ "Except for the directness, MTSL usually outperforms STSL or is comparable to it. 
When we jointly train each task on the three languages, the performance decreases in most cases, other than the target group classification tasks. This may be due to the difference in label distributions across languages. Yet, multilingual training of the target group classification task improves in all languages. Since the target group classification task involves 16 labels, the amount of data annotated for each label is lower than in other tasks. Hence, when aggregating annotated data in different languages, the size of the training data also increases, due to the relative regularity of identification words of different groups in all three languages in comparison to other tasks." ], [ "MTML settings do not lead to a big improvement which may be due to the class imbalance, multilabel tasks, and the difference in the nature of the tasks. In order to inspect which tasks hurt or help one another, we trained multilingual models for pairwise tasks such as (group, target), (hostility, annotator's sentiment), (hostility, target), (hostility, group), (annotator's sentiment, target) and (annotator's sentiment, group). We noticed that when trained jointly, the target attribute slightly improves the performance of the tweet's hostility type classification by 0.03,0.05 and 0.01 better than the best reported scores in English, French, and Arabic, respectively. When target groups and attributes are trained jointly, the macro F-score of the target group classification in Arabic improves by 0.25 and when we train the tweet's hostility type within the annotator's sentiment, we improve the macro F-score of Arabic by 0.02. We believe that we can take advantage of the correlations between target attributes and groups along with other tasks, to set logic rules and develop better multilingual and multitask settings." ], [ "In this paper, we presented a multilingual hate speech dataset of English, French, and Arabic tweets. We analyzed in details the difficulties related to the collection and annotation of this dataset. We performed multilingual and multitask learning on our corpora and showed that deep learning models perform better than traditional BOW-based models in most of the multilabel classification tasks. Multilingual multitask learning also helped tasks where each label had less annotated data associated with it. Better tuned deep learning settings in our multilingual and multitask models would be expected to outperform the existing state-of-the-art embeddings and algorithms applied to our data. The different annotation labels and comparable corpora would help us perform transfer learning and investigate how multimodal information on the tweets, additional unlabeled data, label transformation, and label information sharing may boost the classification performance in the future." ], [ "This paper was supported by the Early Career Scheme (ECS, No. 26206717) from Research Grants Council in Hong Kong, and by postgraduate studentships from the Computer Science and Engineering department of the Hong Kong University of Science and Technology." 
] ], "section_name": [ "Introduction", "Related Work", "Dataset", "Dataset ::: Data Collection", "Dataset ::: Linguistic Challenges", "Dataset ::: Annotation Process", "Dataset ::: Annotation Process ::: Avoiding scams", "Dataset ::: Pilot Dataset", "Dataset ::: Final Dataset", "Dataset ::: Final Dataset ::: Directness label", "Dataset ::: Final Dataset ::: Hostility type", "Dataset ::: Final Dataset ::: Target attribute", "Dataset ::: Final Dataset ::: Target group", "Dataset ::: Final Dataset ::: Sentiment of the annotator", "Experiments", "Experiments ::: Models", "Experiments ::: Results and Analysis", "Experiments ::: Results and Analysis ::: STSL", "Experiments ::: Results and Analysis ::: MTSL", "Experiments ::: Results and Analysis ::: MTML", "Conclusion", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "424b2363d7948b6344f14f4ac1a84aa6f07b6f33", "7d897764b97144b0cb79437baf6505a3f2f33768", "c8590223239d93afbb3b63408da8759dd74cb701" ], "answer": [ { "evidence": [ "We rely on the general public opinion and common linguistic knowledge to assess how people view and react to hate speech. Given the subjectivity and difficulty of the task, we reminded the annotators not to let their personal opinions about the topics being discussed in the tweets influence their annotation decisions." ], "extractive_spans": [ "rely on the general public opinion and common linguistic knowledge to assess how people view and react to hate speech" ], "free_form_answer": "", "highlighted_evidence": [ "We rely on the general public opinion and common linguistic knowledge to assess how people view and react to hate speech." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it and, how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. To the best of our knowledge there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema would provide a valuable insight into several linguistic and cultural differences and bias in hate speech." ], "extractive_spans": [], "free_form_answer": "Hate speech is a text that contains one or more of the following aspects: directness, offensiveness, targeting a group or individual based on specific attributes, overall negativity.", "highlighted_evidence": [ "We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it and, how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. 
Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. To the best of our knowledge there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema would provide a valuable insight into several linguistic and cultural differences and bias in hate speech." ], "extractive_spans": [ " in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis." ], "free_form_answer": "", "highlighted_evidence": [ " We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "01448f46cbd6b9af4b94ff9cf65a7d60bdcfa79c", "2b55ef43ccce0cb2f8e62c7a740caa4809c2ab13", "6fff26727f82935e76bf6367e32ecd4201b33cfe", "9a21045e1c0436d63272e893f6398e075d983624" ], "answer": [ { "evidence": [ "Our dataset is the first trilingual dataset comprising English, French, and Arabic tweets that encompasses various targets and hostility types. Additionally, to the best of our knowledge, this is the first work that examines how annotators react to hate speech comments." ], "extractive_spans": [ "English", "French", "Arabic" ], "free_form_answer": "", "highlighted_evidence": [ "Our dataset is the first trilingual dataset comprising English, French, and Arabic tweets that encompasses various targets and hostility types. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our dataset is the first trilingual dataset comprising English, French, and Arabic tweets that encompasses various targets and hostility types. Additionally, to the best of our knowledge, this is the first work that examines how annotators react to hate speech comments." ], "extractive_spans": [ "English", "French", "Arabic" ], "free_form_answer": "", "highlighted_evidence": [ "Our dataset is the first trilingual dataset comprising English, French, and Arabic tweets that encompasses various targets and hostility types. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. Each of the annotated aspects represents a classification task of its own, that could either be evaluated independently, or, as intended in this paper, tested on how it impacts other tasks. The different labels are designed to facilitate the study of the correlations between the explicitness of the tweet, the type of hostility it conveys, its target attribute, the group it dehumanizes, how different people react to it, and the performance of multitask learning on the five tasks. We assigned each tweet to five annotators, then applied majority voting to each of the labeling tasks. 
Given the numbers of annotators and labels in each annotation sub-task, we allowed multilabel annotations in the most subjective classification tasks, namely the hostility type and the annotator's sentiment labels, in order to keep the right human-like approximations. If there are two annotators agreeing on two labels respectively, we add both labels to the annotation." ], "extractive_spans": [ "English", "French", "Arabic" ], "free_form_answer": "", "highlighted_evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use Amazon Mechanical Turk to label around 13,000 potentially derogatory tweets in English, French, and Arabic based on the above mentioned aspects and, regard each aspect as a prediction task. Since in natural language processing, there is a peculiar interest in multitask learning, where different tasks can be used to help each other BIBREF7, BIBREF8, BIBREF9, we use a unified model to handle the annotated data in all three languages and five tasks. We adopt BIBREF8 as a learning algorithm adapted to loosely related tasks such as our five annotated aspects and, use the Babylon cross-lingual embeddings BIBREF10 to align the three languages. We compare the multilingual multitask learning settings with monolingual multitask, multilingual single-task, and monolingual single-task learning settings respectively. Then, we report the performance results of the different settings and discuss how each task affects the remaining ones. We release our dataset and code to the community to extend research work on multilingual hate speech detection and classification." ], "extractive_spans": [ "English, French, and Arabic " ], "free_form_answer": "", "highlighted_evidence": [ "We use Amazon Mechanical Turk to label around 13,000 potentially derogatory tweets in English, French, and Arabic based on the above mentioned aspects and, regard each aspect as a prediction task. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "16b602b30653bcd7ac8b51f86442e3baa1cef7f3", "ca5058ebd3c569b04caec881b7a8c5d26b967197", "ed32bbb0d41770d64e5a834cf47177c3fb7c10aa", "f59f7723e58ff79a38322f802d70852ec049d041" ], "answer": [ { "evidence": [ "Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it and, how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. 
To the best of our knowledge there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema would provide a valuable insight into several linguistic and cultural differences and bias in hate speech." ], "extractive_spans": [ " (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments" ], "free_form_answer": "", "highlighted_evidence": [ "We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it and, how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. To the best of our knowledge there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema would provide a valuable insight into several linguistic and cultural differences and bias in hate speech." ], "extractive_spans": [ "whether the text is direct or indirect", "if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal", "the attribute based on which it discriminates against an individual or a group of people", "the name of this group", " how the annotators feel about its content within a range of negative to neutral sentiments" ], "free_form_answer": "", "highlighted_evidence": [ "Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it and, how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments. To the best of our knowledge there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema would provide a valuable insight into several linguistic and cultural differences and bias in hate speech." ], "extractive_spans": [ "(a) whether the text is direct or indirect", "(b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal", "(c) the attribute based on which it discriminates against an individual or a group of people", "(d) the name of this group", "(e) how the annotators feel about its content within a range of negative to neutral sentiments" ], "free_form_answer": "", "highlighted_evidence": [ "Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content within a range of negative to neutral sentiments." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We present the labelset the annotators refer to, and statistics of our annotated data in the following.", "Dataset ::: Final Dataset ::: Directness label", "Annotators determine the explicitness of the tweet by labeling it as direct or indirect speech. This should be based on whether the target is explicitly named, or less easily discernible, especially if the tweet contains humor, metaphor, or figurative speech. Table TABREF20 shows that even when partly using equivalent keywords to search for candidate tweets, there are still significant differences in the resulting data.", "Dataset ::: Final Dataset ::: Hostility type", "To identify the hostility type of the tweet, we stick to the following conventions: (1) if the tweet sounds dangerous, it should be labeled as abusive; (2) according to the degree to which it spreads hate and the tone its author uses, it can be hateful, offensive or disrespectful; (3) if the tweet expresses or spreads fear out of ignorance against a group of individuals, it should be labeled as fearful; (4) otherwise it should be annotated as normal. We define this task to be multilabel. 
Table TABREF20 shows that hostility types are relatively consistent across different languages and offensive is the most frequent label.", "Dataset ::: Final Dataset ::: Target attribute", "After annotating the pilot dataset, we noticed common misconceptions regarding race, ethnicity, and nationality, therefore we merged these attributes into one label origin. Then, we asked the annotators to determine whether the tweet insults or discriminates against people based on their (1) origin, (2) religious affiliation, (3) gender, (4) sexual orientation, (5) special needs or (6) other. Table TABREF20 shows there are fewer tweets targeting disability in Arabic compared to English and French and no tweets insulting people based on their sexual orientation which may be due to the fact that the labels of gender, gender identity, and sexual orientation use almost the same wording. On the other hand, French contains a small number of tweets targeting people based on their gender in comparison to English and Arabic. We have observed significant differences in terms of target attributes in the three languages. More data may help us examine the problems affecting targets of different linguistic backgrounds.", "Dataset ::: Final Dataset ::: Target group", "We determined 16 common target groups tagged by the annotators after the first annotation step. The annotators had to decide on whether the tweet is aimed at women, people of African descent, Hispanics, gay people, Asians, Arabs, immigrants in general, refugees; people of different religious affiliations such as Hindu, Christian, Jewish people, and Muslims; or from political ideologies socialists, and others. We also provided the annotators with a category to cover hate directed towards one individual, which cannot be generalized. In case the tweet targets more than one group of people, the annotators should choose the group which would be the most affected by it according to them. Table TABREF10 shows the counts of the five categories out of 16 that commonly occur in the three languages. In fact, most of the tweets target individuals or fall into the “other” category. In the latter case, they may target people with different political views such as liberals or conservatives in English and French, or specific ethnic groups such as Kurdish people in Arabic. English tweets tend to have more tweets targeting people with special needs, due to common language-specific demeaning terms used in conversations where people insult one another. Arabic tweets contain more hateful comments towards women for the same reason. On the other hand, the French corpus contains more tweets that are offensive towards African people, due to hateful comments generated by debates about immigrants.", "Dataset ::: Final Dataset ::: Sentiment of the annotator", "We claim that the choice of a suitable emotion representation model is key to this sub-task, given the subjective nature and social ground of the annotator's sentiment analysis. After collecting the annotation results of the pilot dataset regarding how people feel about the tweets, and observing the added categories, we adopted a range of sentiments that are in the negative and neutral scales of the hourglass of emotions introduced by BIBREF29. This model includes sentiments that are connected to objectively assessed natural language opinions, and excludes what is known as self-conscious or moral emotions such as shame and guilt. 
Our labels include shock, sadness, disgust, anger, fear, confusion in case of ambivalence, and indifference. This is the second multilabel task of our model." ], "extractive_spans": [ "Directness", "Hostility", "Target group", "Target", "Sentiment of the annotator" ], "free_form_answer": "", "highlighted_evidence": [ "We present the labelset the annotators refer to, and statistics of our annotated data in the following.\n\nDataset ::: Final Dataset ::: Directness label\nAnnotators determine the explicitness of the tweet by labeling it as direct or indirect speech. ", "Dataset ::: Final Dataset ::: Hostility type\nTo identify the hostility type of the tweet, we stick to the following conventions: (1) if the tweet sounds dangerous, it should be labeled as abusive; (2) according to the degree to which it spreads hate and the tone its author uses, it can be hateful, offensive or disrespectful; (3) if the tweet expresses or spreads fear out of ignorance against a group of individuals, it should be labeled as fearful; (4) otherwise it should be annotated as normal.", "Dataset ::: Final Dataset ::: Target attribute\nAfter annotating the pilot dataset, we noticed common misconceptions regarding race, ethnicity, and nationality, therefore we merged these attributes into one label origin. ", "Dataset ::: Final Dataset ::: Target group\nWe determined 16 common target groups tagged by the annotators after the first annotation step. ", "Dataset ::: Final Dataset ::: Sentiment of the annotator\nWe claim that the choice of a suitable emotion representation model is key to this sub-task, given the subjective nature and social ground of the annotator's sentiment analysis. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "240af42531a51ebcb5e0d7d631ab0ade8eb640c6", "5e339791be22a55000871cfc5199c9485917d217", "b01c83aa5bbb89397c79c5017042a26d549c2658", "c191ca71bfa6bb40fd054709af314053726ce88a" ], "answer": [ { "evidence": [ "We use Amazon Mechanical Turk to label around 13,000 potentially derogatory tweets in English, French, and Arabic based on the above mentioned aspects and, regard each aspect as a prediction task. Since in natural language processing, there is a peculiar interest in multitask learning, where different tasks can be used to help each other BIBREF7, BIBREF8, BIBREF9, we use a unified model to handle the annotated data in all three languages and five tasks. We adopt BIBREF8 as a learning algorithm adapted to loosely related tasks such as our five annotated aspects and, use the Babylon cross-lingual embeddings BIBREF10 to align the three languages. We compare the multilingual multitask learning settings with monolingual multitask, multilingual single-task, and monolingual single-task learning settings respectively. Then, we report the performance results of the different settings and discuss how each task affects the remaining ones. We release our dataset and code to the community to extend research work on multilingual hate speech detection and classification." ], "extractive_spans": [], "free_form_answer": "13 000 tweets", "highlighted_evidence": [ "We use Amazon Mechanical Turk to label around 13,000 potentially derogatory tweets in English, French, and Arabic based on the above mentioned aspects and, regard each aspect as a prediction task." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. Each of the annotated aspects represents a classification task of its own, that could either be evaluated independently, or, as intended in this paper, tested on how it impacts other tasks. The different labels are designed to facilitate the study of the correlations between the explicitness of the tweet, the type of hostility it conveys, its target attribute, the group it dehumanizes, how different people react to it, and the performance of multitask learning on the five tasks. We assigned each tweet to five annotators, then applied majority voting to each of the labeling tasks. Given the numbers of annotators and labels in each annotation sub-task, we allowed multilabel annotations in the most subjective classification tasks, namely the hostility type and the annotator's sentiment labels, in order to keep the right human-like approximations. If there are two annotators agreeing on two labels respectively, we add both labels to the annotation." ], "extractive_spans": [], "free_form_answer": "13014", "highlighted_evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. Each of the annotated aspects represents a classification task of its own, that could either be evaluated independently, or, as intended in this paper, tested on how it impacts other tasks. The different labels are designed to facilitate the study of the correlations between the explicitness of the tweet, the type of hostility it conveys, its target attribute, the group it dehumanizes, how different people react to it, and the performance of multitask learning on the five tasks. We assigned each tweet to five annotators, then applied majority voting to each of the labeling tasks. Given the numbers of annotators and labels in each annotation sub-task, we allowed multilabel annotations in the most subjective classification tasks, namely the hostility type and the annotator's sentiment labels, in order to keep the right human-like approximations. If there are two annotators agreeing on two labels respectively, we add both labels to the annotation." ], "extractive_spans": [ "5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets" ], "free_form_answer": "", "highlighted_evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets. Each of the annotated aspects represents a classification task of its own, that could either be evaluated independently, or, as intended in this paper, tested on how it impacts other tasks. 
The different labels are designed to facilitate the study of the correlations between the explicitness of the tweet, the type of hostility it conveys, its target attribute, the group it dehumanizes, how different people react to it, and the performance of multitask learning on the five tasks. We assigned each tweet to five annotators, then applied majority voting to each of the labeling tasks. Given the numbers of annotators and labels in each annotation sub-task, we allowed multilabel annotations in the most subjective classification tasks, namely the hostility type and the annotator's sentiment labels, in order to keep the right human-like approximations. If there are two annotators agreeing on two labels respectively, we add both labels to the annotation." ], "extractive_spans": [ "5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets" ], "free_form_answer": "", "highlighted_evidence": [ "The final dataset is composed of a pilot corpus of 100 tweets per language, and comparable corpora of 5,647 English tweets, 4,014 French tweets, and 3,353 Arabic tweets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What is their definition of hate speech?", "What languages does the new dataset contain?", "What aspects are considered?", "How big is their dataset?" ], "question_id": [ "ed44f7e698d6124cb86791841d02fc6f8b4d862a", "d9e7633004ed1bc1ee45be58409bcc1fa6db59b2", "c58ef13abe5fa91a761362ca962d7290312c74e4", "9ef0d2365bde0d18054511fbb53cec5fa2cda5ee" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "Hate speech", "Hate speech", "Hate speech", "Hate speech" ], "topic_background": [ "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1: Annotation examples in our dataset.", "Figure 2: Three tweets in which (1) the first one accuses immigrants of harming society without using any direct insult; (2) the second insults a Hispanic person using a slur; and (3) the third one uses slurs to give a personal account. This shows that profanity is not a clear indicator of the presence of hate speech.", "Table 1: Comparative table of some of the available hate speech and abusive language corpora in terms of labels and sizes.", "Table 2: The label distributions of each task. The counts of direct and indirect hate speech include all tweets except those that are single labeled as “normal”. Tweet and annotator’s sentiment (Annotator) are multilabel classification tasks, while target attribute (Target) and target group (Group) are not.", "Table 3: Full evaluation scores of the only binary classification task where the single task single language model consistently outperforms multilingual multitask models.", "Table 4: Full evaluation of tasks where multilingual and multitask models outperform on average single task single language model on four different tasks." ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "3-Table1-1.png", "5-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png" ] }
[ "What is their definition of hate speech?", "How big is their dataset?" ]
[ [ "1908.11049-Dataset ::: Annotation Process-0", "1908.11049-Introduction-2" ], [ "1908.11049-Introduction-3", "1908.11049-Dataset ::: Final Dataset-0" ] ]
[ "Hate speech is a text that contains one or more of the following aspects: directness, offensiveness, targeting a group or individual based on specific attributes, overall negativity.", "13014" ]
13
1907.10676
Semantic Web for Machine Translation: Challenges and Directions
A large number of machine translation approaches have recently been developed to facilitate the fluid migration of content across languages. However, the literature suggests that many obstacles must still be dealt with to achieve better automatic translations. One of these obstacles is lexical and syntactic ambiguity. A promising way of overcoming this problem is using Semantic Web technologies. This article is an extended abstract of our systematic review on machine translation approaches that rely on Semantic Web technologies for improving the translation of texts. Overall, we present the challenges and opportunities in the use of Semantic Web technologies in Machine Translation. Moreover, our research suggests that while Semantic Web technologies can enhance the quality of machine translation outputs for various problems, the combination of both is still in its infancy.
{ "paragraphs": [ [ "Alongside increasing globalization comes a greater need for readers to understand texts in languages foreign to them. For example, approximately 48% of the pages on the Web are not available in English. The technological progress of recent decades has made both the distribution and access to content in different languages ever simpler. Translation aims to support users who need to access content in a language in which they are not fluent BIBREF0 .", "However, translation is a difficult task due to the complexity of natural languages and their structure BIBREF0 . In addition, manual translation does not scale to the magnitude of the Web. One remedy for this problem is MT. The main goal of MT is to enable people to assess content in languages other than the languages in which they are fluent BIBREF1 . From a formal point of view, this means that the goal of MT is to transfer the semantics of text from an input language to an output language BIBREF2 .", "Although MT systems are now popular on the Web, they still generate a large number of incorrect translations. Recently, Popović BIBREF3 has classified five types of errors that still remain in MT systems. According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. Thus, addressing these barriers is a key challenge for modern translation systems. A large number of MT approaches have been developed over the years that could potentially serve as a remedy. For instance, translators began by using methodologies based on linguistics which led to the family of RBMT. However, RBMT systems have a critical drawback in their reliance on manually crafted rules, thus making the development of new translation modules for different languages even more difficult.", "SMT and EBMT were developed to deal with the scalability issue in RBMT BIBREF4 , a necessary characteristic of MT systems that must deal with data at Web scale. Presently, these approaches have begun to address the drawbacks of rule-based approaches. However, some problems that had already been solved for linguistics based methods reappeared. The majority of these problems are connected to the issue of ambiguity, including syntactic and semantic variations BIBREF0 . Nowadays, a novel SMT paradigm has arisen called NMT which relies on NN algorithms. NMT has been achieving impressive results and is now the state-of-the-art in MT approaches. However, NMT is still a statistical approach sharing some semantic drawbacks from other well-defined SMT approaches BIBREF5 .", "One possible solution to address the remaining issues of MT lies in the use of SWT, which have emerged over recent decades as a paradigm to make the semantics of content explicit so that it can be used by machines. It is believed that explicit semantic knowledge made available through these technologies can empower MT systems to supply translations with significantly better quality while remaining scalable. In particular, the disambiguated knowledge about real-world entities, their properties and their relationships made available on the LD Web can potentially be used to infer the right meaning of ambiguous sentences or words.", "According to our survey BIBREF6 , the obvious opportunity of using SWT for MT has already been studied by a number of approaches, especially w.r.t. the issue of ambiguity. In this paper, we present the challenges and opportunities in the use of SWT in MT for translating texts." 
], [ "The idea of using a structured KB in MT systems started in the 90s with the work of Knight and Luk BIBREF7 . Still, only a few researchers have designed different strategies for benefiting of structured knowledge in MT architectures BIBREF8 . Recently, the idea of using KG into MT systems has gained renewed attention. Du et al. BIBREF9 created an approach to address the problem of OOV words by using BabelNet BIBREF10 . Their approach applies different methods of using BabelNet. In summary, they create additional training data and apply a post-editing technique, which replaces the OOV words while querying BabelNet. Shi et al. BIBREF11 have recently built a semantic embedding model reliant upon a specific KB to be used in NMT systems. The model relies on semantic embeddings to encode the key information contained in words to translate the meaning of sentences correctly. The work consists of mapping a source sentence to triples, which are then used to extract the intrinsic meaning of words to generate a target sentence. This mapping results in a semantic embedding model containing KB triples, which are responsible for gathering the key information of each word in the sentences." ], [ "The most problematic unresolved MT challenges, from our point of view, which are still experienced by the aforementioned MT approaches are the following:", "Additionally, there are five MT open challenges posed by Lopez and Post BIBREF12 which we describe more generically below.", "(1) Excessive focus on English and European languages as one of the involved languages in MT approaches and poor research on low-resource language pairs such as African and/or South American languages. (2) The limitations of SMT approaches for translating across domains. Most MT systems exhibit good performance on law and the legislative domains due to the large amount of data provided by the European Union. In contrast, translations performed on sports and life-hacks commonly fail, because of the lack of training data. (3) How to translate the huge amount of data from social networks that uniquely deal with no-standard speech texts from users (e.g., tweets). (4) The difficult translations among morphologically rich languages. This challenge shares the same problem with the first one, namely that most research work focuses on English as one of the involved languages. Therefore, MT systems which translate content between, for instance, Arabic and Spanish are rare. (5) For the speech translation task, the parallel data for training differs widely from real user speech.", "The challenges above are clearly not independent, which means that addressing one of them can have an impact on the others. Since NMT has shown impressive results on reordering, the main problem turns out to be the disambiguation process (both syntactically and semantically) in SMT approaches BIBREF0 ." ], [ "Based on the surveyed works on our research BIBREF6 , SWT have mostly been applied at the semantic analysis step, rather than at the other stages of the translation process, due to their ability to deal with concepts behind the words and provide knowledge about them. As SWT have developed, they have increasingly been able to resolve some of the open challenges of MT. They may be applied in different ways according to each MT approach.", "Disambiguation. Human language is very ambiguous. Most words have multiple interpretations depending on the context in which they are mentioned. 
In the MT field, WSD techniques are concerned with finding the respective meaning and correct translation of these ambiguous words in target languages. This ambiguity problem was identified early in MT development. In 1960 Bar-Hillel BIBREF1 stated that an MT system is not able to find the right meaning without specific knowledge. Although the ambiguity problem has been lessened significantly since the contribution of Carpuat and subsequent works BIBREF13 , this problem still remains a challenge. As seen in Moussallem et al. BIBREF6 , MT systems still try to resolve this problem by using domain specific language models to prefer domain specific expressions, but when translating a highly ambiguous sentence or a short text which covers multiple domains, the language models are not enough.", "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. Second, SWT were applied to the output translation in the target language as a post-editing technique. Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation.", "The real benefit of SW comes from its capacity to provide unseen knowledge about emergent data, which appears every day. Therefore, we suggest performing the topic-modelling technique over the source text to provide a necessary context before translation. Instead of applying the topic-modelling over the entire text, we would follow the principle of communication (i.e., 3 to 5 sentences describing an idea) and define a context for each piece of text. Thus, at the execution of a translation model in a given SMT, we would focus on every word which may be a homonymous or polysemous word. For every word which has more than one translation, a SPARQL query would be required to find the best combination in the current context. Thus, at the translation phase, the disambiguation algorithm could search for an appropriate word using different SW resources such as DBpedia, in consideration of the context provided by the topic modelling. The goal is to exploit the use of more than one SW resource at once for improving the translation of ambiguous terms. The use of two or more SW resources simultaneously has not yet been investigated.", "On the other hand, there is also a syntactic disambiguation problem which as yet lacks good solutions. For instance, the English language contains irregular verbs like “set” or “put”. Depending on the structure of a sentence, it is not possible to recognize their verbal tense, e.g., present or past tense. Even statistical approaches trained on huge corpora may fail to find the exact meaning of some words due to the structure of the language. Although this challenge has successfully been dealt with since NMT has been used for European languages, implementations of NMT for some non-European languages have not been fully exploited (e.g., Brazilian Portuguese, Latin-America Spanish, Zulu, Hindi) due to the lack of large bilingual data sets on the Web to be trained on. Thus, we suggest gathering relationships among properties within an ontology by using the reasoning technique for handling this issue. 
For instance, the sentence “Anna usually put her notebook on the table for studying\" may be annotated using a certain vocabulary and represented by triples. Thus, the verb “put\", which is represented by a predicate that groups essential information about the verbal tense, may support the generation step of a given MT system. This sentence usually fails when translated to rich morphological languages, such as Brazilian-Portuguese and Arabic, for which the verb influences the translation of “usually\" to the past tense. In this case, a reasoning technique may support the problem of finding a certain rule behind relationships between source and target texts in the alignment phase (training phase). However, a well-known problem of reasoners is the poor run-time performance. Therefore, this run-time deficiency needs to be addressed or minimized before implementing reasoners successfully into MT systems.", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. This method helps to categorize and summarize text, but also contributes to the disambiguation of words in texts. The primary issue in MT systems is caused by common words from a source language that are used as proper nouns in a target language. For instance, the word “Kiwi\" is a family name in New Zealand which comes from the Māori culture, but it also can be a fruit, a bird, or a computer program. Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "Although MT systems include good recognition methods, they still need improvement. When an MT system does not recognize an entity, the translation output often has poor quality, immediately deteriorating the target text readability. Therefore, we suggest recognizing such entities before the translation process and first linking them to a reference knowledge base. Afterwards, the type of entities would be agglutinated along with their labels and their translations from a reference knowledge base. For instance, in NMT, the idea is to include in the training set for the aforementioned word “Kiwi\", “Kiwi.animal.link, Kiwi.person.link, Kiwi.food.link\" then finally to align them with the translations in the target text. For example, in SMT, the additional information can be included by XML or by an additional model. In contrast, in NMT, this additional information can be used as parameters in the training phase. This method would also contribute to OOV mistakes regarding names. This idea is supported by BIBREF11 where the authors encoded the types of entities along with the words to improve the translation of sentences between Chinese-English. Recently, Moussallem et al. BIBREF15 have shown promising results by applying a multilingual entity linking algorithm along with knowledge graph embeddings into the translation phase of a neural machine translation model for improving the translation of entities in texts. Their approach achieved significant and consistent improvements of +3 BLEU, METEOR and CHRF3 on average on the newstest datasets between 2014 and 2018 for WMT English-German translation task.", "Non-standard speech. 
The non-standard language problem is a rather important one in the MT field. Many people use the colloquial form to speak and write to each other on social networks. Thus, when MT systems are applied on this context, the input text frequently contains slang, MWE, and unreasonable abbreviations such as “Idr = I don't remember.” and “cya = see you”. Additionally, idioms contribute to this problem, decreasing the translation quality. Idioms often have an entirely different meaning than their separated word meanings. Consequently, most translation outputs of such expressions contain errors. For a good translation, the MT system needs to recognize such slang and try to map it to the target language. Some SMT systems like Google or Bing have recognition patterns over non-standard speech from old translations through the Web using SMT approaches. In rare cases SMT can solve this problem, but considering that new idiomatic expressions appear every day and most of them are isolated sentences, this challenge still remains open. Moreover, each person has their own speaking form.", "Therefore, we suggest that user characteristics can be applied as context for solving the non-standard language problem. These characteristics can be extracted from social media or user logs and stored as user properties using SWT, e.g., FOAF vocabulary. These ontologies have properties which would help identify the birth place or the interests of a given user. For instance, the properties foaf:interest and sioc:topic can be used to describe a given person's topics of interest. If the person is a computer scientist and the model contains topics such as “Information Technology\" and “Sports\", the SPARQL queries would search for terms inserted in this context which are ambiguous. Furthermore, the property foaf:based_near may support the problem of idioms. Assuming that a user is located in a certain part of Russia and he is reading an English web page which contains some idioms, this property may be used to gather appropriate translations of idioms from English to Russian using a given RDF KB. Therefore, an MT system can be adapted to a user by using specific data about him in RDF along with given KBs. Recently, Moussallem et al BIBREF16 have released a multilingual linked idioms dataset as a first part of supporting the investigation of this suggestion. The dataset contains idioms in 5 languages and are represented by knowledge graphs which facilitates the retrieval and inference of translations among the idioms.", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems. For instance, some content provided by the German Wikipedia version are not contained in the Portuguese one. Therefore, the semantic structure (i.e., triples) provided by DBpedia versions of these respective Wikipedia versions would be able to help translate from German to Portuguese. For example, the terms contained in triples would be translated to a given target language using a dictionary containing domain words. This dictionary may be acquired in two different ways. First, by performing localisation, as in the work by J. P. McCrae BIBREF17 which translates the terms contained in a monolingual ontology, thus generating a bilingual ontology. Second, by creating embeddings of both DBpedia versions in order to determine the similarity between entities through their vectors. This insight is supported by some recent works, such as Ristoski et al. 
BIBREF18 , which creates bilingual embeddings using RDF based on Word2vec algorithms. Therefore, we suggest investigating an MT approach mainly based on SWT using NN for translating KBs. Once the KBs are translated, we suggest including them in the language models for improving the translation of entities.", "Besides C. Shi et al BIBREF11 , Arčan and Buitelaar BIBREF19 presented an approach to translate domain-specific expressions represented by English KBs in order to make the knowledge accessible for other languages. They claimed that KBs are mostly in English, therefore they cannot contribute to the problem of MT to other languages. Thus, they translated two KBs belonging to medical and financial domains, along with the English Wikipedia, to German. Once translated, the KBs were used as external resources in the translation of German-English. The results were quite appealing and the further research into this area should be undertaken. Recently, Moussallem et al BIBREF20 created THOTH, an approach which translates and enriches knowledge graphs across languages. Their approach relies on two different recurrent neural network models along with knowledge graph embeddings. The authors applied their approach on the German DBpedia with the German translation of the English DBpedia on two tasks: fact checking and entity linking. THOTH showed promising results with a translation accuracy of 88.56 while being capable of improving two NLP tasks with its enriched-German KG .", "" ], [ "In this extended abstract, we detailed the results of a systematic literature review of MT using SWT for improving the translation of natural language sentences. Our goal was to present the current open MT translation problems and how SWT can address these problems and enhance MT quality. Considering the decision power of SWT, they cannot be ignored by future MT systems. As a next step, we intend to continue elaborating a novel MT approach which is capable of simultaneously gathering knowledge from different SW resources and consequently being able to address the ambiguity of named entities and also contribute to the OOV words problem. This insight relies on our recent works, such as BIBREF15 , which have augmented NMT models with the usage of external knowledge for improving the translation of entities in texts. Additionally, future works that can be expected from fellow researchers, include the creation of multilingual linguistic ontologies describing the syntax of rich morphologically languages for supporting MT approaches. Also, the creation of more RDF multilingual dictionaries which can improve some MT steps, such as alignment." ], [ "This work was supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) in the projects LIMBO (no. 19F2029I) and OPAL (no. 19F2028A) as well as by the Brazilian National Council for Scientific and Technological Development (CNPq) (no. 206971/2014-1)" ] ], "section_name": [ "Introduction", "Related Works", "Open MT Challenges", "Suggestions and Possible Directions using SW", "conclusion", "Acknowledgments" ] }
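A note on the disambiguation suggestion in the full text above: the proposed combination of a topic-modelling context with a SPARQL lookup against a Semantic Web resource such as DBpedia can be sketched in a few lines of Python. The sketch is illustrative only and is not taken from the paper; the endpoint URL, the function names, the keyword-overlap scoring and the "Kiwi" example are assumptions standing in for whichever topic model, endpoint and ranking an actual system would use.

from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # public endpoint, assumed reachable

def candidate_senses(surface_form, lang="en", limit=20):
    """Fetch DBpedia resources whose rdfs:label matches the surface form."""
    sparql = SPARQLWrapper(DBPEDIA_ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        SELECT DISTINCT ?s ?abstract WHERE {{
            ?s rdfs:label "{surface_form}"@{lang} ;
               dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "{lang}")
        }} LIMIT {limit}
    """)
    bindings = sparql.query().convert()["results"]["bindings"]
    return [(b["s"]["value"], b["abstract"]["value"]) for b in bindings]

def pick_sense(surface_form, topic_keywords):
    """Keep the candidate whose abstract overlaps most with the topic context."""
    best_uri, best_score = None, -1
    for uri, abstract in candidate_senses(surface_form):
        score = sum(kw.lower() in abstract.lower() for kw in topic_keywords)
        if score > best_score:
            best_uri, best_score = uri, score
    return best_uri

# The topic keywords would come from the topic-modelling pass over a 3-5 sentence span.
print(pick_sense("Kiwi", {"bird", "flightless", "new zealand"}))

In a fuller pipeline more than one resource (e.g. BabelNet alongside DBpedia) could be queried and the scores combined, which is the as yet unexplored direction the text points to.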
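The named-entity suggestion above (agglutinating each linked entity with its KB type, as in "Kiwi.animal.link", before training or decoding) comes down to a small preprocessing rewrite. The sketch below assumes an external NERD/entity-linking step that already returns (surface form, type, URI) triples; the function name and the naive string replacement are illustrative and are not the method of BIBREF11 or BIBREF15.

from typing import List, Tuple

def annotate_entities(sentence: str, links: List[Tuple[str, str, str]]) -> str:
    """Rewrite each linked entity span as 'surface.type.link'."""
    annotated = sentence
    for surface, etype, _uri in links:
        # Naive replacement: every occurrence of the surface form is rewritten,
        # a deliberate simplification of proper span-based substitution.
        annotated = annotated.replace(surface, f"{surface}.{etype}.link")
    return annotated

print(annotate_entities(
    "The kiwi is endemic to New Zealand.",
    [("kiwi", "animal", "http://dbpedia.org/resource/Kiwi"),
     ("New Zealand", "place", "http://dbpedia.org/resource/New_Zealand")]))
# -> The kiwi.animal.link is endemic to New Zealand.place.link.

The rewritten tokens would then be aligned with their target-language counterparts during training, as the text describes.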
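The user-context suggestion above names concrete FOAF/SIOC properties (foaf:interest, sioc:topic, foaf:based_near). A minimal rdflib sketch with an invented profile shows how such properties could be read off and handed to the sense-selection step as extra context; none of the example URIs or values come from the paper.

import rdflib

# A made-up reader profile; foaf:interest and foaf:based_near are genuine FOAF
# properties, everything under http://example.org/ is invented for illustration.
profile = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:reader a foaf:Person ;
    foaf:interest   ex:InformationTechnology , ex:Sports ;
    foaf:based_near ex:Moscow .
"""

g = rdflib.Graph()
g.parse(data=profile, format="turtle")

FOAF = rdflib.Namespace("http://xmlns.com/foaf/0.1/")

# Read the interests and location back out; an MT pipeline could pass the local
# names as additional topic keywords to the sense-selection sketch above.
interests = [str(o).rsplit("/", 1)[-1] for o in g.objects(None, FOAF.interest)]
location = [str(o).rsplit("/", 1)[-1] for o in g.objects(None, FOAF.based_near)]
print(interests, location)  # e.g. ['InformationTechnology', 'Sports'] ['Moscow']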
{ "answers": [ { "annotation_id": [ "54cd625f96189ab4fac90fb5daa2b0ea4996af96", "63a0b0c3186ccd836eb87f9cb0a4e78a16126915", "eaf7ab2124a8acba2f48f44dfa1a88cde26e7b97", "f7b15d4c2b2eaf2c50cfa1c1e689f977035cf2b3" ], "answer": [ { "evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. Second, SWT were applied to the output translation in the target language as a post-editing technique. Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation.", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. This method helps to categorize and summarize text, but also contributes to the disambiguation of words in texts. The primary issue in MT systems is caused by common words from a source language that are used as proper nouns in a target language. For instance, the word “Kiwi\" is a family name in New Zealand which comes from the Māori culture, but it also can be a fruit, a bird, or a computer program. Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "Non-standard speech. The non-standard language problem is a rather important one in the MT field. Many people use the colloquial form to speak and write to each other on social networks. Thus, when MT systems are applied on this context, the input text frequently contains slang, MWE, and unreasonable abbreviations such as “Idr = I don't remember.” and “cya = see you”. Additionally, idioms contribute to this problem, decreasing the translation quality. Idioms often have an entirely different meaning than their separated word meanings. Consequently, most translation outputs of such expressions contain errors. For a good translation, the MT system needs to recognize such slang and try to map it to the target language. Some SMT systems like Google or Bing have recognition patterns over non-standard speech from old translations through the Web using SMT approaches. In rare cases SMT can solve this problem, but considering that new idiomatic expressions appear every day and most of them are isolated sentences, this challenge still remains open. Moreover, each person has their own speaking form.", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems. For instance, some content provided by the German Wikipedia version are not contained in the Portuguese one. Therefore, the semantic structure (i.e., triples) provided by DBpedia versions of these respective Wikipedia versions would be able to help translate from German to Portuguese. For example, the terms contained in triples would be translated to a given target language using a dictionary containing domain words. 
This dictionary may be acquired in two different ways. First, by performing localisation, as in the work by J. P. McCrae BIBREF17 which translates the terms contained in a monolingual ontology, thus generating a bilingual ontology. Second, by creating embeddings of both DBpedia versions in order to determine the similarity between entities through their vectors. This insight is supported by some recent works, such as Ristoski et al. BIBREF18 , which creates bilingual embeddings using RDF based on Word2vec algorithms. Therefore, we suggest investigating an MT approach mainly based on SWT using NN for translating KBs. Once the KBs are translated, we suggest including them in the language models for improving the translation of entities." ], "extractive_spans": [ "disambiguation", "Named Entities", "Non-standard speech", "Translating KBs" ], "free_form_answer": "", "highlighted_evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words.", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. This method helps to categorize and summarize text, but also contributes to the disambiguation of words in texts. The primary issue in MT systems is caused by common words from a source language that are used as proper nouns in a target language. For instance, the word “Kiwi\" is a family name in New Zealand which comes from the Māori culture, but it also can be a fruit, a bird, or a computer program. Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 .", "Non-standard speech. The non-standard language problem is a rather important one in the MT field. Many people use the colloquial form to speak and write to each other on social networks. Thus, when MT systems are applied on this context, the input text frequently contains slang, MWE, and unreasonable abbreviations such as “Idr = I don't remember.” and “cya = see you”. Additionally, idioms contribute to this problem, decreasing the translation quality. Idioms often have an entirely different meaning than their separated word meanings. Consequently, most translation outputs of such expressions contain errors. For a good translation, the MT system needs to recognize such slang and try to map it to the target language. Some SMT systems like Google or Bing have recognition patterns over non-standard speech from old translations through the Web using SMT approaches. In rare cases SMT can solve this problem, but considering that new idiomatic expressions appear every day and most of them are isolated sentences, this challenge still remains open.", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. Second, SWT were applied to the output translation in the target language as a post-editing technique. 
Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation.", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. This method helps to categorize and summarize text, but also contributes to the disambiguation of words in texts. The primary issue in MT systems is caused by common words from a source language that are used as proper nouns in a target language. For instance, the word “Kiwi\" is a family name in New Zealand which comes from the Māori culture, but it also can be a fruit, a bird, or a computer program. Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "Therefore, we suggest that user characteristics can be applied as context for solving the non-standard language problem. These characteristics can be extracted from social media or user logs and stored as user properties using SWT, e.g., FOAF vocabulary. These ontologies have properties which would help identify the birth place or the interests of a given user. For instance, the properties foaf:interest and sioc:topic can be used to describe a given person's topics of interest. If the person is a computer scientist and the model contains topics such as “Information Technology\" and “Sports\", the SPARQL queries would search for terms inserted in this context which are ambiguous. Furthermore, the property foaf:based_near may support the problem of idioms. Assuming that a user is located in a certain part of Russia and he is reading an English web page which contains some idioms, this property may be used to gather appropriate translations of idioms from English to Russian using a given RDF KB. Therefore, an MT system can be adapted to a user by using specific data about him in RDF along with given KBs. Recently, Moussallem et al BIBREF16 have released a multilingual linked idioms dataset as a first part of supporting the investigation of this suggestion. The dataset contains idioms in 5 languages and are represented by knowledge graphs which facilitates the retrieval and inference of translations among the idioms.", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems. For instance, some content provided by the German Wikipedia version are not contained in the Portuguese one. Therefore, the semantic structure (i.e., triples) provided by DBpedia versions of these respective Wikipedia versions would be able to help translate from German to Portuguese. For example, the terms contained in triples would be translated to a given target language using a dictionary containing domain words. This dictionary may be acquired in two different ways. First, by performing localisation, as in the work by J. P. McCrae BIBREF17 which translates the terms contained in a monolingual ontology, thus generating a bilingual ontology. 
Second, by creating embeddings of both DBpedia versions in order to determine the similarity between entities through their vectors. This insight is supported by some recent works, such as Ristoski et al. BIBREF18 , which creates bilingual embeddings using RDF based on Word2vec algorithms. Therefore, we suggest investigating an MT approach mainly based on SWT using NN for translating KBs. Once the KBs are translated, we suggest including them in the language models for improving the translation of entities." ], "extractive_spans": [ "disambiguation", "NERD", " non-standard language", "translating KBs" ], "free_form_answer": "", "highlighted_evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words.", "The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 .", "Therefore, we suggest that user characteristics can be applied as context for solving the non-standard language problem. These characteristics can be extracted from social media or user logs and stored as user properties using SWT, e.g., FOAF vocabulary.", "According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. Second, SWT were applied to the output translation in the target language as a post-editing technique. Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation.", "Based on the surveyed works on our research BIBREF6 , SWT have mostly been applied at the semantic analysis step, rather than at the other stages of the translation process, due to their ability to deal with concepts behind the words and provide knowledge about them. As SWT have developed, they have increasingly been able to resolve some of the open challenges of MT. They may be applied in different ways according to each MT approach.", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. This method helps to categorize and summarize text, but also contributes to the disambiguation of words in texts. The primary issue in MT systems is caused by common words from a source language that are used as proper nouns in a target language. For instance, the word “Kiwi\" is a family name in New Zealand which comes from the Māori culture, but it also can be a fruit, a bird, or a computer program. Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . 
In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "Therefore, we suggest that user characteristics can be applied as context for solving the non-standard language problem. These characteristics can be extracted from social media or user logs and stored as user properties using SWT, e.g., FOAF vocabulary. These ontologies have properties which would help identify the birth place or the interests of a given user. For instance, the properties foaf:interest and sioc:topic can be used to describe a given person's topics of interest. If the person is a computer scientist and the model contains topics such as “Information Technology\" and “Sports\", the SPARQL queries would search for terms inserted in this context which are ambiguous. Furthermore, the property foaf:based_near may support the problem of idioms. Assuming that a user is located in a certain part of Russia and he is reading an English web page which contains some idioms, this property may be used to gather appropriate translations of idioms from English to Russian using a given RDF KB. Therefore, an MT system can be adapted to a user by using specific data about him in RDF along with given KBs. Recently, Moussallem et al BIBREF16 have released a multilingual linked idioms dataset as a first part of supporting the investigation of this suggestion. The dataset contains idioms in 5 languages and are represented by knowledge graphs which facilitates the retrieval and inference of translations among the idioms.", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems. For instance, some content provided by the German Wikipedia version are not contained in the Portuguese one. Therefore, the semantic structure (i.e., triples) provided by DBpedia versions of these respective Wikipedia versions would be able to help translate from German to Portuguese. For example, the terms contained in triples would be translated to a given target language using a dictionary containing domain words. This dictionary may be acquired in two different ways. First, by performing localisation, as in the work by J. P. McCrae BIBREF17 which translates the terms contained in a monolingual ontology, thus generating a bilingual ontology. Second, by creating embeddings of both DBpedia versions in order to determine the similarity between entities through their vectors. This insight is supported by some recent works, such as Ristoski et al. BIBREF18 , which creates bilingual embeddings using RDF based on Word2vec algorithms. Therefore, we suggest investigating an MT approach mainly based on SWT using NN for translating KBs. Once the KBs are translated, we suggest including them in the language models for improving the translation of entities." ], "extractive_spans": [ "Disambiguation", "Named Entities", "Non-standard speech", "Translating KBs" ], "free_form_answer": "", "highlighted_evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. ", "As SWT have developed, they have increasingly been able to resolve some of the open challenges of MT. ", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. ", "Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. 
The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "Therefore, we suggest that user characteristics can be applied as context for solving the non-standard language problem. These characteristics can be extracted from social media or user logs and stored as user properties using SWT, e.g., FOAF vocabulary. ", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. Second, SWT were applied to the output translation in the target language as a post-editing technique. Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation.", "Translating KBs. According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems. For instance, some content provided by the German Wikipedia version are not contained in the Portuguese one. Therefore, the semantic structure (i.e., triples) provided by DBpedia versions of these respective Wikipedia versions would be able to help translate from German to Portuguese. For example, the terms contained in triples would be translated to a given target language using a dictionary containing domain words. This dictionary may be acquired in two different ways. First, by performing localisation, as in the work by J. P. McCrae BIBREF17 which translates the terms contained in a monolingual ontology, thus generating a bilingual ontology. Second, by creating embeddings of both DBpedia versions in order to determine the similarity between entities through their vectors. This insight is supported by some recent works, such as Ristoski et al. BIBREF18 , which creates bilingual embeddings using RDF based on Word2vec algorithms. Therefore, we suggest investigating an MT approach mainly based on SWT using NN for translating KBs. Once the KBs are translated, we suggest including them in the language models for improving the translation of entities." ], "extractive_spans": [], "free_form_answer": "SWT can be applied to support the semantic disambiguation in MT: to recognize ambiguous words before translation and as a post-editing technique applied to the output language. SWT may be used for translating KBs.", "highlighted_evidence": [ "However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. Second, SWT were applied to the output translation in the target language as a post-editing technique. ", " According to our research, it is clear that SWT may be used for translating KBs in order to be applied in MT systems." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "613fed4c2e83d06d8f7da6820fba08b1b9175924", "b175211585f099a18ecb40f90a777aa2fed47649", "b37027e7e1f42b838fa4fd5220eac876dfc199fd" ], "answer": [ { "evidence": [ "On the other hand, there is also a syntactic disambiguation problem which as yet lacks good solutions. For instance, the English language contains irregular verbs like “set” or “put”. Depending on the structure of a sentence, it is not possible to recognize their verbal tense, e.g., present or past tense. Even statistical approaches trained on huge corpora may fail to find the exact meaning of some words due to the structure of the language. Although this challenge has successfully been dealt with since NMT has been used for European languages, implementations of NMT for some non-European languages have not been fully exploited (e.g., Brazilian Portuguese, Latin-America Spanish, Zulu, Hindi) due to the lack of large bilingual data sets on the Web to be trained on. Thus, we suggest gathering relationships among properties within an ontology by using the reasoning technique for handling this issue. For instance, the sentence “Anna usually put her notebook on the table for studying\" may be annotated using a certain vocabulary and represented by triples. Thus, the verb “put\", which is represented by a predicate that groups essential information about the verbal tense, may support the generation step of a given MT system. This sentence usually fails when translated to rich morphological languages, such as Brazilian-Portuguese and Arabic, for which the verb influences the translation of “usually\" to the past tense. In this case, a reasoning technique may support the problem of finding a certain rule behind relationships between source and target texts in the alignment phase (training phase). However, a well-known problem of reasoners is the poor run-time performance. Therefore, this run-time deficiency needs to be addressed or minimized before implementing reasoners successfully into MT systems.", "Named Entities. Most NERD approaches link recognized entities with database entries or websites. This method helps to categorize and summarize text, but also contributes to the disambiguation of words in texts. The primary issue in MT systems is caused by common words from a source language that are used as proper nouns in a target language. For instance, the word “Kiwi\" is a family name in New Zealand which comes from the Māori culture, but it also can be a fruit, a bird, or a computer program. Named Entities are a common and difficult problem in both MT (see Koehn BIBREF0 ) and SW fields. The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "Non-standard speech. The non-standard language problem is a rather important one in the MT field. Many people use the colloquial form to speak and write to each other on social networks. Thus, when MT systems are applied on this context, the input text frequently contains slang, MWE, and unreasonable abbreviations such as “Idr = I don't remember.” and “cya = see you”. 
Additionally, idioms contribute to this problem, decreasing the translation quality. Idioms often have an entirely different meaning than their separated word meanings. Consequently, most translation outputs of such expressions contain errors. For a good translation, the MT system needs to recognize such slang and try to map it to the target language. Some SMT systems like Google or Bing have recognition patterns over non-standard speech from old translations through the Web using SMT approaches. In rare cases SMT can solve this problem, but considering that new idiomatic expressions appear every day and most of them are isolated sentences, this challenge still remains open. Moreover, each person has their own speaking form." ], "extractive_spans": [ "syntactic disambiguation problem which as yet lacks good solutions", "directly related to the ambiguity problem and therefore has to be resolved in that wider context", "In rare cases SMT can solve this problem, but considering that new idiomatic expressions appear every day and most of them are isolated sentences, this challenge still remains open" ], "free_form_answer": "", "highlighted_evidence": [ "On the other hand, there is also a syntactic disambiguation problem which as yet lacks good solutions.", "The SW achieved important advances in NERD using structured data and semantic annotations, e.g., by adding an rdf:type statement which identifies whether a certain kiwi is a fruit BIBREF14 . In MT systems, however, this problem is directly related to the ambiguity problem and therefore has to be resolved in that wider context.", "In rare cases SMT can solve this problem, but considering that new idiomatic expressions appear every day and most of them are isolated sentences, this challenge still remains open." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Although MT systems are now popular on the Web, they still generate a large number of incorrect translations. Recently, Popović BIBREF3 has classified five types of errors that still remain in MT systems. According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. Thus, addressing these barriers is a key challenge for modern translation systems. A large number of MT approaches have been developed over the years that could potentially serve as a remedy. For instance, translators began by using methodologies based on linguistics which led to the family of RBMT. However, RBMT systems have a critical drawback in their reliance on manually crafted rules, thus making the development of new translation modules for different languages even more difficult." ], "extractive_spans": [ "reordering errors", " lexical and syntactic ambiguity" ], "free_form_answer": "", "highlighted_evidence": [ ". According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. Thus, addressing these barriers is a key challenge for modern translation systems. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "SW has already shown its capability for semantic disambiguation of polysemous and homonymous words. However, SWT were applied in two ways to support the semantic disambiguation in MT. First, the ambiguous words were recognized in the source text before carrying out the translation, applying a pre-editing technique. 
Second, SWT were applied to the output translation in the target language as a post-editing technique. Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation." ], "extractive_spans": [], "free_form_answer": "SWT are hard to implement", "highlighted_evidence": [ "However, SWT were applied in two ways to support the semantic disambiguation in MT. ", "Although applying one of these techniques has increased the quality of a translation, both techniques are tedious to implement when they have to translate common words instead of named entities, then be applied several times to achieve a successful translation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "0d6b6445276fb9b2f52b8814eb51b0237ec150a4", "3c6a6e028a1d168535d3f6afbc237a053b528fdd", "77941c958e6cf7ef99172cae2fba1edfc9c35204", "b2f3342182c4e57ad7992f227dcffdc67f40ddce" ], "answer": [ { "evidence": [ "(1) Excessive focus on English and European languages as one of the involved languages in MT approaches and poor research on low-resource language pairs such as African and/or South American languages. (2) The limitations of SMT approaches for translating across domains. Most MT systems exhibit good performance on law and the legislative domains due to the large amount of data provided by the European Union. In contrast, translations performed on sports and life-hacks commonly fail, because of the lack of training data. (3) How to translate the huge amount of data from social networks that uniquely deal with no-standard speech texts from users (e.g., tweets). (4) The difficult translations among morphologically rich languages. This challenge shares the same problem with the first one, namely that most research work focuses on English as one of the involved languages. Therefore, MT systems which translate content between, for instance, Arabic and Spanish are rare. (5) For the speech translation task, the parallel data for training differs widely from real user speech." ], "extractive_spans": [ "Excessive focus on English and European languages", "limitations of SMT approaches for translating across domains", "no-standard speech texts from users", "morphologically rich languages", "parallel data for training differs widely from real user speech" ], "free_form_answer": "", "highlighted_evidence": [ "(1) Excessive focus on English and European languages as one of the involved languages in MT approaches and poor research on low-resource language pairs such as African and/or South American languages. (2) The limitations of SMT approaches for translating across domains. Most MT systems exhibit good performance on law and the legislative domains due to the large amount of data provided by the European Union. In contrast, translations performed on sports and life-hacks commonly fail, because of the lack of training data. (3) How to translate the huge amount of data from social networks that uniquely deal with no-standard speech texts from users (e.g., tweets). (4) The difficult translations among morphologically rich languages. This challenge shares the same problem with the first one, namely that most research work focuses on English as one of the involved languages. 
Therefore, MT systems which translate content between, for instance, Arabic and Spanish are rare. (5) For the speech translation task, the parallel data for training differs widely from real user speech." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Although MT systems are now popular on the Web, they still generate a large number of incorrect translations. Recently, Popović BIBREF3 has classified five types of errors that still remain in MT systems. According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. Thus, addressing these barriers is a key challenge for modern translation systems. A large number of MT approaches have been developed over the years that could potentially serve as a remedy. For instance, translators began by using methodologies based on linguistics which led to the family of RBMT. However, RBMT systems have a critical drawback in their reliance on manually crafted rules, thus making the development of new translation modules for different languages even more difficult." ], "extractive_spans": [ "reordering errors" ], "free_form_answer": "", "highlighted_evidence": [ ". According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Although MT systems are now popular on the Web, they still generate a large number of incorrect translations. Recently, Popović BIBREF3 has classified five types of errors that still remain in MT systems. According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. Thus, addressing these barriers is a key challenge for modern translation systems. A large number of MT approaches have been developed over the years that could potentially serve as a remedy. For instance, translators began by using methodologies based on linguistics which led to the family of RBMT. However, RBMT systems have a critical drawback in their reliance on manually crafted rules, thus making the development of new translation modules for different languages even more difficult." ], "extractive_spans": [ "reordering errors" ], "free_form_answer": "", "highlighted_evidence": [ "Recently, Popović BIBREF3 has classified five types of errors that still remain in MT systems. According to research, the two main faults that are responsible for 40% and 30% of problems respectively, are reordering errors and lexical and syntactic ambiguity. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "", "", "" ], "question": [ "What are the opportunities presented by the use of Semantic Web technologies in Machine Translation?", "What are the challenges associated with the use of Semantic Web technologies in Machine Translation?", "What are the other obstacles to automatic translations which are not mentioned in the abstract?" 
], "question_id": [ "cbb3c1c1e6e1818b6480f929f1c299eaa5ffd07a", "9f74f3991b8681619d95ab93a7c8733a843ddffe", "7c2c15ea3f1b1375b8aaef1103a001069d9915bb" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [], "file": [] }
[ "What are the opportunities presented by the use of Semantic Web technologies in Machine Translation?", "What are the challenges associated with the use of Semantic Web technologies in Machine Translation?" ]
[ [ "1907.10676-Suggestions and Possible Directions using SW-0", "1907.10676-Suggestions and Possible Directions using SW-9", "1907.10676-Suggestions and Possible Directions using SW-2", "1907.10676-Suggestions and Possible Directions using SW-7", "1907.10676-Suggestions and Possible Directions using SW-5", "1907.10676-Suggestions and Possible Directions using SW-8" ], [ "1907.10676-Introduction-2", "1907.10676-Suggestions and Possible Directions using SW-2", "1907.10676-Suggestions and Possible Directions using SW-7", "1907.10676-Suggestions and Possible Directions using SW-5", "1907.10676-Suggestions and Possible Directions using SW-4" ] ]
[ "SWT can be applied to support the semantic disambiguation in MT: to recognize ambiguous words before translation and as a post-editing technique applied to the output language. SWT may be used for translating KBs.", "SWT are hard to implement" ]
14
1906.08871
Advancing Speech Recognition With No Speech Or With Noisy Speech
In this paper we demonstrate end-to-end continuous speech recognition (CSR) using electroencephalography (EEG) signals with no speech signal as input. An attention-based automatic speech recognition (ASR) system and a connectionist temporal classification (CTC) based ASR system were implemented to perform recognition. We further demonstrate CSR for noisy speech by fusing acoustic features with EEG features.
{ "paragraphs": [ [ "Electroencephalography (EEG) is a non invasive way of measuring electrical activity of human brain. In BIBREF0 we demonstrated deep learning based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels. In this paper we extend our work for a much larger English vocabulary and we use state-of-art end-to-end continuous speech recognition models to perform recognition. In our prior work we predicted isolated words and vowels.", "ASR systems forms the front end or back end in many cutting edge voice activated technologies like Amazon Alexa, Apple Siri, Windows Cortana, Samsung Bixby etc. Unfortunately these systems are trained to recognize text only from acoustic features. This limits technology accessibility to people with speaking disabilities and disorders. The research work presented in this paper tries to address this issue by investigating speech recognition using only EEG signals with no acoustic input and also by combining EEG features along with traditional acoustic features to perform recognition. We believe the former will help with speech restoration for people who can not speak at all and the latter will help people who are having speaking disabilities like broken or discontinued speech etc to use voice activated technologies with better user experience there by helping in improving technology accessibility.", "ASR performance is degraded in presence of noisy speech and in real life situations most of the speech is noisy. Inspired from the unique robustness to environmental artifacts exhibited by the human auditory cortex BIBREF1 , BIBREF2 we used very noisy speech data for this work and demonstrated lower word error rate (WER) for smaller corpus using EEG features, concatenation of EEG features and acoustic features.", "In BIBREF3 authors decode imagined speech from EEG using synthetic EEG data and connectionist temporal classification (CTC) network but in our work we use real EEG data, use EEG data recorded along with acoustics. In BIBREF4 authors perform envisioned speech recognition using random forest classifier but in our case we use end to end state of art models and perform recognition for noisy speech. In BIBREF5 authors demonstrate speech recognition using electrocorticography (ECoG) signals, which are invasive in nature but in our work we use non invasive EEG signals.", "This work is mainly motivated by the results explained in BIBREF0 , BIBREF6 , BIBREF7 , BIBREF3 . In BIBREF6 the authors used classification approach for identifying phonological categories in imagined and silent speech but in our work we used continuous speech recognition state of art models and our models were predicting words, characters at each time step. Similarly in BIBREF7 neural network based classification approach was used for predicting phonemes.", "Major contribution of this paper is the demonstration of end to end continuous noisy speech recognition using only EEG features and this paper further validates the concepts introduced in BIBREF0 for a much larger English corpus." ], [ "An end-to-end ASR model maps input feature vectors to an output sequence of vectors of posterior probabilities of tokens without using separate acoustic model, pronunciation model and language model. In this work we implemented two different types of state of art end to end ASR models used for the task of continuous speech recognition and the input feature vectors can be EEG features or concatenation of acoustic and EEG features. 
We used Google's tensorflow and keras deep learning libraries for building our ASR models." ], [ "The main ideas behind CTC based ASR were first introduced in the following papers BIBREF8 , BIBREF9 . In our work we used a single layer gated recurrent unit (GRU) BIBREF10 with 128 hidden units as encoder for the CTC network. The decoder consists of a combination of a dense layer ( fully connected layer) and a softmax activation. Output at every time step of the GRU layer is fed into the decoder network. The number of time steps of the GRU encoder is equal to product of the sampling frequency of the input features and the length of the input sequence. Since different speakers have different rate of speech, we used dynamic recurrent neural network (RNN) cell. There is no fixed value for time steps of the encoder.", "Usually the number of time steps of the encoder (T) is greater than the length of output tokens for a continuous speech recognition problem. A RNN based CTC network tries to make length of output tokens equal to T by allowing the repetition of output prediction unit tokens and by introducing a special token called blank token BIBREF8 across all the frames. We used CTC loss function with adam optimizer BIBREF11 and during inference time we used CTC beam search decoder.", "We now explain the loss function used in our CTC model. Consider training data set INLINEFORM0 with training examples INLINEFORM1 and the corresponding label set INLINEFORM2 with target vectors INLINEFORM3 . Consider any training example, label pair ( INLINEFORM4 , INLINEFORM5 ). Let the number of time steps of the RNN encoder for ( INLINEFORM6 , INLINEFORM7 ) is INLINEFORM8 . In case of character based CTC model, the RNN predicts a character at every time step. Whereas in word based CTC model, the RNN predicts a word at every time step. For the sake of simplicity, let us assume that length of target vector INLINEFORM9 is equal to INLINEFORM10 . Let the probability vector output by the RNN at each time step INLINEFORM11 be INLINEFORM12 and let INLINEFORM13 value of INLINEFORM14 be denoted by INLINEFORM15 . The probability that model outputs INLINEFORM16 on input INLINEFORM17 is given by INLINEFORM18 . During the training phase, we would like to maximize the conditional probability INLINEFORM19 , and thereby define the loss function as INLINEFORM20 .", "In case when the length of INLINEFORM0 is less than INLINEFORM1 , we extend the target vector INLINEFORM2 by repeating a few of its values and by introducing blank token ( INLINEFORM3 ) to create a target vector of length INLINEFORM4 . Let the possible extensions of INLINEFORM5 be denoted by INLINEFORM6 . For example, when INLINEFORM7 and INLINEFORM8 , the possible extensions are INLINEFORM9 , INLINEFORM10 , INLINEFORM11 , INLINEFORM12 , INLINEFORM13 , INLINEFORM14 and INLINEFORM15 . We then define INLINEFORM16 as INLINEFORM17 .", "In our work we used character based CTC ASR model. CTC assumes the conditional independence constraint that output predictions are independent given the entire input sequence." ], [ "RNN encoder - decoder ASR model consists of a RNN encoder and a RNN decoder with attention mechanism BIBREF12 , BIBREF13 , BIBREF14 . The number of time steps of the encoder is equal to the product of sampling frequency of the input features and the length of input sequence. There is no fixed value for time steps in our case. We used dynamic RNN cell. We used a single layer GRU with 128 hidden units for both encoder and decoder. 
A dense layer followed by softmax activation is used after the decoder GRU to get the prediction probabilities. Dense layer performs an affine transformation. The number of time steps of the decoder GRU is same as the number of words present in the sentence for a given training example. Training objective is to maximize the log probability of the ordered conditionals, ie: INLINEFORM0 , where X is input feature vector, INLINEFORM1 's are the labels for the ordered words present in that training example and INLINEFORM2 is the length of the output label sentence for that example. Cross entropy was used as the loss function with adam as the optimizer. We used teacher forcing algorithm BIBREF15 to train the model. During inference time we used beam search decoder.", "We now explain the attention mechanism used in our attention model. Consider any training example, label pair ( INLINEFORM0 , INLINEFORM1 ). Let the number of times steps of encoder GRU for that example be INLINEFORM2 . The GRU encoder will transform the input features ( INLINEFORM3 ) into hidden output feature vectors ( INLINEFORM4 ). Let INLINEFORM5 word label in INLINEFORM6 (sentence) be INLINEFORM7 , then to predict INLINEFORM8 at decoder time step INLINEFORM9 , context vector INLINEFORM10 is computed and fed into the decoder GRU. INLINEFORM11 is computed as INLINEFORM12 , where INLINEFORM13 is the attention weight vector satisfying the property INLINEFORM14 .", " INLINEFORM0 can be intuitively seen as a measure of how much attention INLINEFORM1 must pay to INLINEFORM2 , INLINEFORM3 . INLINEFORM4 is mathematically defined as INLINEFORM5 , where INLINEFORM6 is hidden state of the decoder GRU at time step INLINEFORM7 .", "The way of computing value for INLINEFORM0 depends on the type of attention used. In this work, we used bahdanau's additive style attention BIBREF13 , which defines INLINEFORM1 as INLINEFORM2 ) where INLINEFORM3 and INLINEFORM4 are learnable parameters during training of the model." ], [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties.", "For data set A, the 10 subjects were asked to speak the first 30 sentences from the USC-TIMIT database BIBREF16 and their simultaneous speech and EEG signals were recorded. This data was recorded in presence of background noise of 40 dB (noise generated by room air conditioner fan). We then asked each subject to repeat the same experiment two more times, thus we had 30 speech EEG recording examples for each sentence.", "For data set B, the 8 subjects were asked to repeat the same previous experiment but this time we used background music played from our lab computer to generate a background noise of 65 dB. Here we had 24 speech EEG recording examples for each sentence.", "We used Brain Vision EEG recording hardware. Our EEG cap had 32 wet EEG electrodes including one electrode as ground as shown in Figure 1. We used EEGLab BIBREF17 to obtain the EEG sensor location mapping. 
It is based on the standard 10-20 EEG sensor placement method for 32 electrodes.", "For data set A, we used data from the first 8 subjects for training the model, and the remaining two subjects' data for the validation and test sets respectively.", "For data set B, we used data from the first 6 subjects for training the model, and the remaining two subjects' data for the validation and test sets respectively." ], [ "EEG signals were sampled at 1000 Hz and a fourth-order IIR band-pass filter with cut-off frequencies of 0.1 Hz and 70 Hz was applied. A notch filter with a cut-off frequency of 60 Hz was used to remove the power line noise. EEGLab's BIBREF17 Independent Component Analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals. We extracted five statistical features for EEG, namely root mean square, zero crossing rate, moving window average, kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31 (channels) x 5 = 155 features for the EEG signals. The EEG features were extracted at a sampling frequency of 100 Hz for each EEG channel.", "We used spectral entropy because it captures the spectral (frequency domain) and signal complexity information of EEG. It is also a widely used feature in EEG signal analysis BIBREF18 . Similarly, zero crossing rate was chosen as it is a commonly used feature both for speech recognition and bio-signal analysis. The remaining features were chosen to capture time-domain statistical information. We performed a lot of experiments to identify this set of features. Initially we used only spectral entropy and zero crossing rate, but we noticed that the performance of the ASR system went up significantly, by 20 %, when we added the remaining features.", "The recorded speech signal was sampled at a frequency of 16 kHz. We extracted Mel-frequency cepstral coefficients (MFCC) as features for the speech signal. We first extracted 13 MFCC features and then computed the first and second order differentials (delta and delta-delta), thus having 39 MFCC features in total. The MFCC features were also sampled at 100 Hz, the same as the sampling frequency of the EEG features, to avoid a sequence-to-sequence length mismatch." ], [ "After extracting EEG and acoustic features as explained in the previous section, we used non-linear methods for feature dimension reduction in order to obtain a set of EEG features which are a better representation of the acoustic features. We reduced the 155 EEG features to a dimension of 30 by applying Kernel Principal Component Analysis (KPCA) BIBREF19 . We plotted cumulative explained variance versus number of components to identify the right feature dimension, as shown in Figure 2. We used KPCA with a polynomial kernel of degree 3 BIBREF0 . We further computed the delta and delta-delta of those 30 EEG features, thus the final feature dimension of EEG was 90 (30 times 3) for both data sets.", "When we used the EEG features for ASR without dimension reduction, the ASR performance went down by 40 %. The non-linear dimension reduction of EEG features significantly improved the performance of ASR."
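As an editorial illustration of the feature pipeline described in this entry (31 channels x 5 statistics = 155 features per frame, KPCA with a degree-3 polynomial kernel down to 30 dimensions, then delta and delta-delta for a final dimension of 90), a minimal sketch using NumPy and scikit-learn is given below. This is not the authors' code; the random array stands in for real per-frame EEG features and `add_deltas` is a hypothetical helper.

```python
# Minimal sketch of the EEG feature dimension reduction pipeline described above.
# Assumptions: 155 raw statistical features per 100 Hz frame; KernelPCA(poly, degree 3) to 30 dims;
# first- and second-order differentials appended to reach 90 dims. Data here is random.
import numpy as np
from sklearn.decomposition import KernelPCA

def add_deltas(feats):
    """Append delta and delta-delta along the time axis (frames x dims)."""
    delta = np.diff(feats, n=1, axis=0, prepend=feats[:1])
    delta2 = np.diff(delta, n=1, axis=0, prepend=delta[:1])
    return np.concatenate([feats, delta, delta2], axis=1)

raw_eeg = np.random.randn(1000, 155)                     # placeholder for 31 channels x 5 statistics
kpca = KernelPCA(n_components=30, kernel="poly", degree=3)
reduced = kpca.fit_transform(raw_eeg)                    # (1000, 30)
eeg_features = add_deltas(reduced)                       # (1000, 90) = 30 x 3
print(eeg_features.shape)
```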
], [ "The attention model was predicting a word and CTC model was predicting a character at every time step, hence we used word error rate (WER) as performance metric to evaluate attention model and character error rate (CER) for CTC model for different feature sets as shown below.", "Table i@ and ii@ shows the test time results for attention model for both the data sets when trained using EEG features and concatenation of EEG, acoustic features respectively. As seen from the results the attention model gave lower WER when trained and tested on smaller number of sentences. As the vocabulary size increase, the WER also went up. We believe for the attention model to achieve lower WER for larger vocabulary size more number of training examples or larger training data set is required as large number of weights need to be adapted. Figure 3 shows the training loss convergence of our attention model.", "Table iv@ and v@ shows the results obtained using CTC model. The error rates for CTC model also went up with the increase in vocabulary size for both the data sets. However the CTC model was trained for 500 epochs compared to 100 epochs for attention model to observe loss convergence and batch size was set to one for CTC model. Thus CTC model training was lot more time consuming than attention model.", "In BIBREF0 we have demonstrated that EEG sensors T7 and T8 features contributed most towards ASR performance. Table vi@ shows the CTC model test time results when we trained the model using EEG features from only T7 and T8 sensors on the most noisy data set B. We observed that as vocabulary size increase, error rates were slightly lower than the error rates from Table iv@ where we used EEG features from all 31 sensors with dimension reduction. Table iii@ shows the results for attention model when trained with EEG features from sensors T7 and T8 only on data set B. We observed that error rates were higher in this case compared to the error rates reported in table ii@.", "Figures 4 shows the visualization of the attention weights when the attention model was trained and tested using only EEG features for Data set B. The plots shows the EEG feature importance ( attention) distribution across time steps for predicting first sentence and it indicates that attention model was not able to attend properly to EEG features, which might be another reason for giving higher WER." ], [ "In this paper we demonstrated the feasibility of using EEG features, concatenation of EEG and acoustic features for performing noisy continuous speech recognition. To our best knowledge this is the first time a continuous noisy speech recognition is demonstrated using only EEG features.", "For both attention and CTC model we observed that as the vocabulary size increase, concatenating acoustic features with EEG features will help in reducing the test time error rates.", "We further plan to publish our speech EEG data base used in this work to help advancement of research in this area.", "For future work, we plan to build a much larger speech EEG data base and also perform experiments with data collected from subjects with speaking disabilities.", "We will also investigate whether it is possible to improve the attention model results by tuning hyper parameters to improve the model's ability to condition on the input,improve CTC model results by training with more number of examples and by using external language model during inference time." 
], [ "We would like to thank Kerry Loader and Rezwanul Kabir from Dell, Austin, TX for donating us the GPU to train the models used in this work." ] ], "section_name": [ "Introduction", "Automatic Speech Recognition System Models", "Connectionist Temporal Classification (CTC)", "RNN Encoder-Decoder or Attention model", "Design of Experiments for building the database", "EEG and Speech feature extraction details", "EEG Feature Dimension Reduction Algorithm Details", "Results", "Conclusion and Future work", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "40b54ebf4216e8e0f661fa2772b531836239afa4", "6fc2782728bbf766ee7dbcfecc26d064d5f007a1", "b927fe832bf10780a2a75e3887665c40c68e1d11", "ffee3097f81f44fc8dcba74ef51e34bf04b08f7e" ], "answer": [ { "evidence": [ "EEG signals were sampled at 1000Hz and a fourth order IIR band pass filter with cut off frequencies 0.1Hz and 70Hz was applied. A notch filter with cut off frequency 60 Hz was used to remove the power line noise. EEGlab's BIBREF17 Independent component analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals. We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel.", "We used spectral entropy because it captures the spectral ( frequency domain) and signal complexity information of EEG. It is also a widely used feature in EEG signal analysis BIBREF18 . Similarly zero crossing rate was chosen as it is a commonly used feature both for speech recognition and bio signal analysis. Remaining features were chosen to capture time domain statistical information. We performed lot of experiments to identify this set of features. Initially we used only spectral entropy and zero crossing rate but we noticed that the performance of the ASR system significantly went up by 20 % when we added the remaining additional features.", "The recorded speech signal was sampled at 16KHz frequency. We extracted Mel-frequency cepstrum (MFCC) as features for speech signal. We first extracted MFCC 13 features and then computed first and second order differentials (delta and delta-delta) thus having total MFCC 39 features. The MFCC features were also sampled at 100Hz same as the sampling frequency of EEG features to avoid seq2seq problem." ], "extractive_spans": [ "We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0", " So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel", "We extracted Mel-frequency cepstrum (MFCC) as features for speech signal. We first extracted MFCC 13 features and then computed first and second order differentials (delta and delta-delta) thus having total MFCC 39 features. " ], "free_form_answer": "", "highlighted_evidence": [ "EEG signals were sampled at 1000Hz and a fourth order IIR band pass filter with cut off frequencies 0.1Hz and 70Hz was applied. A notch filter with cut off frequency 60 Hz was used to remove the power line noise. EEGlab's BIBREF17 Independent component analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals. We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . 
So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel.\n\nWe used spectral entropy because it captures the spectral ( frequency domain) and signal complexity information of EEG. It is also a widely used feature in EEG signal analysis BIBREF18 . Similarly zero crossing rate was chosen as it is a commonly used feature both for speech recognition and bio signal analysis. Remaining features were chosen to capture time domain statistical information. We performed lot of experiments to identify this set of features. Initially we used only spectral entropy and zero crossing rate but we noticed that the performance of the ASR system significantly went up by 20 % when we added the remaining additional features.\n\nThe recorded speech signal was sampled at 16KHz frequency. We extracted Mel-frequency cepstrum (MFCC) as features for speech signal. We first extracted MFCC 13 features and then computed first and second order differentials (delta and delta-delta) thus having total MFCC 39 features. The MFCC features were also sampled at 100Hz same as the sampling frequency of EEG features to avoid seq2seq problem." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "EEG signals were sampled at 1000Hz and a fourth order IIR band pass filter with cut off frequencies 0.1Hz and 70Hz was applied. A notch filter with cut off frequency 60 Hz was used to remove the power line noise. EEGlab's BIBREF17 Independent component analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals. We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel.", "After extracting EEG and acoustic features as explained in the previous section, we used non linear methods to do feature dimension reduction in order to obtain set of EEG features which are better representation of acoustic features. We reduced the 155 EEG features to a dimension of 30 by applying Kernel Principle Component Analysis (KPCA) BIBREF19 .We plotted cumulative explained variance versus number of components to identify the right feature dimension as shown in Figure 2. We used KPCA with polynomial kernel of degree 3 BIBREF0 . We further computed delta, delta and delta of those 30 EEG features, thus the final feature dimension of EEG was 90 (30 times 3) for both the data sets." ], "extractive_spans": [ "root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy" ], "free_form_answer": "", "highlighted_evidence": [ "We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel.", "After extracting EEG and acoustic features as explained in the previous section, we used non linear methods to do feature dimension reduction in order to obtain set of EEG features which are better representation of acoustic features. 
We reduced the 155 EEG features to a dimension of 30 by applying Kernel Principle Component Analysis (KPCA) BIBREF19 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "EEG signals were sampled at 1000Hz and a fourth order IIR band pass filter with cut off frequencies 0.1Hz and 70Hz was applied. A notch filter with cut off frequency 60 Hz was used to remove the power line noise. EEGlab's BIBREF17 Independent component analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals. We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel." ], "extractive_spans": [ "root mean square", "zero crossing rate", "moving window average", "kurtosis", "power spectral entropy" ], "free_form_answer": "", "highlighted_evidence": [ "We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "EEG signals were sampled at 1000Hz and a fourth order IIR band pass filter with cut off frequencies 0.1Hz and 70Hz was applied. A notch filter with cut off frequency 60 Hz was used to remove the power line noise. EEGlab's BIBREF17 Independent component analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals. We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel." ], "extractive_spans": [ "root mean square", "zero crossing rate", "moving window average", "kurtosis", "power spectral entropy", "extracted 31(channels) X 5 or 155 features" ], "free_form_answer": "", "highlighted_evidence": [ "We extracted five statistical features for EEG, namely root mean square, zero crossing rate,moving window average,kurtosis and power spectral entropy BIBREF0 . So in total we extracted 31(channels) X 5 or 155 features for EEG signals.The EEG features were extracted at a sampling frequency of 100Hz for each EEG channel." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "0df546c1949acf5453a7057eee364829c60124f0", "923d600ccb51990a0c6ed481c4608dc127de79cb", "f5207f400a6c3204900bead95cf16c739394c522" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "303d754f4723fcb85ae3f7239ceff8c3a84df4da", "4603465a73edf2980cdc95923ffe5c5c132ba548", "65c85408ece446a2e0f34967aee692395ab02f32", "e2a7b8c914e9e36b63a59e2e8c801e1195cbfcd2" ], "answer": [ { "evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties.", "For data set A, the 10 subjects were asked to speak the first 30 sentences from the USC-TIMIT database BIBREF16 and their simultaneous speech and EEG signals were recorded. This data was recorded in presence of background noise of 40 dB (noise generated by room air conditioner fan). We then asked each subject to repeat the same experiment two more times, thus we had 30 speech EEG recording examples for each sentence.", "For data set B, the 8 subjects were asked to repeat the same previous experiment but this time we used background music played from our lab computer to generate a background noise of 65 dB. Here we had 24 speech EEG recording examples for each sentence.", "We used Brain Vision EEG recording hardware. Our EEG cap had 32 wet EEG electrodes including one electrode as ground as shown in Figure 1. We used EEGLab BIBREF17 to obtain the EEG sensor location mapping. It is based on standard 10-20 EEG sensor placement method for 32 electrodes." ], "extractive_spans": [ " two types of simultaneous speech EEG recording databases " ], "free_form_answer": "", "highlighted_evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties.", "For data set A, the 10 subjects were asked to speak the first 30 sentences from the USC-TIMIT database BIBREF16 and their simultaneous speech and EEG signals were recorded. This data was recorded in presence of background noise of 40 dB (noise generated by room air conditioner fan). 
We then asked each subject to repeat the same experiment two more times, thus we had 30 speech EEG recording examples for each sentence.", "For data set B, the 8 subjects were asked to repeat the same previous experiment but this time we used background music played from our lab computer to generate a background noise of 65 dB. Here we had 24 speech EEG recording examples for each sentence.", "We used Brain Vision EEG recording hardware. Our EEG cap had 32 wet EEG electrodes including one electrode as ground as shown in Figure 1. We used EEGLab BIBREF17 to obtain the EEG sensor location mapping. It is based on standard 10-20 EEG sensor placement method for 32 electrodes." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties." ], "extractive_spans": [], "free_form_answer": "The two types of simultaneous speech EEG recording databases: A- five female and five male subjects took part in the experiment, and B- five male and three female subjects took part in the experiment.", "highlighted_evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties.", "For data set A, the 10 subjects were asked to speak the first 30 sentences from the USC-TIMIT database BIBREF16 and their simultaneous speech and EEG signals were recorded. This data was recorded in presence of background noise of 40 dB (noise generated by room air conditioner fan). We then asked each subject to repeat the same experiment two more times, thus we had 30 speech EEG recording examples for each sentence.", "For data set B, the 8 subjects were asked to repeat the same previous experiment but this time we used background music played from our lab computer to generate a background noise of 65 dB. Here we had 24 speech EEG recording examples for each sentence." ], "extractive_spans": [], "free_form_answer": "Speech EEG recording collected from male and female subjects under different background noises", "highlighted_evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. ", "For data set A, the 10 subjects were asked to speak the first 30 sentences from the USC-TIMIT database BIBREF16 and their simultaneous speech and EEG signals were recorded. 
This data was recorded in presence of background noise of 40 dB (noise generated by room air conditioner fan).", "For data set B, the 8 subjects were asked to repeat the same previous experiment but this time we used background music played from our lab computer to generate a background noise of 65 dB. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties." ], "extractive_spans": [ "For database A five female and five male subjects took part in the experiment.", "For database B five male and three female subjects took part in the experiment." ], "free_form_answer": "", "highlighted_evidence": [ "We built two types of simultaneous speech EEG recording databases for this work. For database A five female and five male subjects took part in the experiment. For database B five male and three female subjects took part in the experiment. Except two subjects, rest all were native English speakers for both the databases. All subjects were UT Austin undergraduate,graduate students in their early twenties." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "what eeg features were used?", "what were the baselines?", "what dataset was used?" ], "question_id": [ "a77d38427639d54461ae308f3045434f81e497d0", "010fd15696580d9924ac0275a4ff269005e5808d", "d36a6447bfe58204e0d29f9213d84be04d875624" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Fig. 1. EEG channel locations for the cap used in our experiments", "Fig. 2. Explained variance plot", "Fig. 5. Training loss convergence for CTC model using only EEG features for first 3 sentences from data set B", "TABLE I WER ON TEST SET FOR ATTENTION MODEL FOR DATA SET A", "Fig. 3. Training loss convergence for attention model using only EEG features for first 10 sentences from data set A", "TABLE II WER ON TEST SET FOR ATTENTION MODEL FOR DATA SET B", "TABLE III WER ON TEST SET FOR ATTENTION MODEL FOR DATA SET B USING EEG FEATURES FROM ONLY T7 AND T8 ELECTRODES", "Fig. 4. Visualization of attention weights for the first sentence", "TABLE IV CER ON TEST SET FOR CTC MODEL FOR DATA SET B", "TABLE V CER ON TEST SET FOR CTC MODEL FOR DATA SET A", "TABLE VI CER ON TEST SET FOR CTC MODEL FOR DATA SET B USING EEG FEATURES FROM ONLY T7 AND T8 ELECTRODES", "TABLE VII CER ON TEST SET FOR CTC MODEL FOR DATA SET A FOR MFCC-EEG FUSION FOR LARGER VOCABULARY SIZE", "TABLE VIII CER ON TEST SET FOR CTC MODEL FOR DATA SET B FOR MFCC-EEG FUSION FOR LARGER VOCABULARY SIZE" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure5-1.png", "4-TableI-1.png", "4-Figure3-1.png", "4-TableII-1.png", "4-TableIII-1.png", "4-Figure4-1.png", "5-TableIV-1.png", "5-TableV-1.png", "5-TableVI-1.png", "5-TableVII-1.png", "5-TableVIII-1.png" ] }
[ "what dataset was used?" ]
[ [ "1906.08871-Design of Experiments for building the database-2", "1906.08871-Design of Experiments for building the database-1", "1906.08871-Design of Experiments for building the database-0", "1906.08871-Design of Experiments for building the database-3" ] ]
[ "Speech EEG recording collected from male and female subjects under different background noises" ]
15
2004.04124
LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
BERT is a cutting-edge language representation model pre-trained by a large corpus, which achieves superior performances on various natural language understanding tasks. However, a major blocking issue of applying BERT to online services is that it is memory-intensive and leads to unsatisfactory latency of user requests, raising the necessity of model compression. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviors of BERT. However, the training procedure of knowledge distillation is expensive itself as it requires sufficient training data to imitate the teacher model. In this paper, we address this issue by proposing a hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while the training overheads can be reduced by an order of magnitude.
{ "paragraphs": [ [ "The pre-trained language model, BERT BIBREF0 has led to a big breakthrough in various kinds of natural language understanding tasks. Ideally, people can start from a pre-trained BERT checkpoint and fine-tune it on a specific downstream task. However, the original BERT models are memory-exhaustive and latency-prohibitive to be served in embedded devices or CPU-based online environments. As the memory and latency constraints vary in different scenarios, the pre-trained BERT model should be adaptive to different requirements with accuracy retained to the largest extent. Existing BERT-oriented model compression solutions largely depend on knowledge distillation BIBREF1, which is inefficient and resource-consuming because a large training corpus is required to learn the behaviors of a teacher. For example, DistilBERT BIBREF2 is re-trained on the same corpus as pre-training a vanilla BERT from scratch; and TinyBERT BIBREF3 utilizes expensive data augmentation to fit the distillation target. The costs of these model compression methods are as large as pre-training and unaffordable for low-resource settings. Therefore, it is straight-forward to ask, can we design a lightweight method to generate adaptive models with comparable accuracy using significantly less time and resource consumption? In this paper, we propose LadaBERT (Lightweight adaptation of BERT through hybrid model compression) to tackle the raised questions. Specifically, LadaBERT is based on an iterative hybrid model compression framework consisting of weighting pruning, matrix factorization and knowledge distillation. Initially, the architecture and weights of student model are inherited from the BERT teacher. In each iteration, the student model is first compressed by a small ratio based on weight pruning and matrix factorization, and is then fine-tuned under the guidance of teacher model through knowledge distillation. Because weight pruning and matrix factorization help to generate better initial and intermediate status", "in the knowledge distillation iterations, the accuracy and efficiency of model compression can be greatly improved.", "We conduct extensive experiments on five public datasets of natural language understanding. As an example, the performance comparison of LadaBERT and state-of-the-art models on MNLI-m dataset is illustrated in Figure FIGREF1. We can see that LadaBERT outperforms other BERT-oriented model compression baselines at various model compression ratios. Especially, LadaBERT-1 outperforms BERT-PKD significantly under $2.5\\times $ compression ratio, and LadaBERT-3 outperforms TinyBERT under $7.5\\times $ compression ratio while the training speed is accelerated by an order of magnitude.", "The rest of this paper is organized as follows. First, we summarizes the related works of model compression and their applications to BERT in Section SECREF2. Then, the methodology of LadaBERT is introduced in Section SECREF3, and experimental results are presented in Section SECREF4. At last, we conclude this work and discuss future works in Section SECREF5." ], [ "Deep Neural Networks (DNNs) have achieved great success in many areas in recent years, but the memory consumption and computational cost expand greatly with the growing complexity of models. Therefore, model compression has become an indispensable technique for practice, especially in low-resource settings. 
In this section, we review the current progresses of model compression techniques briefly, which can be divided into four categories, namely weight pruning, matrix factorization, weight quantization and knowledge distillation. We also present hybrid approaches and the applications of model compression to pre-trained BERT models." ], [ "Numerous researches have shown that removing a large portion of connections or neurons does not cause significant performance drop in deep neural network models BIBREF4, BIBREF5, BIBREF6, BIBREF7. For example, Han et al. BIBREF4 proposed a method to reduce the storage and computation of neural networks by removing unimportant connections, resulting in sparse networks without affecting the model accuracy. Li et al. BIBREF5 presented an acceleration method for convolution neural network by pruning whole filters together with their connecting filter maps. This approach does not generate sparse connectivity patterns and brings much larger acceleration ratio with existing BLAS libraries for dense matrix multiplications. Ye et al. BIBREF8 argued that small weights are in fact important for preserving the performance of a model, and Hu et al. BIBREF6 alleviated this problem by a data-driven approach that pruned zero-activation neurons iteratively based on intermediate feature maps. Zhu and Gupta BIBREF7 empirically compared large-sparse models with smaller dense models of similar parameter sizes and found that large sparse models performed better consistently. In addition, sparsity-induced models BIBREF9, BIBREF10, BIBREF11 can be regarded as similar methods as pruning. For example, Wen et al. BIBREF9 applied group lasso as a regularizer at training time, and Louizos et al. BIBREF10 learned sparse neural networks through $l_0$ regularization." ], [ "The goal of matrix factorization is to decompose a matrix into the product of two matrices in lower dimensions, and Singular Value Decomposition (SVD) is a popular way of matrix factorization that generalizes the eigendecomposition of a square normal matrix to a $m \\times n$ matrix. It has been proved that SVD is the best approximation of a matrix given the rank $r$ under Frobenius norm BIBREF12. Matrix factorization was widely studied in the deep learning domain for model compression and acceleration BIBREF13, BIBREF14, BIBREF15. Sainath et al BIBREF13 explored a low-rank matrix factorization method of DNN layers for acoustic modeling. Xu et al. BIBREF14, BIBREF15 applied singular value decomposition to deep neural network acoustic models and achieved comparable performances with state-of-the-art models through much fewer parameters. GroupReduce BIBREF16 focused on the compression of neural language models and applied low-rank matrix approximation to vocabulary-partition. Acharya et al. BIBREF17 compressed the word embedding layer via matrix factorization and achieved promising results in text classification. Winata et al. BIBREF18 carried out experiments for low-rank matrix factorization on different NLP tasks and demonstrated that it was more effective in general than weight pruning." ], [ "Weight quantization is a common technique for compressing deep neural networks, which aims to reduce the number of bits to represent every weight in the model. In a neural network, parameters are stacked into clusters, and the parameters in the same cluster share the same value. With weight quantization, the weights can be reduced to at most 1-bit binary value from 32-bits floating point numbers. Zhou et al. 
BIBREF19 showed that quantizing weights to 8-bits does not hurt the performance, and Binarized Neural Networks BIBREF20 contained binary weights and activations of only one bit. Incremental Network Quantization BIBREF21 converted a pre-trained full-precision neural network into low-precision counterpart through three interdependent operations: weight partition, groupwise quantization and re-training. Variational Network Quantization BIBREF22 formulated the problem of network quantization as a variational inference problem. Moreover, Choi et al. BIBREF23 investigated the drawbacks of conventional quantization methods based on k-means and proposed a Hessian-weighted k-means clustering algorithm as the solution." ], [ "Knowledge distillation is first proposed by BIBREF1, which trains a compact or smaller model to approximate the function learned by a large and complex model. A preliminary step of knowledge distillation is to train a deep network (the teacher model) that automatically generates soft labels for training instances. This “synthetic\" label is then used to train a smaller network (the student model), which assimilates the function that is learned by the teacher model. Chen et al. BIBREF24 successfully applied knowledge distillation to object detection tasks by introducing several modifications, including a weighted cross-entropy loss, a teacher bounded loss, and adaptation layers to model intermediate teacher distributions. Li et al. BIBREF25 developed a framework to learn from noisy labels, where the knowledge learned from a clean dataset and semantic knowledge graph were leveraged to correct the wrong labels. Anil et al. BIBREF26 proposed online distillation, a variant of knowledge distillation which enabled extra parallelism for training large-scale data. In addition, knowledge distillation is also useful for aggregating model ensembles into a single model by treating the ensemble model as a teacher." ], [ "To improve the performance of model compression, there are many attempts to conduct hybrid model compression method that combines more than one category of algorithms. Han et al. BIBREF27 combined quantization, hamming coding and weight pruning to conduct model compression on image classification tasks. Yu et al. BIBREF28 proposed a unified framework for low-rank and sparse decomposition of weight matrices with feature map reconstructions. Polino et al. BIBREF29 advocated a combination of distillation and quantization techniques and proposed two hybrid models, i.e., quantified distillation and differentiable quantization to address this problem. Li et al., BIBREF30 compressed DNN-based acoustic model through knowledge distillation and pruning. NNCF BIBREF31 provided a neural network compression framework that supported an integration of various model compression methods to generate more lightweight networks and achieved state-of-the-art performances in terms of a trade-off between accuracy and efficiency. In BIBREF32, an AutoML pipeline was adopted for model compression. It leveraged reinforcement learning to search for the best model compression strategy among multiple combinatorial configurations." ], [ "In the natural language processing community, there is a growing interest recently to study BERT-oriented model compression for shipping its performance gain into latency-critical or low-resource scenarios. Most existing works focus on knowledge distillation. 
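As background for the BERT-oriented methods surveyed next, the basic teacher-student objective they all build on can be sketched in a few lines. This is a hedged illustration following the standard temperature-scaled formulation of knowledge distillation; the temperature `T` and mixing weight `alpha` are conventional assumed values, not settings taken from any of the cited systems.

```python
# Minimal sketch of the vanilla teacher-student distillation objective:
# the student is trained on the teacher's softened output distribution,
# optionally mixed with the hard-label cross-entropy.
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # KL between softened distributions (input must be log-probabilities).
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy check on random logits for a batch of 8 examples and 3 classes.
s, t = torch.randn(8, 3), torch.randn(8, 3)
y = torch.randint(0, 3, (8,))
print(vanilla_kd_loss(s, t, y))
```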
For instance, BERT-PKD BIBREF33 is a patient knowledge distillation approach that compresses the original BERT model into a lightweight shallow network. Different from traditional knowledge distillation methods, BERT-PKD enables an exploitation of rich information in the teacher's hidden layers by utilizing a layer-wise distillation constraint. DistillBERT BIBREF2 pre-trains a smaller general-purpose language model on the same corpus as vanilla BERT. Distilled BiLSTM BIBREF34 adopts a single-layer BiLSTM as the student model and achieves comparable results with ELMo BIBREF35 through much fewer parameters and less inference time. TinyBERT BIBREF3 reports the best-ever performance on BERT model compression, which exploits a novel attention-based distillation schema that encourages the linguistic knowledge in teacher to be well transferred into the student model. It adopts a two-stage learning framework, including general distillation (pre-training from scratch via distillation loss) and task-specific distillation with data augmentation. Both procedures require huge resources and long training times (from several days to weeks), which is cumbersome for industrial applications. Therefore, we are aiming to explore more lightweight solutions in this paper." ], [ "The overall pipeline of LadaBERT (Lightweight Adaptation of BERT) is illustrated in Figure FIGREF8. As shown in the figure, the pre-trained BERT model (e.g., BERT-Base) is served as the teacher as well as the initial status of the student model. Then, the student model is compressed towards smaller parameter size through a hybrid model compression framework in an iterative manner until the target compression ratio is reached. Concretely, in each iteration, the parameter size of student model is first reduced by $1-\\Delta $ based on weight pruning and matrix factorization, and then the parameters are fine-tuned by the loss function of knowledge distillation. The motivation behind is that matrix factorization and weight pruning are complementary with each other. Matrix factorization calculates the optimal approximation under a certain rank, while weight pruning introduces additional sparsity to the decomposed matrices. Moreover, weight pruning and matrix factorization generates better initial and intermediate status of the student model, which improve the efficiency and effectiveness of knowledge distillation. In the following subsections, we will introduce the algorithms in detail." ], [ "We use Singular Value Decomposition (SVD) for matrix factorization. Each parameter matrix, including the embedding layer are compressed by SVD. Without loss generality, we assume a matrix of parameters ${W} \\in \\mathbb {R}^{m\\times n}$, the singular value decomposition of which can be written as:", "where ${U} \\in \\mathbb {R}^{m \\times p}$ and ${V} \\in \\mathbb {R}^{p \\times n}$. ${\\Sigma } =diag(\\sigma _1,\\sigma _2,\\ldots ,\\sigma _p)$ is a diagonal matrix composed of singular values and $p$ is the full rank of $W$ satisfying $p \\le min(m, n)$.", "To compress this weight matrix, we select a lower rank $r$. The diagonal matrix ${\\Sigma }$ is truncated by selecting the top $r$ singular values. 
i.e., ${\\Sigma }_r =diag(\\sigma _1, \\sigma _2,\\ldots ,\\sigma _r)$, while ${U}$ and ${V}$ are also truncated by selecting the top $r$ columns and rows respectively, resulting in ${U}_r \\in \\mathbb {R}^{m\\times r}$ and ${V}_r \\in \\mathbb {R}^{r\\times n}$.", "Thus, low-rank matrix approximation of ${W}$ can be formulated as:", "In this way, the original weight matrix $W$ is decomposed by the multiplication of two smaller matrices, where ${A}={U}_r\\sqrt{{\\Sigma }_r} \\in \\mathbb {R}^{n\\times r}$ and ${B}={V}_r\\sqrt{{\\Sigma }_r} \\in \\mathbb {R}^{m\\times r}$. These two matrices are initialized by SVD and will be further tuned during training.", "Given a rank $r \\le min(m, n)$, the compression ratio of matrix factorization is defined as:", "Therefore, for a target model compression ratio $P_{svd}$, the desired rank $r$ can be calculated by:" ], [ "Weight pruning BIBREF4 is an unstructured compression method that induces desirable sparsity for a neural network model. For a neural network $f({x; \\theta })$ with parameters $\\theta $, weight pruning finds a binary mask ${M} \\in \\lbrace 0, 1\\rbrace ^{|\\theta |}$ subject to a given sparsity ratio, $P_{weight}$. The neural network after pruning will be $f({x; M \\cdot \\theta })$, where the non-zero parameter size is $||{M}||_1 = P_{weight}\\cdot |\\theta |$, where $|\\theta |$ is the number of parameters in $\\theta $. For example, when $P_m = 0.3$, there are 70% zeros and 30% ones in the mask ${m}$. We adopt a simple pruning strategy in our implementation: the binary mask is generated by setting the smallest weights to zeros BIBREF36.", "To combine the benefits of weight pruning with matrix factorization, we leverage a hybrid approach that applies weight pruning on the basis of decomposed matrices generated by SVD. Following Equation (DISPLAY_FORM12), SVD-based matrix factorization for any weight matrix ${W}$ can be written as: ${W}_{svd}={A}_{m\\times r}{B}_{n\\times r}^T$. Then, weight pruning is applied on the decomposed matrices ${A} \\in \\mathbb {R}^{m \\times r}$ and ${B} \\in \\mathbb {R}^{n \\times r}$ separately. The weight matrix after hybrid compression is denoted by:", "where ${M_A}$ and ${M_B}$ are binary masks derived by the weight pruning algorithm with compression ratio $P_{weight}$. The compression ratio of this hybrid approach can be calculated by:", "In LadaBERT, the hybrid compression produce is applied to each layer of the pre-trained BERT model. Given an overall model compression target $P$, the following constraint should be satisfied:", "where $|\\theta |$ is the total number of model parameters and $P$ is the target compression ratio; $|\\theta _{embd}|$ denotes the parameter number of embedding layer, which has a relative compression ratio of $P_embd$, and $|\\theta _{encd}|$ denotes the number of parameters of all layers in BERT encoder, which have a compression ratio of $P_{hybrid}$. The classification layer (often MLP layer with Softmax activation) has a small parameter size ($|\\theta _{cls}|$), so it is not modified in the model compression procedure. In the experiments, these fine-grained compression ratios can be optimized by random search on the validation data." ], [ "Knowledge distillation (KD) has been widely used to transfer knowledge from a large teacher model to a smaller student model. In other words, the student model mimics the behavior of the teacher model by minimize the knowledge distillation loss functions. 
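Before detailing the distillation losses, the hybrid factorization-plus-pruning step from the two preceding subsections can be made concrete with a short sketch. It is a minimal NumPy illustration rather than the authors' code: the matrix size, the rank `r` and the keep ratio `p_weight` below are assumed values chosen only to show the mechanics and the resulting compression ratio.

```python
# One hybrid compression step: truncated SVD of a weight matrix followed by
# magnitude pruning of the two factors, plus the layer compression ratio.
import numpy as np

def svd_truncate(W, r):
    """Rank-r approximation W ~ A @ B.T with A = U_r sqrt(S_r), B = V_r sqrt(S_r)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * np.sqrt(S[:r])        # (m, r)
    B = Vt[:r, :].T * np.sqrt(S[:r])     # (n, r)
    return A, B

def magnitude_prune(M, keep_ratio):
    """Zero out the smallest-magnitude entries, keeping roughly `keep_ratio` of them."""
    k = int(round(keep_ratio * M.size))
    if k == 0:
        return np.zeros_like(M)
    threshold = np.sort(np.abs(M), axis=None)[-k]
    mask = (np.abs(M) >= threshold).astype(M.dtype)
    return M * mask

m, n, r, p_weight = 768, 3072, 128, 0.5          # illustrative sizes only
W = np.random.randn(m, n).astype(np.float32)

A, B = svd_truncate(W, r)
A_pruned, B_pruned = magnitude_prune(A, p_weight), magnitude_prune(B, p_weight)
W_hybrid = A_pruned @ B_pruned.T                 # compressed approximation of W

# Overall compression ratio of this layer (P_hybrid in the text).
p_hybrid = p_weight * r * (m + n) / (m * n)
print(f"kept {p_hybrid:.1%} of the original parameters")
```

In LadaBERT this step is applied to each layer in every iteration with only a small per-iteration reduction, and the resulting factors are then fine-tuned under the distillation losses described next.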
Various types of knowledge distillation can be employed at different sub-layers. Generally, all types of knowledge distillation can be modeled as minimizing the following loss function:", "Where $x$ indicates a sample input and $\\mathcal {X}$ is the training dataset. $f^{(s)}({x})$ and $f^{(t)}({x})$ represent intermediate outputs or weight matrices for the student model and teacher model correspondingly. $L(\\cdot )$ represents for a loss function which can be carefully defined for different types of knowledge distillation. We follow the recent technique proposed by TinyBERT BIBREF3, which applies knowledge distillation constraints upon embedding, self-attention, hidden representation and prediction levels. Concretely, there are four types of knowledge distillation constraints as follows:", "Embedding-layer distillation is performed upon the embedding layer. $f({x}) \\in \\mathbb {R}^{n \\times d}$ represents for the word embedding output for input $x$, where $n$ is the input word length and $d$ is the dimension of word embedding. Mean Squared Error (MSE) is adopted as the loss function $L(\\cdot )$.", "Attention-layer distillation is performed upon the self-attention sub-layer. $f({x}) = \\lbrace a_{ij}\\rbrace \\in \\mathbb {R}^{n \\times n}$ represents the attention output for each self-attention sub-layer, and $L(\\cdot )$ denotes MSE loss function.", "Hidden-layer Distillation is performed at each fully-connected sub-layer in the Transformer architectures. $f({x})$ denotes the output representation of the corresponding sub-layer, and $L(\\cdot )$ also adopts MSE loss function.", "Prediction-layer distillation makes the student model to learns the predictions from a teacher model directly. It is identical to the vanilla form of knowledge distillation BIBREF1. It takes the soft cross-entropy loss function, which is formulated as:", "where $f^t({x})$ and $f^s({x})$ are the predictive logits of teacher and student models respectively." ], [ "We compare LadaBERT with state-of-the-art model compression approaches on five public datasets of different tasks of natural language understanding, including sentiment classification (SST-2), natural language inference (MNLI-m, MNLI-mm, QNLI) and pairwise semantic equivalence (QQP). The statistics of these datasets are described in Table TABREF27.", "The baseline approaches are summarized below.", "Weight pruning and matrix factorization are two simple baselines described in Section SECREF2. We evaluate both pruning methods in an iterative manner until the target compression ratio is reached.", "Hybrid pruning is a combination of matrix factorization and weight pruning, which conducts iterative weight pruning on the basis of SVD-based matrix factorization. It is performed iteratively until the desired compression ratio is achieved.", "BERT-FT, BERT-KD and BERT-PKD are reported in BIBREF33, where BERT-FT directly fine-tunes the model via supervision labels, BERT-KD is the vanilla knowledge distillation algorithm BIBREF1, and BERT-PKD stands for Patient Knowledge Distillation proposed in BIBREF33. The student model is composed of 3 Transformer layers, resulting in a $2.5\\times $ compression ratio. Each layer has the same hidden size as the pre-trained teacher, so the initial parameters of student model can be inherited from the corresponding teacher.", "TinyBERT BIBREF3 instantiates a tiny student model, which has totally 14.5M parameters ($7.5\\times $ compression ratio) composed of 4 layers, 312 hidden units, 1200 intermediate size and 12 heads. 
For a fair comparison, we reproduce the TinyBERT pipeline without general distillation and data augmentation, which is time-exhaustive and resource-consuming.", "BERT-SMALL has the same model architecture as TinyBERT, but is directly pre-trained by the official BERT pipeline. The performance values are inherited from BIBREF3 for reference.", "Distilled-BiLSTM BIBREF34 leverages a single-layer bidirectional-LSTM as the student model, where the hidden units and intermediate size are set to be 300 and 400 respectively, resulting in a $10.8 \\times $ compression ratio. This model requires a expensive pre-training process using the knowledge distillation constraints." ], [ "We leverage the pre-trained checkpoint of base-bert-uncased as the initial model for compression, which contains 12 layers, 12 heads, 110M parameters, and 768 hidden units per layer. Hyper-parameter selection is conducted on the validation data for each dataset. After training, the prediction results are submitted to the GLUE-benchmark evaluation platform to get the evaluation performance on test data.", "For a comprehensive evaluation, we experiment with four settings of LadaBERT, namely LadaBERT-1, -2, -3 and -4, which reduce the model parameters of BERT-Base by 2.5, 5, 7.5 and 10 times respectively. In our experiment, we take the batch size as 32, learning rate as 2e-5. The optimizer is BertAdam with default setting. Fine-grained compression ratios are optimized by random search and shown in Table TABREF38." ], [ "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.", "With model size of $2.5\\times $ reduction, LadaBERT-1 performs significantly better than BERT-PKD, boosting the performance by relative 8.9, 8.1, 6.1, 3.8 and 5.8 percentages on MNLI-m, MNLI-mm, SST-2, QQP and QNLI datasets respectively. Recall that BERT-PKD initializes the student model by selecting 3 of 12 layers in the pre-trained BERT-Base model. It turns out that the discarded layers have huge impact on the model performance, which is hard to be recovered by knowledge distillation. On the other hand, LadaBERT generates the student model by iterative pruning on the pre-trained teacher. In this way, the original knowledge in the teacher model can be preserved to the largest extent, and the benefit of which is complementary to knowledge distillation.", "LadaBERT-3 has a comparable size as TinyBERT with a $7.5 \\times $ compression ratio. As shown in the results, TinyBERT does not work well without expensive data augmentation and general distillation, hindering its application to low-resource settings. The reason is that the student model of TinyBERT is distilled from scratch, so it requires much more data to mimic the teacher's behaviors. Instead, LadaBERT has better initial and intermediate status calculated by hybrid model compression, which is much more light-weighted and achieves competitive performances with much faster learning speed (learning curve comparison is shown in Section SECREF41). 
Moreover, LadaBERT-3 also outperforms BERT-SMALL on most of the datasets, which is pre-trained from scratch by the official BERT pipeline on a $7.5 \\times $ smaller architecture. This indicates that LadaBERT can quickly adapt to a smaller model size and achieve competitive performance without expansive re-training on a large corpus.", "Moreover, Distilled-BiLSTM performs well on SST-2 dataset with more than $10 \\times $ compression ratio, perhaps owing to its advantage of generalization on small datasets. Nevertheless, the performance of LadaBERT-4 is competitive on larger datasets such as MNLI and QQP. This is impressive as LadaBERT is much more efficient without exhaustive re-training on a large corpus. In addition, the inference speed of BiLSTM is usually slower than transformer-based models with similar parameter sizes." ], [ "To further demonstrate the efficiency of LadaBERT, we visualize the learning curves on MNLI-m and QQP datasets in Figure FIGREF42 and FIGREF42, where LadaBERT-3 is compared to the strongest baseline, TinyBERT, under $7.5 \\times $ compression ratio. As shown in the figures, LadaBERT-3 achieves good performances much faster and results in a better convergence point. After training $2 \\times 10^4$ steps (batches) on MNLI-m dataset, the performance of LadaBERT-3 is already comparable to TinyBERT after convergence (approximately $2 \\times 10^5$ steps), achieving nearly $10 \\times $ acceleration. And on QQP dataset, both performance improvement and training speed acceleration is very significant. This clearly shows the superiority of combining matrix factorization, weight pruning and knowledge distillation in a reinforce manner. Instead, TinyBERT is based on pure knowledge distillation, so the learning speed is much slower." ], [ "In this paper, we demonstrate that a combination of matrix factorization and weight pruning is better than single solutions for BERT-oriented model compression. Similar phenomena has been reported in the computer vision scenarios BIBREF28, which shows that low-rank and sparsity are complementary to each other. Here we provide another explanation to support this observation.", "In Figure FIGREF44, we visualize the distribution of errors for a weight matrix in the neural network after pruning to 20% of its original parameter size. The errors can be calculated by $\\mathop {Error}=||\\hat{{M}}-{M}||_1$, where $\\hat{{M}}$ denotes the weight matrix after pruning.", "The yellow line in Figure FIGREF44 shows the distribution of errors generated by pure weight pruning, which has a sudden drop at the pruning threshold. The orange line represents for pure SVD pruning, which turns out to be smoother and aligned with Gaussian distribution. The blue line shows the result of hybrid pruning, which conducts weight pruning on the decomposed matrices. First, we apply SVD-based matrix factorization to reduce 60% of total parameters. Then, weight pruning is applied on the decomposed matrices by 50%, resulting in only 20% parameters while the error distribution changes slightly. As a result, it has smaller mean and deviation than pure matrix factorization. In addition, a smoother distribution is more appropriate for the knowledge distillation procedure to fine-tune the weights, so it is advantageous than pure weight pruning." ], [ "Model compression is a common way to deal with latency-critical or memory-intensive scenarios. 
Existing model compression methods for BERT need to be re-trained on a large corpus to preserve their original performance, which is inapplicable in low-resource settings. In this paper, we propose LadaBERT to address this problem. LadaBERT is a lightweight model compression pipeline that generates adaptive BERT models efficiently for a given task and resource constraint. It is based on a hybrid solution, which conducts matrix factorization, weight pruning and knowledge distillation in a mutually reinforcing manner. The experimental results verify that LadaBERT is able to achieve comparable performance with other state-of-the-art solutions using much less training data and a smaller time budget. Therefore, LadaBERT can be easily plugged into various applications with competitive performance and little training overhead. In the future, we would like to apply LadaBERT to large-scale industrial applications, such as search relevance and query recommendation." ] ], "section_name": [ "Introduction", "Related Work", "Related Work ::: Weight pruning", "Related Work ::: Matrix factorization", "Related Work ::: Weight quantization", "Related Work ::: Knowledge distillation", "Related Work ::: Hybrid approach", "Related Work ::: BERT model compression", "Lightweight Adaptation of BERT ::: Overview", "Lightweight Adaptation of BERT ::: Overview ::: Matrix factorization", "Lightweight Adaptation of BERT ::: Overview ::: Weight pruning", "Lightweight Adaptation of BERT ::: Knowledge distillation", "Experiments ::: Datasets & Baselines", "Experiments ::: Setup", "Experiments ::: Performance Comparison", "Experiments ::: Learning curve comparison", "Experiments ::: Effect of low-rank + sparsity", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "1ed69482f2580b4a13c44eb5c98cfe4b5435e342", "9c425665ce12ca6b7407b02b94256a3a92425c0f", "cfdcbda95f767cff0bcb4311f7e30044102e07a1" ], "answer": [ { "evidence": [ "The overall pipeline of LadaBERT (Lightweight Adaptation of BERT) is illustrated in Figure FIGREF8. As shown in the figure, the pre-trained BERT model (e.g., BERT-Base) is served as the teacher as well as the initial status of the student model. Then, the student model is compressed towards smaller parameter size through a hybrid model compression framework in an iterative manner until the target compression ratio is reached. Concretely, in each iteration, the parameter size of student model is first reduced by $1-\\Delta $ based on weight pruning and matrix factorization, and then the parameters are fine-tuned by the loss function of knowledge distillation. The motivation behind is that matrix factorization and weight pruning are complementary with each other. Matrix factorization calculates the optimal approximation under a certain rank, while weight pruning introduces additional sparsity to the decomposed matrices. Moreover, weight pruning and matrix factorization generates better initial and intermediate status of the student model, which improve the efficiency and effectiveness of knowledge distillation. In the following subsections, we will introduce the algorithms in detail.", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The overall pipeline of LadaBERT (Lightweight Adaptation of BERT) is illustrated in Figure FIGREF8. As shown in the figure, the pre-trained BERT model (e.g., BERT-Base) is served as the teacher as well as the initial status of the student model.", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. ", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "unanswerable": false, "yes_no": false }, { "evidence": [ "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "unanswerable": false, "yes_no": false }, { "evidence": [ "The overall pipeline of LadaBERT (Lightweight Adaptation of BERT) is illustrated in Figure FIGREF8. As shown in the figure, the pre-trained BERT model (e.g., BERT-Base) is served as the teacher as well as the initial status of the student model. Then, the student model is compressed towards smaller parameter size through a hybrid model compression framework in an iterative manner until the target compression ratio is reached. Concretely, in each iteration, the parameter size of student model is first reduced by $1-\\Delta $ based on weight pruning and matrix factorization, and then the parameters are fine-tuned by the loss function of knowledge distillation. 
The motivation behind is that matrix factorization and weight pruning are complementary with each other. Matrix factorization calculates the optimal approximation under a certain rank, while weight pruning introduces additional sparsity to the decomposed matrices. Moreover, weight pruning and matrix factorization generates better initial and intermediate status of the student model, which improve the efficiency and effectiveness of knowledge distillation. In the following subsections, we will introduce the algorithms in detail.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The overall pipeline of LadaBERT (Lightweight Adaptation of BERT) is illustrated in Figure FIGREF8. As shown in the figure, the pre-trained BERT model (e.g., BERT-Base) is served as the teacher as well as the initial status of the student model.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "50f4b45ad6cabdb44701a8b6ff071ef50015f033", "74c775a4d816d4f52211f597f85aabc1eb362f9a", "85f21405d1de69ac14fdc9d5599b65af055a85d5", "f230f73a21d24ea1d130048f251b147592b8580b" ], "answer": [ { "evidence": [ "In this paper, we demonstrate that a combination of matrix factorization and weight pruning is better than single solutions for BERT-oriented model compression. Similar phenomena has been reported in the computer vision scenarios BIBREF28, which shows that low-rank and sparsity are complementary to each other. Here we provide another explanation to support this observation." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In this paper, we demonstrate that a combination of matrix factorization and weight pruning is better than single solutions for BERT-oriented model compression. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.", "With model size of $2.5\\times $ reduction, LadaBERT-1 performs significantly better than BERT-PKD, boosting the performance by relative 8.9, 8.1, 6.1, 3.8 and 5.8 percentages on MNLI-m, MNLI-mm, SST-2, QQP and QNLI datasets respectively. Recall that BERT-PKD initializes the student model by selecting 3 of 12 layers in the pre-trained BERT-Base model. 
It turns out that the discarded layers have huge impact on the model performance, which is hard to be recovered by knowledge distillation. On the other hand, LadaBERT generates the student model by iterative pruning on the pre-trained teacher. In this way, the original knowledge in the teacher model can be preserved to the largest extent, and the benefit of which is complementary to knowledge distillation." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.\n\nWith model size of $2.5\\times $ reduction, LadaBERT-1 performs significantly better than BERT-PKD, boosting the performance by relative 8.9, 8.1, 6.1, 3.8 and 5.8 percentages on MNLI-m, MNLI-mm, SST-2, QQP and QNLI datasets respectively. Recall that BERT-PKD initializes the student model by selecting 3 of 12 layers in the pre-trained BERT-Base model. It turns out that the discarded layers have huge impact on the model performance, which is hard to be recovered by knowledge distillation. On the other hand, LadaBERT generates the student model by iterative pruning on the pre-trained teacher. In this way, the original knowledge in the teacher model can be preserved to the largest extent, and the benefit of which is complementary to knowledge distillation." ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "FLOAT SELECTED: Figure 5: Distribution of pruning errors" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Figure 5: Distribution of pruning errors" ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "1d891bd4b2e591232e1db56ca2639db87faa9d04", "c73248043007e82d7a36b64d43e0884e2634181c", "d56e9d37ec283781db94cd3429f8a30832fbe3b7", "e9613103f865a219efeaa5e2e1c4c1f1217c18de" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "extractive_spans": [], "free_form_answer": "MNLI-m, MNLI-mm, SST-2, QQP, QNLI", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare LadaBERT with state-of-the-art model compression approaches on five public datasets of different tasks of natural language understanding, including sentiment classification (SST-2), natural language inference (MNLI-m, MNLI-mm, QNLI) and pairwise semantic equivalence (QQP). The statistics of these datasets are described in Table TABREF27.", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. 
In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "extractive_spans": [], "free_form_answer": "LadaBERT -1, -2 achieves state of art on all datasets namely, MNLI-m MNLI-mm, SST-2, QQP, and QNLI. \nLadaBERT-3 achieves SOTA on the first four dataset. \nLadaBERT-4 achieves SOTA on MNLI-m, MNLI-mm, and QNLI ", "highlighted_evidence": [ "We compare LadaBERT with state-of-the-art model compression approaches on five public datasets of different tasks of natural language understanding, including sentiment classification (SST-2), natural language inference (MNLI-m, MNLI-mm, QNLI) and pairwise semantic equivalence (QQP). The statistics of these datasets are described in Table TABREF27.", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare LadaBERT with state-of-the-art model compression approaches on five public datasets of different tasks of natural language understanding, including sentiment classification (SST-2), natural language inference (MNLI-m, MNLI-mm, QNLI) and pairwise semantic equivalence (QQP). The statistics of these datasets are described in Table TABREF27.", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT consistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of hybrid combination of SVD-based matrix factorization, weight pruning and knowledge distillation.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "extractive_spans": [ "SST-2", "MNLI-m", "MNLI-mm", "QNLI", "QQP" ], "free_form_answer": "", "highlighted_evidence": [ "We compare LadaBERT with state-of-the-art model compression approaches on five public datasets of different tasks of natural language understanding, including sentiment classification (SST-2), natural language inference (MNLI-m, MNLI-mm, QNLI) and pairwise semantic equivalence (QQP). ", "The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table TABREF40, where the models are ranked by parameter sizes for feasible comparison. ", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "With model size of $2.5\\times $ reduction, LadaBERT-1 performs significantly better than BERT-PKD, boosting the performance by relative 8.9, 8.1, 6.1, 3.8 and 5.8 percentages on MNLI-m, MNLI-mm, SST-2, QQP and QNLI datasets respectively. Recall that BERT-PKD initializes the student model by selecting 3 of 12 layers in the pre-trained BERT-Base model. 
It turns out that the discarded layers have huge impact on the model performance, which is hard to be recovered by knowledge distillation. On the other hand, LadaBERT generates the student model by iterative pruning on the pre-trained teacher. In this way, the original knowledge in the teacher model can be preserved to the largest extent, and the benefit of which is complementary to knowledge distillation.", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "extractive_spans": [], "free_form_answer": "LadaBERT-1 and LadaBERT-2 on MNLI-m, MNLI-mm, SST-2, QQP and QNLI .\nLadaBERT-3 on MNLI-m, MNLI-mm, SST-2, and QQP . LadaBERT-4 on MNLI-m, MNLI-mm and QNLI .", "highlighted_evidence": [ "With model size of $2.5\\times $ reduction, LadaBERT-1 performs significantly better than BERT-PKD, boosting the performance by relative 8.9, 8.1, 6.1, 3.8 and 5.8 percentages on MNLI-m, MNLI-mm, SST-2, QQP and QNLI datasets respectively. ", "FLOAT SELECTED: Table 3: Performance comparison on various model sizes" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "no", "no", "no" ], "question": [ "Does LadaBERT ever outperform its knowledge destilation teacher in terms of accuracy on some problems?", "Do they evaluate which compression method yields the most gains?", "On which datasets does LadaBERT achieve state-of-the-art?" ], "question_id": [ "5ed02ae6c534cd49d405489990f0e4ba0330ff1b", "f6346828c2f44529dc307abf04dd246bfeb4a9b2", "935873b97872820b7b6100d6a785fba286b94900" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Accuracy comparison on MNLI-m dataset", "Figure 2: Overview of LadaBERT framework", "Table 1: Dataset Statistics", "Table 2: Fine-grained compression ratios", "Table 3: Performance comparison on various model sizes", "Figure 3: Learning curve on MNLI-m dataset. Figure 4: Learning curve on QQP dataset.", "Figure 5: Distribution of pruning errors" ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Figure3-1.png", "9-Figure5-1.png" ] }
[ "On which datasets does LadaBERT achieve state-of-the-art?" ]
[ [ "2004.04124-Experiments ::: Performance Comparison-0", "2004.04124-7-Table3-1.png", "2004.04124-Experiments ::: Performance Comparison-1", "2004.04124-Experiments ::: Datasets & Baselines-0" ] ]
[ "LadaBERT-1 and LadaBERT-2 on MNLI-m, MNLI-mm, SST-2, QQP and QNLI .\nLadaBERT-3 on MNLI-m, MNLI-mm, SST-2, and QQP . LadaBERT-4 on MNLI-m, MNLI-mm and QNLI ." ]
16
1603.07252
Neural Summarization by Extracting Sentences and Words
Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation.
{ "paragraphs": [ [ "The need to access and digest large amounts of textual data has provided strong impetus to develop automatic summarization systems aiming to create shorter versions of one or more documents, whilst preserving their information content. Much effort in automatic summarization has been devoted to sentence extraction, where a summary is created by identifying and subsequently concatenating the most salient text units in a document.", "Most extractive methods to date identify sentences based on human-engineered features. These include surface features such as sentence position and length BIBREF0 , the words in the title, the presence of proper nouns, content features such as word frequency BIBREF1 , and event features such as action nouns BIBREF2 . Sentences are typically assigned a score indicating the strength of presence of these features. Several methods have been used in order to select the summary sentences ranging from binary classifiers BIBREF3 , to hidden Markov models BIBREF4 , graph-based algorithms BIBREF5 , BIBREF6 , and integer linear programming BIBREF7 .", "In this work we propose a data-driven approach to summarization based on neural networks and continuous sentence features. There has been a surge of interest recently in repurposing sequence transduction neural network architectures for NLP tasks such as machine translation BIBREF8 , question answering BIBREF9 , and sentence compression BIBREF10 . Central to these approaches is an encoder-decoder architecture modeled by recurrent neural networks. The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence. An attention mechanism BIBREF11 is often used to locate the region of focus during decoding.", "We develop a general framework for single-document summarization which can be used to extract sentences or words. Our model includes a neural network-based hierarchical document reader or encoder and an attention-based content extractor. The role of the reader is to derive the meaning representation of a document based on its sentences and their constituent words. Our models adopt a variant of neural attention to extract sentences or words. Contrary to previous work where attention is an intermediate step used to blend hidden units of an encoder to a vector propagating additional information to the decoder, our model applies attention directly to select sentences or words of the input document as the output summary. Similar neural attention architectures have been previously used for geometry reasoning BIBREF12 , under the name Pointer Networks.", "One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. 
Previous approaches have used small scale training data in the range of a few hundred examples.", "Our work touches on several strands of research within summarization and neural sequence modeling. The idea of creating a summary by extracting words from the source document was pioneered in bankoetal00 who view summarization as a problem analogous to statistical machine translation and generate headlines using statistical models for selecting and ordering the summary words. Our word-based model is similar in spirit, however, it operates over continuous representations, produces multi-sentence output, and jointly selects summary words and organizes them into sentences. A few recent studies BIBREF14 , BIBREF15 perform sentence extraction based on pre-trained sentence embeddings following an unsupervised optimization paradigm. Our work also uses continuous representations to express the meaning of sentences and documents, but importantly employs neural networks more directly to perform the actual summarization task.", "rush2015neural propose a neural attention model for abstractive sentence compression which is trained on pairs of headlines and first sentences in an article. In contrast, our model summarizes documents rather than individual sentences, producing multi-sentential discourse. A major architectural difference is that our decoder selects output symbols from the document of interest rather than the entire vocabulary. This effectively helps us sidestep the difficulty of searching for the next output symbol under a large vocabulary, with low-frequency words and named entities whose representations can be challenging to learn. Gu:ea:16 and gulcehre2016pointing propose a similar “copy” mechanism in sentence compression and other tasks; their model can accommodate both generation and extraction by selecting which sub-sequences in the input sequence to copy in the output.", "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." ], [ "In this section we formally define the summarization tasks considered in this paper. Given a document $D$ consisting of a sequence of sentences $\\lbrace s_1, \\cdots , s_m\\rbrace $ and a word set $\\lbrace w_1, \\cdots , w_n\\rbrace $ , we are interested in obtaining summaries at two levels of granularity, namely sentences and words.", "Sentence extraction aims to create a summary from $D$ by selecting a subset of $j$ sentences (where $j<m$ ). We do this by scoring each sentence within $D$ and predicting a label $y_L \\in {\\lbrace 0,1\\rbrace }$ indicating whether the sentence should be included in the summary. As we apply supervised training, the objective is to maximize the likelihood of all sentence labels $\\mathbf {y}_L=(y_L^1, \\cdots , y_L^m)$ given the input document $D$ and model parameters $\\theta $ : ", "$$\\log p(\\mathbf {y}_L |D; \\theta ) = \\sum \\limits _{i=1}^{m} \\log p(y_L^i |D; \\theta )$$ (Eq. 5) ", "Although extractive methods yield naturally grammatical summaries and require relatively little linguistic analysis, the selected sentences make for long summaries containing much redundant information. 
For this reason, we also develop a model based on word extraction which seeks to find a subset of words in $D$ and their optimal ordering so as to form a summary $\\mathbf {y}_s = (w^{\\prime }_1, \\cdots , w^{\\prime }_k), w^{\\prime }_i \\in D$ . Compared to sentence extraction which is a sequence labeling problem, this task occupies the middle ground between full abstractive summarization which can exhibit a wide range of rewrite operations and extractive summarization which exhibits none. We formulate word extraction as a language generation task with an output vocabulary restricted to the original document. In our supervised setting, the training goal is to maximize the likelihood of the generated sentences, which can be further decomposed by enforcing conditional dependencies among their constituent words: ", "$$\\hspace*{-5.69046pt}\\log p(\\mathbf {y}_s |D;\n\\theta )\\hspace*{-2.84544pt}=\\hspace*{-2.84544pt}\\sum \\limits _{i=1}^{k}\\hspace*{-2.84544pt}\\log p(w^{\\prime }_i | D, w^{\\prime }_1,\\hspace*{-2.84544pt}\\cdots \\hspace*{-2.84544pt}, w^{\\prime }_{i-1}; \\theta )$$ (Eq. 7) ", "In the following section, we discuss the data elicitation methods which allow us to train neural networks based on the above defined objectives." ], [ "Data-driven neural summarization models require a large training corpus of documents with labels indicating which sentences (or words) should be in the summary. Until now such corpora have been limited to hundreds of examples (e.g., the DUC 2002 single document summarization corpus) and thus used mostly for testing BIBREF7 . To overcome the paucity of annotated data for training, we adopt a methodology similar to hermann2015teaching and create two large-scale datasets, one for sentence extraction and another one for word extraction.", "In a nutshell, we retrieved hundreds of thousands of news articles and their corresponding highlights from DailyMail (see Figure 1 for an example). The highlights (created by news editors) are genuinely abstractive summaries and therefore not readily suited to supervised training. To create the training data for sentence extraction, we reverse approximated the gold standard label of each document sentence given the summary based on their semantic correspondence BIBREF7 . Specifically, we designed a rule-based system that determines whether a document sentence matches a highlight and should be labeled with 1 (must be in the summary), and 0 otherwise. The rules take into account the position of the sentence in the document, the unigram and bigram overlap between document sentences and highlights, the number of entities appearing in the highlight and in the document sentence. We adjusted the weights of the rules on 9,000 documents with manual sentence labels created by woodsend2010automatic. The method obtained an accuracy of 85% when evaluated on a held-out set of 216 documents coming from the same dataset and was subsequently used to label 200K documents. Approximately 30% of the sentences in each document were deemed summary-worthy.", "For the creation of the word extraction dataset, we examine the lexical overlap between the highlights and the news article. In cases where all highlight words (after stemming) come from the original document, the document-highlight pair constitutes a valid training example and is added to the word extraction dataset. For out-of-vocabulary (OOV) words, we try to find a semantically equivalent replacement present in the news article. 
Specifically, we check if a neighbor, represented by pre-trained embeddings, is in the original document and therefore constitutes a valid substitution. If we cannot find any substitutes, we discard the document-highlight pair. Following this procedure, we obtained a word extraction dataset containing 170K articles, again from the DailyMail." ], [ "The key components of our summarization model include a neural network-based hierarchical document reader and an attention-based hierarchical content extractor. The hierarchical nature of our model reflects the intuition that documents are generated compositionally from words, sentences, paragraphs, or even larger units. We therefore employ a representation framework which reflects the same architecture, with global information being discovered and local information being preserved. Such a representation yields minimum information loss and is flexible allowing us to apply neural attention for selecting salient sentences and words within a larger context. In the following, we first describe the document reader, and then present the details of our sentence and word extractors." ], [ "The role of the reader is to derive the meaning representation of the document from its constituent sentences, each of which is treated as a sequence of words. We first obtain representation vectors at the sentence level using a single-layer convolutional neural network (CNN) with a max-over-time pooling operation BIBREF16 , BIBREF17 , BIBREF18 . Next, we build representations for documents using a standard recurrent neural network (RNN) that recursively composes sentences. The CNN operates at the word level, leading to the acquisition of sentence-level representations that are then used as inputs to the RNN that acquires document-level representations, in a hierarchical fashion. We describe these two sub-components of the text reader below.", "We opted for a convolutional neural network model for representing sentences for two reasons. Firstly, single-layer CNNs can be trained effectively (without any long-term dependencies in the model) and secondly, they have been successfully used for sentence-level classification tasks such as sentiment analysis BIBREF19 . Let $d$ denote the dimension of word embeddings, and $s$ a document sentence consisting of a sequence of $n$ words $(w_1, \\cdots , w_n)$ which can be represented by a dense column matrix $\\mathbf {W} \\in \\mathbb {R}^{n \\times d}$ . We apply a temporal narrow convolution between $\\mathbf {W}$ and a kernel $\\mathbf {K} \\in \\mathbb {R}^{c \\times d}$ of width $c$ as follows: ", "$$\\mathbf {f}^{i}_{j} = \\tanh (\\mathbf {W}_{j : j+c-1} \\otimes \\mathbf {K} + b)$$ (Eq. 12) ", "where $\\otimes $ equates to the Hadamard Product followed by a sum over all elements. $\\mathbf {f}^i_j $ denotes the $j$ -th element of the $i$ -th feature map $\\mathbf {f}^i$ and $b$ is the bias. We perform max pooling over time to obtain a single feature (the $i$ th feature) representing the sentence under the kernel $\\mathbf {K}$ with width $c$ : ", "$$\\mathbf {s}_{i, \\mathbf {K}}= \\max _j \\mathbf {f}_j^i$$ (Eq. 13) ", "In practice, we use multiple feature maps to compute a list of features that match the dimensionality of a sentence under each kernel width. In addition, we apply multiple kernels with different widths to obtain a set of different sentence vectors. Finally, we sum these sentence vectors to obtain the final sentence representation. The CNN model is schematically illustrated in Figure 2 (bottom). 
In the example, the sentence embeddings have six dimensions, so six feature maps are used under each kernel width. The blue feature maps have width two and the red feature maps have width three. The sentence embeddings obtained under each kernel width are summed to get the final sentence representation (denoted by green).", "At the document level, a recurrent neural network composes a sequence of sentence vectors into a document vector. Note that this is a somewhat simplistic attempt at capturing document organization at the level of sentence to sentence transitions. One might view the hidden states of the recurrent neural network as a list of partial representations with each focusing mostly on the corresponding input sentence given the previous context. These representations altogether constitute the document representation, which captures local and global sentential information with minimum compression.", "The RNN we used has a Long Short-Term Memory (LSTM) activation unit for ameliorating the vanishing gradient problem when training long sequences BIBREF20 . Given a document $d=(s_1,\n\\cdots , s_m)$ , the hidden state at time step $t$ , denoted by $\\mathbf {h_t}$ , is updated as: ", "$$\\begin{bmatrix}\n\\mathbf {i}_t\\\\ \\mathbf {f}_t\\\\ \\mathbf {o}_t\\\\ \\mathbf {\\hat{c}}_t\n\\end{bmatrix} =\n\\begin{bmatrix} \\sigma \\\\ \\sigma \\\\ \\sigma \\\\ \\tanh \\end{bmatrix} \\mathbf {W}\\cdot \\begin{bmatrix} \\mathbf {h}_{t-1}\\\\ \\mathbf {s}_t\n\\end{bmatrix}$$ (Eq. 15) ", "$$ \\mathbf {c}_t = \\mathbf {f}_t \\odot \\mathbf {c}_{t-1} +\n\\mathbf {i}_t \\odot \\mathbf {\\hat{c}}_t$$ (Eq. 16) ", "where $\\mathbf {W}$ is a learnable weight matrix. Next, we discuss a special attention mechanism for extracting sentences and words given the recurrent document encoder just described, starting from the sentence extractor." ], [ "In the standard neural sequence-to-sequence modeling paradigm BIBREF11 , an attention mechanism is used as an intermediate step to decide which input region to focus on in order to generate the next output. In contrast, our sentence extractor applies attention to directly extract salient sentences after reading them.", "The extractor is another recurrent neural network that labels sentences sequentially, taking into account not only whether they are individually relevant but also mutually redundant. The complete architecture for the document encoder and the sentence extractor is shown in Figure 2 . As can be seen, the next labeling decision is made with both the encoded document and the previously labeled sentences in mind. Given encoder hidden states $(h_1, \\cdots , h_m)$ and extractor hidden states $(\\bar{h}_1, \\cdots , \\bar{h}_m)$ at time step $t$ , the decoder attends the $t$ -th sentence by relating its current decoding state to the corresponding encoding state: ", "$$\\bar{\\mathbf {h}}_{t} = \\text{LSTM} ( p_{t-1} \\mathbf {s}_{t-1}, \\mathbf {\\bar{h}}_{t-1})$$ (Eq. 20) ", "$$p(y_L(t)=1 | D ) = \\sigma (\\text{MLP} (\\mathbf {\\bar{h}}_t : \\mathbf {h}_t) )$$ (Eq. 21) ", "where MLP is a multi-layer neural network with as input the concatenation of $\\mathbf {\\bar{h}}_t$ and $\\mathbf {h}_t$ . $p_{t-1}$ represents the degree to which the extractor believes the previous sentence should be extracted and memorized ( $p_{t-1}$ =1 if the system is certain; 0 otherwise).", "In practice, there is a discrepancy between training and testing such a model. 
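Before turning to how this train/test discrepancy is handled (the discussion resumes immediately after the sketch), the reader and extractor described so far can be summarised compactly. The following is an illustrative PyTorch rendering of Equations (12)-(13), (15)-(16) and (20)-(21), assuming toy dimensions, two kernel widths and a single linear layer in place of the MLP; it is not the authors' implementation.

```python
# Hierarchical reader (CNN sentence encoder + LSTM document encoder) and the
# attention-based sentence extractor producing per-sentence extraction probabilities.
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Temporal convolution + max-over-time pooling; one vector per sentence."""
    def __init__(self, word_dim=50, sent_dim=50, kernel_widths=(2, 3)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(word_dim, sent_dim, kernel_size=c) for c in kernel_widths])

    def forward(self, words):                  # words: (num_sents, max_len, word_dim)
        x = words.transpose(1, 2)              # -> (num_sents, word_dim, max_len)
        feats = [torch.tanh(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.stack(feats).sum(dim=0)   # sum over kernel widths -> (num_sents, sent_dim)

class SentenceExtractor(nn.Module):
    """Document LSTM encoder plus an extractor LSTM that labels sentences in turn."""
    def __init__(self, sent_dim=50, hidden=100):
        super().__init__()
        self.encoder = nn.LSTM(sent_dim, hidden, batch_first=True)
        self.decoder_cell = nn.LSTMCell(sent_dim, hidden)
        self.mlp = nn.Linear(2 * hidden, 1)    # single layer standing in for the MLP

    def forward(self, sents):                  # sents: (num_sents, sent_dim)
        enc, _ = self.encoder(sents.unsqueeze(0))
        enc = enc.squeeze(0)                   # h_t for each sentence
        h = torch.zeros(1, self.decoder_cell.hidden_size)
        c = torch.zeros(1, self.decoder_cell.hidden_size)
        p_prev = torch.ones(1, 1)              # no previous sentence at t=0 (times a zero vector)
        probs = []
        for t in range(sents.size(0)):
            prev_sent = sents[t - 1].unsqueeze(0) if t > 0 else torch.zeros_like(sents[:1])
            h, c = self.decoder_cell(p_prev * prev_sent, (h, c))          # Eq. (20)
            p_t = torch.sigmoid(self.mlp(torch.cat([h, enc[t].unsqueeze(0)], dim=-1)))  # Eq. (21)
            probs.append(p_t)
            p_prev = p_t                        # at test time; early in training the gold label is used
        return torch.cat(probs)                 # (num_sents, 1) extraction probabilities

# Toy usage on one 4-sentence document with 10-word sentences.
words = torch.randn(4, 10, 50)
sent_vecs = SentenceCNN()(words)
print(SentenceExtractor()(sent_vecs).shape)     # -> torch.Size([4, 1])
```

During training, the sketch would feed the true label for the previous sentence in place of the model's own prediction early on, which is exactly the issue taken up next.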
During training we know the true label $p_{t-1}$ of the previous sentence, whereas at test time $p_{t-1}$ is unknown and has to be predicted by the model. The discrepancy can lead to quickly accumulating prediction errors, especially when mistakes are made early in the sequence labeling process. To mitigate this, we adopt a curriculum learning strategy BIBREF21 : at the beginning of training when $p_{t-1}$ cannot be predicted accurately, we set it to the true label of the previous sentence; as training goes on, we gradually shift its value to the predicted label $p(y_L(t-1)=1\n| d )$ ." ], [ "Compared to sentence extraction which is a purely sequence labeling task, word extraction is closer to a generation task where relevant content must be selected and then rendered fluently and grammatically. A small extension to the structure of the sequential labeling model makes it suitable for generation: instead of predicting a label for the next sentence at each time step, the model directly outputs the next word in the summary. The model uses a hierarchical attention architecture: at time step $t$ , the decoder softly attends each document sentence and subsequently attends each word in the document and computes the probability of the next word to be included in the summary $p(w^{\\prime }_t = w_i|\nd, w^{\\prime }_1, \\cdots , w^{\\prime }_{t-1})$ with a softmax classifier: ", "$$\\bar{\\mathbf {h}}_{t} = \\text{LSTM} ( \\mathbf {w^{\\prime }}_{t-1},\n\\mathbf {\\bar{h}}_{t-1})\\footnote {We empirically found that feeding\nthe previous sentence-level attention vector as additional\ninput to the LSTM would lead to small performance improvements.\nThis is not shown in the equation.}$$ (Eq. 25) ", "$$a_j^t = \\mathbf {z}^\\mathtt {T} \\tanh (\\mathbf {W}_e \\mathbf {\\bar{h}}_t + \\mathbf {W}_r \\mathbf {h}_j), h_j \\in D$$ (Eq. 26) ", "In the above equations, $\\mathbf {w}_i$ corresponds to the vector of the $i$ -th word in the input document, whereas $\\mathbf {z}$ , $\\mathbf {W}_e$ , $\\mathbf {W}_r$ , $\\mathbf {v}$ , $\\mathbf {W}_{e^{\\prime }}$ , and $\\mathbf {W}_{r^{\\prime }}$ are model weights. The model architecture is shown in Figure 3 .", "The word extractor can be viewed as a conditional language model with a vocabulary constraint. In practice, it is not powerful enough to enforce grammaticality due to the lexical diversity and sparsity of the document highlights. A possible enhancement would be to pair the extractor with a neural language model, which can be pre-trained on a large amount of unlabeled documents and then jointly tuned with the extractor during decoding BIBREF23 . A simpler alternative which we adopt is to use $n$ -gram features collected from the document to rerank candidate summaries obtained via beam decoding. We incorporate the features in a log-linear reranker whose feature weights are optimized with minimum error rate training BIBREF24 ." ], [ "In this section we present our experimental setup for assessing the performance of our summarization models. We discuss the datasets used for training and evaluation, give implementation details, briefly introduce comparison models, and explain how system output was evaluated." ], [ "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline. 
The table also includes results for the lead baseline, the logistic regression classifier (lreg), and three previously published systems (ilp, tgraph, and urank).", "The nn-se outperforms the lead and lreg baselines with a significant margin, while performing slightly better than the ilp model. This is an encouraging result since our model has only access to embedding features obtained from raw text. In comparison, lreg uses a set of manually selected features, while the ilp system takes advantage of syntactic information and extracts summaries subject to well-engineered linguistic constraints, which are not available to our models. Overall, our sentence extraction model achieves performance comparable to the state of the art without sophisticated constraint optimization (ilp, tgraph) or sentence ranking mechanisms (urank). We visualize the sentence weights of the nn-se model in the top half of Figure 4 . As can be seen, the model is able to locate text portions which contribute most to the overall meaning of the document.", "Rouge scores for the word extraction model are less promising. This is somewhat expected given that Rouge is $n$ -gram based and not very well suited to measuring summaries which contain a significant amount of paraphrasing and may deviate from the reference even though they express similar meaning. However, a meaningful comparison can be carried out between nn-we and nn-abs which are similar in spirit. We observe that nn-we consistently outperforms the purely abstractive model. As nn-we generates summaries by picking words from the original document, decoding is easier for this model compared to nn-abs which deals with an open vocabulary. The extraction-based generation approach is more robust for proper nouns and rare words, which pose a serious problem to open vocabulary models. An example of the generated summaries for nn-we is shown at the lower half of Figure 4 .", "Table 1 (lower half) shows system results on the 500 DailyMail news articles (test set). In general, we observe similar trends to DUC 2002, with nn-se performing the best in terms of all rouge metrics. Note that scores here are generally lower compared to DUC 2002. This is due to the fact that the gold standard summaries (aka highlights) tend to be more laconic and as a result involve a substantial amount of paraphrasing. More experimental results on this dataset are provided in the appendix.", "The results of our human evaluation study are shown in Table 2 . Specifically, we show, proportionally, how often our participants ranked each system 1st, 2nd, and so on. Perhaps unsurprisingly, the human-written descriptions were considered best and ranked 1st 27% of the time, however closely followed by our nn-se model which was ranked 1st 22% of the time. The ilp system was mostly ranked in 2nd place (38% of the time). The rest of the systems occupied lower ranks. We further converted the ranks to ratings on a scale of 1 to 6 (assigning ratings 6 $\\dots $ 1 to rank placements 1 $\\dots $ 6). This allowed us to perform Analysis of Variance (ANOVA) which revealed a reliable effect of system type. Specifically, post-hoc Tukey tests showed that nn-se and ilp are significantly ( $p < 0.01$ ) better than lead, nn-we, and nn-abs but do not differ significantly from each other or the human goldstandard." ], [ "In this work we presented a data-driven summarization framework based on an encoder-extractor architecture. We developed two classes of models based on sentence and word extraction. 
Our models can be trained on large scale datasets and learn informativeness features based on continuous representations without recourse to linguistic annotations. Two important ideas behind our work are the creation of hierarchical neural structures that reflect the nature of the summarization task and generation by extraction. The latter effectively enables us to sidestep the difficulties of generating under a large vocabulary, essentially covering the entire dataset, with many low-frequency words and named entities.", "Directions for future work are many and varied. One way to improve the word-based model would be to take structural information into account during generation, e.g., by combining it with a tree-based algorithm BIBREF31 . It would also be interesting to apply the neural models presented here in a phrase-based setting similar to lebret2015phrase. A third direction would be to adopt an information theoretic perspective and devise a purely unsupervised approach that selects summary sentences and words so as to minimize information loss, a task possibly achievable with the dataset created in this work." ], [ "We would like to thank three anonymous reviewers and members of the ILCC at the School of Informatics for their valuable feedback. The support of the European Research Council under award number 681760 “Translating Multiple Modalities into Text” is gratefully acknowledged." ], [ "In addition to the DUC 2002 and 500 DailyMail samples, we additionally report results on the entire DailyMail test set (Table 3 ). Since there is no established evaluation standard for this task, we experimented with three different ROUGE limits: 75 bytes, 275 bytes and full length." ] ], "section_name": [ "Introduction", "Problem Formulation", "Training Data for Summarization", "Neural Summarization Model", "Document Reader", "Sentence Extractor", "Word Extractor", "Experimental Setup", "Results", "Conclusions", "Acknowledgments", "Appendix" ] }
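As a supplementary illustration of the curriculum-style training described for the sentence extractor above, the snippet below sketches one way the gradual shift from gold to predicted previous labels could be scheduled. The linear, stochastic schedule is an assumption; the exact annealing scheme is not specified in the text.

```python
import random

def previous_label(gold_prev, predicted_prev, step, total_steps):
    """Curriculum-style choice of the previous-label input p_{t-1} (sketch).

    Early in training the gold label of sentence t-1 is fed to the extractor;
    as training proceeds, the model's own prediction p(y_L(t-1)=1 | d) is used
    with increasing probability.  The linear schedule below is an assumption.
    """
    use_prediction = random.random() < min(1.0, step / float(total_steps))
    return predicted_prev if use_prediction else gold_prev
```

This mirrors scheduled-sampling-style training, in which the model's exposure to its own predictions is increased over time to reduce the train/test mismatch.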
{ "answers": [ { "annotation_id": [ "314a8d1ec8be949b9d8916e1cf2b8f177b6fc4d8", "58dc903089bd2d566a4cdc6bbfa12ddb25dd1833", "761669edc7244372295ce2e280c1387ded3bd2f7", "95e7568437e3aa0d8d15d9100ac90387ef8724ee" ], "answer": [ { "evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints.", "One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. Previous approaches have used small scale training data in the range of a few hundred examples." ], "extractive_spans": [ "news articles" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. ", "Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." ], "extractive_spans": [ "news" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. 
Previous approaches have used small scale training data in the range of a few hundred examples." ], "extractive_spans": [ "news articles" ], "free_form_answer": "", "highlighted_evidence": [ "Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." ], "extractive_spans": [ "news" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "7fa8d8b1eb8a1630feb99a8e11ebfa501ac5bc3c", "64535162a1194b06db3080285c566202b651354c", "f840a836eee0180d2c976457f8b3052d8e78050c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "26b89a45d84c3d38828d7dd7c424470f5ce60733", "77d78931af7708c07a7ff65abd70318f596b5ec4", "d3f385018a588eb5e440537612a7abe28487751d", "d7ccaf4b6d46e4d791bf789280d180f41a27c70c" ], "answer": [ { "evidence": [ "Data-driven neural summarization models require a large training corpus of documents with labels indicating which sentences (or words) should be in the summary. Until now such corpora have been limited to hundreds of examples (e.g., the DUC 2002 single document summarization corpus) and thus used mostly for testing BIBREF7 . To overcome the paucity of annotated data for training, we adopt a methodology similar to hermann2015teaching and create two large-scale datasets, one for sentence extraction and another one for word extraction.", "In a nutshell, we retrieved hundreds of thousands of news articles and their corresponding highlights from DailyMail (see Figure 1 for an example). The highlights (created by news editors) are genuinely abstractive summaries and therefore not readily suited to supervised training. To create the training data for sentence extraction, we reverse approximated the gold standard label of each document sentence given the summary based on their semantic correspondence BIBREF7 . Specifically, we designed a rule-based system that determines whether a document sentence matches a highlight and should be labeled with 1 (must be in the summary), and 0 otherwise. The rules take into account the position of the sentence in the document, the unigram and bigram overlap between document sentences and highlights, the number of entities appearing in the highlight and in the document sentence. We adjusted the weights of the rules on 9,000 documents with manual sentence labels created by woodsend2010automatic. The method obtained an accuracy of 85% when evaluated on a held-out set of 216 documents coming from the same dataset and was subsequently used to label 200K documents. 
Approximately 30% of the sentences in each document were deemed summary-worthy.", "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." ], "extractive_spans": [ "DUC 2002 document summarization corpus", "our own DailyMail news highlights corpus" ], "free_form_answer": "", "highlighted_evidence": [ "To overcome the paucity of annotated data for training, we adopt a methodology similar to hermann2015teaching and create two large-scale datasets, one for sentence extraction and another one for word extraction.\n\nIn a nutshell, we retrieved hundreds of thousands of news articles and their corresponding highlights from DailyMail (see Figure 1 for an example). ", "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." ], "extractive_spans": [ "DUC 2002", "our own Dailymail news highlights corpus" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. Previous approaches have used small scale training data in the range of a few hundred examples.", "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing hand-engineered features and sophisticated linguistic constraints." 
], "extractive_spans": [ "the benchmark DUC 2002 document summarization corpus", "DailyMail news highlights corpus" ], "free_form_answer": "", "highlighted_evidence": [ "Inspired by previous work on summarization BIBREF7 , BIBREF13 and reading comprehension BIBREF9 we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. ", "We evaluate our models both automatically (in terms of Rouge) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In a nutshell, we retrieved hundreds of thousands of news articles and their corresponding highlights from DailyMail (see Figure 1 for an example). The highlights (created by news editors) are genuinely abstractive summaries and therefore not readily suited to supervised training. To create the training data for sentence extraction, we reverse approximated the gold standard label of each document sentence given the summary based on their semantic correspondence BIBREF7 . Specifically, we designed a rule-based system that determines whether a document sentence matches a highlight and should be labeled with 1 (must be in the summary), and 0 otherwise. The rules take into account the position of the sentence in the document, the unigram and bigram overlap between document sentences and highlights, the number of entities appearing in the highlight and in the document sentence. We adjusted the weights of the rules on 9,000 documents with manual sentence labels created by woodsend2010automatic. The method obtained an accuracy of 85% when evaluated on a held-out set of 216 documents coming from the same dataset and was subsequently used to label 200K documents. Approximately 30% of the sentences in each document were deemed summary-worthy." ], "extractive_spans": [], "free_form_answer": "DailyMail news articles", "highlighted_evidence": [ "In a nutshell, we retrieved hundreds of thousands of news articles and their corresponding highlights from DailyMail (see Figure 1 for an example). The highlights (created by news editors) are genuinely abstractive summaries and therefore not readily suited to supervised training." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b", "64535162a1194b06db3080285c566202b651354c", "35491e1e579f6d147f4793edce4c1a80ab2410e7", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "2eed38a4d5e3a2e614f90f84acbdb829ccecae2b", "87d753ee7b6ecddcedbd40c4acdbac1c69171b66", "8be34142a56be34ad0db27e8a8a9f76e44090248" ], "answer": [ { "evidence": [ "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline. The table also includes results for the lead baseline, the logistic regression classifier (lreg), and three previously published systems (ilp, tgraph, and urank).", "Rouge scores for the word extraction model are less promising. This is somewhat expected given that Rouge is $n$ -gram based and not very well suited to measuring summaries which contain a significant amount of paraphrasing and may deviate from the reference even though they express similar meaning. However, a meaningful comparison can be carried out between nn-we and nn-abs which are similar in spirit. 
We observe that nn-we consistently outperforms the purely abstractive model. As nn-we generates summaries by picking words from the original document, decoding is easier for this model compared to nn-abs which deals with an open vocabulary. The extraction-based generation approach is more robust for proper nouns and rare words, which pose a serious problem to open vocabulary models. An example of the generated summaries for nn-we is shown at the lower half of Figure 4 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ " the neural abstractive baseline", "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline.", "We observe that nn-we consistently outperforms the purely abstractive model. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Although extractive methods yield naturally grammatical summaries and require relatively little linguistic analysis, the selected sentences make for long summaries containing much redundant information. For this reason, we also develop a model based on word extraction which seeks to find a subset of words in $D$ and their optimal ordering so as to form a summary $\\mathbf {y}_s = (w^{\\prime }_1, \\cdots , w^{\\prime }_k), w^{\\prime }_i \\in D$ . Compared to sentence extraction which is a sequence labeling problem, this task occupies the middle ground between full abstractive summarization which can exhibit a wide range of rewrite operations and extractive summarization which exhibits none. We formulate word extraction as a language generation task with an output vocabulary restricted to the original document. In our supervised setting, the training goal is to maximize the likelihood of the generated sentences, which can be further decomposed by enforcing conditional dependencies among their constituent words:", "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline. The table also includes results for the lead baseline, the logistic regression classifier (lreg), and three previously published systems (ilp, tgraph, and urank)." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Compared to sentence extraction which is a sequence labeling problem, this task occupies the middle ground between full abstractive summarization which can exhibit a wide range of rewrite operations and extractive summarization which exhibits none", "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline" ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline. The table also includes results for the lead baseline, the logistic regression classifier (lreg), and three previously published systems (ilp, tgraph, and urank)." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Table 1 (upper half) summarizes our results on the DUC 2002 test dataset using Rouge. 
nn-se represents our neural sentence extraction model, nn-we our word extraction model, and nn-abs the neural abstractive baseline" ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "35491e1e579f6d147f4793edce4c1a80ab2410e7", "594e0b1297abe0ad3e2555ad27eedfb59c442bb9", "f840a836eee0180d2c976457f8b3052d8e78050c" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "What domain of text are they working with?", "What dataset do they use?", "Do they compare to abstractive summarization methods?" ], "question_id": [ "f2bcfdbebb418e7da165c19b8c7167719432ee48", "0fe49431db5ffaa24372919daf24d8f84117bfda", "0f9c1586f1b4b531fa4fd113e767d06af90b1ae8" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 3: Neural attention mechanism for word extraction.", "Figure 2: A recurrent convolutional document reader with a neural sentence extractor.", "Table 2: Rankings (shown as proportions) and mean ranks given to systems by human participants (lower is better).", "Table 1: ROUGE evaluation (%) on DUC-2002 and DailyMail corpora.", "Figure 4: Visualization of the summaries for a DailyMail article. The top half shows the relative attention weights given by the sentence extraction model. Darkness indicates sentence importance. The lower half shows the summary generated by the word extraction." ], "file": [ "6-Figure3-1.png", "6-Figure2-1.png", "9-Table2-1.png", "9-Table1-1.png", "10-Figure4-1.png" ] }
[ "What dataset do they use?" ]
[ [ "1603.07252-Introduction-4", "1603.07252-Training Data for Summarization-1", "1603.07252-Introduction-7", "1603.07252-Training Data for Summarization-0" ] ]
[ "DailyMail news articles" ]
17
1708.00549
Improved Representation Learning for Predicting Commonsense Ontologies
Recent work in learning ontologies (hierarchical and partially-ordered structures) has leveraged the intrinsic geometry of spaces of learned representations to make predictions that automatically obey complex structural constraints. We explore two extensions of one such model, the order-embedding model for hierarchical relation learning, with an aim towards improved performance on text data for commonsense knowledge representation. Our first model jointly learns ordering relations and non-hierarchical knowledge in the form of raw text. Our second extension exploits the partial order structure of the training data to find long-distance triplet constraints among embeddings which are poorly enforced by the pairwise training procedure. We find that both incorporating free text and augmented training constraints improve over the original order-embedding model and other strong baselines.
{ "paragraphs": [ [ "A core problem in artificial intelligence is to capture, in machine-usable form, the collection of information that an ordinary person would have, known as commonsense knowledge. For example, a machine should know that a room may have a door, and that when a person enters a room, it is generally through a door. This background knowledge is crucial for solving many difficult, ambiguous natural language problems in coreference resolution and question answering, as well as the creation of other reasoning machines.", "More than just curating a static collection of facts, we would like commonsense knowledge to be represented in a way that lends itself to machine reasoning and inference of missing information. We concern ourselves in this paper with the problem of learning commonsense knowledge representations.", "In machine learning settings, knowledge is usually represented as a hypergraph of triplets such as Freebase BIBREF1 , WordNet BIBREF2 , and ConceptNet BIBREF3 . In these knowledge graphs, nodes represent entities or terms $t$ , and hyperedges are relations $R$ between these entities or terms, with each fact in the knowledge graph represented as a triplet $<t_1, R, t_2>$ . Researchers have developed many models for knowledge representation and learning in this setting BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , under the umbrella of knowledge graph completion. However, none of these naturally lend themselves to traditional methods of logical reasoning such as transitivity and negation.", "While a knowledge graph completion model can represent relations such as Is-A and entailment, there is no mechanism to ensure that its predictions are internally consistent. For example, if we know that a dog is a mammal, and a pit bull is a dog, we would like the model to also predict that a pit bull is a mammal. These transitive entailment relations describe ontologies of hierarchical data, a key component of commonsense knowledge which we focus on in this work.", "Recently, a thread of research on representation learning has aimed to create embedding spaces that automatically enforce consistency in these predictions using the intrinsic geometry of the embedding space BIBREF9 , BIBREF0 , BIBREF10 . In these models, the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than the local relation predictions of previous models.", "We focus on the order-embedding model BIBREF0 which was proposed for general hierarchical prediction including multimodal problems such as image captioning. While the original work included results on ontology prediction on WordNet, we focus exclusively on the model's application to commonsense knowledge, with its unique characteristics including complex ordering structure, compositional, multi-word entities, and the wealth of commonsense knowledge to be found in large-scale unstructured text data.", "We propose two extensions to the order embedding model. The first augments hierarchical supervision from existing ontologies with non-hierarchical knowledge in the form of raw text. 
We find incorporating unstructured text brings accuracy from 92.0 to 93.0 on a commonsense dataset containing Is-A relations from ConceptNet and Microsoft Concept Graph (MCG), with larger relative gains from smaller amounts of labeled data.", "The second extension uses the complex partial-order structure of real-world ontologies to find long-distance triplet constraints among embeddings which are poorly enforced by the standard pairwise training method. By adding our additional triplet constraints to the baseline order-embedding model, we find performance improves from 90.6 to 91.3 accuracy on the WordNet ontology dataset.", "We find that order embeddings' ease of extension, both by incorporating non-ordered data, and additional training constraints derived from the structure of the problem, makes it a promising avenue for the development of further algorithms for automatic learning and jointly consistent prediction of ontologies." ], [ "In this work, we use the ConceptNet BIBREF3 , WordNet BIBREF2 , and Microsoft Concept Graph (MCG) BIBREF11 , BIBREF12 knowledge bases for our ontology prediction experiments.", "WordNet is a knowledge base (KB) of single words and relations between them such as hypernymy and meronymy. For our task, we use the hypernym relations only. ConceptNet is a KB of triples consisting of a left term $t_1$ , a relation $R$ , and a right term $t_2$ . The relations come from a fixed set of size 34. But unlike WordNet, terms in ConceptNet can be phrases. We focus on the Is-A relation in this work. MCG also consists of hierarchical relations between multi-word phrases, ranging from extremely general to specific. Examples from each dataset are shown in Table 1 .", "For experiments involving unstructured text, we use the WaCkypedia corpus BIBREF13 ." ], [ "We introduce two variants of order embeddings. The first incorporates non-hierarchical unstructured text data into the supervised ontology. The second improves the training procedure by adding additional examples representing long-range constraints." ], [ "Order Embeddings are a model for automatically enforcing partial-ordering (or lattice) constraints among predictions directly in embedding space. The vector embeddings satisfy the following property with respect to the partial order: $\nx \\preceq y \\text{ if and only if } \\bigwedge _{i=1}^{N}x_{i}\\ge y_i\n$ ", "where $x$ is the subcategory and $y$ is the supercategory. This means the general concept embedding should be smaller than the specific concept embedding in every coordinate of the embeddings. An illustration of this geometry can be found in Figure 1. We can define a surrogate energy for this ordering function as $d(x, y) = \\left\\Vert \\max (0,y-x) \\right\\Vert ^2$ . The learning objective for order embeddings becomes the following, where $m$ is a margin parameter, $x$ and $y$ are the hierarchically supervised pairs, and $x^{\\prime }$ and $y^{\\prime }$ are negatively sampled concepts: $\nL_{\\text{Order}} = \\sum _{x,y}\\max (0, m+d(x,y)-d(x^{\\prime }, y^{\\prime }))\n$ " ], [ "We aim to augment our ontology prediction embedding model with more general commonsense knowledge mined from raw text. A standard method for learning word representations is word2vec BIBREF14 , which predicts current word embeddings using a context of surrounding word embeddings. 
We incorporate a modification of the CBOW model in this work, which uses the average embedding from a window around the current word as a context vector $v_2$ to predict the current word vector $v_1$ : $ v_2 = \frac{1}{window}\sum _{k \in \lbrace -window/2,...,window/2\rbrace \setminus \lbrace t\rbrace }v_{t+k} $ ", " Because order embeddings are all positive and compared coordinate-wise, we use a variant of CBOW that scores similarity to context based on $L_1$ distance and not dot product, where $v^{\prime }_1$ and $v^{\prime }_2$ are the negative examples selected from the vocabulary during training: $ & d_\text{pos} = d(v_1,v_2) = \left\Vert v_1- v_2\right\Vert \\ & d_\text{neg} = d(v^{\prime }_1, v^{\prime }_2) = \left\Vert v^{\prime }_1- v^{\prime }_2\right\Vert \\ & L_{\text{CBOW}}= \sum _{w_c,w_t}\max (0, m+d_\text{pos}-d_\text{neg}) $ ", "Finally, after each gradient update, we map the embeddings back to the positive domain by applying the absolute value function. We propose jointly learning both the order- and text- embedding model with a simple weighted combination of the two objective functions: $ &L_{\text{Joint}} = \alpha _{1}L_{\text{Order}}+\alpha _{2}L_{\text{CBOW}} $ ", "We perform two sets of experiments on the combined ConceptNet and MCG Is-A relations, using different amounts of training and testing data. The first data set, called Data1, uses 119,159 training examples, 1,089 dev examples, and 1,089 test examples. The second dataset, Data2, evenly splits the data into 47,662 examples for each set.", "Our baselines for this model are a standard order embedding model, and a bilinear classifier BIBREF6 trained to predict Is-A, both with and without additional unstructured text augmenting the model in the same way as the joint order embedding model.", "We see in Table 2 that while adding extra text data helps all models, the best performance is consistently achieved by a combination of order embeddings and unstructured text." ], [ "Order embeddings map words to a partially-ordered space, which we can think of as a directed acyclic graph (DAG). A simple way to add more training examples is to take the transitive closure of this graph. For example, if we have $<$ dog IsA mammal $>$ , $<$ mammal IsA animal $>$ , we can produce the training example $<$ dog IsA animal $>$ .", "We observe that even more training examples can be created by treating our partial-order structure as a lattice. A lattice is a partial order equipped with two additional operations, join and meet. The join and meet of a pair P are respectively the supremum (least upper bound) of P, denoted $\vee $ , and the infimum (greatest lower bound), denoted $\wedge $ . In our case, the vector join and meet would be the pointwise max and min of two embeddings.", "We can add many additional training examples to our data by enforcing that the vector join and meet operations satisfy the joins and meets found in the training lattice/DAG. 
If $w_c$ and $w_p$ are the nearest common child and parent for a pair $w_1, w_2$ , the loss for join and meet learning can be written as the following: $ & d_c(w_1,w_2,w_c) = \left\Vert \max (0,w_1 \vee w_2-w_c) \right\Vert ^2 \\ & d_p(w_1,w_2,w_p) = \left\Vert \max (0,w_p - w_1 \wedge w_2) \right\Vert ^2 \\ & {\small L_\text{join} = \sum _{w_1,w_2,w_c}\max (0, m+d_c(w_1,w_2,w_c))}\\ & {\small L_\text{meet} = \sum _{w_1,w_2,w_p}\max (0, m+d_p(w_1,w_2,w_p))}\\ & L = L_\text{join} + L_\text{meet} $", "In this experiment, we use the same dataset as BIBREF0 , created by taking 4,000 edges from the 838,073-edge transitive closure of the WordNet hierarchy for the dev set, 4,000 for the test set, and training on the rest of the transitive closure. We additionally add the long-range join and meet constraints (3,028,302 and 4,006 respectively) between different concepts and see that the inclusion of this additional supervision results in further improvement over the baseline order embedding model, as seen in Table 3." ], [ "In both sets of experiments we train all models using the Adam optimizer BIBREF15 , using embeddings of dimension 50, with all hyperparameters tuned on a development set. When embedding multi-word phrases, we represent them as the average of the constituent word embeddings." ], [ "In this work we presented two extensions to the order embedding model. The first incorporates unstructured text to improve performance on Is-A relations, while the second uses long-range constraints automatically derived from the ontology to provide the model with more useful global supervision. In future work we would like to explore embedding models for structured prediction that automatically incorporate additional forms of reasoning such as negation, joint learning of ontological and other commonsense relations, and the application of improved training methods to new models for ontology prediction such as Poincaré embeddings." ] ], "section_name": [ "Introduction", "Data", "Models", "Order Embeddings", "Joint Text and Order Embedding", "Long-Range Join and Meet Constraints", "Experiments", "Conclusion and Future Work" ] }
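To ground the objectives above, the following is a small NumPy sketch of the order penalty, the pairwise max-margin loss, and the long-range join/meet penalties. Variable names and the margin value are illustrative assumptions, and this is not the authors' released code; the join/meet losses are implemented literally as written in the summed losses above.

```python
import numpy as np

def order_penalty(x, y):
    """d(x, y) = ||max(0, y - x)||^2: zero exactly when every coordinate of the
    supercategory y is below the corresponding coordinate of the subcategory x."""
    return float(np.sum(np.maximum(0.0, y - x) ** 2))

def order_margin_loss(x, y, x_neg, y_neg, m=1.0):
    """Pairwise max-margin loss over a true (x, y) pair and a negative pair."""
    return max(0.0, m + order_penalty(x, y) - order_penalty(x_neg, y_neg))

def join_meet_losses(w1, w2, w_child, w_parent, m=1.0):
    """Long-range constraints, as written above: the pointwise max (join) of w1
    and w2 should not exceed their nearest common child's embedding, and their
    nearest common parent's embedding should not exceed the pointwise min (meet)."""
    d_c = float(np.sum(np.maximum(0.0, np.maximum(w1, w2) - w_child) ** 2))
    d_p = float(np.sum(np.maximum(0.0, w_parent - np.minimum(w1, w2)) ** 2))
    return max(0.0, m + d_c), max(0.0, m + d_p)

# toy usage: a specific concept x and a general concept y that satisfy the order
x = np.array([2.0, 3.0]); y = np.array([1.0, 1.5])
print(order_penalty(x, y))   # 0.0: the ordering constraint is satisfied
```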
{ "answers": [ { "annotation_id": [ "82ed8873f1f11f648488d2e00f341bebc4553dfe", "2e7776fd36d252feb9ce6d6b03d967660aad0782", "813f8f1fe7b8c803e368c317d00d13512a88e89a", "de32134088fe9b436decd6b7ed11d24afd9e7562" ], "answer": [ { "evidence": [ "In this work, we use the ConceptNet BIBREF3 , WordNet BIBREF2 , and Microsoft Concept Graph (MCG) BIBREF11 , BIBREF12 knowledge bases for our ontology prediction experiments.", "WordNet is a knowledge base (KB) of single words and relations between them such as hypernymy and meronymy. For our task, we use the hypernym relations only. ConceptNet is a KB of triples consisting of a left term $t_1$ , a relation $R$ , and a right term $t_2$ . The relations come from a fixed set of size 34. But unlike WordNet, terms in ConceptNet can be phrases. We focus on the Is-A relation in this work. MCG also consists of hierarchical relations between multi-word phrases, ranging from extremely general to specific. Examples from each dataset are shown in Table 1 ." ], "extractive_spans": [ "hypernym relations" ], "free_form_answer": "", "highlighted_evidence": [ "In this work, we use the ConceptNet BIBREF3 , WordNet BIBREF2 , and Microsoft Concept Graph (MCG) BIBREF11 , BIBREF12 knowledge bases for our ontology prediction experiments.", "WordNet is a knowledge base (KB) of single words and relations between them such as hypernymy and meronymy. For our task, we use the hypernym relations only.", "But unlike WordNet, terms in ConceptNet can be phrases. We focus on the Is-A relation in this work. MCG also consists of hierarchical relations between multi-word phrases, ranging from extremely general to specific." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "A core problem in artificial intelligence is to capture, in machine-usable form, the collection of information that an ordinary person would have, known as commonsense knowledge. For example, a machine should know that a room may have a door, and that when a person enters a room, it is generally through a door. This background knowledge is crucial for solving many difficult, ambiguous natural language problems in coreference resolution and question answering, as well as the creation of other reasoning machines." ], "extractive_spans": [ "the collection of information that an ordinary person would have" ], "free_form_answer": "", "highlighted_evidence": [ "A core problem in artificial intelligence is to capture, in machine-usable form, the collection of information that an ordinary person would have, known as commonsense knowledge. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this work, we use the ConceptNet BIBREF3 , WordNet BIBREF2 , and Microsoft Concept Graph (MCG) BIBREF11 , BIBREF12 knowledge bases for our ontology prediction experiments.", "WordNet is a knowledge base (KB) of single words and relations between them such as hypernymy and meronymy. For our task, we use the hypernym relations only. ConceptNet is a KB of triples consisting of a left term $t_1$ , a relation $R$ , and a right term $t_2$ . The relations come from a fixed set of size 34. But unlike WordNet, terms in ConceptNet can be phrases. We focus on the Is-A relation in this work. MCG also consists of hierarchical relations between multi-word phrases, ranging from extremely general to specific. Examples from each dataset are shown in Table 1 ." 
], "extractive_spans": [], "free_form_answer": "Hypernymy or is-a relations between words or phrases", "highlighted_evidence": [ "In this work, we use the ConceptNet BIBREF3 , WordNet BIBREF2 , and Microsoft Concept Graph (MCG) BIBREF11 , BIBREF12 knowledge bases for our ontology prediction experiments.", "WordNet is a knowledge base (KB) of single words and relations between them such as hypernymy and meronymy. For our task, we use the hypernym relations only. ConceptNet is a KB of triples consisting of a left term $t_1$ , a relation $R$ , and a right term $t_2$ . The relations come from a fixed set of size 34. But unlike WordNet, terms in ConceptNet can be phrases. We focus on the Is-A relation in this work. MCG also consists of hierarchical relations between multi-word phrases, ranging from extremely general to specific." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "While a knowledge graph completion model can represent relations such as Is-A and entailment, there is no mechanism to ensure that its predictions are internally consistent. For example, if we know that a dog is a mammal, and a pit bull is a dog, we would like the model to also predict that a pit bull is a mammal. These transitive entailment relations describe ontologies of hierarchical data, a key component of commonsense knowledge which we focus on in this work.", "We focus on the order-embedding model BIBREF0 which was proposed for general hierarchical prediction including multimodal problems such as image captioning. While the original work included results on ontology prediction on WordNet, we focus exclusively on the model's application to commonsense knowledge, with its unique characteristics including complex ordering structure, compositional, multi-word entities, and the wealth of commonsense knowledge to be found in large-scale unstructured text data." ], "extractive_spans": [], "free_form_answer": "Knowledge than an ordinary person would have such as transitive entailment relation, complex ordering, compositionality, multi-word entities", "highlighted_evidence": [ "For example, if we know that a dog is a mammal, and a pit bull is a dog, we would like the model to also predict that a pit bull is a mammal. These transitive entailment relations describe ontologies of hierarchical data, a key component of commonsense knowledge which we focus on in this work.", "While the original work included results on ontology prediction on WordNet, we focus exclusively on the model's application to commonsense knowledge, with its unique characteristics including complex ordering structure, compositional, multi-word entities, and the wealth of commonsense knowledge to be found in large-scale unstructured text data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "1d87720d0db14aa36d083b7dc3999984c4489389", "428b6da53085b8fd7b37e9fb259c0c609bd09984", "f840a836eee0180d2c976457f8b3052d8e78050c", "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "annotation_id": [ "83963e6834a184a95bbb8d1d276b1f3815e1c854", "9931b14a7d8e4a3e419718327c233ab7fb8fdb34", "ef0bd4779475d09aca1cf6a1f8fa8e995484895f" ], "answer": [ { "evidence": [ "Recently, a thread of research on representation learning has aimed to create embedding spaces that automatically enforce consistency in these predictions using the intrinsic geometry of the embedding space BIBREF9 , BIBREF0 , BIBREF10 . 
In these models, the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than the local relation predictions of previous models." ], "extractive_spans": [ "In these models, the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than the local relation predictions of previous models." ], "free_form_answer": "", "highlighted_evidence": [ "Recently, a thread of research on representation learning has aimed to create embedding spaces that automatically enforce consistency in these predictions using the intrinsic geometry of the embedding space BIBREF9 , BIBREF0 , BIBREF10 . In these models, the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than the local relation predictions of previous models." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "where $x$ is the subcategory and $y$ is the supercategory. This means the general concept embedding should be smaller than the specific concept embedding in every coordinate of the embeddings. An illustration of this geometry can be found in Figure 1. We can define a surrogate energy for this ordering function as $d(x, y) = \\left\\Vert \\max (0,y-x) \\right\\Vert ^2$ . The learning objective for order embeddings becomes the following, where $m$ is a margin parameter, $x$ and $y$ are the hierarchically supervised pairs, and $x^{\\prime }$ and $y^{\\prime }$ are negatively sampled concepts: $ L_{\\text{Order}} = \\sum _{x,y}\\max (0, m+d(x,y)-d(x^{\\prime }, y^{\\prime })) $", "FLOAT SELECTED: Figure 1. Order Embedding" ], "extractive_spans": [], "free_form_answer": "The intrinsic geometry is that the general concept embedding should be smaller than the specific concept embedding in every coordinate of the embeddings", "highlighted_evidence": [ "This means the general concept embedding should be smaller than the specific concept embedding in every coordinate of the embeddings. An illustration of this geometry can be found in Figure 1. ", "FLOAT SELECTED: Figure 1. Order Embedding" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Recently, a thread of research on representation learning has aimed to create embedding spaces that automatically enforce consistency in these predictions using the intrinsic geometry of the embedding space BIBREF9 , BIBREF0 , BIBREF10 . In these models, the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than the local relation predictions of previous models." ], "extractive_spans": [], "free_form_answer": "the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than local relation predictions", "highlighted_evidence": [ "Recently, a thread of research on representation learning has aimed to create embedding spaces that automatically enforce consistency in these predictions using the intrinsic geometry of the embedding space BIBREF9 , BIBREF0 , BIBREF10 . In these models, the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than the local relation predictions of previous models." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c7d4a630661cd719ea504dba56393f78278b296b", "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ], "nlp_background": [ "infinity", "two" ], "paper_read": [ "no", "no" ], "question": [ "What types of commonsense knowledge are they talking about?", "What do they mean by intrinsic geometry of spaces of learned representations?" ], "question_id": [ "52faf319e37aa15fff1ab47f634a5a584dc42e75", "0c7cb3010ed92b8d46583a67e72946a6c0115f1f" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "commonsense", "common sense" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1. Order Embedding", "Table 1. Example triplets from each dataset.", "Figure 2. Adding more training examples: black line is the original training data, green line is obtained by transitive closure, and yellow line is obtained by join and meet.", "Table 2. Joint Text and Order Embedding" ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "3-Figure2-1.png", "4-Table2-1.png" ] }
[ "What types of commonsense knowledge are they talking about?", "What do they mean by intrinsic geometry of spaces of learned representations?" ]
[ [ "1708.00549-Introduction-3", "1708.00549-Data-0", "1708.00549-Introduction-0", "1708.00549-Introduction-5", "1708.00549-Data-1" ], [ "1708.00549-2-Figure1-1.png", "1708.00549-Introduction-4" ] ]
[ "Knowledge than an ordinary person would have such as transitive entailment relation, complex ordering, compositionality, multi-word entities", "the inferred embedding space creates a globally consistent structured prediction of the ontology, rather than local relation predictions" ]
18
1905.00472
A system for the 2019 Sentiment, Emotion and Cognitive State Task of DARPA's LORELEI project
During the course of a Humanitarian Assistance-Disaster Relief (HADR) crisis, that can happen anywhere in the world, real-time information is often posted online by the people in need of help which, in turn, can be used by different stakeholders involved with management of the crisis. Automated processing of such posts can considerably improve the effectiveness of such efforts; for example, understanding the aggregated emotion from affected populations in specific areas may help inform decision-makers on how to best allocate resources for an effective disaster response. However, these efforts may be severely limited by the availability of resources for the local language. The ongoing DARPA project Low Resource Languages for Emergent Incidents (LORELEI) aims to further language processing technologies for low resource languages in the context of such a humanitarian crisis. In this work, we describe our submission for the 2019 Sentiment, Emotion and Cognitive state (SEC) pilot task of the LORELEI project. We describe a collection of sentiment analysis systems included in our submission along with the features extracted. Our fielded systems obtained the best results in both English and Spanish language evaluations of the SEC pilot task.
{ "paragraphs": [ [ "The growing adoption of online technologies has created new opportunities for emergency information propagation BIBREF0 . During crises, affected populations post information about what they are experiencing, what they are witnessing, and relate what they hear from other sources BIBREF1 . This information contributes to the creation and dissemination of situational awareness BIBREF2 , BIBREF3 , BIBREF4 , BIBREF0 , and crisis response agencies such as government departments or public health-care NGOs can make use of these channels to gain insight into the situation as it unfolds BIBREF2 , BIBREF5 . Additionally, these organizations might also post time-sensitive crisis management information to help with resource allocation and provide status reports BIBREF6 . While many of these organizations recognize the value of the information found online—specially during the on-set of a crisis—they are in need of automatic tools that locate actionable and tactical information BIBREF7 , BIBREF0 .", "Opinion mining and sentiment analysis techniques offer a viable way of addressing these needs, with complementary insights to what keyword searches or topic and event extraction might offer BIBREF8 . Studies have shown that sentiment analysis of social media during crises can be useful to support response coordination BIBREF9 or provide information about which audiences might be affected by emerging risk events BIBREF10 . For example, identifying tweets labeled as “fear” might support responders on assessing mental health effects among the affected population BIBREF11 . Given the critical and global nature of the HADR events, tools must process information quickly, from a variety of sources and languages, making it easily accessible to first responders and decision makers for damage assessment and to launch relief efforts accordingly BIBREF12 , BIBREF13 . However, research efforts in these tasks are primarily focused on high resource languages such as English, even though such crises may happen anywhere in the world.", "The LORELEI program provides a framework for developing and testing systems for real-time humanitarian crises response in the context of low-resource languages. The working scenario is as follows: a sudden state of danger requiring immediate action has been identified in a region which communicates in a low resource language. Under strict time constraints, participants are expected to build systems that can: translate documents as necessary, identify relevant named entities and identify the underlying situation BIBREF14 . Situational information is encoded in the form of Situation Frames — data structures with fields identifying and characterizing the crisis type. The program's objective is the rapid deployment of systems that can process text or speech audio from a variety of sources, including newscasts, news articles, blogs and social media posts, all in the local language, and populate these Situation Frames. While the task of identifying Situation Frames is similar to existing tasks in literature (e.g., slot filling), it is defined by the very limited availability of data BIBREF15 . This lack of data requires the use of simpler but more robust models and the utilization of transfer learning or data augmentation techniques.", "The Sentiment, Emotion, and Cognitive State (SEC) evaluation task was a recent addition to the LORELEI program introduced in 2019, which aims to leverage sentiment information from the incoming documents. 
This in turn may be used in identifying the severity of the crisis in different geographic locations for efficient distribution of the available resources. In this work, we describe our systems for targeted sentiment detection for the SEC task. Our systems are designed to identify authored expressions of sentiment and emotion towards a HADR crisis. To this end, our models are based on a combination of state-of-the-art sentiment classifiers and simple rule-based systems. We evaluate our systems as part of the NIST LoREHLT 2019 SEC pilot task." ], [ "Social media has received a lot of attention as a way to understand what people communicate during disasters BIBREF16 , BIBREF11 . These communications typically center around collective sense-making BIBREF17 , supportive actions BIBREF18 , BIBREF19 , and social sharing of emotions and empathetic concerns for affected individuals BIBREF20 . To organize and make sense of the sentiment information found in social media, particularly those messages sent during the disaster, several works propose the use of machine learning models (e.g., Support Vector Machines, Naive Bayes, and Neural Networks) trained on a multitude of linguistic features. These features include bag of words, part-of-speech tags, n-grams, and word embeddings, as well as previously validated sentiment lexica such as Linguistic Inquiry and Word Count (LIWC) BIBREF22 , AFINN BIBREF23 , and SentiWordNet BIBREF24 . Most of the work is centered around identifying messages expressing sentiment towards a particular situation as a way to distinguish crisis-related posts from irrelevant information BIBREF25 , either in a binary fashion (positive vs. negative) (e.g., BIBREF25 ) or over fine-grained emotional classes (e.g., BIBREF16 ).", "In contrast to social media posts, sentiment analysis of news articles and blogs has received less attention BIBREF26 . This can be attributed to the more challenging nature of the domain: journalists, for example, will often refrain from using clearly positive or negative vocabulary when writing news articles BIBREF27 . However, certain aspects of these communication channels are still apt for sentiment analysis, such as column pieces BIBREF28 or political news BIBREF27 , BIBREF29 .", "In the context of leveraging the information found online for HADR emergencies, approaches for languages other than English have been limited. Most of them rely on manually constructing resources for a particular language (e.g., in tweets BIBREF30 , BIBREF31 , BIBREF32 and in disaster-related news coverage BIBREF33 ), or on applying cross-language text categorization to build language-specific models BIBREF31 , BIBREF34 .", "In this work, we develop systems that identify positive and negative sentiments expressed in social media posts, news articles and blogs in the context of a humanitarian emergency. Our systems work for both English and Spanish by using an automatic machine translation system. This makes our approach easily extendable to other languages, bypassing the scalability issues that arise from the need to manually construct lexical resources." ], [ "This section describes the SEC task in the LORELEI program along with the dataset, evaluation conditions and metrics." ], [ "Given a dataset of text documents and manually annotated situation frames, the task is to automatically detect sentiment polarity relevant to existing frames and identify the source and target for each sentiment instance. 
The source is defined as a person or a group of people expressing the sentiment, and can be either a PER/ORG/GPE (person, organization or geopolitical entity) construct in the frame, the author of the text document, or an entity not explicitly expressed in the document. The target toward which the sentiment is expressed is either the frame or an entity in the document.", "Situation awareness information is encoded into situation frames in the LORELEI program BIBREF35 . Situation Frames (SF) are similar in nature to those used in Natural Language Understanding (NLU) systems: in essence they are data structures that record information corresponding to a single incident at a single location BIBREF15 . An SF includes a situation Type taken from a fixed inventory of 11 categories (e.g., medical need, shelter, infrastructure), Location where the situation exists (if a location is mentioned) and additional variables highlighting the Status of the situation (e.g., entities involved in resolution, time and urgency). An example of an SF can be found in Table 1 . A list of situation frames and documents serve as input for our sentiment analysis systems." ], [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish. The data are relevant to the HADR domain but are not grounded in a common HADR incident. Each document is annotated for situation frames and associated sentiment by 2 trained annotators from the Linguistic Data Consortium (LDC). Sentiment annotations were done at a segment (sentence) level, and included Situation Frame, Polarity (positive / negative), Sentiment Score, Emotion, Source and Target. Sentiment labels were annotated between the values of -3 (very negative) and +3 (very positive) with 0.5 increments excluding 0. Additionally, the presence or absence of three specific emotions: fear, anger, and joy/happiness was marked. If a segment contains sentiment toward more than one target, each will be annotated separately. Summary of the training data is given in Table 2 ." ], [ "Systems participating in the task were expected to produce outputs with sentiment polarity, emotion, sentiment source and target, and the supporting segment from the input document. This output is evaluated against a ground truth derived from two or more annotations. For the SEC pilot evaluation, a reference set with dual annotations from two different annotators was provided. The system's performance was measured using variants of precision, recall and f1 score, each modified to take into account the multiple annotations. The modified scoring is as follows: let the agreement between annotators be defined as two annotations with the same sentiment polarity, source, and target. That is, consider two annotators in agreement even if their judgments vary on sentiment values or perceived emotions. Designate those annotations with agreement as “D” and those which were not agreed upon as “S”. When computing precision, recall and f-measure, each of the sentiment annotations in D will count as two occurrences in the reference, and likewise a system match on a sentiment annotation in D will count as two matches. Similarly, a match on a sentiment annotation in S will count as a single match. 
The updated precision, recall and f-measure were defined as follows: $\n\\begin{aligned}\n\\text{precision} &= \\frac{2 \\times \\text{Matches in D} + \\text{Matches in S}}{2 \\times \\text{Matches in D} + \\text{Matches in S} + \\text{Unmatched}}\\\\[10pt]\n\\text{recall} &= \\frac{2 \\times \\text{Matches in D} + \\text{Matches in S}}{2|D| + |S|}\\\\[10pt]\n\\text{f1} &= \\frac{2 \\times \\text{precision} \\times \\text{recall}}{\\text{precision} + \\text{recall}}\n\\end{aligned}\n$ " ], [ "We approach the SEC task, particularly the polarity and emotion identification, as a classification problem. Our systems are based on English, and are extended to other languages via automatic machine translation (to English). In this section we present the linguistic features and describe the models used for the evaluation." ], [ "Automatic translations from Spanish to English were obtained from Microsoft Bing using their publicly available API. For the pilot evaluation, we translated all of the Spanish documents into English, and included them as additional training data. At this time we do not translate English to Spanish, but plan to explore this thread in future work." ], [ "We extract word unigrams and bigrams. These features were then transformed using term frequencies (TF) and Inverse document-frequency (IDF).", "Word embeddings pretrained on large corpora allow models to efficiently leverage word semantics as well as similarities between words. This can help with vocabulary generalization as models can adapt to words not previously seen in training data. In our feature set we include a 300-dimensional word2vec word representation trained on a large news corpus BIBREF36 . We obtain a representation for each segment by averaging the embedding of each word in the segment. We also experimented with the use of GloVe BIBREF37 , and Sent2Vec BIBREF38 , an extension of word2vec for sentences.", "We use two sources of sentiment features: manually constructed lexica, and pre-trained sentiment embeddings. When available, manually constructed lexica are a useful resource for identifying expressions of sentiment BIBREF21 . We obtained word percentages across 192 lexical categories using Empath BIBREF39 , which extends popular tools such as the Linguistic Inquiry and Word Count (LIWC) BIBREF22 and General Inquirer (GI) BIBREF40 by adding a wider range of lexical categories. These categories include emotion classes such as surprise or disgust.", "Neural networks have been shown to capture specific task related subtleties which can complement the manually constructed sentiment lexica described in the previous subsection. For this work, we learn sentiment representations using a bilateral Long Short-Term Memory model BIBREF41 trained on the Stanford Sentiment Treebank BIBREF42 . This model was selected because it provided a good trade off between simplicity and performance on a fine-grained sentiment task, and has been shown to achieve competitive results to the state-of-the-art BIBREF43 ." ], [ "We now describe the models used for this work. Our models can be broken down into two groups: our first approach explores state-of-the-art models in targeted and untargeted sentiment analysis to evaluate their performance in the context of the SEC task. These models were pre-trained on larger corpora and evaluated directly on the task without any further adaptation. In a second approach we explore a data augmentation technique based on a proposed simplification of the task. 
In this approach, traditional machine learning classifiers were trained to identify which segments contain sentiment towards a SF regardless of sentiment polarity. For the classifiers, we explored the use of Support Vector Machines and Random Forests. Model performance was estimated through 10-fold cross validation on the train set. Hyper-parameters, such as of regularization, were selected based on the performance on grid-search using an 10-fold inner-cross validation loop. After choosing the parameters, models were re-trained on all the available data.", "We consider some of the most popular baseline models in the literature: (i) minority class baseline (due to the heavily imbalanced dataset), (ii) Support Vector Machines trained on TF-IDF bi-gram language model, (iii) and Support Vector Machines trained on word2vec representations. These models were trained using English documents only.", "Two types of targeted sentiment are evaluated for the task: those expressed towards either a situation frame or those towards an entity. To identify sentiment expressed towards an SF, we use the pretrained model described in BIBREF44 , in which a multiplicative LSTM cell is trained at the character level on a corpus of 82 million Amazon reviews. The model representation is then fed to a logistic regression classifier to predict sentiment. This model (which we will refer to as OpenAI) was chosen since at the time of our system submission it was one of the top three performers on the binary sentiment classification task on the Stanford Sentiment Treebank. In our approach, we first map the text associated with the SF annotation with a segment from the document and pass the full segment to the pretrained OpenAI model identify the sentiment polarity for that segment.", "To identify sentiment targeted towards an entity, we use the recently released Target-Based Sentiment Analysis (TBSA) model from BIBREF45 . In TBSA, two stacked LSTM cells are trained to predict both sentiment and target boundary tags (e.g., predicting S-POS to indicate the start of the target towards which the author is expressing positive sentiment, I-POS and E-POS to indicate intermediate and end of the target). In our submission, since input text documents can be arbitrarily long, we only consider sentences which include a known and relevant entity; these segments are then fed to the TBSA model to predict targeted sentiment. If the target predicted by this model matched with any of the known entities, the system would output the polarity and the target.", "In this model we limit our focus on the task of correctly identifying those segments with sentiment towards a SF. That is, given a pair of SF and segment, we train models to identify if this segment contains any sentiment towards that SF. This allows us to expand our dataset from 123 documents into one with $\\sum _d |SF_d| \\times |d|$ number of samples, where $|d|$ is the length of the document (i.e., number of segments) and $|SF_d|$ is the number of SF annotations for document $d$ . Summary of the training dataset after augmentation is given in Table 3 .", "Given the highly skewed label distribution in the training data, a majority of the constructed pairs do not have any sentiment towards a SF. Hence, our resulting dataset has a highly imbalanced distribution which we address by training our models after setting the class weights to be the inverse class frequency. To predict polarity, we assume the majority class of negative sentiment. 
We base this assumption on the fact that the domain we are working with does not seem to support the presence of positive sentiment, as made evident by the highly imbalanced dataset.", "Owing to the nature of the problem domain, there is considerable variance in the source of the text documents and their structure. For example, tweets only have one segment per sample whereas news articles contain an average of $7.07\pm 4.96$ and $6.31\pm 4.93$ segments for English and Spanish documents respectively. Moreover, studies suggest that sentiments expressed in social media tend to differ significantly from those in the news BIBREF26 . Table 4 presents a breakdown of the train set for each sentiment across domains; as is evident, tweets form a sizeable portion of the training set. Motivated by this, we train different models for tweets and non-tweet documents in order to capture the underlying differences between the data sources.", "Initial experiments showed that our main source of error was not being able to correctly identify the supporting segment. Even if polarity, source and target were correctly identified, missing the correct segment was considered an error, and thus lowered our models' precision. To address this, we decided to use a model which only produced results for tweets given that these only contain one segment, making the segment identification sub-task trivial." ], [ "Model performance during training is presented in Table 5 . While all the models outperformed the baselines, not all of them did so with a significant margin due to the robustness of the baselines selected. The ones found to be significantly better than the baselines were models IIb (Domain-specific) and IIc (Twitter-only) (permutation test, $n = 10^5$ , both $p < 0.05$ ). The difference in precision between models IIb and IIc points to the former making wrong predictions for news articles. These errors are most likely in selecting the wrong supporting segment. Moreover, even though models IIa-c only produce negative labels, they still achieve improved performance over the state-of-the-art systems, highlighting the highly skewed nature of the training dataset.", "Table 6 presents the official evaluation results for English and Spanish. Some information is missing since at the time of submission only partial scores had been made public. As previously mentioned, the pre-trained state-of-the-art models (model I) were directly applied to the evaluation data without any adaptation. These performed reasonably well for the English data. Among the submissions of the SEC Task pilot, our systems outperformed the other competitors for both languages." ], [ "Understanding the expressed sentiment from an affected population during the onset of a crisis is a particularly difficult task, especially in low-resource scenarios. There are multiple difficulties beyond the limited amount of data. For example, in order to provide decision-makers with actionable and usable information, it is not enough for the system to correctly classify sentiment or emotional state; it also ought to identify the source and target of the expressed sentiment. To provide a sense of trust and accountability in the system's decisions, it makes sense to identify a justifying segment. Moreover, these systems should consider a variety of information sources to create a broader and richer picture of how a situation unfolds. 
Thus, it is important that systems take into account the possible differences in the way sentiment is expressed in each one of these sources. In this work, we presented two approaches to the task of providing actionable and useful information. Our results show that state-of-the-art sentiment classifiers can be leveraged out-of-the-box for a reasonable performance on English data. By identifying possible differences coming from the information sources, as well as by exploiting the information communicated as the situation unfolds, we showed significant performance gains on both English and Spanish." ] ], "section_name": [ "Introduction", "Previous Work", "Problem Definition", "The Sentiment, Emotion and Cognitive State (SEC) Task", "Data", "Evaluation", "Method", "Machine Translation", "Linguistic Features", "Models", "Results", "Conclusion" ] }
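The modified scoring defined in the Evaluation section above lends itself to a compact implementation. The following is a minimal, hypothetical sketch of those formulas in Python; the function and argument names are ours, and the counts of matches on agreed (D) and singly-agreed (S) annotations, unmatched system outputs, and the reference sizes |D| and |S| are assumed to have been computed beforehand by aligning system output with the dual-annotated reference.

```python
def sec_pilot_scores(matches_in_d, matches_in_s, unmatched, n_d, n_s):
    """Modified precision/recall/F1 for the SEC pilot evaluation.

    Annotations agreed upon by both annotators (D) count twice, both in
    the reference and when matched by the system; singly-annotated ones
    (S) count once.
    """
    weighted_matches = 2 * matches_in_d + matches_in_s
    denom_p = weighted_matches + unmatched
    denom_r = 2 * n_d + n_s
    precision = weighted_matches / denom_p if denom_p else 0.0
    recall = weighted_matches / denom_r if denom_r else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Hypothetical example: 10 matches on agreed annotations, 3 on
# singly-annotated ones, 4 unmatched outputs, |D| = 15, |S| = 6.
print(sec_pilot_scores(10, 3, 4, 15, 6))
```

The zero-denominator guards are an addition for robustness and are not part of the official metric definition.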
{ "answers": [ { "annotation_id": [ "5a15df8e679710bf03a84c60fb5aa3c9f6456b05", "872e88ead8873b993f239e550a6e9215feac0e0c", "f47069b478eb088739e9f90006e94022b1b67520", "fed054396c48689f7054bae3d1374ca09e966058" ], "answer": [ { "evidence": [ "We use two sources of sentiment features: manually constructed lexica, and pre-trained sentiment embeddings. When available, manually constructed lexica are a useful resource for identifying expressions of sentiment BIBREF21 . We obtained word percentages across 192 lexical categories using Empath BIBREF39 , which extends popular tools such as the Linguistic Inquiry and Word Count (LIWC) BIBREF22 and General Inquirer (GI) BIBREF40 by adding a wider range of lexical categories. These categories include emotion classes such as surprise or disgust.", "Neural networks have been shown to capture specific task related subtleties which can complement the manually constructed sentiment lexica described in the previous subsection. For this work, we learn sentiment representations using a bilateral Long Short-Term Memory model BIBREF41 trained on the Stanford Sentiment Treebank BIBREF42 . This model was selected because it provided a good trade off between simplicity and performance on a fine-grained sentiment task, and has been shown to achieve competitive results to the state-of-the-art BIBREF43 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We use two sources of sentiment features: manually constructed lexica, and pre-trained sentiment embeddings. ", "For this work, we learn sentiment representations using a bilateral Long Short-Term Memory model BIBREF41 trained on the Stanford Sentiment Treebank BIBREF42 ." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Two types of targeted sentiment are evaluated for the task: those expressed towards either a situation frame or those towards an entity. To identify sentiment expressed towards an SF, we use the pretrained model described in BIBREF44 , in which a multiplicative LSTM cell is trained at the character level on a corpus of 82 million Amazon reviews. The model representation is then fed to a logistic regression classifier to predict sentiment. This model (which we will refer to as OpenAI) was chosen since at the time of our system submission it was one of the top three performers on the binary sentiment classification task on the Stanford Sentiment Treebank. In our approach, we first map the text associated with the SF annotation with a segment from the document and pass the full segment to the pretrained OpenAI model identify the sentiment polarity for that segment." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "To identify sentiment expressed towards an SF, we use the pretrained model described in BIBREF44 , in which a multiplicative LSTM cell is trained at the character level on a corpus of 82 million Amazon reviews." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "We now describe the models used for this work. Our models can be broken down into two groups: our first approach explores state-of-the-art models in targeted and untargeted sentiment analysis to evaluate their performance in the context of the SEC task. These models were pre-trained on larger corpora and evaluated directly on the task without any further adaptation. In a second approach we explore a data augmentation technique based on a proposed simplification of the task. 
In this approach, traditional machine learning classifiers were trained to identify which segments contain sentiment towards a SF regardless of sentiment polarity. For the classifiers, we explored the use of Support Vector Machines and Random Forests. Model performance was estimated through 10-fold cross validation on the train set. Hyper-parameters, such as of regularization, were selected based on the performance on grid-search using an 10-fold inner-cross validation loop. After choosing the parameters, models were re-trained on all the available data." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "These models were pre-trained on larger corpora and evaluated directly on the task without any further adaptation. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Two types of targeted sentiment are evaluated for the task: those expressed towards either a situation frame or those towards an entity. To identify sentiment expressed towards an SF, we use the pretrained model described in BIBREF44 , in which a multiplicative LSTM cell is trained at the character level on a corpus of 82 million Amazon reviews. The model representation is then fed to a logistic regression classifier to predict sentiment. This model (which we will refer to as OpenAI) was chosen since at the time of our system submission it was one of the top three performers on the binary sentiment classification task on the Stanford Sentiment Treebank. In our approach, we first map the text associated with the SF annotation with a segment from the document and pass the full segment to the pretrained OpenAI model identify the sentiment polarity for that segment." ], "extractive_spans": [], "free_form_answer": "No, they used someone else's pretrained model. ", "highlighted_evidence": [ "To identify sentiment expressed towards an SF, we use the pretrained model described in BIBREF44 , in which a multiplicative LSTM cell is trained at the character level on a corpus of 82 million Amazon reviews." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "043654eefd60242ac8da08ddc1d4b8d73f86f653", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "ea17aa5ea17e7838a55c252484390079b16c31ae", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "annotation_id": [ "3f877a06e4076624be95d8d78577afb680e4290c", "8c3e3e2cd5ad11acde2f156db142ee4d7f5f5ff4", "76a2c963552628e66e439cc37a87bc20b104ba37", "caf56a9147c80a684661f7a5353ba138d8a1e02f" ], "answer": [ { "evidence": [ "We extract word unigrams and bigrams. These features were then transformed using term frequencies (TF) and Inverse document-frequency (IDF).", "Word embeddings pretrained on large corpora allow models to efficiently leverage word semantics as well as similarities between words. This can help with vocabulary generalization as models can adapt to words not previously seen in training data. In our feature set we include a 300-dimensional word2vec word representation trained on a large news corpus BIBREF36 . We obtain a representation for each segment by averaging the embedding of each word in the segment. We also experimented with the use of GloVe BIBREF37 , and Sent2Vec BIBREF38 , an extension of word2vec for sentences.", "We use two sources of sentiment features: manually constructed lexica, and pre-trained sentiment embeddings. When available, manually constructed lexica are a useful resource for identifying expressions of sentiment BIBREF21 . 
We obtained word percentages across 192 lexical categories using Empath BIBREF39 , which extends popular tools such as the Linguistic Inquiry and Word Count (LIWC) BIBREF22 and General Inquirer (GI) BIBREF40 by adding a wider range of lexical categories. These categories include emotion classes such as surprise or disgust." ], "extractive_spans": [ "unigrams and bigrams", "word2vec", "manually constructed lexica", "sentiment embeddings" ], "free_form_answer": "", "highlighted_evidence": [ "We extract word unigrams and bigrams.", " In our feature set we include a 300-dimensional word2vec word representation trained on a large news corpus BIBREF36 ", "We use two sources of sentiment features: manually constructed lexica, and pre-trained sentiment embeddings. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "043654eefd60242ac8da08ddc1d4b8d73f86f653", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287", "415f2b85b4ca163ae73ace49338be598d1e9e167" ] }, { "annotation_id": [ "36b8c17efb42e0fbe9dfcfed29130deb89d698fe", "1a199a5c07a5b0d4a604eb1e5c8a3923e1d98c16", "64eeb49d924de3df06c97761ec2963f5911879c5" ], "answer": [ { "evidence": [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish. The data are relevant to the HADR domain but are not grounded in a common HADR incident. Each document is annotated for situation frames and associated sentiment by 2 trained annotators from the Linguistic Data Consortium (LDC). Sentiment annotations were done at a segment (sentence) level, and included Situation Frame, Polarity (positive / negative), Sentiment Score, Emotion, Source and Target. Sentiment labels were annotated between the values of -3 (very negative) and +3 (very positive) with 0.5 increments excluding 0. Additionally, the presence or absence of three specific emotions: fear, anger, and joy/happiness was marked. If a segment contains sentiment toward more than one target, each will be annotated separately. Summary of the training data is given in Table 2 ." ], "extractive_spans": [], "free_form_answer": "2", "highlighted_evidence": [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish. The data are relevant to the HADR domain but are not grounded in a common HADR incident. Each document is annotated for situation frames and associated sentiment by 2 trained annotators from the Linguistic Data Consortium (LDC). Sentiment annotations were done at a segment (sentence) level, and included Situation Frame, Polarity (positive / negative), Sentiment Score, Emotion, Source and Target. 
Sentiment labels were annotated between the values of -3 (very negative) and +3 (very positive) with 0.5 increments excluding 0. Additionally, the presence or absence of three specific emotions: fear, anger, and joy/happiness was marked. If a segment contains sentiment toward more than one target, each will be annotated separately. Summary of the training data is given in Table 2 ." ], "extractive_spans": [ "2" ], "free_form_answer": "", "highlighted_evidence": [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish. The data are relevant to the HADR domain but are not grounded in a common HADR incident. Each document is annotated for situation frames and associated sentiment by 2 trained annotators from the Linguistic Data Consortium (LDC). Sentiment annotations were done at a segment (sentence) level, and included Situation Frame, Polarity (positive / negative), Sentiment Score, Emotion, Source and Target. Sentiment labels were annotated between the values of -3 (very negative) and +3 (very positive) with 0.5 increments excluding 0. Additionally, the presence or absence of three specific emotions: fear, anger, and joy/happiness was marked. If a segment contains sentiment toward more than one target, each will be annotated separately. Summary of the training data is given in Table 2 ." ], "extractive_spans": [], "free_form_answer": "2 (Spanish and English)", "highlighted_evidence": [ "Training data provided for the task included documents were collected from social media, SMS, news articles, and news wires. This consisted of 76 documents in English and 47 in Spanish." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "ea17aa5ea17e7838a55c252484390079b16c31ae", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "annotation_id": [ "b7fc9b2988bb88c70cf88e00b9c8bb47a7cde3b2", "5dd3c0a5cfd596e9beba529770e17a240538e6a3", "7204d0776bc966db479c2771ee9338a01f5ab145" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287", "ea17aa5ea17e7838a55c252484390079b16c31ae" ] } ], "nlp_background": [ "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Did they pre-train on existing sentiment corpora?", "What were the most salient features extracted by the models?", "How many languages are in the dataset?", "Did the system perform well on low-resource languages?" 
], "question_id": [ "9c2cacf77041e02d38f92a4c490df1e04552f96f", "35cdaa0fff007add4a795850b139df80af7d1ffc", "3de3a083b8ba3086792d38ae9667e095070f7f37", "04914917d01c9cd8718cd551dc253eb3827915d8" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "TABLE I EXAMPLE OF A SEGMENT CONTAINING A SITUATION FRAME WITH SENTIMENT RELATED ANNOTATIONS (IN BOLD).", "TABLE II FREQUENCY STATISTICS FOR THE PROVIDED TRAINING DATA PER LANGUAGE: NUMBER OF DOCUMENTS, NUMBER OF ANNOTATED SITUATION FRAMES, NUMBER OF SENTIMENT INSTANCES, PERCENTAGE OF NEGATIVE POLARITY.", "TABLE IV TRAIN DATASET DOMAIN BREAK-DOWN", "TABLE V MODEL PERFORMANCE ON ENGLISH TRAIN DATA ESTIMATED USING 10-FOLD CV" ], "file": [ "2-TableI-1.png", "3-TableII-1.png", "4-TableIV-1.png", "5-TableV-1.png" ] }
[ "Did they pre-train on existing sentiment corpora?", "How many languages are in the dataset?" ]
[ [ "1905.00472-Linguistic Features-3", "1905.00472-Linguistic Features-2", "1905.00472-Models-0", "1905.00472-Models-2" ], [ "1905.00472-Data-0" ] ]
[ "No, they used someone else's pretrained model. ", "2 (Spanish and English)" ]
19
1912.02866
Classifying Diagrams and Their Parts using Graph Neural Networks: A Comparison of Crowd-Sourced and Expert Annotations
This article compares two multimodal resources that consist of diagrams describing topics in elementary school natural sciences. Both resources contain the same diagrams and represent their structure using graphs, but they differ in their annotation schema and in how the annotations have been created: depending on the resource in question, either by crowd-sourced workers or by trained experts. This article reports on two experiments that evaluate how effectively crowd-sourced and expert-annotated graphs can represent the multimodal structure of diagrams for representation learning using various graph neural networks. The results show that the identity of diagram elements can be learned from their layout features, while the expert annotations provide better representations of diagram types.
{ "paragraphs": [ [ "Diagrams are a common feature of many everyday media from newspapers to school textbooks, and not surprisingly, different forms of diagrammatic representation have been studied from various perspectives. To name just a few examples, recent work has examined patterns in diagram design BIBREF0 and their interpretation in context BIBREF1, and developed frameworks for classifying diagrams BIBREF2 and proposed guidelines for their design BIBREF3. There is also a long-standing interest in processing and generating diagrams computationally BIBREF4, BIBREF5, BIBREF6, which is now resurfacing as advances emerging from deep learning for computer vision and natural language processing are brought to bear on diagrammatic representations BIBREF7, BIBREF8, BIBREF9.", "From the perspective of computational processing, diagrammatic representations present a formidable challenge, as they involve tasks from both computer vision and natural language processing. On the one hand, diagrams have a spatial organisation – layout – which needs to be segmented to identify meaningful units and their position. Making sense of how diagrams exploit the 2D layout space falls arguably within the domain of computer vision. On the other hand, diagrams also have a discourse structure, which uses the layout space to set up discourse relations between instances of natural language, various types of images, arrows and lines, thus forming a unified discourse organisation. The need to parse this discourse structure shifts the focus towards the field of natural language processing.", "Understanding and making inferences about the structure of diagrams and other forms of multimodal discourse may be broadly conceptualised as multimodal discourse parsing. Recent examples of work in this area include alikhanietal2019 and ottoetal2019, who model discourse relations between natural language and photographic images, drawing on linguistic theories of coherence and text–image relations, respectively. In most cases, however, predicting a single discourse relation covers only a part of the discourse structure. sachanetal2019 note that there is a need for comprehensive theories and models of multimodal communication, as they can be used to rethink tasks that have been previously considered only from the perspective of natural language processing.", "Unlike many other areas, the study of diagrammatic representations is particularly well-resourced, as several multimodal resources have been published recently to support research on computational processing of diagrams BIBREF10, BIBREF8, BIBREF11. This study compares two such resources, AI2D BIBREF10 and AI2D-RST BIBREF11, which both feature the same diagrams, as the latter is an extension of the former. Whereas AI2D features crowd-sourced, non-expert annotations, AI2D-RST provides multiple layers of expert annotations, which are informed by state-of-the-art approaches to multimodal communication BIBREF12 and annotation BIBREF13, BIBREF14.", "This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. 
Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer.", "Both AI2D and AI2D-RST represent the multimodal structure of diagrams using graphs. This enables learning their representations using graph neural networks, which are gaining currency as a graph is a natural choice for representing many types of data BIBREF15. This article reports on two experiments that evaluate the capability of AI2D and AI2D-RST to represent the multimodal structure of diagrams using graphs, focusing particularly on spatial layout, the hierarchical organisation of diagram elements and their connections expressed using arrows and lines." ], [ "This section introduces the two multimodal resources compared in this study and discusses related work, beginning with the crowd-sourced annotations in AI2D and continuing with the alternative expert annotations in AI2D-RST, which are built on top of the crowd-sourced descriptions and cover a 1000-diagram subset of the original data. Figure FIGREF1 provides an overview of the two datasets, explains their relation to each other and provides an overview of the experiments reported in Section SECREF4" ], [ "The Allen Institute for Artificial Intelligence Diagrams dataset (AI2D) contains 4903 English-language diagrams, which represent topics in primary school natural sciences, such as food webs, human physiology and life cycles, amounting to a total of 17 classes BIBREF10. The dataset was originally developed to support research on diagram understanding and visual question answering BIBREF16, but has also been used to study the contextual interpretation of diagrammatic elements, such as arrows and lines BIBREF17.", "The AI2D annotation schema models four types of diagram elements: text, graphics, arrows and arrowheads, whereas the semantic relations that hold between these elements are described using ten relations from a framework for analysing diagrammatic representations in engelhardt2002. Each diagram is represented using a Diagram Parse Graph (DPG), whose nodes stand for diagram elements while the edges between the nodes carry information about their semantic relations. The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10.", "I have previously argued that describing different types of multimodal structures in diagrammatic representations requires different types of graphs BIBREF18. To exemplify, many forms of multimodal discourse are assumed to possess a hierarchical structure, whose representation requires a tree graph. Diagrams, however, use arrows and lines to draw connections between elements that are not necessarily part of the same subtree, and for this reason representing connectivity requires a cyclic graph. AI2D DPGs, in turn, conflate the description of semantic relations and connections expressed using diagrammatic elements. Whether computational modelling of diagrammatic structures, or more generally, multimodal discourse parsing, benefits from pulling apart different types of multimodal structure remains an open question, which we pursued by developing an alternative annotation schema for AI2D, named AI2D-RST, which is introduced below." 
], [ "AI2D-RST covers a subset of 1000 diagrams from AI2D, which have been annotated by trained experts using a new multi-layer annotation schema for describing the diagrams in AI2D BIBREF11. The annotation schema, which draws on state-of-the-art theories of multimodal communication BIBREF12, adopts a stand-off approach to describing the diagrams. Hence the three annotation layers in AI2D-RST are represented using three different graphs, which use the same identifiers for nodes across all three graphs to allow combining the descriptions in different graphs. AI2D-RST contains three graphs:", "Grouping: A tree graph that groups together diagram elements that are likely to be visually perceived as belonging together, based loosely on Gestalt principles of visual perception BIBREF19. These groups are organised into a hierarchy, which represents the organisation of content in the 2D layout space BIBREF13, BIBREF14.", "Connectivity: A cyclic graph representing connections between diagram elements or their groups, which are signalled using arrows or lines BIBREF20.", "Discourse structure: A tree graph representing discourse structure of the diagram using Rhetorical Structure Theory BIBREF21, BIBREF22: hence the name AI2D-RST.", "The grouping graph, which is initially populated by diagram elements from the AI2D layout segmentation, provides a foundation for describing connectivity and discourse structure by adding nodes to the grouping graph that stand for groups of diagram elements, as shown in the upper part of Figure FIGREF1. In addition, the grouping graph includes annotations for 11 different diagram types identified in the data (e.g. cycles, cross-sections and networks), which may be used as target labels during training, as explained in Section SECREF26 The coarse and fine-grained diagram types identified in the data are shown in Figure FIGREF8.", "hiippalaetal2019-ai2d show that the proposed annotation schema can be reliably applied to the data by measuring inter-annotator agreement between five annotators on random samples from the AI2D-RST corpus using Fleiss' $\\kappa $ BIBREF23. The results show high agreement on grouping ($N = 256, \\kappa = 0.84$), diagram types ($N = 119, \\kappa = 0.78$), connectivity ($N = 239, \\kappa = 0.88$) and discourse structure ($N = 227, \\kappa = 0.73$). It should be noted, however, that these measures may be affected by implicit knowledge that tends to develop among expert annotators who work towards the same task BIBREF24." ], [ "Both AI2D and AI2D-RST use graphs to represent the multimodal structure of diagrams. This section explicates how the graphs and their node and edge types differ across the two multimodal resources." ], [ "AI2D and AI2D-RST share most node types that represent different diagram elements, namely text, graphics, arrows and the image constant, which is a node that stands for the entire diagram. In AI2D, generic diagram elements such as titles describing the entire diagram are typically connected to the image constant. In AI2D-RST, the image constant acts as the root node of the tree in the grouping graph. In addition to text, graphics, arrows and the image constant, AI2D-RST features two additional node types for groups and discourse relations, whereas AI2D includes an additional node for arrowheads. To summarise, AI2D contains five distinct node types, whereas AI2D-RST has six. Note, however, that only grouping and connectivity graphs used in this study, which limits the number to five for AI2D-RST." 
], [ "The same features are used for both AI2D and AI2D-RST for nodes with layout information, namely text, graphics, arrows and arrowheads (in AI2D only). The position, size and shape of each diagram element are described using the following features: (1) the centre point of the bounding box or polygon, divided by the height and width of the diagram image, (2) area, or the number of pixels within the polygon, divided by the total number of pixels in the image, and (3) the solidity of the polygon, or the polygon area divided by the area of its convex hull. This yields a 4-dimensional feature vector describing the position and size of each diagram element in the layout. Each dimension is set to zero for grouping nodes in AI2D-RST and image constant nodes in AI2D and AI2D-RST." ], [ "AI2D-RST models discourse relations using nodes, which have a 25-dimensional, one-hot encoded feature vector to represent the type of discourse relation, which are drawn from Rhetorical Structure Theory BIBREF21. In AI2D, the discourse relations derived from engelhardt2002 are represented using a 10-dimensional one-hot encoded vector, which is associated with edges connecting diagram elements participating in the relation. Because the two resources draw on different theories and represent discourse relations differently, I use the grouping and connectivity graph for AI2D-RST representations and ignore the edge features in AI2D, as these descriptions attempt to describe roughly the same multimodal structures. A comparison of discourse relations is left for a follow-up study focusing on representing the discourse structure of diagrams." ], [ "Whereas AI2D encodes information about semantic relations using edges, in AI2D-RST the information carried by edges depends on the graph in question. The edges of the grouping graph do not have features, whereas the edges of the connectivity graph have a 3-dimensional, one-hot encoded vector that represents the type of connection. The edges of the discourse structure graph have a 2-dimensional, one-hot encoded feature vector to represent nuclearity, that is, whether the nodes that participate in a discourse relations act as nuclei or satellites.", "For the experiments reported in Section 4, self-loops are added to each node in the graph. A self-loop is an edge that originates in and terminates at the same node. Self-loops essentially add the graph's identity matrix to the adjacency matrix, which allow the graph neural networks to account for the node's own features during message passing, that is, when sending and receiving features from adjacent nodes." ], [ "This section presents two experiments that compare AI2D and AI2D-RST annotations in classifying diagrams and their parts using various graph neural networks." ], [ "I evaluated the following graph neural network architectures for both graph and node classification tasks:", "Graph Convolutional Network (GCN) BIBREF25", "Simplifying Graph Convolution (SGC) BIBREF26, averaging incoming node features from up to 2 hops away", "Graph Attention Network (GAT) BIBREF27 with 2 heads", "GraphSAGE (SAGE) BIBREF28 with LSTM aggregation", "I implemented all graph neural networks using Deep Graph Library 0.4 BIBREF29 on the PyTorch 1.3 backend BIBREF30. For GCN, GAT and SAGE, each network consists of two of the aforementioned layers with a Rectified Linear Unit (ReLU) activation, followed by a dense layer and a final softmax function for predicting class membership probabilities. 
For SGC, the network consists of a single SGC layer without an activation function. The implementations for each network are available in the repository associated with this article." ], [ "I used the Tree of Parzen Estimators (TPE) algorithm BIBREF31 to tune model hyperparameters separately for each dataset, architecture and task using the implementation in the Tune BIBREF32 and hyperopt BIBREF33 libraries. For each dataset, architecture and task, I evaluated a total of 100 hyperparameter combinations for a maximum of 100 epochs, using 850 diagrams for training and 150 for validation. The objective metric to be maximised was macro F1 score. Tables TABREF20 and TABREF21 give the hyperparameters and spaces searched for node and graph classification. Following shcuretal2018, I shuffled the training and validation splits for each run to prevent overfitting and used the same training procedure throughout. I used the Adam optimiser BIBREF34 for both hyperparameter search and training.", "To address the issue of class imbalance present in both tasks, class weights were calculated by dividing the total number of samples by the product of the number of unique classes and the number of samples for each class, as implemented in scikit-learn BIBREF35. These weights were passed to the loss function during hyperparameter search and training.", "After hyperparameter optimisation, I trained each model with the best hyperparameter combination for 20 runs, using 850 diagrams for training, 75 for validation and 75 for testing, shuffling the splits for each run while monitoring performance on the evaluation set and stopping training early if the macro F1 score failed to improve over 15 epochs for graph classification or over 25 epochs for node classification. I then evaluated the model on the testing set and recorded the result." ], [ "The purpose of the node classification task is to evaluate how well algorithms learn to classify the parts of a diagram using the graph-based representations in AI2D and AI2D-RST and node features representing the position, size and shape of the element, as described in Section SECREF11 Identifying the correct node type is a key step when populating a graph with candidate nodes from object detectors, particularly if the nodes will be processed further, for instance, to extract semantic representations from CNN features or word embeddings. Furthermore, the node representations learned during this task can be used as node features for graph classification, as will be shown shortly below in Section SECREF26", "Table TABREF25 presents a baseline for node classification from a dummy classifier, together with results for random forest and support vector machine classifiers trained on 850 and tested on 150 diagrams. Both AI2D and AI2D-RST include five node types, of which four are the same: the difference is that whereas AI2D includes arrowheads, AI2D-RST includes nodes for groups of diagram elements, as outlined in Section SECREF9 The results seem to reflect the fact that image constants and grouping nodes have their features set to zero, and RF and SVM cannot leverage features incoming from their neighbouring nodes to learn node representations. This is likely to affect the result for AI2D-RST, which includes 7300 grouping nodes that are used to create a hierarchy of diagram elements.", "Table TABREF22 shows the results for node classification using various graph neural network architectures. 
Because the results are not entirely comparable due to different node types present in the two resources, it is more reasonable to compare architectures. SAGE, GCN and GAT clearly outperform SGC in classifying nodes from both resources, as does the random forest classifier. AI2D nodes are classified with particularly high accuracy, which may result from having to learn representations for only one node type, that is, the image constant ($N = 1000$). AI2D-RST, in turn, must learn representations from scratch for both image constants ($N = 1000$) and grouping nodes ($N = 7300$).", "Because SAGE learns useful node representations for both resources, as reflected in high performance for all metrics, I chose this architecture for extracting node features for graph classification." ], [ "This task compares the performance of graph-based representations in AI2D and AI2D-RST for classifying entire diagrams. Here the aim is to evaluate to what extent graph neural networks can learn about the generic structure of primary school science diagrams from the graph-based representations in AI2D and AI2D-RST. Correctly identifying what the diagram attempts to communicate and how carries implications for tasks such as visual question answering, as the type of a diagram constrains the interpretation of key diagrammatic elements, such as the meaning of lines and arrows BIBREF1, BIBREF17.", "To enable a fair comparison, the target classes are derived from both AI2D and AI2D-RST. Whereas AI2D includes 17 classes that represent the semantic content of diagrams, as exemplified by categories such as `parts of the Earth', `volcano', and `food chains and webs', AI2D-RST classifies diagrams into abstract diagram types, such as cycles, networks, cross-sections and cut-outs. More specifically, AI2D-RST provides classes for diagram types at two levels of granularity, fine-grained (12 classes) and coarse (5 classes), which are derived from the proposed schema for diagram types in AI2D-RST BIBREF11.", "The 11 fine-grained classes in AI2D-RST shown in Figure FIGREF8 are complemented by an additional class (`mixed'), which includes diagrams that combine multiple diagram types, whose inclusion avoids performing multi-label classification (see the example in Figure FIGREF28). The coarse classes, which are derived by grouping fine-grained classes for tables, tabular and spatial organisations, networks and cycles, diagrammatic and pictorial representations, and so on, are also complemented by a `mixed' class.", "For this task, the node features consist of the representations learned during node classification in Section SECREF24 These representations are extracted by feeding the features representing node position, size and shape to the graph neural network, which in both cases uses the GraphSAGE architecture BIBREF28, and recording the output of the final softmax activation. Compared to a one-hot encoding, representing node identity using a probability distribution from a softmax activation reduces the sparsity of the feature vector. This yields a 5-dimensional feature vector for each node.", "Table TABREF29 provides a baseline for graph classification from a dummy classifier, as well as results for random forest (RF) and support vector machine (SVM) classifiers trained on 850 and tested on 150 diagrams. The macro F1 scores show that the RF classifier with 100 decision trees offers competitive performance for all target classes and both AI2D and AI2D-RST, in some cases outperforming graph neural networks. 
It should be noted, however, that the RF classifier is trained with node features learned using GraphSAGE.", "The results for graph classification using graph neural networks presented in Table TABREF27 show certain differences between AI2D and AI2D-RST. When classifying diagrams into the original semantic categories defined in AI2D ($N = 17$), the AI2D graphs significantly outperform AI2D-RST when using the GraphSAGE architecture. For all other graph neural networks, the differences between AI2D and AI2D-RST are not statistically significant. This is not surprising as the AI2D graphs were tailored for the original classes, yet the AI2D-RST graphs seem to capture generic properties that help to classify diagrams into semantic categories nearly as accurately as AI2D graphs designed specifically for this purpose, although no semantic features apart from the layout structure are provided to the classifier.", "The situation is reversed for the coarse ($N = 5$) and fine-grained ($N = 12$) classes from AI2D-RST, in which the AI2D-RST graphs generally outperform AI2D, except for coarse classification using SGC. This classification task obviously benefits AI2D-RST, whose classification schema was originally designed for abstract diagram types. This may also suggest that the AI2D graphs do not capture regularities that would support learning to generalise about diagram types. The situation is somewhat different for fine-grained classification, in which the differences in performance are relatively small.", "Generally, most architectures do not benefit from combining the grouping and connectivity graphs in AI2D-RST. This is an interesting finding, as many diagram types differ in terms of their connectivity structures (e.g. cycles and networks) BIBREF11. The edges introduced from the connectivity graph naturally increase the flow of information in the graph, but this does not seem to help learn distinctive features between diagram types. On the other hand, it should be noted that the nodes are not typed, that is, the model cannot distinguish between edges from the grouping and connectivity graphs.", "Overall, the macro F1 scores for both AI2D and AI2D-RST, which assigns equal weight to all classes regardless of the number of samples, underline the challenge of training classifiers using limited data with imbalanced classes. The lack of visual features may also affect overall classification performance: certain fine-grained classes, which are also prominent in the data, such as 2D cross-sections and 3D cut-outs, may have similar graph-based representations. Extracting visual features from diagram images may help to discern between diagrams whose graphs bear close resemblance to one another, but this would require advanced object detectors for non-photographic images." ], [ "The results for AI2D-RST show that the grouping graph, which represents visual perceptual groups of diagram elements and their hierarchical organisation, provides a robust foundation for describing the spatial organisation of diagrammatic representations. This kind of generic schema can be expanded beyond diagrams to other modes of expression that make use of the spatial extent, such as entire page layouts. 
A description of how the layout space is used can be incorporated into any effort to model discourse relations that may hold between the groups or their parts.", "The promising results for AI2D-RST suggest that domain experts in multimodal communication should be involved in planning crowd-sourced annotation tasks right from the beginning. Segmentation, in particular, warrants attention as this phase defines the units of analysis: cut-outs and cross-sections, for instance, use labels and lines to pick out sub-regions of graphical objects, whereas in illustrations the labels often refer to entire objects. Such distinctions should preferably be picked out at the very beginning to be incorporated fully into the annotation schema.", "Tasks related to grouping and connectivity annotation could be crowd-sourced relatively easily, whereas annotating diagram types and discourse relations may require multi-step procedures and assistance in the form of prompts, as yungetal2019 have recently shown for RST. Involving both expert and crowd-sourced annotators could also alleviate problems related to circularity by forcing domain experts to frame the tasks in terms understandable to crowd-sourced workers BIBREF24.", "In light of the results for graph classification, one should note that node features are averaged before classification regardless of their connections in the graph. Whereas the expert-annotated grouping graph in AI2D-RST has been pruned of isolated nodes, which ensures that features are propagated to neighbouring nodes, the crowd-sourced AI2D graphs contain both isolated nodes and subgraphs. To what extent these disconnections affect the performance for AI2D warrants a separate study. Additionally, more advanced techniques than mere averaging, such as pooling, should be explored in future work.", "Finally, there are many aspects of diagrammatic representation that were not explored in this study. To begin with, a comparison of representations for discourse structures using the question-answering set accompanying AI2D would be particularly interesting, especially if both AI2D and AI2D-RST graphs were enriched with features from state-of-the-art semantic representations for natural language and graphic elements." ], [ "In this article, I compared graph-based representations of diagrams representing primary school science topics from two datasets that contain the same diagrams, which have been annotated by either crowd-sourced workers or trained experts. The comparison involved two tasks, graph and node classification, using four different architectures for graph neural networks, which were compared to baselines from dummy, random forest and support vector machine classifiers.", "The results showed that graph neural networks can learn to accurately identify diagram elements from their size, shape and position in layout. These node representations could then be used as features for graph classification. Identifying diagrams, either in terms of what they represent (semantic content) or how (abstract diagram type), proved more challenging using the graph-based representations.
Improving accuracy may require additional features that capture visual properties of the diagrams, as these distinctions cannot be captured by graph-based representations and features focusing on layout.", "Overall, the results nevertheless suggest that simple layout features can provide a foundation for representing diagrammatic structures, which use the layout space to organise the content and set up discourse relations between different elements. To what extent these layout features can support the prediction of actual discourse relations should be explored in future research." ] ], "section_name": [ "Introduction", "Data", "Data ::: Crowd-sourced Annotations from AI2D", "Data ::: Expert Annotations from AI2D-RST", "Graph-based Representations", "Graph-based Representations ::: Nodes ::: Node Types", "Graph-based Representations ::: Nodes ::: Node Features", "Graph-based Representations ::: Nodes ::: Discourse Relations", "Graph-based Representations ::: Edges", "Experiments", "Experiments ::: Graph Neural Networks", "Experiments ::: Hyperparameters and Training", "Experiments ::: Tasks ::: Node Classification", "Experiments ::: Tasks ::: Graph Classification", "Discussion", "Conclusion" ] }
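The two-stage pipeline described in the full text above (a GraphSAGE node classifier over 4-dimensional layout features, whose per-node softmax outputs are averaged into a 5-dimensional graph descriptor for a downstream random forest) can be sketched roughly as follows. This is an illustrative reconstruction rather than the study's actual code: the use of PyTorch Geometric, the hidden size, the training loop, and names such as SAGENodeClassifier and graph_feature are assumptions.

```python
# Illustrative sketch (not the study's code): GraphSAGE node classification
# over 4-d layout features, followed by graph classification on the averaged
# per-node softmax outputs, as described in the full text above.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv
from sklearn.ensemble import RandomForestClassifier


class SAGENodeClassifier(torch.nn.Module):
    """Two SAGEConv layers mapping 4-d node features to node-type logits."""

    def __init__(self, in_dim=4, hidden_dim=32, n_node_classes=5):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, n_node_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # per-node logits


def train_node_classifier(model, graphs, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for g in graphs:  # g.x: (N, 4) features, g.y: node-type labels
            opt.zero_grad()
            loss = F.cross_entropy(model(g.x, g.edge_index), g.y)
            loss.backward()
            opt.step()
    return model


@torch.no_grad()
def graph_feature(model, g):
    """5-d graph descriptor: mean of the per-node softmax distributions."""
    probs = F.softmax(model(g.x, g.edge_index), dim=-1)
    return probs.mean(dim=0).cpu().numpy()


def classify_graphs(model, train_graphs, y_train, test_graphs):
    X_train = [graph_feature(model, g) for g in train_graphs]
    X_test = [graph_feature(model, g) for g in test_graphs]
    rf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
    rf.fit(X_train, y_train)
    return rf.predict(X_test)
```

In the study itself, hyperparameters were tuned over the ranges in Tables 1 and 2 and results were averaged over 20 runs; the sketch above omits those details for brevity.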
{ "answers": [ { "annotation_id": [ "25fea29bb0d5938ef71ea3a4d4dd7df89029bfe8", "793e8b6492460b7840dbf093f01a0ee48005eb4f", "f03e115c4a2bab6045fbf73d11b2a9565ac809ed" ], "answer": [ { "evidence": [ "From the perspective of computational processing, diagrammatic representations present a formidable challenge, as they involve tasks from both computer vision and natural language processing. On the one hand, diagrams have a spatial organisation – layout – which needs to be segmented to identify meaningful units and their position. Making sense of how diagrams exploit the 2D layout space falls arguably within the domain of computer vision. On the other hand, diagrams also have a discourse structure, which uses the layout space to set up discourse relations between instances of natural language, various types of images, arrows and lines, thus forming a unified discourse organisation. The need to parse this discourse structure shifts the focus towards the field of natural language processing." ], "extractive_spans": [ "spatial organisation ", "discourse structure" ], "free_form_answer": "", "highlighted_evidence": [ "From the perspective of computational processing, diagrammatic representations present a formidable challenge, as they involve tasks from both computer vision and natural language processing. On the one hand, diagrams have a spatial organisation – layout – which needs to be segmented to identify meaningful units and their position. Making sense of how diagrams exploit the 2D layout space falls arguably within the domain of computer vision. On the other hand, diagrams also have a discourse structure, which uses the layout space to set up discourse relations between instances of natural language, various types of images, arrows and lines, thus forming a unified discourse organisation." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "AI2D and AI2D-RST share most node types that represent different diagram elements, namely text, graphics, arrows and the image constant, which is a node that stands for the entire diagram. In AI2D, generic diagram elements such as titles describing the entire diagram are typically connected to the image constant. In AI2D-RST, the image constant acts as the root node of the tree in the grouping graph. In addition to text, graphics, arrows and the image constant, AI2D-RST features two additional node types for groups and discourse relations, whereas AI2D includes an additional node for arrowheads. To summarise, AI2D contains five distinct node types, whereas AI2D-RST has six. Note, however, that only grouping and connectivity graphs used in this study, which limits the number to five for AI2D-RST.", "The same features are used for both AI2D and AI2D-RST for nodes with layout information, namely text, graphics, arrows and arrowheads (in AI2D only). The position, size and shape of each diagram element are described using the following features: (1) the centre point of the bounding box or polygon, divided by the height and width of the diagram image, (2) area, or the number of pixels within the polygon, divided by the total number of pixels in the image, and (3) the solidity of the polygon, or the polygon area divided by the area of its convex hull. This yields a 4-dimensional feature vector describing the position and size of each diagram element in the layout. 
Each dimension is set to zero for grouping nodes in AI2D-RST and image constant nodes in AI2D and AI2D-RST.", "AI2D-RST models discourse relations using nodes, which have a 25-dimensional, one-hot encoded feature vector to represent the type of discourse relation, which are drawn from Rhetorical Structure Theory BIBREF21. In AI2D, the discourse relations derived from engelhardt2002 are represented using a 10-dimensional one-hot encoded vector, which is associated with edges connecting diagram elements participating in the relation. Because the two resources draw on different theories and represent discourse relations differently, I use the grouping and connectivity graph for AI2D-RST representations and ignore the edge features in AI2D, as these descriptions attempt to describe roughly the same multimodal structures. A comparison of discourse relations is left for a follow-up study focusing on representing the discourse structure of diagrams.", "Whereas AI2D encodes information about semantic relations using edges, in AI2D-RST the information carried by edges depends on the graph in question. The edges of the grouping graph do not have features, whereas the edges of the connectivity graph have a 3-dimensional, one-hot encoded vector that represents the type of connection. The edges of the discourse structure graph have a 2-dimensional, one-hot encoded feature vector to represent nuclearity, that is, whether the nodes that participate in a discourse relations act as nuclei or satellites." ], "extractive_spans": [ "node types that represent different diagram elements", "The same features are used for both AI2D and AI2D-RST for nodes with layout information", "discourse relations", "information about semantic relations" ], "free_form_answer": "", "highlighted_evidence": [ "AI2D and AI2D-RST share most node types that represent different diagram elements, namely text, graphics, arrows and the image constant, which is a node that stands for the entire diagram.", "The same features are used for both AI2D and AI2D-RST for nodes with layout information, namely text, graphics, arrows and arrowheads (in AI2D only).", "AI2D-RST models discourse relations using nodes, which have a 25-dimensional, one-hot encoded feature vector to represent the type of discourse relation, which are drawn from Rhetorical Structure Theory BIBREF21. In AI2D, the discourse relations derived from engelhardt2002 are represented using a 10-dimensional one-hot encoded vector, which is associated with edges connecting diagram elements participating in the relation.", "Whereas AI2D encodes information about semantic relations using edges, in AI2D-RST the information carried by edges depends on the graph in question. The edges of the grouping graph do not have features, whereas the edges of the connectivity graph have a 3-dimensional, one-hot encoded vector that represents the type of connection." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "AI2D-RST covers a subset of 1000 diagrams from AI2D, which have been annotated by trained experts using a new multi-layer annotation schema for describing the diagrams in AI2D BIBREF11. The annotation schema, which draws on state-of-the-art theories of multimodal communication BIBREF12, adopts a stand-off approach to describing the diagrams. Hence the three annotation layers in AI2D-RST are represented using three different graphs, which use the same identifiers for nodes across all three graphs to allow combining the descriptions in different graphs. 
AI2D-RST contains three graphs:", "Grouping: A tree graph that groups together diagram elements that are likely to be visually perceived as belonging together, based loosely on Gestalt principles of visual perception BIBREF19. These groups are organised into a hierarchy, which represents the organisation of content in the 2D layout space BIBREF13, BIBREF14.", "Connectivity: A cyclic graph representing connections between diagram elements or their groups, which are signalled using arrows or lines BIBREF20.", "Discourse structure: A tree graph representing discourse structure of the diagram using Rhetorical Structure Theory BIBREF21, BIBREF22: hence the name AI2D-RST." ], "extractive_spans": [], "free_form_answer": "grouping, connectivity, and discourse structure ", "highlighted_evidence": [ "AI2D-RST contains three graphs:\n\nGrouping: A tree graph that groups together diagram elements that are likely to be visually perceived as belonging together, based loosely on Gestalt principles of visual perception BIBREF19. These groups are organised into a hierarchy, which represents the organisation of content in the 2D layout space BIBREF13, BIBREF14.\n\nConnectivity: A cyclic graph representing connections between diagram elements or their groups, which are signalled using arrows or lines BIBREF20.\n\nDiscourse structure: A tree graph representing discourse structure of the diagram using Rhetorical Structure Theory BIBREF21, BIBREF22: hence the name AI2D-RST." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "annotation_id": [ "b4026b048481cb313540992bd22f43f6f2c2be56", "bf23cd4b9185a89f2f4000e134d04cef97279659", "e73eb28f273f1c47e4b9bc9c90de0097efa58556" ], "answer": [ { "evidence": [ "The AI2D annotation schema models four types of diagram elements: text, graphics, arrows and arrowheads, whereas the semantic relations that hold between these elements are described using ten relations from a framework for analysing diagrammatic representations in engelhardt2002. Each diagram is represented using a Diagram Parse Graph (DPG), whose nodes stand for diagram elements while the edges between the nodes carry information about their semantic relations. The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10.", "AI2D-RST covers a subset of 1000 diagrams from AI2D, which have been annotated by trained experts using a new multi-layer annotation schema for describing the diagrams in AI2D BIBREF11. The annotation schema, which draws on state-of-the-art theories of multimodal communication BIBREF12, adopts a stand-off approach to describing the diagrams. Hence the three annotation layers in AI2D-RST are represented using three different graphs, which use the same identifiers for nodes across all three graphs to allow combining the descriptions in different graphs. 
AI2D-RST contains three graphs:" ], "extractive_spans": [], "free_form_answer": "The annotation for AI2D was\ncreated by crowd-sourced non-expert annotators on AMT while AI2D-RST covers a subset of diagrams from AI2D annotated by trained experts", "highlighted_evidence": [ "The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10.", "AI2D-RST covers a subset of 1000 diagrams from AI2D, which have been annotated by trained experts using a new multi-layer annotation schema for describing the diagrams in AI2D BIBREF11. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "258ee4069f740c400c0049a2580945a1cc7f044c", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "14fa950041e502086b68eeadcb4ce458e9163dd8", "2d320c7f1bb1a54c7f874bba5eacbb2c40776877", "d18636d487240a328bd7d5d48d372940c6793ed9" ], "answer": [ { "evidence": [ "This section presents two experiments that compare AI2D and AI2D-RST annotations in classifying diagrams and their parts using various graph neural networks.", "Experiments ::: Graph Neural Networks", "I evaluated the following graph neural network architectures for both graph and node classification tasks:", "Graph Convolutional Network (GCN) BIBREF25", "Simplifying Graph Convolution (SGC) BIBREF26, averaging incoming node features from up to 2 hops away", "Graph Attention Network (GAT) BIBREF27 with 2 heads" ], "extractive_spans": [], "free_form_answer": "by using them as features in classifying diagrams and\ntheir parts using various graph neural networks.", "highlighted_evidence": [ "This section presents two experiments that compare AI2D and AI2D-RST annotations in classifying diagrams and their parts using various graph neural networks.\n\nExperiments ::: Graph Neural Networks\nI evaluated the following graph neural network architectures for both graph and node classification tasks:\n\nGraph Convolutional Network (GCN) BIBREF25\n\nSimplifying Graph Convolution (SGC) BIBREF26, averaging incoming node features from up to 2 hops away\n\nGraph Attention Network (GAT) BIBREF27 with 2 heads" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer." 
], "extractive_spans": [], "free_form_answer": "Expert annotators incorporate domain knowledge from multimodality theory while non-expert cannot but they are less time-consuming and use less resources.", "highlighted_evidence": [ "This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF22 shows the results for node classification using various graph neural network architectures. Because the results are not entirely comparable due to different node types present in the two resources, it is more reasonable to compare architectures. SAGE, GCN and GAT clearly outperform SGC in classifying nodes from both resources, as does the random forest classifier. AI2D nodes are classified with particularly high accuracy, which may result from having to learn representations for only one node type, that is, the image constant ($N = 1000$). AI2D-RST, in turn, must learn representations from scratch for both image constants ($N = 1000$) and grouping nodes ($N = 7300$)." ], "extractive_spans": [ "results are not entirely comparable due to different node types", "more reasonable to compare architectures" ], "free_form_answer": "", "highlighted_evidence": [ "Because the results are not entirely comparable due to different node types present in the two resources, it is more reasonable to compare architectures. SAGE, GCN and GAT clearly outperform SGC in classifying nodes from both resources, as does the random forest classifier. AI2D nodes are classified with particularly high accuracy, which may result from having to learn representations for only one node type, that is, the image constant ($N = 1000$). AI2D-RST, in turn, must learn representations from scratch for both image constants ($N = 1000$) and grouping nodes ($N = 7300$)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2f39a223666fc557671d5482811c82a78d0c8db4", "4c2ffe858453e8f30dda94aeab7ebacabcace00a", "a15b692db669283c4e7b032cac469771d37da5dc", "d77baadeea67b5e3e63b92aae028b29895767adc" ], "answer": [ { "evidence": [ "The AI2D annotation schema models four types of diagram elements: text, graphics, arrows and arrowheads, whereas the semantic relations that hold between these elements are described using ten relations from a framework for analysing diagrammatic representations in engelhardt2002. Each diagram is represented using a Diagram Parse Graph (DPG), whose nodes stand for diagram elements while the edges between the nodes carry information about their semantic relations. The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10." 
], "extractive_spans": [ "Amazon Mechanical Turk" ], "free_form_answer": "", "highlighted_evidence": [ "The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The AI2D annotation schema models four types of diagram elements: text, graphics, arrows and arrowheads, whereas the semantic relations that hold between these elements are described using ten relations from a framework for analysing diagrammatic representations in engelhardt2002. Each diagram is represented using a Diagram Parse Graph (DPG), whose nodes stand for diagram elements while the edges between the nodes carry information about their semantic relations. The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10." ], "extractive_spans": [ "Amazon Mechanical Turk" ], "free_form_answer": "", "highlighted_evidence": [ " The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10" ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "The AI2D annotation schema models four types of diagram elements: text, graphics, arrows and arrowheads, whereas the semantic relations that hold between these elements are described using ten relations from a framework for analysing diagrammatic representations in engelhardt2002. Each diagram is represented using a Diagram Parse Graph (DPG), whose nodes stand for diagram elements while the edges between the nodes carry information about their semantic relations. The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10." ], "extractive_spans": [ "Amazon Mechanical Turk" ], "free_form_answer": "", "highlighted_evidence": [ "The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk BIBREF10." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "ae845b6aefa5ee30d7f0ff5216132acdc0e5d292", "c1deefec7513e115152b7959e19ed107fb90f871", "c5b725bd6188b9844fc71a3cfc4edf5ad92640c8", "e8e9f01df33a2319bb4448269c208dd3de93075c" ], "answer": [ { "evidence": [ "This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. 
Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer." ], "extractive_spans": [], "free_form_answer": "Annotators trained on multimodality theory", "highlighted_evidence": [ "Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer.", "Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing" ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer." ], "extractive_spans": [ "domain knowledge from multimodality theory" ], "free_form_answer": "", "highlighted_evidence": [ "Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Unlike many other areas, the study of diagrammatic representations is particularly well-resourced, as several multimodal resources have been published recently to support research on computational processing of diagrams BIBREF10, BIBREF8, BIBREF11. This study compares two such resources, AI2D BIBREF10 and AI2D-RST BIBREF11, which both feature the same diagrams, as the latter is an extension of the former. Whereas AI2D features crowd-sourced, non-expert annotations, AI2D-RST provides multiple layers of expert annotations, which are informed by state-of-the-art approaches to multimodal communication BIBREF12 and annotation BIBREF13, BIBREF14." ], "extractive_spans": [], "free_form_answer": "Those who have domain knowledge on multimodal communication and annotation.", "highlighted_evidence": [ "Whereas AI2D features crowd-sourced, non-expert annotations, AI2D-RST provides multiple layers of expert annotations, which are informed by state-of-the-art approaches to multimodal communication BIBREF12 and annotation BIBREF13, BIBREF14." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "What are the parts of the \"multimodal\" resources?", "Are annotators familiar with the science topics annotated?", "How are the expert and crowd-sourced annotations compared to one another?", "What platform do the crowd-sourced workers come from?", "Who are considered trained experts?" ], "question_id": [ "20632fc4d2b693b5aabfbbc99ee5c1e9fc485dea", "a57e266c936e438aeeab5e8d20d9edd1c15a32ee", "27356a99290fcc01e3e5660af3405d2a6c6f6e7c", "6e37f43f4f54ffc77c785d60c6058fbad2147922", "fff1ed2435ba622d884ecde377ff2de127167638" ], "question_writer": [ "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255" ], "search_query": [ "expert", "expert", "expert", "expert", "expert" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: The relationship between crowd-sourced annotations in AI2D and AI2D-RST. AI2D-RST provides alternative, expert-annotated stand-off descriptions for a subset of 1000 diagrams from the original AI2D dataset. The grouping layer in AI2D provides a foundation for further annotation layers by allowing references to groups of nodes.", "Figure 2: Fine-grained classes, their number and frequencies in AI2D-RST (N = 1134). Note that the number of classes exceeds the number of diagrams in AI2D-RST, as some diagrams feature multiple diagram types. The arrows indicate choices: if the diagram designer chooses depiction, a further choice must be made between pictorial– diagrammatic and 2D/3D representations. The dashed lines indicate coarse groups of diagram types.", "Table 1: Hyperparameter ranges for graph classification", "Table 2: Hyperparameter ranges for node classification", "Table 3: Mean accuracy, macro F1 and weighted F1 scores for node classification. The results are averaged over 20 runs. The following abbreviations indicate the graph used: ‘AI2D’ for the original crowd-sourced graphs from AI2D, ‘G’ for the grouping graph and ‘G+C’ for the combination of grouping and connectivity graph from AI2D-RST. An asterisk indicates that the difference between AI2D and the best AI2D-RST graph is statistically significant at p < 0.05 when comparing the results for the given metric over 20 runs using Mann–Whitney U test. The best result for each metric is marked using bold.", "Figure 3: Diagram #4120 in AI2D combines two diagram types: a cross-section with a cycle (cf. Figure 2)", "Table 4: Baseline accuracy, macro F1 and weighted F1 scores for node classification from dummy (D), random forest (RF; 100 estimators) and support vector machine (SVM; C = 1.0) classifiers with balanced class weights. The results are averaged over 20 runs. All models were implemented using scikit-learn 0.21.3. Each node is represented by a 4-dimensional vector.", "Table 5: Mean accuracy, macro F1 and weighted F1 scores for graph classification. The results are averaged over 20 runs. The following abbreviations indicate the graph used: ‘AI2D’ for the original crowd-sourced graphs from AI2D, ‘G’ for the grouping graph and ‘G+C’ for the combination of grouping and connectivity graph from AI2D-RST. * indicates that the difference between AI2D and the best AI2D-RST graph is statistically significant at p < 0.05 when comparing the results over 20 runs for the given metric using Mann–Whitney U test. + indicates the same for AI2D-RST grouping graph and the combination of grouping and connectivity graphs. The best result for each metric across all models and graphs is marked in bold.", "Table 6: Baseline accuracy, macro F1 and weighted F1 scores for graph classification using dummy (D), random forest (RF; 100 estimators) and support vector machine (SVM; C = 1.0) classifiers with balanced class weights. The results are averaged over 20 runs. All models were implemented using scikit-learn 0.21.3. Each diagram is represented by a 5-dimensional vector acquired by averaging the features for all nodes in the graph." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Figure3-1.png", "5-Table4-1.png", "6-Table5-1.png", "6-Table6-1.png" ] }
[ "What are the parts of the \"multimodal\" resources?", "Are annotators familiar with the science topics annotated?", "How are the expert and crowd-sourced annotations compared to one another?", "Who are considered trained experts?" ]
[ [ "1912.02866-Graph-based Representations ::: Nodes ::: Node Features-0", "1912.02866-Data ::: Expert Annotations from AI2D-RST-2", "1912.02866-Graph-based Representations ::: Edges-0", "1912.02866-Data ::: Expert Annotations from AI2D-RST-3", "1912.02866-Graph-based Representations ::: Nodes ::: Node Types-0", "1912.02866-Data ::: Expert Annotations from AI2D-RST-0", "1912.02866-Graph-based Representations ::: Nodes ::: Discourse Relations-0", "1912.02866-Introduction-1", "1912.02866-Data ::: Expert Annotations from AI2D-RST-1" ], [ "1912.02866-Data ::: Expert Annotations from AI2D-RST-0", "1912.02866-Data ::: Crowd-sourced Annotations from AI2D-1" ], [ "1912.02866-Experiments ::: Graph Neural Networks-2", "1912.02866-Experiments ::: Graph Neural Networks-3", "1912.02866-Experiments ::: Graph Neural Networks-1", "1912.02866-Experiments ::: Tasks ::: Node Classification-2", "1912.02866-Introduction-4", "1912.02866-Experiments ::: Graph Neural Networks-0", "1912.02866-Experiments-0" ], [ "1912.02866-Introduction-4", "1912.02866-Introduction-3" ] ]
[ "grouping, connectivity, and discourse structure ", "The annotation for AI2D was\ncreated by crowd-sourced non-expert annotators on AMT while AI2D-RST covers a subset of diagrams from AI2D annotated by trained experts", "Expert annotators incorporate domain knowledge from multimodality theory while non-expert cannot but they are less time-consuming and use less resources.", "Those who have domain knowledge on multimodal communication and annotation." ]
20
1903.02930
Neural Language Modeling with Visual Features
Multimodal language models attempt to incorporate non-linguistic features for the language modeling task. In this work, we extend a standard recurrent neural network (RNN) language model with features derived from videos. We train our models on data that is two orders-of-magnitude bigger than datasets used in prior work. We perform a thorough exploration of model architectures for combining visual and text features. Our experiments on two corpora (YouCookII and 20bn-something-something-v2) show that the best performing architecture consists of middle fusion of visual and text features, yielding over 25% relative improvement in perplexity. We report analysis that provides insights into why our multimodal language model improves upon a standard RNN language model.
{ "paragraphs": [ [ " INLINEFORM0 Work performed while the author was an intern at Google.", "Language models are vital components of a wide variety of systems for Natural Language Processing (NLP) including Automatic Speech Recognition, Machine Translation, Optical Character Recognition, Spelling Correction, etc. However, most language models are trained and applied in a manner that is oblivious to the environment in which human language operates BIBREF0 . These models are typically trained only on sequences of words, ignoring the physical context in which the symbolic representations are grounded, or ignoring the social context that could inform the semantics of an utterance.", "For incorporating additional modalities, the NLP community has typically used datasets such as MS COCO BIBREF1 and Flickr BIBREF2 for image-based tasks, while several datasets BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 have been curated for video-based tasks. Despite the lack of big datasets, researchers have started investigating language grounding in images BIBREF8 , BIBREF9 , BIBREF10 and to lesser extent in videos BIBREF11 , BIBREF1 . However, language grounding has focused more on obtaining better word and sentence representations or other downstream tasks, and to lesser extent on language modeling.", "In this paper, we examine the problem of incorporating temporal visual context into a recurrent neural language model (RNNLM). Multimodal Neural Language Models were introduced in BIBREF12 , where log-linear LMs BIBREF13 were conditioned to handle both image and text modalities. Notably, this work did not use the recurrent neural model paradigm which has now become the de facto way of implementing neural LMs.", "The closest work to ours is that of BIBREF0 , who report perplexity gains of around 5–6% on three languages on the MS COCO dataset (with an English vocabulary of only 16K words).", "Our work is distinguishable from previous work with respect to three dimensions:" ], [ "A language model assigns to a sentence INLINEFORM0 the probability: INLINEFORM1 ", "where each word is assigned a probability given the previous word history.", "For a given video segment, we assume that there is a sequence of INLINEFORM0 video frames represented by features INLINEFORM1 , and the corresponding transcription INLINEFORM2 . In practice, we assume INLINEFORM3 since we can always assign a video frame to each word by replicating the video frames the requisite number of times. Thus, our visually-grounded language model models the probability of the next word given the history of previous words as well as video frames: INLINEFORM4 " ], [ "There are several options for combining the text and video modalities. We opt for the simplest strategy, which concatenates the representations. For a word embedding INLINEFORM0 and corresponding visual representation INLINEFORM1 , the input to our RNNLM will be the concatenated vector INLINEFORM2 . For the examples where we were unable to compute visual features (see Section § SECREF3 ), we set INLINEFORM3 to be a zero-vector.", "In addition to concatenating the word and visual embedding, we explore two variants of our model that allow for a finer-grained integration of the two modalities:", "In this case, the RNNLM is given as input a vector INLINEFORM0 that is a weighted sum of the two embeddings: INLINEFORM1 ", "where INLINEFORM0 are learned matrices.", "Here, we apply the intuition that some words could provide information as to whether or not the visual context is helpful. 
In a simplistic example, if the word history is the article “the,\" then the visual context could provide relevant information needed for predicting the next word. For other word histories, though, the visual context might not be needed or be even irrelevant for the next word prediction: if the previous word is “carpe\", the next word is very likely to be “diem\", regardless of visual context. We implement a simple weighting mechanism that learns a scalar weight for the visual embedding prior to concatenation with the word embedding. The input to the RNNLM is now INLINEFORM0 , where: INLINEFORM1 ", "This approach does not add any new parameters to the model, but since the word representations INLINEFORM0 are learned, this mechanism has the potential to learn word embeddings that are also appropriate for weighting the visual context." ], [ "We explore three locations for fusing visual features in an RNNLM (Figure ). Our Early Fusion strategy merges the text and the visual features at the input to the LSTM cells. This embodies the intuition that it is best to do feature combination at the earliest possible stage. The Middle Fusion merges the visual features at the output of the 1st LSTM layer while the Late Fusion strategies merges the two features after the final LSTM layer. The idea behind the Middle and Late fusion is that we would like to minimize changes to the regular RNNLM architecture at the early stages and still be able to benefit from the visual features." ], [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces.", "Our RNNLM models consist of 2 LSTM layers, each containing 2048 units which are linearly projected to 512 units BIBREF19 . The word-piece and video embeddings are of size 512 each. We do not use dropout. During training, the batch size per worker is set to 256, and we perform full length unrolling to a max length of 70. The INLINEFORM0 -norms of the gradients are clipped to a max norm of INLINEFORM1 for the LSTM weights and to 10,000 for all other weights. We train with Synchronous SGD with the Adafactor optimizer BIBREF20 until convergence on a development set, created by randomly selecting INLINEFORM2 of all utterances." ], [ "For evaluation we used two datasets, YouCook2 and sth-sth, allowing us to evaluate our models in cases where the visual context is relevant to the modelled language. Note that no data from these datasets are present in the YouTube videos used for training. The perplexity of our models is shown in Table ." ], [ "We present a simple strategy to augment a standard recurrent neural network language model with temporal visual features. 
Through an exploration of candidate architectures, we show that the Middle Fusion of visual and textual features leads to a 20-28% reduction in perplexity relative to a text only baseline. These experiments were performed using datasets of unprecedented scale, with more than 1.2 billion tokens – two orders of magnitude more than any previously published work. Our work is a first step towards creating and deploying large-scale multimodal systems that properly situate themselves into a given context, by taking full advantage of every available signal." ] ], "section_name": [ "Introduction", "Model", "Combining the text and video modalities", "Location of combination", "Data and Experimental Setup", "Experiments", "Conclusion" ] }
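The middle-fusion architecture described in the full text above (word-piece embeddings fed to the first LSTM layer, with a projected 1500-dimensional frame feature concatenated to that layer's output before the second LSTM layer) can be sketched in PyTorch roughly as follows. Layer sizes follow the reported setup (512-dimensional embeddings, 2048 LSTM units projected to 512, a 66K word-piece vocabulary); the class and variable names and all other details are assumptions rather than the authors' implementation.

```python
# Illustrative PyTorch sketch of middle fusion; sizes follow the reported
# setup, everything else (names, projection, training details) is assumed.
import torch
import torch.nn as nn


class MiddleFusionRNNLM(nn.Module):
    def __init__(self, vocab=66_000, emb=512, hidden=2048, proj=512, vis_in=1500):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.vis_proj = nn.Linear(vis_in, emb)   # 1500-d frame feature -> 512-d
        self.lstm1 = nn.LSTM(emb, hidden, proj_size=proj, batch_first=True)
        self.lstm2 = nn.LSTM(proj + emb, hidden, proj_size=proj, batch_first=True)
        self.out = nn.Linear(proj, vocab)

    def forward(self, wordpieces, frame_feats):
        """wordpieces: (B, T) ids; frame_feats: (B, T, 1500), one frame
        vector repeated for each word piece it was allocated to."""
        w = self.word_emb(wordpieces)                    # (B, T, 512)
        v = self.vis_proj(frame_feats)                   # (B, T, 512)
        h1, _ = self.lstm1(w)                            # text-only first layer
        h2, _ = self.lstm2(torch.cat([h1, v], dim=-1))   # middle fusion
        return self.out(h2)                              # next word-piece logits
```

The separate weighting variant discussed in the text could be approximated without extra parameters by scaling v with torch.sigmoid((w * v).sum(-1, keepdim=True)) before fusion, although the exact form of that mechanism is not spelled out in the excerpt above.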
{ "answers": [ { "annotation_id": [ "3d9c8966785305ff489db15f49387dcb40da24c4", "9bf26c3886d431b53d9816930da751a092980938", "ab026ba3bbeb2b1c77cf8fe434abbb3de985dc3d", "b34f595ccaf1258ce200fc2ea5eadb2621a6ef8e" ], "answer": [ { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [ "64M segments from YouTube videos" ], "free_form_answer": "", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For evaluation we used two datasets, YouCook2 and sth-sth, allowing us to evaluate our models in cases where the visual context is relevant to the modelled language. Note that no data from these datasets are present in the YouTube videos used for training. The perplexity of our models is shown in Table ." ], "extractive_spans": [ "YouCook2 ", "sth-sth" ], "free_form_answer": "", "highlighted_evidence": [ "For evaluation we used two datasets, YouCook2 and sth-sth, allowing us to evaluate our models in cases where the visual context is relevant to the modelled language. Note that no data from these datasets are present in the YouTube videos used for training. The perplexity of our models is shown in Table ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." 
], "extractive_spans": [ "64M segments from YouTube videos" ], "free_form_answer": "", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [], "free_form_answer": "About 64M segments from YouTube videos comprising a total of 1.2B tokens.", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "010d90bda059312a422020f57e2cb16b100fa3ae", "7151b64db6f43e2c7b7584fc2f1cd757364322cc", "9bc1f14fd2be3844ddfe12c84058d2368a28d4be" ], "answer": [ { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [], "free_form_answer": "64M video segments with 1.2B tokens", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . 
For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [ "64M" ], "free_form_answer": "", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [ "64M segments from YouTube videos", "INLINEFORM0 B tokens", "vocabulary of 66K wordpieces" ], "free_form_answer": "", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "31d1aac8c04f8a9f97d5e5a669bb3d9de39594a9", "36795398e4464e3c5d5d498dabb4f735aedc3c6d", "e04bd543b614f0709789098d90aa4d2a5bea37cc" ], "answer": [ { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." 
], "extractive_spans": [], "free_form_answer": "1500-dimensional vectors similar to those used for large scale image classification tasks.", "highlighted_evidence": [ "The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [ "features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks" ], "free_form_answer": "", "highlighted_evidence": [ "For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14 . We tokenize the training data using a vocabulary of 66K wordpieces BIBREF15 . Thus, the input to the model is a sequence of wordpieces. Using wordpieces allows us to address out-of-vocabulary (OOV) word issues that would arise from having a fixed word vocabulary. In practice, a wordpiece RNNLM gives similar performance as a word-level model BIBREF16 . For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 . For a INLINEFORM2 -second video and INLINEFORM3 wordpieces, each feature is uniformly allocated to INLINEFORM4 wordpieces." ], "extractive_spans": [ "1500-dimensional vectors, extracted from the video frames at 1-second intervals" ], "free_form_answer": "", "highlighted_evidence": [ "Our training data consist of about 64M segments from YouTube videos comprising a total of INLINEFORM0 B tokens BIBREF14", "For about INLINEFORM1 of the segments, we were able to obtain visual features at the frame level. The features are 1500-dimensional vectors, extracted from the video frames at 1-second intervals, similar to those used for large scale image classification tasks BIBREF17 , BIBREF18 ." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "what dataset was used for training?", "what is the size of the training data?", "what features were derived from the videos?" ], "question_id": [ "9b868c7d17852f46a8fe725f24cb9548fdbd2b05", "243cf21c4e34c4b91fcc4905aa4dc15a72087f0c", "488e3c4fd1103c46e12815d1bf414a0356fb0d0e" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: Visualization of our different Language Models. Given word and visual embeddings, the input can be created by three methods. Left panels: simple concatenation (examples with early, middle, and late fusion of the visual embeddings). Top right panel: learning a linear combination of the two embeddings. Bottom right panel: learn to weight the visual embedding based on the current word. Note: ⊕ denotes concatenation,⊗ denotes matrix multiplication, denotes dot product.", "Table 2: Withholding visual context from our best model leads to worse performance (similar to an RNNLM trained only on text).", "Table 1: Middle Fusion of text and frame-level visual features leads to significant reductions in perplexity on two multimodal datasets.", "Table 3: Two sentences from YouCook2 with wordpiece-level negative log likelihood scores. Most gains (high-" ], "file": [ "2-Figure1-1.png", "3-Table2-1.png", "3-Table1-1.png", "4-Table3-1.png" ] }
[ "what dataset was used for training?", "what is the size of the training data?", "what features were derived from the videos?" ]
[ [ "1903.02930-Data and Experimental Setup-0", "1903.02930-Experiments-0" ], [ "1903.02930-Data and Experimental Setup-0" ], [ "1903.02930-Data and Experimental Setup-0" ] ]
[ "About 64M segments from YouTube videos comprising a total of 1.2B tokens.", "64M video segments with 1.2B tokens", "1500-dimensional vectors similar to those used for large scale image classification tasks." ]
22
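The record above describes a wordpiece RNNLM whose input is fused with 1500-dimensional visual features extracted from video frames at 1-second intervals, with simple concatenation listed among the fusion strategies in the figure caption. Below is a minimal NumPy sketch of that concatenation-style fusion; the embedding size, the exact frame-to-wordpiece allocation rule (masked by INLINEFORM placeholders in the extract), and all values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def allocate_frame_features(frame_feats, n_wordpieces):
    """Spread per-second visual features uniformly over the wordpieces of a segment.

    frame_feats: (T, 1500) array, one feature vector per second of video.
    Returns an (n_wordpieces, 1500) array; consecutive wordpieces share the
    frame feature covering their relative position in the segment.
    """
    T = frame_feats.shape[0]
    frame_idx = np.floor(np.arange(n_wordpieces) * T / n_wordpieces).astype(int)
    return frame_feats[frame_idx]

def fuse_by_concatenation(wordpiece_embs, frame_feats):
    """Simple concatenation fusion of word and visual embeddings (cf. Figure 1)."""
    visual = allocate_frame_features(frame_feats, wordpiece_embs.shape[0])
    return np.concatenate([wordpiece_embs, visual], axis=-1)

# Toy example with made-up sizes: 12 wordpieces, 4 seconds of video.
rng = np.random.default_rng(0)
wordpieces = rng.normal(size=(12, 256))   # hypothetical 256-d wordpiece embeddings
frames = rng.normal(size=(4, 1500))       # 1500-d visual features, one per second
fused = fuse_by_concatenation(wordpieces, frames)
print(fused.shape)                        # (12, 1756): per-wordpiece input to the RNNLM
```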
1911.04873
Can Neural Networks Learn Symbolic Rewriting?
This work investigates whether current neural architectures are adequate for learning symbolic rewriting. Two kinds of data sets are proposed for this research -- one based on automated proofs and the other being a synthetic set of polynomial terms. Experiments using current neural machine translation models are performed and their results are discussed. Ideas for extending this line of research are proposed and its relevance is motivated.
{ "paragraphs": [ [ "Neural networks (NNs) turned out to be very useful in several domains. In particular, one of the most spectacular advances achieved with use of NNs has been natural language processing. One of the tasks in this domain is translation between natural languages – neural machine translation (NMT) systems established here the state-of-the-art performance. Recently, NMT produced first encouraging results in the autoformalization task BIBREF0, BIBREF1, BIBREF2, BIBREF3 where given an informal mathematical text in the goal is to translate it to its formal (computer understandable) counterpart. In particular, the NMT performance on a large synthetic -to-Mizar dataset produced by a relatively sophisticated toolchain developed for several decades BIBREF4 is surprisingly good BIBREF3, indicating that neural networks can learn quite complicated algorithms for symbolic data. This inspired us to pose a question: Can NMT models be used in the formal-to-formal setting? In particular: Can NMT models learn symbolic rewriting?", "The answer is relevant to various tasks in automated reasoning. For example, neural models could compete with symbolic methods such as inductive logic programming BIBREF5 (ILP) that have been previously experimented with to learn simple rewrite tasks and theorem-proving heuristics from large formal corpora BIBREF6. Unlike (early) ILP, neural methods can however easily cope with large and rich datasets, without combinatorial explosion.", "Our work is also an inquiry into the capabilities of NNs as such, in the spirit of works like BIBREF7." ], [ "To perform experiments answering our question we prepared two data sets – the first consists of examples extracted from proofs found by ATP (automated theorem prover) in a mathematical domain (AIM loops), whereas the second is a synthetic set of polynomial terms." ], [ "The data consists of sets of ground and nonground rewrites that came from Prover9 proofs of theorems about AIM loops produced by Veroff BIBREF8.", "Many of the inferences in the proofs are paramodulations from an equation and have the form s = t", "u[(s)] = vu[(t)] = v where $s, t, u, v$ are terms and $\\theta $ is a substitution. For the most common equations $s = t$, we gathered corresponding pairs of terms $\\big (u[\\theta (s)], u[\\theta (t)]\\big )$ which were rewritten from one to another with $s = t$. We put the pairs to separate data sets (depending on the corresponding $s = t$): in total 8 data sets for ground rewrites (where $\\theta $ is trivial) and 12 for nonground ones. The goal will be to learn rewriting for each of this 20 rules separately.", "Terms in the examples are treated as linear sequences of tokens where tokens are single symbols (variable / costant / predicate names, brackets, commas). Numbers of examples in each of the data sets vary between 251 and 34101. Lengths of the sequences of tokens vary between 1 and 343, with mean around 35. These 20 data sets were split into training, validation and test sets for our experiments ($60 \\%, 10 \\%, 30 \\%$, respectively).", "In Table TABREF4 and Table TABREF5 there are presented examples of pairs of AIM terms in TPTP BIBREF9 format, before and after rewriting with, respectively, ground and nonground rewrite rules." ], [ "This is a synthetically created data set where the examples are pairs of equivalent polynomial terms. The first element of each pair is a polynomial in an arbitrary form and the second element is the same polynomial in a normalized form. 
The arbitrary polynomials are created randomly in a recursive manner from a set of available (non-nullary) function symbols, variables and constants. First, one of the symbols is randomly chosen. If it is a constant or a variable, it is returned and the process terminates. If a function symbol is chosen, its subterm(s) are constructed recursively in a similar way.", "The parameters of this process are set in such a way that it creates polynomial terms of average length around 25 symbols. Terms longer than 50 symbols are filtered out. Several data sets of various difficulty were created by varying the number of available symbols. These were quite limited – at most 5 different variables, with constants being the first few natural numbers. The reason for this limited complexity of the input terms is that normalizing even a relatively simple polynomial can result in a very long term with very large constants – which is related especially to the operation of exponentiation in polynomials.", "Each data set consists of 300 000 distinct examples; see Table TABREF7 for examples. These data sets were split into training, validation and test sets for our experiments ($60 \\%, 10 \\%, 30 \\%$, respectively)." ], [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism.", "After a small grid search we decided to inherit most of the hyperparameters of the model from the best results achieved in BIBREF3 where -to-Mizar translation is learned. We used relatively small LSTM cells consisting of 2 layers with 128 units. The “scaled Luong” version of the attention mechanism was used, as well as dropout with rate equal $0.2$. The number of training steps was 10000. (This setting was used for all our experiments described below.)" ], [ "First, NMT models were trained for each of the 20 rewrite rules in the AIM data set. It turned out that the models, as long as the number of examples was greater than 1000, were able to learn the rewriting task very well, reaching $90\\%$ accuracy on the separate test sets. This means that the task of applying a single rewrite step seems relatively easy for NMT to learn. See Table TABREF11 for all the results.", "We also ran an experiment on the joint set of all rewrite rules (consisting of 41396 examples). Here the task was more difficult as a model needed not only to apply rewriting correctly, but also to choose “the right” rewrite rule applicable to a given term. Nevertheless, the performance was also very good, reaching $83\\%$ accuracy." ], [ "Then experiments on more challenging but also much larger data sets for polynomial normalization were performed. Depending on the difficulty of the data, accuracy on the test sets achieved in our experiments varied between $70\\%$ and $99\\%$. The results in terms of accuracy are shown in Table TABREF13.", "This high performance of the model encouraged a closer inspection of the results. First, we checked whether the test sets contain input examples which differ from those in the training sets only by renaming of variables. Indeed, for each of the data sets, the test sets contain $5 - 15 \\%$ of such “renamed” examples. After filtering them out, the measured accuracy drops – but only by $1 - 2 \\%$.", "An examination of the examples wrongly rewritten by the model was carried out. It turns out that the wrong outputs almost always parse (in $97 - 99 \\%$ of cases they are legal polynomial terms).
Notably, depending on the difficulty of the data set, as much as $18 - 64 \\%$ of incorrect outputs are wrong only with respect to the constants in the terms. (Typically, the NMT model proposes constants that are too low compared to the correct ones.) Below $1 \\%$ of wrong outputs are correct modulo variable renaming." ], [ "NMT is not typically applied to symbolic problems, but surprisingly, it performed very well for both described tasks. The first one was easier in terms of complexity of the rewriting (only one application of a rewrite rule was performed) but the number of examples was quite limited. The second task involved more difficult rewriting – multiple different rewrite steps were performed to construct the examples. Nevertheless, provided many examples, NMT could learn to normalize polynomials.", "We hope this work provides a baseline and inspiration for continuing this line of research. We see several interesting directions in which this work can be extended.", "Firstly, more interesting and difficult rewriting problems need to be provided for better delineation of the strength of the neural models. The described data are relatively simple and have no direct relevance to real unsolved symbolic problems. But the results on these simple problems are encouraging enough to try with more challenging ones, related to real difficulties – e.g., those from the TPDB database.", "Secondly, we are going to develop and test new kinds of neural models tailored for the problem of comprehending symbolic expressions. Specifically, we are going to implement an approach based on the idea of TreeNN, which may be another effective approach for this kind of task BIBREF7, BIBREF12, BIBREF13. TreeNNs are built recursively from modules, where the modules correspond to parts of the symbolic expression (symbols) and the shape of the network reflects the parse tree of the processed expression. This way the model is explicitly informed about the exact structure of the expression, which in the case of formal logic is always unambiguous and easy to extract. Perhaps this way the model could learn more efficiently from examples (and achieve higher results even on the small AIM data sets). The authors have positive experience of applying TreeNNs to learn remainders of arithmetical expressions modulo small natural numbers – here TreeNNs outperformed neural models based on LSTM cells, giving almost perfect accuracy. However, it is unclear how to translate this TreeNN methodology to tasks with structured output, like the symbolic rewriting task.", "Thirdly, there is an idea of integrating neural rewriting architectures into larger systems for automated reasoning. This can be motivated by the interesting contrast between some simpler ILP systems, which suffer from combinatorial explosion in the presence of a large number of examples, and neural methods, which definitely benefit from large data sets.", "We hope that this work will inspire and trigger a discussion on the above (and other) ideas." ], [ "Piotrowski was supported by the grant of National Science Center, Poland, no. 2018/29/N/ST6/02903, and by the European Agency COST action CA15123. Urban and Brown were supported by the ERC Consolidator grant no. 649043 AI4REASON and by the Czech project AI&Reasoning CZ.02.1.01/0.0/0.0/15_003/0000466 and the European Regional Development Fund. Kaliszyk was supported by ERC Starting grant no. 714034 SMART."
] ], "section_name": [ "Introduction", "Data", "Data ::: The AIM data set", "Data ::: The polynomial data set", "Experiments", "Experiments ::: AIM data set", "Experiments ::: Polynomial data set", "Conclusions and future work", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "0576e8913436f2a8674fcf4f3be00d1d41a9d1fd", "1615b90590af6d6c7235c3fc26f40a58d2e750e3", "9e0b1c78348297888f5bbec335b91711fe2548da", "bcc304c00be90e24597e94d000ba7af5220b68a0" ], "answer": [ { "evidence": [ "After a small grid search we decided to inherit most of the hyperparameters of the model from the best results achieved in BIBREF3 where -to-Mizar translation is learned. We used relatively small LSTM cells consisting of 2 layers with 128 units. The “scaled Luong” version of the attention mechanism was used, as well as dropout with rate equal $0.2$. The number of training steps was 10000. (This setting was used for all our experiments described below.)" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "After a small grid search we decided to inherit most of the hyperparameters of the model from the best results achieved in BIBREF3 where -to-Mizar translation is learned. We used relatively small LSTM cells consisting of 2 layers with 128 units. The “scaled Luong” version of the attention mechanism was used, as well as dropout with rate equal $0.2$. The number of training steps was 10000. (This setting was used for all our experiments described below.)" ], "unanswerable": false, "yes_no": true }, { "evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "6b4979d09acf66e9cf1d961e76a9f25fc1e78df3", "acca577a361900c5d2d4365840c0015c2fc499cb", "e76ca35521ce15379473f4df4063243d3f16a7f7" ], "answer": [ { "evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." 
], "extractive_spans": [ "NMT architecture BIBREF10" ], "free_form_answer": "", "highlighted_evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "extractive_spans": [ "architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism" ], "free_form_answer": "", "highlighted_evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "extractive_spans": [], "free_form_answer": "LSTM with attention", "highlighted_evidence": [ "For experiments with both data sets we used an established NMT architecture BIBREF10 based on LSTMs (long short-term memory cells) and implementing the attention mechanism." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "8b945fe33b397d7ecb28a8b79db6641ab2a017c7", "a34f3333f2d4a618d73b2f62608cd601e1aed2b5", "f2a9dc6778b802c177b76341677591f7e7114348" ], "answer": [ { "evidence": [ "Neural networks (NNs) turned out to be very useful in several domains. In particular, one of the most spectacular advances achieved with use of NNs has been natural language processing. One of the tasks in this domain is translation between natural languages – neural machine translation (NMT) systems established here the state-of-the-art performance. Recently, NMT produced first encouraging results in the autoformalization task BIBREF0, BIBREF1, BIBREF2, BIBREF3 where given an informal mathematical text in the goal is to translate it to its formal (computer understandable) counterpart. In particular, the NMT performance on a large synthetic -to-Mizar dataset produced by a relatively sophisticated toolchain developed for several decades BIBREF4 is surprisingly good BIBREF3, indicating that neural networks can learn quite complicated algorithms for symbolic data. This inspired us to pose a question: Can NMT models be used in the formal-to-formal setting? In particular: Can NMT models learn symbolic rewriting?" ], "extractive_spans": [], "free_form_answer": "It is a process of translating a set of formal symbolic data to another set of formal symbolic data.", "highlighted_evidence": [ "One of the tasks in this domain is translation between natural languages – neural machine translation (NMT) systems established here the state-of-the-art performance.", "In particular, the NMT performance on a large synthetic -to-Mizar dataset produced by a relatively sophisticated toolchain developed for several decades BIBREF4 is surprisingly good BIBREF3, indicating that neural networks can learn quite complicated algorithms for symbolic data. This inspired us to pose a question: Can NMT models be used in the formal-to-formal setting? 
In particular: Can NMT models learn symbolic rewriting?" ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "The data consists of sets of ground and nonground rewrites that came from Prover9 proofs of theorems about AIM loops produced by Veroff BIBREF8.", "u[(s)] = vu[(t)] = v where $s, t, u, v$ are terms and $\\theta $ is a substitution. For the most common equations $s = t$, we gathered corresponding pairs of terms $\\big (u[\\theta (s)], u[\\theta (t)]\\big )$ which were rewritten from one to another with $s = t$. We put the pairs to separate data sets (depending on the corresponding $s = t$): in total 8 data sets for ground rewrites (where $\\theta $ is trivial) and 12 for nonground ones. The goal will be to learn rewriting for each of this 20 rules separately." ], "extractive_spans": [], "free_form_answer": "Symbolic rewriting is the method to rewrite ground and nonground data from one to another form using rules.", "highlighted_evidence": [ "The data consists of sets of ground and nonground rewrites that came from Prover9 proofs of theorems about AIM loops produced by Veroff BIBREF8.", "For the most common equations $s = t$, we gathered corresponding pairs of terms $\\big (u[\\theta (s)], u[\\theta (t)]\\big )$ which were rewritten from one to another with $s = t$. ", "The goal will be to learn rewriting for each of this 20 rules separately." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do any of the models use attention?", "What translation models are explored?", "What is symbolic rewriting?" ], "question_id": [ "84765903b8c7234ca2919d0a40e3c6a5bcedf45d", "38363a7ed250bc729508c4c1dc975696a65c53cb", "e862ebfdb1b3425af65fec81c8984edca6f89a76" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 4. Results of experiments with AIM data. (Names of the rules correspond to folder names in the Github repo.)", "Table 5. Choosen results of experiments with polynomials. (Characteristic of formulas concerns the input polynomials. Labels of the data sets correspond to folder names in the Github repo.)" ], "file": [ "3-Table4-1.png", "3-Table5-1.png" ] }
[ "What translation models are explored?", "What is symbolic rewriting?" ]
[ [ "1911.04873-Experiments-0" ], [ "1911.04873-Data ::: The AIM data set-0", "1911.04873-Data ::: The AIM data set-2", "1911.04873-Introduction-0" ] ]
[ "LSTM with attention", "Symbolic rewriting is the method to rewrite ground and nonground data from one to another form using rules." ]
23
1606.07043
Toward Interpretable Topic Discovery via Anchored Correlation Explanation
Many predictive tasks, such as diagnosing a patient based on their medical chart, are ultimately defined by the decisions of human experts. Unfortunately, encoding experts' knowledge is often time-consuming and expensive. We propose a simple way to use fuzzy and informal knowledge from experts to guide the discovery of interpretable latent topics in text. The underlying intuition of our approach is that latent factors should be informative about both correlations in the data and a set of relevance variables specified by an expert. Mathematically, this approach is a combination of the information bottleneck and Total Correlation Explanation (CorEx). We give a preliminary evaluation of Anchored CorEx, showing that it produces more coherent and interpretable topics on two distinct corpora.
{ "paragraphs": [ [ "A clinician can look at a patient's electronic health record (EHR) and not only decide whether the patient has diabetes but also produce a succinct summary of the clinical evidence. Replicating this feat with computational tools has been the focus of much research in clinical informatics. There are major initiatives underway to codify clinical knowledge into formal representations, most often as deterministic rules that can be applied in a semi-automated fashion BIBREF0 . However, representing the intuitive judgments of human experts can be challenging, particularly when the formal system does not match the expert's knowledge. For example, many deterministic disease classifiers used in clinical informatics rely heavily upon administrative codes not available at time of diagnosis. Further, developing and testing such systems is time- and labor-intensive.", "We propose instead a lightweight information theoretic framework for codifying informal human knowledge and then use it to extract interpretable latent topics from text corpora. For example, to discover patients with diabetes in a set of clinical notes, a doctor can begin by specifying disease-specific anchor terms BIBREF1 , BIBREF2 , such as “diabetes” or “insulin.” Our framework then uses these to help discover both latent topics associated with diabetes and records in which diabetes-related topics occur. The user can then add (or remove) additional anchor terms (e.g., “metformin”) to improve the quality of the learned (diabetes) topics.", "In this workshop paper, we introduce a simple approach to anchored information theoretic topic modeling using a novel combination of Correlation Explanation (CorEx) BIBREF3 and the information bottleneck BIBREF4 . This flexible framework enables the user to leverage domain knowledge to guide exploration of a collection of documents and to impose semantics onto latent factors learned by CorEx. We present preliminary experimental results on two text corpora (including a corpus of clinical notes), showing that anchors can be used to discover topics that are more specific and relevant. What is more, we demonstrate the potential for this framework to perform weakly supervised learning in settings where labeling documents is prohibitively expensive BIBREF5 , BIBREF6 .", "With respect to interpretable machine learning, our contributions are twofold. First, our framework provides a way for human users to share domain knowledge with a statistical learning algorithm that is both convenient for the human user and easily digestible by the machine. Second, our experimental results confirm that the introduction of simple anchor words can improve the coherence and human interpretability of topics discovered from data. Both are essential to successful and interactive collaboration between machine learning and human users." ], [ "Anchored Correlation Explanation can be understood as a combination of Total Correlation Explanation (CorEx) BIBREF3 , BIBREF7 and the multivariate information bottleneck BIBREF4 , BIBREF8 . We search for a set of probabilistic functions of the inputs INLINEFORM0 for INLINEFORM1 that optimize the following information theoretic objective: INLINEFORM2 ", "The first term is the CorEx objective INLINEFORM0 , which aims to construct latent variables INLINEFORM1 that best explain multivariate dependencies in the data INLINEFORM2 . Here the data consist of INLINEFORM3 -dimensional binary vectors INLINEFORM4 . 
Total correlation, or multivariate mutual information BIBREF9 , is specified as INLINEFORM5 where INLINEFORM6 is the KL divergence. Maximizing INLINEFORM7 over latent factors INLINEFORM8 amounts to minimizing INLINEFORM9 , which measures how much dependence in INLINEFORM10 is explained by INLINEFORM11 . At the global optimum, INLINEFORM12 is zero and the observations are independent conditioned on the latent factors. Several papers have explored CorEx for unsupervised hierarchical topic modeling BIBREF3 , BIBREF10 , BIBREF11 .", "The second term involves the mutual information between pairs of latent factors INLINEFORM0 ) and anchor variables INLINEFORM1 specified in the set INLINEFORM2 . This is inspired by the information bottleneck BIBREF4 , BIBREF8 , a supervised information-theoretic approach to discovering latent factors. The bottleneck objective INLINEFORM3 constructs latent factors INLINEFORM4 that trade off compression of INLINEFORM5 against preserving information about relevance variables INLINEFORM6 .", "Anchored CorEx preserves information about anchors while also explaining as much multivariate dependence between observations in INLINEFORM0 as possible. This framework is flexible: we can attach multiple anchors to one factor or one anchor to multiple factors. We have found empirically that INLINEFORM1 works well and does not need to be tuned.", "Anchors allow us to both seed CorEx and impose semantics on latent factors: when analyzing medical documents, for example, we can anchor a diabetes latent factor to the word “diabetes.” The INLINEFORM0 objective then discovers other words associated with “diabetes” and includes them in this topic.", "While there is not space here for a full description of the optimization, it is similar in principle to the approaches in BIBREF3 , BIBREF7 . Two points are worth noting: first, the TC objective is replaced by a lower bound to make optimization feasible BIBREF7 . Second, we impose a sparse connection constraint (each word appears in only one topic) to speed up computation. Open source code implementing CorEx is available on github BIBREF12 ." ], [ "There is a large body of work on integrating domain knowledge into topic models and other unsupervised latent variable models, often in the form of constraints BIBREF13 , prior distributions BIBREF14 , and token labels BIBREF15 . Like Anchored CorEx, seeded latent dirichlet allocation (SeededLDA) allows the specification of word-topic relationships BIBREF16 . However, SeededLDA assumes a more complex latent structure, in which each topic is a mixture of two distributions, one unseeded and one seeded.", " BIBREF1 first proposed anchors in the context of topic modeling: words that are high precision indicators of underlying topics. In contrast to our approach, anchors are typically selected automatically, constrained to appear in only one topic, and used primarily to aid optimization BIBREF17 . In our information theoretic framework, anchors are specified manually and more loosely defined as words having high mutual information with one or more latent factors. The effects of anchors on the interpretability of traditional topic models are often mixed BIBREF18 , but our experiments suggest that our approach yields more coherent topics.", "In health informatics, “anchor” features chosen based on domain knowledge have been used to guide statistical learning BIBREF2 . 
In BIBREF6 , anchors are used as a source of distant supervision BIBREF19 , BIBREF20 for classifiers in the absence of ground truth labels. While Anchored CorEx can be used for discriminative tasks, it is essentially unsupervised. Recent work by BIBREF21 is perhaps most similar in spirit to ours: they exploit predefined anchors to help learn and impose semantics on a discrete latent factor model with a directed acyclic graph structure. We utilize an information theoretic approach that makes no generative modeling assumptions." ], [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set. Both corpora provide ground truth labels for latent classes that may be thought of as topics." ], [ "The 20 Newsgroups data set is suitable for a straightforward evaluation of anchored topic models. The latent classes represent mutually exclusive categories, and each document is known to originate from a single category. We find that the correlation structure among the latent classes is less complex than in the Obesity Challenge data. Further, each category tends to exhibit some specialized vocabulary not used extensively in other categories (thus satisfying the anchor assumption from BIBREF1 ).", "To prepare the data, we removed headers, footers, and quotes and reduced the vocabulary to the most frequent 20,000 words. Each document was represented as a binary bag-of-words vector. In all experiemnts, we used the standard training/test split. All CorEx models used three layers of 40, 3, and 1 factors. fig:big shows an example hierarchical topic model extracted by Anchored CorEx." ], [ "The Obesity Challenge 2008 data set includes 1237 deidentified clinical discharge summaries from the Partners HealthCare Research Patient Data Repository. All summaries have been labeled by clinical experts with obesity and 15 other conditions commonly comorbid with obesity, ranging from Coronary Artery Disease (663 positives) to Depression (247) to Hypertriglyceridemia (62).", "We preprocessed each document with a standard biomedical text pipeline that extracts common medical terms and phrases (grouping neighboring words where appropriate) and detecting negation (“not” is prepended to negated terms) BIBREF23 , BIBREF24 . We converted each document to a binary bag-of-words with a vocabulary of 4114 (possibly negated) medical phrases. We used the 60/40 training/test split from the competition.", "We are primarily interested in the ability of Anchored CorEx to extract latent topics that are unambiguously associated with the 16 known conditions. We train a series of CorEx models with 32 latent topics in the first layer, each using a different anchor strategy. tab:obesity:topics shows the Obesity and Obstructive Sleep Apnea (OSA) topics for three iterations of Anchored CorEx with the ten most important terms (highest weighted connections to the latent factor) listed for each topic. Unsupervised CorEx (first row) does not discover any topics obviously related to obesity or OSA, so we choose the topics to which the terms obesity and obstructive sleep apnea are assigned. No unambiguous Obesity or OSA topics emerge even as the number of latent factors is decreased or increased.", "In the second iteration (second row), we add the common name of each of the 16 diseases as an anchor to one factor (16 total). 
Adding obesity as an anchor produces a clear Obesity topic, including several medications known to cause weight gain (e.g., acebutolol, klonopin). The anchored OSA topic, however, is quite poor and in fact resembles the rather generic topic to which obstructive sleep apnea is assigned by Unsupervised CorEx. It includes many spurious or non-specific terms like drug.", "This is likely due to the fact that obesity is a major risk factor of OSA, and so OSA symptoms are highly correlated with obesity and its other symptoms. Thus, the total correlation objective will attempt to group obesity and OSA-related terms together under a single latent factor. The sparse connection constraint mentioned in sec:methods prevents them from being connected to multiple factors. Indeed, sleep apnea appears in the obesity topic, suggesting the two topics are competing to explain OSA terms.", "In the third iteration, we correct this by adding sleep apnea as a second anchor to the OSA topic, and the resulting topic is clearly associated with OSA, including terms related to respiratory problems and medications used to treat (or believed to increase risk for) OSA. There is no noticeable reduction in quality in the Obesity topic." ], [ "In a series of follow-up experiments, we investigate the suitability of using anchored CorEx to perform weakly supervised classification. We interpret each anchored latent factor as a classifier for an associated class label and then compute test set F1 (using a threshold of 0.5) and area under the curve (AUC) scores (Obesity Challenge only).", "tab:class compares the classification performance of Unsupervised and Anchored CorEx on the soc.religion.christianity category from 20 Newsgroups for different choices of anchors. For both types of CorEx, the topic containing the corresponding terms is used as the classifier, but for Anchored CorEx those terms are also used as anchors when estimating the latent factor. Unsupervised CorEx does a reasonable job of discovering a coherent religion topic that already contains the terms God, Christian, and Jesus. However, using the terms Jesus and Christian as anchors yields a topic that better predicts the actual soc.religion.christianity category.", "", "tab:obesity:class shows the Macro-AUC and F1 scores (averaged across all diseases) on the Obesity Challenge data for the final anchored CorEx model and a Naive Bayes (NB) baseline, in which we train a separate classifier for each disease. Surprisingly, Anchored CorEx outperforms Naive Bayes (NB) by a large margin. Of course, Anchored CorEx is not a replacement for supervised learning: NB beats Anchored CorEx on 20 Newsgroups and does not represent a “strong” baseline for Obesity 2008 (teams scored above 0.7 in Macro-F1 during the competition). It is nonetheless remarkable that Anchored CorEx performs as well as it does given that it is fundamentally unsupervised." ], [ "We have introduced a simple information theoretic approach to topic modeling that can leverage domain knowledge specified informally as anchors. Our framework uses a novel combination of CorEx and the information bottleneck. Preliminary results suggest it can extract more precise, interpretable topics through a lightweight interactive process. We next plan to perform further empirical evaluations and to extend the algorithm to handle complex latent structures present in health care data." ], [ "This work was partially supported by DARPA award HR0011-15-C-0115. David Kale was supported by the Alfred E. 
Mann Innovation in Engineering Doctoral Fellowship." ] ], "section_name": [ "Introduction", "Methods", "Related Work", "Results and Discussion", "20 Newsgroups", "i2b2 Obesity Challenge 2008", "Anchored CorEx for Discriminative Tasks", "Conclusion", "Acknowledgements" ] }
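The method section above defines the anchored objective as the total correlation explained by the latent factors plus a weighted sum of mutual informations between factors and anchor variables. The sketch below only evaluates those two information quantities on toy binary data for a single hard candidate factor, to make the objective concrete; it is not the authors' optimization procedure (which works with a lower bound on TC), and the anchor weight is a placeholder because the paper's chosen constant is masked in the extract.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (in nats) of a discrete sample."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def total_correlation(X):
    """TC(X) = sum_i H(X_i) - H(X_1, ..., X_n), estimated from rows of X."""
    joint = entropy([tuple(row) for row in X])
    return sum(entropy(X[:, i].tolist()) for i in range(X.shape[1])) - joint

def mutual_information(a, b):
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

# Toy binary bag-of-words: 4 word columns driven by one hidden topic.
rng = np.random.default_rng(1)
topic = rng.integers(0, 2, size=2000)
noise = lambda: rng.random(2000) < 0.1
X = np.stack([(topic ^ noise()).astype(int) for _ in range(4)], axis=1)

anchor = X[:, 0].tolist()              # pretend word 0 is the anchor word
y = X.sum(axis=1) >= 2                 # one hard candidate latent factor

# TC explained by the factor: TC(X) - TC(X | Y) under this hard assignment.
tc_explained = total_correlation(X) \
    - y.mean() * total_correlation(X[y]) \
    - (1 - y.mean()) * total_correlation(X[~y])
anchor_term = mutual_information(y.tolist(), anchor)
beta = 1.0                             # placeholder anchor weight (value masked in extract)
print(f"TC explained: {tc_explained:.3f}  I(factor; anchor): {anchor_term:.3f}")
print(f"anchored objective contribution: {tc_explained + beta * anchor_term:.3f}")
```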
{ "answers": [ { "annotation_id": [ "2268abfdc51de7dd14234da4b78ba79ed90a0701", "2f0d997ba74e05de16558ea17b126c07aeb27b9d", "9b1ba4ea75b70528cb03384e6de95aacbf0d0680" ], "answer": [ { "evidence": [ "In this workshop paper, we introduce a simple approach to anchored information theoretic topic modeling using a novel combination of Correlation Explanation (CorEx) BIBREF3 and the information bottleneck BIBREF4 . This flexible framework enables the user to leverage domain knowledge to guide exploration of a collection of documents and to impose semantics onto latent factors learned by CorEx. We present preliminary experimental results on two text corpora (including a corpus of clinical notes), showing that anchors can be used to discover topics that are more specific and relevant. What is more, we demonstrate the potential for this framework to perform weakly supervised learning in settings where labeling documents is prohibitively expensive BIBREF5 , BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "The experts define anchors and the model learns correlations between the anchors and latent topics.", "highlighted_evidence": [ "In this workshop paper, we introduce a simple approach to anchored information theoretic topic modeling using a novel combination of Correlation Explanation (CorEx) BIBREF3 and the information bottleneck BIBREF4 . This flexible framework enables the user to leverage domain knowledge to guide exploration of a collection of documents and to impose semantics onto latent factors learned by CorEx. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "BIBREF1 first proposed anchors in the context of topic modeling: words that are high precision indicators of underlying topics. In contrast to our approach, anchors are typically selected automatically, constrained to appear in only one topic, and used primarily to aid optimization BIBREF17 . In our information theoretic framework, anchors are specified manually and more loosely defined as words having high mutual information with one or more latent factors. The effects of anchors on the interpretability of traditional topic models are often mixed BIBREF18 , but our experiments suggest that our approach yields more coherent topics." ], "extractive_spans": [ "anchors are specified manually and more loosely defined as words having high mutual information with one or more latent factors" ], "free_form_answer": "", "highlighted_evidence": [ "BIBREF1 first proposed anchors in the context of topic modeling: words that are high precision indicators of underlying topics. In contrast to our approach, anchors are typically selected automatically, constrained to appear in only one topic, and used primarily to aid optimization BIBREF17 . In our information theoretic framework, anchors are specified manually and more loosely defined as words having high mutual information with one or more latent factors." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this workshop paper, we introduce a simple approach to anchored information theoretic topic modeling using a novel combination of Correlation Explanation (CorEx) BIBREF3 and the information bottleneck BIBREF4 . This flexible framework enables the user to leverage domain knowledge to guide exploration of a collection of documents and to impose semantics onto latent factors learned by CorEx. 
We present preliminary experimental results on two text corpora (including a corpus of clinical notes), showing that anchors can be used to discover topics that are more specific and relevant. What is more, we demonstrate the potential for this framework to perform weakly supervised learning in settings where labeling documents is prohibitively expensive BIBREF5 , BIBREF6 ." ], "extractive_spans": [], "free_form_answer": "They use an anchored information theoretic topic modeling using Correlation Explanation and information bottleneck.", "highlighted_evidence": [ "In this workshop paper, we introduce a simple approach to anchored information theoretic topic modeling using a novel combination of Correlation Explanation (CorEx) BIBREF3 and the information bottleneck BIBREF4 . This flexible framework enables the user to leverage domain knowledge to guide exploration of a collection of documents and to impose semantics onto latent factors learned by CorEx." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "5f4fc82b7c92888892a2a9df96a2762a015f35c4", "76edcf0affe532349cbf9162a6197f9409188210", "bcd61ed2cd5e0df9722c885f56332e8be662473c", "f4d553b05d70a78d4e87ff765d3c0119c8e9fcf2" ], "answer": [ { "evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set. Both corpora provide ground truth labels for latent classes that may be thought of as topics." ], "extractive_spans": [ "20 Newsgroups", "i2b2 2008 Obesity Challenge BIBREF22 data set" ], "free_form_answer": "", "highlighted_evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set. Both corpora provide ground truth labels for latent classes that may be thought of as topics." ], "extractive_spans": [ "20 Newsgroups ", "i2b2 2008 Obesity Challenge" ], "free_form_answer": "", "highlighted_evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set. Both corpora provide ground truth labels for latent classes that may be thought of as topics." ], "extractive_spans": [ "20 Newsgroups", "i2b2 2008 Obesity Challenge BIBREF22 data set" ], "free_form_answer": "", "highlighted_evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set. Both corpora provide ground truth labels for latent classes that may be thought of as topics." 
], "extractive_spans": [ " i2b2 2008 Obesity Challenge BIBREF22", "20 Newsgroups" ], "free_form_answer": "", "highlighted_evidence": [ "To demonstrate the utility of Anchored CorEx, we run experiments on two document collections: 20 Newsgroups and the i2b2 2008 Obesity Challenge BIBREF22 data set. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "12b6ba10f1ef9a286117cb606e8c9db4404e9b4c", "2e0d7e2586ec369f9ac3fd5e1b8239f0727946dd", "f19f9194f3e81f8aa4fdbc678cf43e00d9772f55" ], "answer": [ { "evidence": [ "tab:obesity:class shows the Macro-AUC and F1 scores (averaged across all diseases) on the Obesity Challenge data for the final anchored CorEx model and a Naive Bayes (NB) baseline, in which we train a separate classifier for each disease. Surprisingly, Anchored CorEx outperforms Naive Bayes (NB) by a large margin. Of course, Anchored CorEx is not a replacement for supervised learning: NB beats Anchored CorEx on 20 Newsgroups and does not represent a “strong” baseline for Obesity 2008 (teams scored above 0.7 in Macro-F1 during the competition). It is nonetheless remarkable that Anchored CorEx performs as well as it does given that it is fundamentally unsupervised." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "tab:obesity:class shows the Macro-AUC and F1 scores (averaged across all diseases) on the Obesity Challenge data for the final anchored CorEx model and a Naive Bayes (NB) baseline, in which we train a separate classifier for each disease. ", "Of course, Anchored CorEx is not a replacement for supervised learning: NB beats Anchored CorEx on 20 Newsgroups and does not represent a “strong” baseline for Obesity 2008 (teams scored above 0.7 in Macro-F1 during the competition). " ], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "tab:obesity:class shows the Macro-AUC and F1 scores (averaged across all diseases) on the Obesity Challenge data for the final anchored CorEx model and a Naive Bayes (NB) baseline, in which we train a separate classifier for each disease. Surprisingly, Anchored CorEx outperforms Naive Bayes (NB) by a large margin. Of course, Anchored CorEx is not a replacement for supervised learning: NB beats Anchored CorEx on 20 Newsgroups and does not represent a “strong” baseline for Obesity 2008 (teams scored above 0.7 in Macro-F1 during the competition). It is nonetheless remarkable that Anchored CorEx performs as well as it does given that it is fundamentally unsupervised." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "tab:obesity:class shows the Macro-AUC and F1 scores (averaged across all diseases) on the Obesity Challenge data for the final anchored CorEx model and a Naive Bayes (NB) baseline, in which we train a separate classifier for each disease." 
], "unanswerable": false, "yes_no": false } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "How do they incorporate expert knowledge into their topic model?", "On which corpora do they evaluate on?", "Do they compare against popular topic models, such as LDA?" ], "question_id": [ "ec8f39d32084996ab825debd7113c71daac38b06", "a67a2d9acad1787b636ca2681330f4c29a0b0254", "1efaf3bcd66d1b6bdfb124f0cec0cfeee27e6124" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1. A hierarchical topic model learned by CorEx. Anchored latent factors are labeled in red with anchor words marked with a “*”.", "Table 1. Evolution of Obesity and Obstructive Sleep Apnea (OSA) topics as anchors are added. Colors and font weight indicate anchors, spurious terms, and intruder terms from other known topics. Multiword and negated terms are the result of the preprocessing pipeline.", "Table 3. Classification performance on Obesity 2008.", "Table 2. F1 scores on soc.religion.christianity." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "4-Table3-1.png", "4-Table2-1.png" ] }
[ "How do they incorporate expert knowledge into their topic model?" ]
[ [ "1606.07043-Introduction-2" ] ]
[ "They use an anchored information theoretic topic modeling using Correlation Explanation and information bottleneck." ]
24
1611.04234
F-Score Driven Max Margin Neural Network for Named Entity Recognition in Chinese Social Media
We focus on named entity recognition (NER) for Chinese social media. With massive unlabeled text and a quite limited labelled corpus, we propose a semi-supervised learning model based on a B-LSTM neural network. To take advantage of traditional methods in NER such as CRF, we combine transition probability with deep learning in our model. To bridge the gap between label accuracy and F-score in NER, we construct a model which can be trained directly on F-score. Considering the instability of the F-score driven method and the meaningful information provided by label accuracy, we propose an integrated method to train on both F-score and label accuracy. Our integrated model yields a 7.44\% improvement over the previous state-of-the-art result.
{ "paragraphs": [ [ "With the development of Internet, social media plays an important role in information exchange. The natural language processing tasks on social media are more challenging which draw attention of many researchers BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . As the foundation of many downstream applications BIBREF4 , BIBREF5 , BIBREF6 such as information extraction, named entity recognition (NER) deserves more research in prevailing and challenging social media text. NER is a task to identify names in texts and to assign names with particular types BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . It is the informality of social media that discourages accuracy of NER systems. While efforts in English have narrowed the gap between social media and formal domains BIBREF3 , the task in Chinese remains challenging. It is caused by Chinese logographic characters which lack many clues to indicate whether a word is a name, such as capitalization. The scant labelled Chinese social media corpus makes the task more challenging BIBREF11 , BIBREF12 , BIBREF13 .", "To address the problem, one approach is to use the lexical embeddings learnt from massive unlabeled text. To take better advantage of unlabeled text, Peng and Dredze peng-dredze:2015:EMNLP evaluates three types of embeddings for Chinese text, and shows the effectiveness of positional character embeddings with experiments. Considering the value of word segmentation in Chinese NER, another approach is to construct an integrated model to jointly train learned representations for both predicting word segmentations and NER BIBREF14 .", "However, the two above approaches are implemented within CRF model. We construct a semi-supervised model based on B-LSTM neural network to learn from the limited labelled corpus by using lexical information provided by massive unlabeled text. To shrink the gap between label accuracy and F-Score, we propose a method to directly train on F-Score rather than label accuracy in our model. In addition, we propose an integrated method to train on both F-Score and label accuracy. Specifically, we make contributions as follows:" ], [ "We construct a semi-supervised model which is based on B-LSTM neural network and combine transition probability to form structured output. We propose a method to train directly on F-Score in our model. In addition, we propose an integrated method to train on both F-Score and label accuracy." ], [ "B-LSTM neural network can learn from past input features and LSTM layer makes it more efficient BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . However, B-LSTM cannot learn sentence level label information. Huang et al. huang2015bidirectional combine CRF to use sentence level label information. We combine transition probability into our model to gain sentence level label information. To combine transition probability into B-LSTM neural network, we construct a Max Margin Neural Network (MMNN) BIBREF19 based on B-LSTM. The prediction of label in position INLINEFORM0 is given as: DISPLAYFORM0 ", "where INLINEFORM0 are the transformation parameters, INLINEFORM1 the hidden vector and INLINEFORM2 the bias parameter. For a input sentence INLINEFORM3 with a label sequence INLINEFORM4 , a sentence-level score is then given as: DISPLAYFORM0 ", "where INLINEFORM0 indicates the probability of label INLINEFORM1 at position INLINEFORM2 by the network with parameters INLINEFORM3 , INLINEFORM4 indicates the matrix of transition probability. 
In our model, INLINEFORM5 is computed as: DISPLAYFORM0 ", "We define a structured margin loss INLINEFORM0 as in Pei et al. pei-ge-chang:2014:P14-1: DISPLAYFORM0 ", "where INLINEFORM0 is the length of sentence INLINEFORM1 , INLINEFORM2 is a discount parameter, INLINEFORM3 a given correct label sequence and INLINEFORM4 a predicted label sequence. For a given training instance INLINEFORM5 , our predicted label sequence is the label sequence with the highest score: INLINEFORM6 ", "The label sequence with the highest score can be obtained by carrying out the Viterbi algorithm. The regularized objective function is as follows: DISPLAYFORM0 INLINEFORM0 ", "By minimizing the objective, we can increase the score of the correct label sequence INLINEFORM0 and decrease the score of the incorrect label sequence INLINEFORM1 ." ], [ "The max margin training method uses the structured margin loss INLINEFORM0 to describe the difference between the correct label sequence INLINEFORM1 and the predicted label sequence INLINEFORM2 . In fact, the structured margin loss INLINEFORM3 reflects the loss in label accuracy. Considering the gap between label accuracy and F-Score in NER, we introduce a new training method to train directly on F-Score. To introduce the F-Score driven training method, we need to take a look at the subgradient of equation ( EQREF9 ): INLINEFORM4 ", "From the subgradient, we can see that the structured margin loss INLINEFORM0 contributes nothing to the subgradient of the regularized objective function INLINEFORM1 . The margin loss INLINEFORM2 serves as a trigger function to conduct the training process of the B-LSTM based MMNN. We can introduce a new trigger function to guide the training process of the neural network.", "F-Score Trigger Function The main criterion of the NER task is F-score. However, high label accuracy does not mean high F-score. For instance, if every named entity's last character is labeled as O, the label accuracy can be quite high, but the precision, recall and F-score are 0. We use the F-Score between the correct label sequence and the predicted label sequence as the trigger function, which can guide the training process to optimize the F-Score of training examples. Our new structured margin loss can be described as: DISPLAYFORM0 ", "where INLINEFORM0 is the F-Score between the correct label sequence and the predicted label sequence.", "F-Score and Label Accuracy Trigger Function The F-Score can be quite unstable in some situations. For instance, if there is no named entity in a sentence, the F-Score will always be 0 regardless of the predicted label sequence. To take advantage of the meaningful information provided by label accuracy, we introduce an integrated trigger function as follows: DISPLAYFORM0 ", "where INLINEFORM0 is a factor to adjust the weight of label accuracy and F-Score.", "Because F-Score depends on the whole label sequence, we use beam search to find INLINEFORM0 label sequences with the top sentence-level score INLINEFORM1 and then use the trigger function to rerank the INLINEFORM2 label sequences and select the best." ], [ "Word segmentation plays an important part in Chinese text processing. Both Peng and Dredze peng-dredze:2015:EMNLP and Peng and Dredze peng-dredze:2016:P16-2 show the value of word segmentation to Chinese NER in social media. We present two methods to use word segmentation information in our neural network model.", "Character and Position Embeddings To incorporate word segmentation information, we attach a positional tag to every character.
This method distinguishes the same character at different positions in the word. We need to word segment the text and learn positional character embeddings from the segmented text.", "Character Embeddings and Word Segmentation Features We can treat word segmentation as discrete features in the neural network model. The discrete features can be easily incorporated into the neural network model BIBREF20 . We use word embeddings from an LSTM pretrained on the MSRA 2006 corpus to initialize the word segmentation features." ], [ "We use the same modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from the Sina Weibo service in China, and the text is word segmented by the Chinese word segmentation system Jieba, as in Peng and Dredze peng-dredze:2016:P16-2, so that our results are more comparable to theirs." ], [ "We pre-trained embeddings using word2vec BIBREF22 with the skip-gram training model, without negative sampling and with other default parameter settings. Like Mao et al. mao2008chinese, we use bigram features as follows: INLINEFORM0", "We use the window approach BIBREF20 to extract higher-level features from word feature vectors. We treat bigram features as discrete features BIBREF20 for our neural network. Our models are trained using stochastic gradient descent with an L2 regularizer.", "As for the parameters in our models, the window size for word embedding is 5; the word embedding dimension, feature embedding dimension and hidden vector dimension are all 100; the discount INLINEFORM0 in the margin loss is INLINEFORM1 ; and the hyperparameter for the INLINEFORM2 is INLINEFORM3 . As for the learning rate, the initial learning rate is INLINEFORM4 with a decay rate INLINEFORM5 . For the integrated model, INLINEFORM6 is INLINEFORM7 . We train for 20 epochs and choose the best prediction for testing." ], [ "We evaluate two methods to incorporate word segmentation information. The results of the two methods are shown in Table TABREF22 . We can see that positional character embeddings perform better in the neural network. This is probably because the positional character embeddings method can learn word segmentation information from unlabeled text while word segmentation features can only use the training corpus.", "We adopt positional character embeddings in our next four models. Our first model is a B-LSTM neural network (baseline). To take advantage of traditional models BIBREF23 , BIBREF24 such as CRF, we combine transition probability in our B-LSTM based MMNN. We design an F-Score driven training method in our third model, F-Score Driven Model I . We propose an integrated training method in our fourth model, F-Score Driven Model II . The results of the models are depicted in Figure UID11 . From the figure, we can see that our models perform better with little loss in time.", "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention.
The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, achieving a new state-of-the-art NER system for Chinese social media. Our integrated model performs better on both named entities and nominal mentions.", "To better understand the impact of the factor INLINEFORM0 , we show the results of our integrated model with different values of INLINEFORM1 in Figure UID13 . From Figure UID13 , we can see that INLINEFORM2 is an important factor for balancing F-score and accuracy. Our integrated model may help alleviate the influence of noise in NER in Chinese social media." ], [ "The results of our experiments also suggest directions for future work. We observe that all models in Table TABREF23 achieve a much lower recall than precision BIBREF25 , so we need to design methods to address this problem." ], [ "We thank Shuming Ma for help in improving the writing. This work was supported in part by the National Natural Science Foundation of China (No. 61673028) and the National High Technology Research and Development Program of China (863 Program, No. 2015AA015404). Xu Sun is the corresponding author of this paper. The first author focuses on the design of the method and the experimental results. The corresponding author focuses on the design of the method." ] ], "section_name": [ "Introduction", "Model", "Transition Probability", "F-Score Driven Training Method", "Word Segmentation Representation", "Datasets", "Parameter Estimation", "Results and Analysis", "Conclusions and Future Work", "Acknowledgements" ] }
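The F-Score driven training described in the full text above replaces the label-accuracy-based margin loss with an F-Score-based trigger that reranks beam-search candidates. The exact trigger equations are elided in this record (the DISPLAYFORM placeholders), so the following Python sketch only illustrates one plausible reading: an entity-level F1 between the correct and predicted BIO sequences, mixed with label accuracy by a weight beta and scaled by a discount kappa (both names are hypothetical), added to each candidate's score for reranking. This is not the authors' code.

```python
from typing import List

def extract_entities(labels: List[str]) -> set:
    """Collect (start, end, type) spans from a BIO label sequence."""
    entities, start, etype = set(), None, None
    for i, lab in enumerate(labels + ["O"]):          # sentinel flushes the last span
        if lab == "O" or lab.startswith("B-"):
            if start is not None:
                entities.add((start, i, etype))
                start, etype = None, None
            if lab.startswith("B-"):
                start, etype = i, lab[2:]
        elif lab.startswith("I-") and start is None:  # tolerate ill-formed spans
            start, etype = i, lab[2:]
    return entities

def f_score(gold: List[str], pred: List[str]) -> float:
    """Entity-level F1 between a gold and a predicted label sequence."""
    g, p = extract_entities(gold), extract_entities(pred)
    if not g and not p:
        return 1.0
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def integrated_trigger(gold: List[str], pred: List[str],
                       kappa: float = 0.2, beta: float = 0.5) -> float:
    """Hypothetical integrated trigger: mixes (1 - F1) with per-token error rate."""
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return kappa * (beta * (1.0 - f_score(gold, pred)) + (1.0 - beta) * (1.0 - acc))

# Rerank beam-search candidates: each candidate is a (label sequence, sentence-level
# score) pair; the trigger is added to the score, as in loss-augmented inference.
gold = ["B-PER", "I-PER", "O", "B-LOC"]
beam = [(["B-PER", "I-PER", "O", "O"], 1.2), (["B-PER", "O", "O", "B-LOC"], 1.1)]
best = max(beam, key=lambda c: c[1] + integrated_trigger(gold, c[0]))
print(best[0])
```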
{ "answers": [ { "annotation_id": [ "3375109ed33701d8ac608e38c8716bd08fab1604", "8187262e4489125b19d87485a2c7e0162f4b0798", "bc85cbe85534c83969547f2354e0f92c81f52a9e", "e7ac69863724286de7661c9c6aa169d95931e9e9" ], "answer": [ { "evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention. The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, which achieves a new state-of-the-art NER system in Chinese social media. Our integrated model has better performance on named entity and nominal mention.", "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "extractive_spans": [], "free_form_answer": "For Named Entity, F-Score Driven I model had 49.40 F1 score, and F-Score Driven II model had 50.60 F1 score. In case of Nominal Mention, the scores were 58.16 and 59.32", "highlighted_evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention. The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, which achieves a new state-of-the-art NER system in Chinese social media. Our integrated model has better performance on named entity and nominal mention.", "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "extractive_spans": [], "free_form_answer": "50.60 on Named Entity and 59.32 on Nominal Mention", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention. The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, which achieves a new state-of-the-art NER system in Chinese social media. Our integrated model has better performance on named entity and nominal mention.", "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." 
], "extractive_spans": [], "free_form_answer": "Best proposed model achieves F1 score of 50.60, 59.32, 54.82, 20.96 on Named Entity, Nominam Mention, Overall, Out of vocabulary respectively.", "highlighted_evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall.", "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "extractive_spans": [], "free_form_answer": "Best F1 score obtained is 54.82% overall", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: NER results for named and nominal mentions on test data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "081566edee75187177c062b0cf18ff26c6789bc9", "773b3b8a96ac4c576051afd9594416bebb0d3214", "7a82582604ed780fb68b0b3f7db01eacd8826b4b" ], "answer": [ { "evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention. The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, which achieves a new state-of-the-art NER system in Chinese social media. Our integrated model has better performance on named entity and nominal mention." ], "extractive_spans": [ "Peng and Dredze peng-dredze:2016:P16-2" ], "free_form_answer": "", "highlighted_evidence": [ "Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention. The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, which achieves a new state-of-the-art NER system in Chinese social media. Our integrated model has better performance on named entity and nominal mention." ], "extractive_spans": [ "Peng and Dredze peng-dredze:2016:P16-2" ], "free_form_answer": "", "highlighted_evidence": [ "Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. 
Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention. The integrated training model (F-Score Driven Model II) benefits from both label accuracy and F-Score, which achieves a new state-of-the-art NER system in Chinese social media. Our integrated model has better performance on named entity and nominal mention." ], "extractive_spans": [ "Peng and Dredze peng-dredze:2016:P16-2" ], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF23 shows results for NER on test sets. In the Table TABREF23 , we also show micro F1-score (Overall) and out-of-vocabulary entities (OOV) recall. Peng and Dredze peng-dredze:2016:P16-2 is the state-of-the-art NER system in Chinese Social media. By comparing the results of B-LSTM model and B-LSTM + MTNN model, we can know transition probability is significant for NER. Compared with B-LSTM + MMNN model, F-Score Driven Model I improves the result of named entity with a loss in nominal mention." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "9556dca92475024bf0a47f27b9bd4b3e55fe8306", "a74ac04c906606887e94841d848ee1ff678ab270", "e84c76ca83c7cb9f8de3d553895e3b2d7b3e8027" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs.", "FLOAT SELECTED: Table 1: Details of Weibo NER corpus." ], "extractive_spans": [ "Sina Weibo service" ], "free_form_answer": "", "highlighted_evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media.", "We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs.\n\n", "FLOAT SELECTED: Table 1: Details of Weibo NER corpus." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "extractive_spans": [ "Sina Weibo" ], "free_form_answer": "", "highlighted_evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. 
Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "3efafd3d128023deca554dfa949d29e42917c2a4", "e6d48ae68a6c5d7d3d00b3440280427af7279daa", "e9878cb3ed3fcd7c430f7f7cd6eb53ec6744dcca" ], "answer": [ { "evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "extractive_spans": [ "Peng and Dredze peng-dredze:2016:P16-2", "Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service" ], "free_form_answer": "", "highlighted_evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media.", "We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "extractive_spans": [ "Peng and Dredze peng-dredze:2016:P16-2" ], "free_form_answer": "", "highlighted_evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." 
], "extractive_spans": [ "a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2" ], "free_form_answer": "", "highlighted_evidence": [ "We use a modified labelled corpus as Peng and Dredze peng-dredze:2016:P16-2 for NER in Chinese social media. Details of the data are listed in Table TABREF19 . We also use the same unlabelled text as Peng and Dredze peng-dredze:2016:P16-2 from Sina Weibo service in China and the text is word segmented by a Chinese word segmentation system Jieba as Peng and Dredze peng-dredze:2016:P16-2 so that our results are more comparable to theirs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "What is F-score obtained?", "What is the state-of-the-art?", "Which Chinese social media platform does the data come from?", "What dataset did they use?" ], "question_id": [ "fcdbaa08cccda9968f3fd433c99338cc60f596a7", "2e4688205c8e344cded7a053b6014cce04ef1bd5", "fc436a4f3674e42fb280378314bfe77ba0c99f2e", "a71fb012631e6a8854d5945b6d0ab2ab8e7b7ee6" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "social media", "social media", "social media", "social media" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Table 1: Details of Weibo NER corpus.", "Table 2: Two methods to incorporate word segmentation information.", "Table 3: NER results for named and nominal mentions on test data." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "4-Table3-1.png" ] }
[ "What is F-score obtained?" ]
[ [ "1611.04234-4-Table3-1.png", "1611.04234-Results and Analysis-2" ] ]
[ "Best F1 score obtained is 54.82% overall" ]
25
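The record above also relies on Viterbi decoding over per-token network scores plus a transition score between adjacent labels ("The label sequence with the highest score can be obtained by running the Viterbi algorithm"). A minimal sketch of that decoding step, with made-up toy scores and no claim of matching the authors' implementation:

```python
import numpy as np

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """emissions: (T, K) per-token label scores from the network;
    transitions: (K, K) score of moving from label i to label j.
    Returns the highest-scoring sequence of label indices."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

emissions = np.array([[2.0, 0.5], [0.1, 1.5], [1.0, 1.2]])
transitions = np.array([[0.5, -0.2], [-0.4, 0.8]])
print(viterbi(emissions, transitions))  # [0, 1, 1] for these toy scores
```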
2003.07568
XPersona: Evaluating Multilingual Personalized Chatbot
Personalized dialogue systems are an essential step toward better human-machine interaction. Existing personalized dialogue agents rely on properly designed conversational datasets, which are mostly monolingual (e.g., English), which greatly limits the usage of conversational agents in other languages. In this paper, we propose a multi-lingual extension of Persona-Chat, namely XPersona. Our dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents. We experiment with both multilingual and cross-lingual trained baselines, and evaluate them against monolingual and translation-pipeline models using both automatic and human evaluation. Experimental results show that the multilingual trained models outperform the translation-pipeline and that they are on par with the monolingual models, with the advantage of having a single model across multiple languages. On the other hand, the state-of-the-art cross-lingual trained models achieve inferior performance to the other models, showing that cross-lingual conversation modeling is a challenging task. We hope that our dataset and baselines will accelerate research in multilingual dialogue systems.
{ "paragraphs": [ [ "Personalized dialogue agents have been shown efficient in conducting human-like conversation. This progress has been catalyzed thanks to existing conversational dataset such as Persona-chat BIBREF0, BIBREF1. However, the training data are provided in a single language (e.g., English), and thus the resulting systems can perform conversations only in the training language. For wide, commercial dialogue systems are required to handle a large number of languages since the smart home devices market is increasingly international BIBREF2. Therefore, creating multilingual conversational benchmarks is essential, yet challenging since it is costly to perform human annotation of data in all languages.", "A possible solution is to use translation systems before and after the model inference, a two-step translation from any language to English and from English to any language. This comes with three major problems: 1) amplification of translation errors since the current dialogue systems are far from perfect, especially with noisy input; 2) the three-stage pipeline system is significantly slower in terms of inference speed; and 3) high translation costs since the current state-of-the-art models, especially in low resources languages, are only available using costly APIs.", "In this paper, we analyze two possible workarounds to alleviate the aforementioned challenges. The first is to build a cross-lingual transferable system by aligning cross-lingual representations, as in BIBREF3, in which the system is trained on one language and zero-shot to another language. The second is to learn a multilingual system directly from noisy multilingual data (e.g., translated data), thus getting rid of the translation system dependence at inference time.", "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. In XPersona, the training sets are automatically translated using translation APIs with several human-in-the-loop passes of mistake correction. In contrast, the validation and test sets are annotated by human experts to facilitate both automatic and human evaluations in multiple languages.", "Furthermore, we propose competitive baselines in two training settings, namely, cross-lingual and multilingual, and compare them with translation pipeline models. Our baselines leverage pre-trained cross-lingual BIBREF4 and multilingual BIBREF5 models.", "An extensive automatic and human evaluation BIBREF6 of our models shows that a multilingual system is able to outperform strong translation-based models and on par with or even improve the monolingual model. The cross-lingual performance is still lower than other models, which indicates that cross-lingual conversation modeling is very challenging. The main contribution of this paper are summarized as follows:", "We present the first multilingual non-goal-oriented dialogue benchmark for evaluating multilingual generative chatbots.", "We provide both cross-lingual and multilingual baselines and discuss their limitations to inspire future research.", "We show the potential of multilingual systems to understand the mixed language dialogue context and generate coherent responses." ], [ "are categorized as goal-oriented BIBREF7, BIBREF8 and chit-chat BIBREF9, BIBREF10. Interested readers may refer to BIBREF11 for a general overview. 
In this paper, we focus on the latter, for which, in recent years, several tasks and datasets have been proposed to ground the conversation on knowledge BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 such as Wiki-Articles, Reddit-Post, and CNN-Article. In this work, we focus on personalized dialogue agents where the dialogues are grounded on persona information.", "BIBREF19 was the first to introduce a persona-grounded dialogue dataset for improving response consistency. Later on, BIBREF0 and BIBREF1 introduced Persona-chat, a multi-turn conversational dataset, where two speakers are paired, and a persona description (4–5 sentences) is randomly assigned to each of them. By conditioning the response generation on the persona descriptions, a chit-chat model is able to produce a more persona-consistent dialogue BIBREF0. Several works have improved on the initial baselines with various methodologies BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, especially using large pre-trained models BIBREF26, BIBREF27." ], [ "Extensive approaches have been introduced to construct multilingual systems, for example, multilingual semantic role labeling BIBREF28, BIBREF29, multilingual machine translation BIBREF30, multilingual automatic speech recognition BIBREF31, BIBREF32, BIBREF33, BIBREF34, and named entity recognition BIBREF35, BIBREF36. Multilingual deep contextualized model such as Multilingual BERT (M-BERT) BIBREF5 have been commonly used to represent multiple languages and elevate the performance in many NLP applications, such as classification tasks BIBREF37, textual entailment, named entity recognition BIBREF38, and natural language understanding BIBREF39. Multilingual datasets have also been created for a number of NLP tasks, such as named entity recognition or linking BIBREF40, BIBREF41, BIBREF42, BIBREF43, question answering BIBREF44, BIBREF45, semantic role labeling BIBREF46, part-of-speech tagging BIBREF47, dialogue state tracking BIBREF48, and natural language understanding BIBREF49. However, none of these datasets include the multilingual chit-chat task." ], [ "Cross-lingual adaptation learns the inter-connections among languages and circumvents the requirement of extensive training data in target languages BIBREF50, BIBREF51, BIBREF52. Cross-lingual transfer learning methods have been applied to multiple NLP tasks, such as named entity recognition BIBREF53, BIBREF54, natural language understanding BIBREF39, dialogue state tracking BIBREF55, part-of-speech tagging BIBREF50, BIBREF51, BIBREF56, and dependency parsing BIBREF57, BIBREF58. Meanwhile, BIBREF59 and BIBREF60 proposed pre-trained cross-lingual language models to align multiple language representations, achieving state-of-the-art results in many cross-lingual classification tasks. The aforementioned tasks focused on classification and sequence labeling, while instead, BIBREF4 proposed to pre-train both the encoder and decoder of a sequence-to-sequence model (XNLG) to conduct cross-lingual generation tasks, namely, question generation and abstractive summarization. The latter is the closest to our task since it focuses on language generation; however cross-lingual dialogue generation has not yet been explored." ], [ "The proposed XPersona dataset is an extension of the persona-chat dataset BIBREF0, BIBREF1. Specifically, we extend the ConvAI2 BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. 
Since the test set of ConvAI2 is hidden, we split the original validation set into a new validation set and test sets. Then, we firstly automatically translate the training, validation, and test set using APIs (PapaGo for Korean, Google Translate for other languages). For each language, we hired native speaker annotators with a fluent level of English and asked them to revise the machine-translated dialogues and persona sentences in the validation set and test set according to original English dialogues. The main goal of human annotation is to ensure the resulting conversations are coherent and fluent despite the cultural differences in target languages. Therefore, annotators are not restricted to only translate the English dialogues, and they are allowed to modify the original dialogues to improve the dialogue coherence in the corresponding language while retaining the persona information. The full annotation instructions are reported in Appendix A.", "Compared to collecting new persona sentences and dialogues in each language, human-annotating the dialogues by leveraging translation APIs has multiple advantages. First, it increases the data distribution similarity across languages BIBREF3, which can better examine the system's cross-lingual transferability. Second, revising the machine-translated dialogues based on the original English dialogue improves the data construction efficiency. Third, it leverages the well-constructed English persona conversations as a reference to ensure the dialogue quality without the need for training a new pool of workers to generate new samples BIBREF3.", "On the other hand, human-translating the entire training-set ($\\sim $130K utterances) in six languages is expensive. Therefore, we propose an iterative method to improve the quality of the automatically translated training set. We firstly sample 200 dialogues from the training set ($\\sim $2600 utterances) in each language, and we assign human annotators to list all frequent translation mistakes in the given dialogues. For example, daily colloquial English expressions such as “cool\", “I see\", and “lol\" are usually literally translated. After that, we use a simple string matching to revise the inappropriate translations in the whole training-set and return a revision log, which records all the revised utterances. Then, we assign human annotators to check all the revised utterances and list translation mistakes again. We repeat this process at least twice for each language. Finally, we summarize the statistics of the collected dataset in Table TABREF6." ], [ "Let us define a dialogue $\\mathcal {D}=\\lbrace U_1,S_1,U_2,S_2, \\dots , U_n, S_n\\rbrace $ as an alternating set of utterances from two speakers, where $U$ and $S$ represent the user and the system, respectively. Each speaker has its corresponding persona description that consists of a set of sentences $\\mathcal {P}=\\lbrace P_1,\\dots ,P_m\\rbrace $. Given the system persona sentences $\\mathcal {P}_s$ and dialogue history $\\mathcal {D}_t=\\lbrace U_1,S_1,U_2, \\dots ,S_{t-1}, U_t\\rbrace $, we are interested in predicting the system utterances $S_t$." ], [ "We explore both encoder-decoder and causal decoder architectures, and we leverage existing pre-trained contextualized multilingual language models as weights initialization. Hence, we firstly define the multilingual embedding layer and then the two multilingual models used in our experiments." 
], [ "We define three embedding matrices: word embedding $E^W\\in \\mathbb {R}^{|V| \\times d}$, positional embedding $E^P\\in \\mathbb {R}^{M \\times d}$, and segmentation embedding $E^S\\in \\mathbb {R}^{|S| \\times d}$, where $|.|$ denotes set cardinality, $d$ is the embedding size, $V$ denotes the vocabulary, $M$ denotes the maximum sequence length, and $S$ denotes the set of segmentation tokens. Segmentation embedding BIBREF26 is used to indicate whether the current token is part of i) Persona sentences, ii) System (Sys.) utterances, iii) User utterances, iv) response in Language $l_{id}$. The language embedding $l_{id}$ is used to inform the model which language to generate. Hence, given a sequence of tokens $X$, the embedding functions $E$ are defined as:", "where $\\oplus $ denotes the positional sum, $X_{pos}=\\lbrace 1,\\dots ,|X|\\rbrace $ and $X_{seg}$ is the sequence of segmentation tokens, as in BIBREF26. Figure FIGREF9 shows a visual representation of the embedding process. A more detailed illustration is reported in Appendix B." ], [ "To model the response generation, we use a Transformer BIBREF61 based encoder-decoder BIBREF10. As illustrated in Figure FIGREF9, we concatenate the system persona $\\mathcal {P}_s$ with the dialogue history $\\mathcal {D}_t$. Then we use the embedding layer $E$ to finally pass it to the encoder. In short, we have:", "where $H \\in \\mathbb {R}^{L \\times d_{model}}$ is the hidden representation computed by the encoder, and $L$ denotes the input sequence length. Then, the decoder attends to $H$ and generates the system response $S_t$ token by token. In the decoder, segmentation embedding is the language ID embedding (e.g., we look up the embedding for Italian to decode Italian). Thus:" ], [ "As an alternative to encoder-decoders, the causal-decoders BIBREF62, BIBREF63, BIBREF64 have been used to model conversational responses BIBREF26, BIBREF27 by giving as a prefix the dialogue history. In our model, we concatenate the persona $\\mathcal {P}_s$ and the dialogue history $\\mathcal {D}_t$ as the language model prefix, and autoregressively decode the system response $S_t$ based on language embedding (i.e. $l_{id}$):", "Figure FIGREF9 shows the conceptual differences between the encoder-decoder and casual decoder. Note that in both multilingual models, the dialogue history encoding process is language-agnostic, while decoding language is controlled by the language embedding. Such design allows the model to understand mixed-language dialogue contexts and to responds in the desired language (details in Section SECREF44)." ], [ "We consider two training strategies to learn a multilingual conversational model: multilingual training and cross-lingual training." ], [ "jointly learns to perform personalized conversations in multiple languages. We follow a transfer learning approach BIBREF26, BIBREF65 by initializing our models with the weights of the large multilingual pretrained model M-Bert BIBREF37. For the causal decoder, we add the causal mask into self-attention layer to convert M-Bert encoder to decoder. For encoder-decoder model, we randomly initialize the cross encoder-decoder attention BIBREF66. Then, we train the both models on the combined training set in all 7 languages using cross-entropy loss." ], [ "transfers knowledge from the source language data to the target languages. In this setting, the model is trained on English (source language) conversational samples, and evaluated on the other 6 languages. 
Following the methodology proposed by BIBREF4, we align the embedded representations of different languages into the same embedding space by applying cross-lingual pre-training to the encoder-decoder model. The pre-training procedure consists of two stages:", "pre-training the encoder and the decoder independently utilizing masked language modeling, as in BIBREF59;", "jointly pre-training the encoder-decoder by using two objective functions: Cross-Lingual Auto-Encoding (XAE) and Denoising Auto-Encoding (DAE) BIBREF4.", "For instance, DAE adds perturbations to the input sentence and tries to reconstructs the original sentence using the decoder, whereas, XAE uses parallel translation data as the supervision signal to pre-train both the encoder and decoder. As in the multilingual models, the language IDs are fed into the decoder to control the language of generated sentences. Both pre-training stages require both parallel and non-parallel data in the target language.", "After the two stages of pre-training, the model is fine-tuned using just the source language samples (i.e., English) with the same cross-entropy loss as for the multilingual training. However, as suggested in BIBREF4, only the encoder parameters are updated with back-propagation and both the decoder and the word embedding layer remain frozen. This retains the decoders' ability to generate multilingual output while still being able to learn new tasks using only the target language." ], [ "Evaluating open-domain chit-chat models is challenging, especially in multiple languages and at the dialogue-level. Hence, we evaluate our models using both automatic and human evaluation. In both cases, human-annotated dialogues are used, which show the importance of the provided dataset." ], [ "For each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses. Although these automatic measures are not perfect BIBREF68, they help to roughly estimate the performance of different models under the same test set. More recently, BIBREF69 has shown the correlation between perplexity and human judgment in open-domain chit-chat models." ], [ "Asking humans to evaluate the quality of a dialogue model is challenging, especially when multiple models have to be compared. The likert score (a.k.a. 1 to 5 scoring) has been widely used to evaluate the interactive experience with conversational models BIBREF70, BIBREF65, BIBREF0, BIBREF1. In such evaluation, a human interacts with the systems for several turns, and then they assign a score from 1 to 5 based on three questions BIBREF0 about fluency, engagingness, and consistency. This evaluation is both expensive to conduct and requires many samples to achieve statistically significant results BIBREF6. To cope with these issues, BIBREF6 proposed ACUTE-EVAL, an A/B test evaluation for dialogue systems. The authors proposed two modes: human-model chats and self-chat BIBREF71, BIBREF72. In this work, we opt for the latter since it is cheaper to conduct and achieves similar results BIBREF6 to the former. Another advantage of using this method is the ability to evaluate multi-turn conversations instead of single-turn responses.", "Following ACUTE-EVAL, the annotator is provided with two full dialogues made by self-chat or human-dialogue. The annotator is asked to choose which of the two dialogues is better in terms of engagingness, interestingness, and humanness. 
For each comparison, we sample 60–100 conversations from both models. In Appendix C, we report the exact questions and instructions given to the annotators, and the user interface used in the evaluation. We hired native speakers annotators for all six considered languages. The annotators were different from the dataset collection annotators to avoid any possible bias." ], [ "We use the \"BERT-Base, Multilingual Cased\" checkpoint, and we denote the multilingual encoder-decoder model as M-Bert2Bert ($\\sim $220M parameters) and causal decoder model as M-CausalBert ($\\sim $110M parameters). We fine-tune both models in the combined training set (English in Persona-chat BIBREF0, six languages in Xpersona) for five epochs with AdamW optimizer and a learning rate of $6.25e$-5." ], [ "To verify whether the multilingual agent will under-perform the monolingual agent in the monolingual conversational task, we build a monolingual encoder-decoder model and causal decoder model for each language. For a fair comparison, we initialize the monolingual models with a pre-trained monolingual BERT BIBREF5, BIBREF73, BIBREF74. We denote the monolingual encoder-decoder model as Bert2Bert ($\\sim $220M parameters) and causal decoder model as CausalBert ($\\sim $110M parameters). Then we fine-tune each model in each language independently for the same number of epoch and optimizer as the multilingual model." ], [ "Another strong baseline we compare with is Poly-encoder BIBREF75, a large-scale pre-trained retrieval model that has shown state-of-the-art performance in the English Persona-chat dataset BIBREF6. We adapt this model to the other languages by using the Google Translate API to translate target languages (e.g., Chinese) query to English as the input to the model, then translate the English response back to the target language. Thus, the response generation flow is: target query $\\rightarrow $ English query $\\rightarrow $ English response $\\rightarrow $ target response. We denote this model as Poly." ], [ "In the first pre-training stage, we use the pre-trained weights from XLMR-base BIBREF60. Then, we follow the second pre-training stage of XNLG BIBREF4 for pre-training Italian, Japanese, Korean, Indonesia cross-lingual transferable models. For Chinese and French, we directly apply the pre-trained XNLG BIBREF4 weights. Then, the pre-trained models are fine-tune on English PersonaChat training set and early stop based on the perplexity on target language validation set." ], [ "Table TABREF20 compares monolingual, multilingual, and cross-lingual models in terms of BLEU and perplexity in the human-translated test set. On both evaluation matrices, the causal decoder models outperform the encoder-decoder models. We observe that the encoder-decoder model tends to overlook dialogue context and generate digressive responses. (Generated samples are available in Appendix D) We hypothesize that this is because the one-to-many problem BIBREF76 in open-domain conversation weakens the relation between encoder and decoder; thus the well pre-trained decoder (Bert) easily converges to a locally-optimal, and learns to ignore the dialogue context from the encoder and generate the response in an unconditional language model way. We leave the investigation of this problem to future work. On the other hand, M-CausalBert achieves a comparable or slightly better performance compared to CausalBert, which suggests that M-CausalBert leverages the data from other languages. 
As expected, we observe a significant gap between the cross-lingual model and other models, which indicates that cross-lingual zero-shot conversation modeling is very challenging.", "Table TABREF28 shows the human evaluation result of comparing M-CausalBert (Multi) against the human, translation-based Poly-encoder (Poly), and monolingual CausalBert (Mono). The results illustrate that Multi outperforms Mono in English and Chinese, and is on par with Mono in other languages. On the other hand, Poly shows a strong performance in English as it was pre-trained with a large-scale English conversation corpus. In contrast, the performance of Poly drops in other languages, which indicates that the imperfect translation affects translation-based systems." ], [ "We randomly sample 7 self-chat dialogues for each baseline model in the seven languages and report them in Appendix D., And we summarize the generation of each model as follows:" ], [ "Poly-encoder, pretrained on 174 million Reddit data, can accurately retrieve coherent and diverse responses in English. However, in the other six languages, some of the retrieved responses are digressive due to translation error." ], [ "We observe that both the monolingual and multilingual models can generate fluent responses. Compared to Bert2Bert and M-Bert2Bert, CausalBert and M-CausalBert can generate more on-topic responses but sometimes repeat through turns. CausalBert and M-CausalBert are on par with each other in monolingual conversational tasks, while M-CausalBert shows the advantage of handling a mixed-language context. For multilingual speakers, the conversation may involve multiple languages. Therefore, we experiment on M-CausalBert with two settings: 1) many-to-one, in which users converse with the model in 6 languages, and the model generate responses in English, 2) one-to-many, in which users converse with the model using English, and the model generates responses in 6 languages using language embedding and corresponding persona sentences. Table TABREF42 and table TABREF43 illustrate the generation examples under these settings (more examples reported in Appendix C.1). Most of the time, M-CausalBert can understand the mixed-language context, and decode coherent response in different languages. Understanding the mixed-language dialogue context is a desirable skill for end-to-end chit-chat systems, and a systematic study of this research question is needed in future." ], [ "The current state-of-the-art cross-lingual generation approach XNLG BIBREF4 shows inferior performance on multi-turn dialogue tasks, and generates repetitive responses. Although cross-lingual dialogue generation is challenging, it reduces the human effort for data annotation in different languages. Therefore, the cross-language transfer is an important direction to investigate." ], [ "In this paper, we studied both cross-lingual and multilingual approaches in end-to-end personalized dialogue modeling. We presented the XPersona dataset, a multilingual extension of Persona-Chat, for evaluating the multilingual personalized chatbots. We further provided both cross-lingual and multilingual baselines and compared them with the monolingual approach and two-stage translation approach. Extensive automatic evaluation and human evaluation were conducted to examine the models' performance. The experimental results showed that multilingual trained models, with a single model across multiple languages, can outperform the two-stage translation approach and is on par with monolingual models. 
On the other hand, the current state-of-the-art cross-lingual approach XNLG achieved lower performance than other baselines. In future work, we plan to research a more advanced cross-lingual generation approach and construct a mixed-language conversational benchmark for evaluating multilingual systems." ], [ "In this section, we show the instructions for French annotation:", "There are two existing columns of conversations: the first column (en) is the original conversations in English, the second column (fr) is the conversations translated by an automatic system (e.g., Google Translate).", "You should copy the conversation from the second column (the translated conversations) into the third column (named fr_annotation). In that column, you should then revise the incorrect or inappropriate translations.", "The goal of the revision is to make the conversations more coherent and fluent in the target language (French). Hence you can customize dialogues and persona sentences to make them fluent and coherent in the target language, including by deviating from the original translation. However, you should retain persona and conversation consistency." ], [ "We report our iterative revised training set statistics in Table TABREF53." ], [ "Figure FIGREF55 and FIGREF56 illustrates the details of the multilingual causal decoder and the multilingual encoder-decoder models." ], [ "As illustrated in Figure FIGREF54, the annotator is provided with two full dialogues made by a self-chat model or human-dialogues. Then the annotators are asked the following questions:", "Who would you talk to for a long conversation?", "If you had to say one of these speakers is interesting and one is boring, who would you say is more interesting?", "Which speaker sounds more human?" ], [ "We report more the mixed-language samples generated by M-CausalBert in Table TABREF61 and TABREF62." 
], [ "We randomly sample one self-chat dialogue examples for each model in each language and report them in figure 5-32.", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert", "in CausalBert,M-CausalBert,PolyEncoder,M-Bert2Bert" ] ], "section_name": [ "Introduction", "Related Work ::: Dialogue Systems", "Related Work ::: Multilingual", "Related Work ::: Cross-lingual", "Data Collection", "Multilingual Personalized Conversational Models", "Multilingual Personalized Conversational Models ::: Model Architecture", "Multilingual Personalized Conversational Models ::: Model Architecture ::: Embedding", "Multilingual Personalized Conversational Models ::: Model Architecture ::: Encoder-Decoder", "Multilingual Personalized Conversational Models ::: Model Architecture ::: Causal Decoder", "Multilingual Personalized Conversational Models ::: Training Strategy", "Multilingual Personalized Conversational Models ::: Training Strategy ::: Multilingual Training", "Multilingual Personalized Conversational Models ::: Training Strategy ::: Cross-lingual Training", "Experiments ::: Evaluation Metrics", "Experiments ::: Evaluation Metrics ::: Automatic", "Experiments ::: Evaluation Metrics ::: Human", "Experiments ::: Implementation Details ::: Multilingual Models", "Experiments ::: Implementation Details ::: Monolingual Models", "Experiments ::: Implementation Details ::: Translation-based Models", "Experiments ::: Implementation Details ::: Cross-lingual Models.", "Experiments ::: Results and Discussion ::: Quantitative Analysis", "Experiments ::: Results and Discussion ::: Qualitative Analysis and Discussion", "Experiments ::: Results and Discussion ::: Qualitative Analysis and Discussion ::: Poly", "Experiments ::: Results and Discussion ::: Qualitative Analysis and Discussion ::: Monolingual & Multilingual", "Experiments ::: Results and Discussion ::: Qualitative Analysis and Discussion ::: Cross-lingual.", "Conclusion", "Dataset Collection ::: Annotation Instructions", "Dataset Collection ::: Training Set Statistics", "Model Detail", "Human Evaluation", "Generated Samples ::: Mixed-language Samples", "Generated Samples ::: Model Comparison Samples" ] }
{ "answers": [ { "annotation_id": [ "115aa2aee94362702d3df290770ebbf60e76e597", "2e27f9264e6643f5de8811d96d68747511995af1", "6b9fd7c1675812a721581732a77280f6ae72adc1", "7ddfc1b4231c441a5f959928fd3af11237c84e9a" ], "answer": [ { "evidence": [ "Evaluating open-domain chit-chat models is challenging, especially in multiple languages and at the dialogue-level. Hence, we evaluate our models using both automatic and human evaluation. In both cases, human-annotated dialogues are used, which show the importance of the provided dataset.", "Experiments ::: Evaluation Metrics ::: Automatic", "For each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses. Although these automatic measures are not perfect BIBREF68, they help to roughly estimate the performance of different models under the same test set. More recently, BIBREF69 has shown the correlation between perplexity and human judgment in open-domain chit-chat models.", "Following ACUTE-EVAL, the annotator is provided with two full dialogues made by self-chat or human-dialogue. The annotator is asked to choose which of the two dialogues is better in terms of engagingness, interestingness, and humanness. For each comparison, we sample 60–100 conversations from both models. In Appendix C, we report the exact questions and instructions given to the annotators, and the user interface used in the evaluation. We hired native speakers annotators for all six considered languages. The annotators were different from the dataset collection annotators to avoid any possible bias." ], "extractive_spans": [], "free_form_answer": "They use automatic evaluation using perplexity and BLEU scores with reference to the human-annotated responses and human evaluation on interestingness, engagingness, and humanness.", "highlighted_evidence": [ "Hence, we evaluate our models using both automatic and human evaluation. ", "Experiments ::: Evaluation Metrics ::: Automatic\nFor each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses. ", "The annotator is asked to choose which of the two dialogues is better in terms of engagingness, interestingness, and humanness. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Experiments ::: Evaluation Metrics", "Evaluating open-domain chit-chat models is challenging, especially in multiple languages and at the dialogue-level. Hence, we evaluate our models using both automatic and human evaluation. In both cases, human-annotated dialogues are used, which show the importance of the provided dataset.", "Experiments ::: Evaluation Metrics ::: Automatic", "For each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses. Although these automatic measures are not perfect BIBREF68, they help to roughly estimate the performance of different models under the same test set. More recently, BIBREF69 has shown the correlation between perplexity and human judgment in open-domain chit-chat models.", "Following ACUTE-EVAL, the annotator is provided with two full dialogues made by self-chat or human-dialogue. The annotator is asked to choose which of the two dialogues is better in terms of engagingness, interestingness, and humanness. 
For each comparison, we sample 60–100 conversations from both models. In Appendix C, we report the exact questions and instructions given to the annotators, and the user interface used in the evaluation. We hired native speakers annotators for all six considered languages. The annotators were different from the dataset collection annotators to avoid any possible bias." ], "extractive_spans": [ "perplexity (ppl.) and BLEU", "which of the two dialogues is better in terms of engagingness, interestingness, and humanness" ], "free_form_answer": "", "highlighted_evidence": [ "Evaluation Metrics\nEvaluating open-domain chit-chat models is challenging, especially in multiple languages and at the dialogue-level. Hence, we evaluate our models using both automatic and human evaluation. In both cases, human-annotated dialogues are used, which show the importance of the provided dataset.", "Automatic\nFor each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses.", "Following ACUTE-EVAL, the annotator is provided with two full dialogues made by self-chat or human-dialogue. The annotator is asked to choose which of the two dialogues is better in terms of engagingness, interestingness, and humanness." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses. Although these automatic measures are not perfect BIBREF68, they help to roughly estimate the performance of different models under the same test set. More recently, BIBREF69 has shown the correlation between perplexity and human judgment in open-domain chit-chat models.", "Asking humans to evaluate the quality of a dialogue model is challenging, especially when multiple models have to be compared. The likert score (a.k.a. 1 to 5 scoring) has been widely used to evaluate the interactive experience with conversational models BIBREF70, BIBREF65, BIBREF0, BIBREF1. In such evaluation, a human interacts with the systems for several turns, and then they assign a score from 1 to 5 based on three questions BIBREF0 about fluency, engagingness, and consistency. This evaluation is both expensive to conduct and requires many samples to achieve statistically significant results BIBREF6. To cope with these issues, BIBREF6 proposed ACUTE-EVAL, an A/B test evaluation for dialogue systems. The authors proposed two modes: human-model chats and self-chat BIBREF71, BIBREF72. In this work, we opt for the latter since it is cheaper to conduct and achieves similar results BIBREF6 to the former. Another advantage of using this method is the ability to evaluate multi-turn conversations instead of single-turn responses." ], "extractive_spans": [ "perplexity", "BLEU", "ACUTE-EVA" ], "free_form_answer": "", "highlighted_evidence": [ "For each language, we evaluate responses generated by the models using perplexity (ppl.) and BLEU BIBREF67 with reference to the human-annotated responses.", "To cope with these issues, BIBREF6 proposed ACUTE-EVAL, an A/B test evaluation for dialogue systems. The authors proposed two modes: human-model chats and self-chat BIBREF71, BIBREF72. In this work, we opt for the latter since it is cheaper to conduct and achieves similar results BIBREF6 to the former. 
Another" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "annotation_id": [ "33a2af0076e9df7a427e052c3a0d54ea918d7ccf", "39c96b6dae4b51d11d47eb4fbaac98835158a373" ], "answer": [ { "evidence": [ "Table TABREF20 compares monolingual, multilingual, and cross-lingual models in terms of BLEU and perplexity in the human-translated test set. On both evaluation matrices, the causal decoder models outperform the encoder-decoder models. We observe that the encoder-decoder model tends to overlook dialogue context and generate digressive responses. (Generated samples are available in Appendix D) We hypothesize that this is because the one-to-many problem BIBREF76 in open-domain conversation weakens the relation between encoder and decoder; thus the well pre-trained decoder (Bert) easily converges to a locally-optimal, and learns to ignore the dialogue context from the encoder and generate the response in an unconditional language model way. We leave the investigation of this problem to future work. On the other hand, M-CausalBert achieves a comparable or slightly better performance compared to CausalBert, which suggests that M-CausalBert leverages the data from other languages. As expected, we observe a significant gap between the cross-lingual model and other models, which indicates that cross-lingual zero-shot conversation modeling is very challenging.", "FLOAT SELECTED: Table 3: Results of automatic evaluation score on test set in seven languages. We compute the BLEU score and perplexity (ppl.) for monolingual, multilingual, and cross-lingual models." ], "extractive_spans": [ "significant gap between the cross-lingual model and other models", "Table TABREF20" ], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF20 compares monolingual, multilingual, and cross-lingual models in terms of BLEU and perplexity in the human-translated test set.", "As expected, we observe a significant gap between the cross-lingual model and other models, which indicates that cross-lingual zero-shot conversation modeling is very challenging.", "FLOAT SELECTED: Table 3: Results of automatic evaluation score on test set in seven languages. We compute the BLEU score and perplexity (ppl.) for monolingual, multilingual, and cross-lingual models." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 3: Results of automatic evaluation score on test set in seven languages. We compute the BLEU score and perplexity (ppl.) for monolingual, multilingual, and cross-lingual models." ], "extractive_spans": [], "free_form_answer": "BLUE score is lower by 4 times than that of the best multilingual model.", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Results of automatic evaluation score on test set in seven languages. We compute the BLEU score and perplexity (ppl.) for monolingual, multilingual, and cross-lingual models." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "095a4caf7eecbad919031a5ca4e93454aa3a10cd", "18829052dc660eaa9dc499e081155cdd98346aca", "c6b590908e794b4dfe117393f987cab4a68683ad" ], "answer": [ { "evidence": [ "Another strong baseline we compare with is Poly-encoder BIBREF75, a large-scale pre-trained retrieval model that has shown state-of-the-art performance in the English Persona-chat dataset BIBREF6. We adapt this model to the other languages by using the Google Translate API to translate target languages (e.g., Chinese) query to English as the input to the model, then translate the English response back to the target language. Thus, the response generation flow is: target query $\\rightarrow $ English query $\\rightarrow $ English response $\\rightarrow $ target response. We denote this model as Poly." ], "extractive_spans": [], "free_form_answer": "Translate source sentence to English with Google Translate API and then translate the result to the target language with Poly-encoder.", "highlighted_evidence": [ "Another strong baseline we compare with is Poly-encoder BIBREF75, a large-scale pre-trained retrieval model that has shown state-of-the-art performance in the English Persona-chat dataset BIBREF6. We adapt this model to the other languages by using the Google Translate API to translate target languages (e.g., Chinese) query to English as the input to the model, then translate the English response back to the target language. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Experiments ::: Implementation Details ::: Multilingual Models", "We use the \"BERT-Base, Multilingual Cased\" checkpoint, and we denote the multilingual encoder-decoder model as M-Bert2Bert ($\\sim $220M parameters) and causal decoder model as M-CausalBert ($\\sim $110M parameters). We fine-tune both models in the combined training set (English in Persona-chat BIBREF0, six languages in Xpersona) for five epochs with AdamW optimizer and a learning rate of $6.25e$-5.", "Experiments ::: Implementation Details ::: Monolingual Models", "To verify whether the multilingual agent will under-perform the monolingual agent in the monolingual conversational task, we build a monolingual encoder-decoder model and causal decoder model for each language. For a fair comparison, we initialize the monolingual models with a pre-trained monolingual BERT BIBREF5, BIBREF73, BIBREF74. We denote the monolingual encoder-decoder model as Bert2Bert ($\\sim $220M parameters) and causal decoder model as CausalBert ($\\sim $110M parameters). Then we fine-tune each model in each language independently for the same number of epoch and optimizer as the multilingual model.", "Experiments ::: Implementation Details ::: Translation-based Models", "Another strong baseline we compare with is Poly-encoder BIBREF75, a large-scale pre-trained retrieval model that has shown state-of-the-art performance in the English Persona-chat dataset BIBREF6. We adapt this model to the other languages by using the Google Translate API to translate target languages (e.g., Chinese) query to English as the input to the model, then translate the English response back to the target language. Thus, the response generation flow is: target query $\\rightarrow $ English query $\\rightarrow $ English response $\\rightarrow $ target response. 
We denote this model as Poly.", "Experiments ::: Implementation Details ::: Cross-lingual Models.", "In the first pre-training stage, we use the pre-trained weights from XLMR-base BIBREF60. Then, we follow the second pre-training stage of XNLG BIBREF4 for pre-training Italian, Japanese, Korean, Indonesia cross-lingual transferable models. For Chinese and French, we directly apply the pre-trained XNLG BIBREF4 weights. Then, the pre-trained models are fine-tune on English PersonaChat training set and early stop based on the perplexity on target language validation set." ], "extractive_spans": [ "M-Bert2Bert", "M-CausalBert", "Bert2Bert", "CausalBert", "Poly-encoder BIBREF75", "XNLG" ], "free_form_answer": "", "highlighted_evidence": [ "Multilingual Models\nWe use the \"BERT-Base, Multilingual Cased\" checkpoint, and we denote the multilingual encoder-decoder model as M-Bert2Bert ($\\sim $220M parameters) and causal decoder model as M-CausalBert ($\\sim $110M parameters).", "Monolingual Models\nTo verify whether the multilingual agent will under-perform the monolingual agent in the monolingual conversational task, we build a monolingual encoder-decoder model and causal decoder model for each language. For a fair comparison, we initialize the monolingual models with a pre-trained monolingual BERT BIBREF5, BIBREF73, BIBREF74. We denote the monolingual encoder-decoder model as Bert2Bert ($\\sim $220M parameters) and causal decoder model as CausalBert ($\\sim $110M parameters).", "Translation-based Models\nAnother strong baseline we compare with is Poly-encoder BIBREF75, a large-scale pre-trained retrieval model that has shown state-of-the-art performance in the English Persona-chat dataset BIBREF6.", "Cross-lingual Models.\nIn the first pre-training stage, we use the pre-trained weights from XLMR-base BIBREF60." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Another strong baseline we compare with is Poly-encoder BIBREF75, a large-scale pre-trained retrieval model that has shown state-of-the-art performance in the English Persona-chat dataset BIBREF6. We adapt this model to the other languages by using the Google Translate API to translate target languages (e.g., Chinese) query to English as the input to the model, then translate the English response back to the target language. Thus, the response generation flow is: target query $\\rightarrow $ English query $\\rightarrow $ English response $\\rightarrow $ target response. We denote this model as Poly." ], "extractive_spans": [ "Google Translate API" ], "free_form_answer": "", "highlighted_evidence": [ "We adapt this model to the other languages by using the Google Translate API to translate target languages (e.g., Chinese) query to English as the input to the model, then translate the English response back to the target language." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "annotation_id": [ "4565c2adf13a22526cc22dfa8618e185bec641e3", "4f5d9f3d8a86f8e76ec126160061290667d3391d", "a660b69bd7d94d7c99437c0067f7612d53e98915", "f4b85558016a0d6f033188d637d82a4d9a266872" ], "answer": [ { "evidence": [ "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. 
In XPersona, the training sets are automatically translated using translation APIs with several human-in-the-loop passes of mistake correction. In contrast, the validation and test sets are annotated by human experts to facilitate both automatic and human evaluations in multiple languages." ], "extractive_spans": [ "Chinese", "French", "Indonesian", "Italian", "Korean", "Japanese" ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Personalized dialogue agents have been shown efficient in conducting human-like conversation. This progress has been catalyzed thanks to existing conversational dataset such as Persona-chat BIBREF0, BIBREF1. However, the training data are provided in a single language (e.g., English), and thus the resulting systems can perform conversations only in the training language. For wide, commercial dialogue systems are required to handle a large number of languages since the smart home devices market is increasingly international BIBREF2. Therefore, creating multilingual conversational benchmarks is essential, yet challenging since it is costly to perform human annotation of data in all languages.", "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. In XPersona, the training sets are automatically translated using translation APIs with several human-in-the-loop passes of mistake correction. In contrast, the validation and test sets are annotated by human experts to facilitate both automatic and human evaluations in multiple languages." ], "extractive_spans": [ "English", "Chinese", "French", "Indonesian", "Italian", "Korean", "Japanese" ], "free_form_answer": "", "highlighted_evidence": [ " This progress has been catalyzed thanks to existing conversational dataset such as Persona-chat BIBREF0, BIBREF1. However, the training data are provided in a single language (e.g., English), and thus the resulting systems can perform conversations only in the training language.", "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The proposed XPersona dataset is an extension of the persona-chat dataset BIBREF0, BIBREF1. Specifically, we extend the ConvAI2 BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. Since the test set of ConvAI2 is hidden, we split the original validation set into a new validation set and test sets. Then, we firstly automatically translate the training, validation, and test set using APIs (PapaGo for Korean, Google Translate for other languages). For each language, we hired native speaker annotators with a fluent level of English and asked them to revise the machine-translated dialogues and persona sentences in the validation set and test set according to original English dialogues. 
The main goal of human annotation is to ensure the resulting conversations are coherent and fluent despite the cultural differences in target languages. Therefore, annotators are not restricted to only translate the English dialogues, and they are allowed to modify the original dialogues to improve the dialogue coherence in the corresponding language while retaining the persona information. The full annotation instructions are reported in Appendix A." ], "extractive_spans": [ "Chinese", "French", "Indonesian", "Italian", "Korean", "Japanese" ], "free_form_answer": "", "highlighted_evidence": [ "The proposed XPersona dataset is an extension of the persona-chat dataset BIBREF0, BIBREF1. Specifically, we extend the ConvAI2 BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese. In XPersona, the training sets are automatically translated using translation APIs with several human-in-the-loop passes of mistake correction. In contrast, the validation and test sets are annotated by human experts to facilitate both automatic and human evaluations in multiple languages." ], "extractive_spans": [ "Chinese, French, Indonesian, Italian, Korean, and Japanese" ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate the aforementioned systems, we propose a dataset called Multilingual Persona-Chat, or XPersona, by extending the Persona-Chat corpora BIBREF1 to six languages: Chinese, French, Indonesian, Italian, Korean, and Japanese." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What kind of evaluations do use to evaluate dialogue?", "By how much do their cross-lingual models lag behind other models?", "Which translation pipelines do they use to compare against?", "Which languages does their newly created dataset contain?" ], "question_id": [ "108f99fcaf620fab53077812e8901870896acf36", "6c8dc31a199b155e73c84173816c1e252137a0af", "7125db8334a7efaf9f7753f2c2f0048a56e74c49", "43729be0effb5defc62bae930ceacf7219934f1e" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Multi-turn annotated dialogue samples from test set in seven languages. For simplicity, we only show three turns for each dialogue and the persona in English.", "Table 2: The statistics of the collected dataset. We report the number of dialogues (#Dial.) and utterances (#Utt.) of the validation and test set in six languages. Edit distance per dialogue (Edit) and BLEU score are computed to show the difference between the human-annotated dataset and auto-translated dataset. (Training set is reported in Appendix A)", "Figure 1: (a) Multilingual Encoder-Decoder model. (b) Multilingual Causal Decoder model. (Detailed illustration is reported in Appendix B)", "Table 3: Results of automatic evaluation score on test set in seven languages. We compute the BLEU score and perplexity (ppl.) for monolingual, multilingual, and cross-lingual models.", "Table 4: Results of ACUTE-EVAL human evaluation. Tests are conducted pairwise between M-CausalBert (Multi.) and other models (Human, Poly-encoder (Poly), Monolingual CausalBert (Mono)). Numbers indicate the winning rate of Multi. Numbers in bold are statistically significant (p < 0.05).", "Table 5: Many-to-one: understand mixed-language dialogue context in multiple languages and generate response in one language", "Table 6: One-to-many: response one dialogue context with 7 different languages", "Figure 2: Human evaluation interface modified from ACUTE-EVAL(Li et al., 2019)", "Table 7: The number of dialogues (#Dial.) and utterances (#Utt.) of the training set in six languages. Edit distance per dialogue and BLEU score are computed to show the difference between the iterative revised dataset and auto-translated dataset.", "Figure 4: Multilingual Encoder-Decoder model.", "Table 8: One-to-many by M-CausalBert", "Table 9: Many-to-one by M-CausalBert", "Figure 5: English CausalBert", "Figure 12: Chinese M-Bert2Bert", "Figure 13: Chinese CrossLingual", "Figure 19: France CausalBert", "Figure 23: France CrossLingual", "Figure 27: Japanese M-Bert2Bert", "Figure 31: Korean PolyEncoder", "Figure 29: Korean CausalBert", "Figure 35: Indonesian PolyEncoder", "Figure 37: Indonesian CrossLingual" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Figure1-1.png", "6-Table3-1.png", "7-Table4-1.png", "8-Table5-1.png", "9-Table6-1.png", "14-Figure2-1.png", "14-Table7-1.png", "15-Figure4-1.png", "16-Table8-1.png", "16-Table9-1.png", "17-Figure5-1.png", "18-Figure12-1.png", "19-Figure13-1.png", "20-Figure19-1.png", "21-Figure23-1.png", "22-Figure27-1.png", "23-Figure31-1.png", "23-Figure29-1.png", "24-Figure35-1.png", "25-Figure37-1.png" ] }
[ "What kind of evaluations do use to evaluate dialogue?", "By how much do their cross-lingual models lag behind other models?", "Which translation pipelines do they use to compare against?" ]
[ [ "2003.07568-Experiments ::: Evaluation Metrics ::: Human-1", "2003.07568-Experiments ::: Evaluation Metrics-0", "2003.07568-Experiments ::: Evaluation Metrics ::: Human-0", "2003.07568-Experiments ::: Evaluation Metrics ::: Automatic-0" ], [ "2003.07568-Experiments ::: Results and Discussion ::: Quantitative Analysis-0", "2003.07568-6-Table3-1.png" ], [ "2003.07568-Experiments ::: Implementation Details ::: Multilingual Models-0", "2003.07568-Experiments ::: Implementation Details ::: Cross-lingual Models.-0", "2003.07568-Experiments ::: Implementation Details ::: Monolingual Models-0", "2003.07568-Experiments ::: Implementation Details ::: Translation-based Models-0" ] ]
[ "They use automatic evaluation using perplexity and BLEU scores with reference to the human-annotated responses and human evaluation on interestingness, engagingness, and humanness.", "BLUE score is lower by 4 times than that of the best multilingual model.", "Translate source sentence to English with Google Translate API and then translate the result to the target language with Poly-encoder." ]
27
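The pairwise human evaluation in this entry reports winning rates with statistical significance (see the Table 4 caption above). A winning rate together with an exact one-sided sign test can be computed as below; the counts are invented for illustration and are not the paper's numbers.

```python
# Illustrative ACUTE-EVAL-style aggregation: share of pairwise comparisons won by one
# system, with an exact one-sided binomial (sign) test against the 50/50 null.
from math import comb

def win_rate_and_pvalue(wins: int, trials: int):
    rate = wins / trials
    # P(X >= wins) for X ~ Binomial(trials, 0.5)
    p_value = sum(comb(trials, k) for k in range(wins, trials + 1)) / 2 ** trials
    return rate, p_value

rate, p = win_rate_and_pvalue(wins=61, trials=90)  # made-up counts
print(f"win rate = {rate:.2f}, one-sided p = {p:.4f}")
```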
1810.02268
A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation
The translation of pronouns presents a special challenge to machine translation to this day, since it often requires context outside the current sentence. Recent work on models that have access to information across sentence boundaries has seen only moderate improvements in terms of automatic evaluation metrics such as BLEU. However, metrics that quantify the overall translation quality are ill-equipped to measure gains from additional context. We argue that a different kind of evaluation is needed to assess how well models translate inter-sentential phenomena such as pronouns. This paper therefore presents a test suite of contrastive translations focused specifically on the translation of pronouns. Furthermore, we perform experiments with several context-aware models. We show that, while gains in BLEU are moderate for those systems, they outperform baselines by a large margin in terms of accuracy on our contrastive test set. Our experiments also show the effectiveness of parameter tying for multi-encoder architectures.
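A minimal sketch of the contrastive evaluation described in this abstract: the model scores the reference translation and each contrastive variant, and a test instance counts as correct only if the reference receives the highest score. The `score` argument is a hypothetical wrapper around the NMT model's log-probability of a target given the source (and any additional context); the toy example is invented and not taken from the test suite.

```python
# Contrastive evaluation by scoring: accuracy = fraction of examples where the reference
# translation outscores every contrastive (wrong-pronoun) variant.
def contrastive_accuracy(examples, score):
    correct = 0
    for ex in examples:
        ref = score(ex["source"], ex["reference"])
        if all(ref > score(ex["source"], variant) for variant in ex["contrastive"]):
            correct += 1
    return correct / len(examples)

toy_example = {
    "source": "I saw the bank . It was closed .",
    "reference": "Ich sah die Bank . Sie war geschlossen .",
    "contrastive": ["Ich sah die Bank . Er war geschlossen .",
                    "Ich sah die Bank . Es war geschlossen ."],
}
```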
{ "paragraphs": [ [ "Even though machine translation has improved considerably with the advent of neural machine translation (NMT) BIBREF0 , BIBREF1 , the translation of pronouns remains a major issue. They are notoriously hard to translate since they often require context outside the current sentence.", "As an example, consider the sentences in Figure FIGREF1 . In both languages, there is a pronoun in the second sentence that refers to the European Central Bank. When the second sentence is translated from English to German, the translation of the pronoun it is ambiguous. This ambiguity can only be resolved with context awareness: if a translation system has access to the previous English sentence, the previous German translation, or both, it can determine the antecedent the pronoun refers to. In this German sentence, the antecedent Europäische Zentralbank dictates the feminine gender of the pronoun sie.", "It is unfortunate, then, that current NMT systems generally operate on the sentence level BIBREF2 , BIBREF3 , BIBREF4 . Documents are translated sentence-by-sentence for practical reasons, such as line-based processing in a pipeline and reduced computational complexity. Furthermore, improvements of larger-context models over baselines in terms of document-level metrics such as BLEU or RIBES have been moderate, so that their computational overhead does not seem justified, and so that it is hard to develop more effective context-aware architectures and empirically validate them.", "To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying.", "The main contributions of our paper are:", "Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 ." ], [ "Two lines of work are related to our paper: research on context-aware translation (described in Section SECREF8 ) and research on focused evaluation of pronoun translation (described in Section SECREF11 )." ], [ "If the translation of a pronoun requires context beyond the current sentence (see the example in Figure FIGREF1 ), a natural extension of sentence-level NMT models is to condition the model prediction on this necessary context. In the following, we describe a number of existing approaches to making models “aware” of additional context.", "The simplest possible extension is to translate units larger than sentences. BIBREF5 concatenate each sentence with the sentence that precedes it, for the source side of the corpus or both sides. All of their models are standard sequence-to-sequence models built with recurrent neural networks (RNNs), since the method does not require any architectural change. 
BIBREF11 use the same concatenation technique with a Transformer architecture BIBREF2 , and experiment with wider context.", "A number of works do propose changes to the NMT architecture. A common technique is to extend a standard encoder-decoder model by additional encoders for the context sentence(s), with a modified attention mechanism BIBREF6 , BIBREF9 , BIBREF8 . One aspect that differs between these works is the architecture of the encoder and attention. While BIBREF6 , BIBREF9 extend an RNN encoder-decoder with a second encoder that the decoder attends to, BIBREF8 extend the Transformer architecture with an encoder that is attended to by the main encoder. BIBREF8 also introduce parameter sharing between the main encoder and the context encoder, but do not empirically demonstrate its importance.", "While the number of encoded sentences in the previous work is fixed, BIBREF7 , BIBREF10 explore the integration of variable-size context through a hierarchical architecture, where a first-level RNN reads in words to produce sentence vectors, which are then fed into a second-level RNN to produce a document summary.", "Apart from differences in the architectures, related work varies in whether it considers source context, target context, or both (see Table TABREF9 for an overview of language arcs and context types). Some work considers only source context, but for pronoun translation, target-side context is intuitively important for disambiguation, especially if the antecedent itself is ambiguous. In our evaluation, we therefore emphasize models that take into account both source and target context.", "Our experiments are based on models from BIBREF9 , who have released their source code. We extend their models with parameter sharing, which was shown to be beneficial by BIBREF8 . Additionally, we consider a concatenative baseline, similar to BIBREF5 , and Transformer-based models BIBREF8 .", "This section describes several context-aware NMT models that we use in our experiments. They fall into two major categories: models based on RNNs and models based on the Transformer architecture BIBREF2 . We experiment with additional context on the source side and target side." ], [ "Pronouns can serve a variety of functions with complex cross-lingual variation BIBREF12 , and hand-picked, manually annotated test suites have been presented for the evaluation of pronoun translation BIBREF13 , BIBREF14 , BIBREF9 . While suitable for analysis, the small size of the test suites makes it hard to make statistically confident comparisons between systems, and the hand-picked nature of the test suites introduces biases. To overcome these problems, we opted for a fully automatic approach to constructing a large-scale test suite.", "Conceptually, our test set is most similar to the “cross-lingual pronoun prediction” task held at DiscoMT and WMT in recent years BIBREF15 , BIBREF16 , BIBREF17 : participants are asked to fill a gap in a target sentence, where gaps correspond to pronouns.", "The first edition of the task focused on English INLINEFORM0 French, and it was found that local context (such as the verb group) was a strong signal for pronoun prediction. 
Hence, future editions only provided target-side lemmas instead of fully inflected forms, which makes the task less suitable to evaluate end-to-end neural machine translation systems, although such systems have been trained on the task BIBREF18 .", " BIBREF17 do not report on the proportion of intra-sentential and inter-sentential anaphora in their test set, but the two top-performing systems only made use of intra-sentential information. Our test suite focuses on allowing the comparison of end-to-end context-aware NMT systems, and we thus extract a large number of inter-sentential anaphora, with meta-data allowing for a focus on inter-sentential anaphora with a long distance between the pronoun and its antecedent. Our focus on evaluating end-to-end NMT systems also relieves us from having to provide annotated training sets, and reduces pressure to achieve balance and full coverage of phenomena.", "An alternative approach to automatically evaluate pronoun translation are reference-based methods that produce a score based on word alignment between source, translation output, and reference translation, and identification of pronouns in them, such as AutoPRF BIBREF19 and APT BIBREF20 . BIBREF21 perform a human meta-evaluation and show substantial disagreement between reference-based metrics and human judges, especially because there often exist valid alternative translations that use different pronouns than the reference. Our test set, and our protocol of generating contrastive examples, is focused on selected pronouns to minimize the risk of producing contrastive examples that are actually valid translations." ], [ "Contrastive evaluation requires a large set of suitable examples that involve the translation of pronouns. As additional goals, our test set is designed to 1) focus on hard cases, so that it can be used as a benchmark to track progress in context-aware translation and 2) allow for fine-grained analysis.", "Section SECREF14 describes how we extract our data set. Section SECREF26 explains how, given a set of contrastive examples, contrastive evaluation works." ], [ "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun.", "The most challenging cases are translating it to either er, sie or es, depending on the grammatical gender of the antecedent. Not only is the translation of it ambiguous, there is also class imbalance in the training data (see Table TABREF18 ). There is roughly a 30% probability that it is aligned to es, which makes it difficult to learn to translate er and sie. We use parsing and automatic co-reference resolution to find translation pairs that satisfy our constraints.", "To provide a basis for filtering with constraints, we tokenize the whole data set with the Moses tokenizer, generate symmetric word alignments with fast_align BIBREF23 , parse the English text with CoreNLP BIBREF24 , parse the German text with ParZu BIBREF25 and perform coreference resolution on both sides. 
The coreference chains are obtained with the neural model of CoreNLP for English, and with CorZu for German BIBREF26 , respectively.", "Then we opt for high-precision, aggressive filtering, according to the following protocol: for each pair of sentences INLINEFORM0 in English and German, extract iff", " INLINEFORM0 contains the English pronoun it, and INLINEFORM1 contains a German pronoun that is third person singular (er, sie or es), as indicated by their part-of-speech tags;", "those pronouns are aligned to each other;", "both pronouns are in a coreference chain;", "their nominal antecedents in the coreference chain are aligned on word level.", "This removes most candidate pairs, but is necessary to overcome the noise introduced by our preprocessing pipeline, most notably coreference resolution. From the filtered set, we create a balanced test set by randomly sampling 4000 instances of each of the three translations of it under consideration (er, sie, es). We do not balance antecedent distance. See Table TABREF25 for the distribution of pronoun pairs and antecedent distance in the test set.", "For each sentence pair in the resulting test set, we introduce contrastive translations. A contrastive translation is a translation variant where the correct pronoun is swapped with an incorrect one. For an example, see Table TABREF19 , where the pronoun it in the original translation corresponds to sie because the antecedent bat is a feminine noun in German (Fledermaus). We produce wrong translations by replacing sie with one of the other pronouns (er, es).", "Note that, by themselves, these contrastive translations are grammatically correct if the antecedent is outside the current sentence. The test set also contains pronouns with an antecedent in the same sentence (antecedent distance 0). Those examples do not require any additional context for disambiguation and we therefore expect the sentence-level baseline to perform well on them.", "We take extra care to ensure that the resulting contrastive translations are grammatically correct, because ungrammatical sentences are easily dismissed by an NMT system. For instance, if there are any possessive pronouns (such as seine) in the sentence, we also change their gender to match the personal pronoun replacement.", "The German coreference resolution system does not resolve es because most instances of es in German are either non-referential forms, or they refer to a clause instead of a nominal antecedent. We limit the test set to nominal antecedents, as these are the only ambiguous cases with respect to translation. For this reason, we have to rely entirely on the English coreference links for the extraction of sentence pairs with it INLINEFORM0 es, as opposed to pairs with it INLINEFORM1 er and it INLINEFORM2 sie where we have coreference chains in both languages.", "Our extraction process respects document boundaries, to ensure we always search for the right context. We extract additional information from the annotated documents, such as the distance (in sentences) between pronouns and their antecedents, the document of origin, lemma, morphology and dependency information if available." ], [ "Contrastive evaluation is different from conventional evaluation of machine translation in that it does not require any translation. 
Rather than testing a model's ability to translate, it is a method to test a model's ability to discriminate between given good and bad translations.", "We exploit the fact that NMT systems are in fact language models of the target language, conditioned on source text. Like language models, NMT systems can be used to compute a model score (the negative log probability) for an existing translation. Contrastive evaluation, then, means to compare the model score of two pairs of inputs: INLINEFORM0 and INLINEFORM1 . If the model score of the actual reference translation is higher, we assume that this model can detect wrong pronoun translations.", "However, this does not mean that systems actually produce the reference translation when given the source sentence for translation. An entirely different target sequence might rank higher in the system's beam during decoding. The only conclusion permitted by contrastive evaluation is whether or not the reference translation is more probable than a contrastive variant.", "If the model score of the reference is indeed higher, we refer to this outcome as a “correct decision” by the model. The model's decision is only correct if the reference translation has a higher score than any contrastive translation. In our evaluation, we aggregate model decisions on the whole test set and report the overall percentage of correct decisions as accuracy.", "During scoring, the model is provided with reference translations as target context, while during translation, the model needs to predict the full sequence. It is an open question to what extent performance deteriorates when context is itself predicted, and thus noisy. We highlight that the same problem arises for sentence-level NMT, and has been addressed with alternative training strategies BIBREF27 ." ], [ "We consider the following recurrent baselines:", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. This very simple method is inspired by BIBREF5 .", "The following models are taken, or slightly adapted, from BIBREF9 . For this reason, we give only a very short description of them here and the reader is referred to their work for details.", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "For each variant, we also introduce and test weight tying: we share the parameters of embedding matrices between encoders that read the same kind of text (source or target side)." 
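The weight tying just described, sharing the embedding matrices between encoders that read the same kind of text, can be expressed in a few lines of PyTorch. This is a simplified sketch of the idea behind the .tied variants, not the Nematus implementation; module and variable names are made up.

```python
# Sketch of parameter tying for a multi-encoder model: the context encoder reuses the
# main encoder's source-side embedding matrix instead of learning a separate one.
import torch.nn as nn

class TiedContextEncoders(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, hidden: int):
        super().__init__()
        self.src_embed = nn.Embedding(vocab_size, emb_dim)   # shared embeddings
        self.main_rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.ctx_rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, src_ids, ctx_ids):
        # both encoders read source-language text, so one embedding matrix serves both
        main_states, _ = self.main_rnn(self.src_embed(src_ids))
        ctx_states, _ = self.ctx_rnn(self.src_embed(ctx_ids))
        return main_states, ctx_states
```

Sharing the matrix keeps the extra encoder's parameter count small, which fits the paper's hypothesis that the training signal from inter-sentential context is too weak to train a strong contextual encoder from scratch.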
], [ "All remaining models are based on the Transformer architecture BIBREF2 . A Transformer avoids recurrence completely: it follows an encoder-decoder architecture using stacked self-attention and fully connected layers for both the encoder and decoder.", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .", " BIBREF8 A more sophisticated context-aware Transformer that uses source context only. It has a separate encoder for source context, but all layers except the last one are shared between encoders. A source and context sentence are first encoded independently, and then a single attention layer and a gating function are used to produce a context-aware representation of the source sentence. Such restricted interaction with context is shown to be beneficial for analysis of contextual phenomena captured by the model. For details the reader is referred to their work." ], [ "We train all models on the data from the WMT 2017 English INLINEFORM0 German news translation shared task ( INLINEFORM1 5.8 million sentence pairs). These corpora do not have document boundaries, therefore a small fraction of sentences will be paired with wrong context, but we expect the model to be robust against occasional random context (see also BIBREF8 ). Experimental setups for the RNN and Transformer models are different, and we describe them separately.", "All RNN-based models are trained with Nematus BIBREF28 . We learn a joint BPE model with 89.5k merge operations BIBREF29 . We train shallow models with an embedding size of 512, a hidden layer size of 1024 and layer normalization. Models are trained with Adam BIBREF30 , with an initial learning rate of 0.0001. We apply early stopping based on validation perplexity. The batch size for training is 80, and the maximum length of training sequences is 100 (if input sentences are concatenated) or 50 (if input lines are single sentences).", "For our Transformer-based experiments, we use a custom implementation and follow the hyperparameters from BIBREF2 , BIBREF8 . Systems are trained on lowercased text that was encoded using BPE (32k merge operations). Models consist of 6 encoder and decoder layers with 8 attention heads. The hidden state size is 512, the size of feedforward layers is 2048.", "Model performance is evaluated in terms of BLEU, on newstest2017, newstest2018 and all sentence pairs from our pronoun test set. We compute scores with SacreBLEU BIBREF31 . Evaluation with BLEU is done mainly to control for overall translation quality.", "To evaluate pronoun translation, we perform contrastive evaluation and report the accuracy of models on our contrastive test set." ], [ "The BLEU scores in Table TABREF30 show a moderate improvement for most context-aware systems. This suggests that the architectural changes for the context-aware models do not degrade overall translation quality. 
The contrastive evaluation on our test set on the other hand shows a clear increase in the accuracy of pronoun translation: The best model s-hier-to-2.tied achieves a total of +16 percentage points accuracy on the test set over the baseline, see Table TABREF31 .", "Table TABREF32 shows that context-aware models perform better than the baseline when the antecedent is outside the current sentence. In our experiments, all context-aware models consider one preceding sentence as context. The evaluation according to the distance of the antecedent in Table TABREF35 confirms that the subset of sentences with antecedent distance 1 benefits most from the tested context-aware models (up to +20 percentage points accuracy). However, we note two surprising patterns:", "The first observation can be explained by the distribution of German pronouns in the test set. The further away the antecedent, the higher the percentage of it INLINEFORM0 es cases, which are the majority class, and thus the class that will be predicted most often if evidence for other classes is lacking. We speculate that this is due to our more permissive extraction heuristics for it INLINEFORM1 es.", "We attribute the second observation to the existence of coreference chains where the preceding sentence contains a pronoun that refers to the same nominal antecedent as the pronoun in the current sentence. Consider the example in Table TABREF36 : The nominal antecedent of it in the current sentence is door, Tür in German with feminine gender. The nominal antecedent occurs two sentences before the current sentence, but the German sentence in between contains the pronoun sie, which is a useful signal for the context-aware models, even though they cannot know the nominal antecedent.", "Note that only models aware of target-side context can benefit from such circumstances: The s-hier models as well as the Transformer model by BIBREF8 only see source side context, which results in lower accuracy if the distance to the antecedent is INLINEFORM0 1, see Table TABREF35 .", "While such coreference chains complicate the interpretation of the results, we note that improvements on inter-sentential anaphora with antecedent distance INLINEFORM0 are relatively small (compared to distance 1), and that performance is still relatively poor (especially for the minority classes er and sie). We encourage evaluation of wider-context models on this subset, which is still large thanks to the size of the full test set.", "Regarding the comparison of different context-aware architectures, our results demonstrate the effectiveness of parameter sharing between the main encoder (or decoder) and the contextual encoder. We observe an improvement of 5 percentage points from s-hier-to-2 to s-hier-to-2.tied, and 4 percentage points from s-t-hier to s-t-hier.tied. Context encoders introduce a large number of extra parameters, while inter-sentential context is only relevant for a relatively small number of predictions. We hypothesize that the training signal is thus too weak to train a strong contextual encoder in an end-to-end fashion without parameter sharing. Our results also confirm the finding by BIBREF9 that multi-encoder architectures, specifically s-hier-to-2(.tied), can outperform a simple concatenation system in the translation of coreferential pronouns.", "The Transformer-based models perform strongest on pronouns with intra-segmental antecedent, outperforming the recurrent baseline by 9–18 percentage points. 
This is likely an effect of increased model depth and the self-attentional architecture in this set of experiments. The model by BIBREF8 only uses source context, and outperforms the most comparable RNN system, s-hier.tied. However, the Transformer-based concat22 slightly underperforms the RNN-based concat22, and we consider it future research how to better exploit target context with Transformer-based models." ], [ "We present a large-scale test suite to specifically test the capacity of NMT models to translate pronouns correctly. The test set contains 12,000 difficult cases of pronoun translations from English it to its German counterparts er, sie and es, extracted automatically from OpenSubtitles BIBREF22 .", "We evaluate recently proposed context-aware models on our test set. Even though the increase in BLEU score is moderate for all context-aware models, the improvement in the translation of pronouns is considerable: The best model (s-hier-to-2.tied) achieves a +16 percentage points gain in accuracy over the baseline.", "Our experiments confirm the importance of careful architecture design, with multi-encoder architectures outperforming a model that simply concatenates context sentences. We also demonstrate the effectiveness of parameter sharing between encoders of a context-aware model.", "We hope the test set will prove useful for empirically validating novel architectures for context-aware NMT. So far, we have only evaluated models that consider one sentence of context, but the nominal antecedent is more distant for a sizable proportion of the test set, and the evaluation of variable-size context models BIBREF7 , BIBREF10 is interesting future work." ], [ "We are grateful to the Swiss National Science Foundation (SNF) for supporting the project CoNTra (grant number 105212_169888)." ] ], "section_name": [ "Introduction", "Related Work", "Context-Aware NMT Models", "Evaluation of Pronoun Translation", "Test set with contrastive examples", "Automatic extraction of contrastive examples from corpora", "Evaluation by scoring", "Recurrent Models", "Transformer Models", "Experiments", "Evaluation", "Conclusions", "Acknowledgements" ] }
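The contrastive variants in the test suite are produced by swapping the annotated German pronoun for the two incorrect alternatives; a simplified sketch of that step is below. It omits the additional agreement fixes (e.g., possessive pronouns) that the paper applies to keep contrastive sentences grammatical, and the example sentence is invented.

```python
# Simplified generation of contrastive translations: swap the aligned German pronoun
# (er / sie / es) at its known token position for the two incorrect alternatives.
def contrastive_variants(tokens, pronoun_index):
    pronoun = tokens[pronoun_index]
    variants = []
    for alt in ("er", "sie", "es"):
        if alt != pronoun:
            swapped = list(tokens)
            swapped[pronoun_index] = alt
            variants.append(" ".join(swapped))
    return variants

print(contrastive_variants("denn sie war kaputt .".split(), pronoun_index=1))
# -> ['denn er war kaputt .', 'denn es war kaputt .']
```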
{ "answers": [ { "annotation_id": [ "10afb95ba5499c6dee08f72d91f08a23b877ec5c", "3c9d47b4a9cb80dfb5c629cfe5698b477f0931f8", "55912f5118690277f9d138b9dbbf454c3ab3d044", "ce4d3688f77e3fb19658291b7c382a04d1f81b72" ], "answer": [ { "evidence": [ "Automatic extraction of contrastive examples from corpora", "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Automatic extraction of contrastive examples from corpora\nWe automatically create a test set from the OpenSubtitles corpus BIBREF22 ." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Contrastive evaluation requires a large set of suitable examples that involve the translation of pronouns. As additional goals, our test set is designed to 1) focus on hard cases, so that it can be used as a benchmark to track progress in context-aware translation and 2) allow for fine-grained analysis.", "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Contrastive evaluation requires a large set of suitable examples that involve the translation of pronouns. As additional goals, our test set is designed to 1) focus on hard cases, so that it can be used as a benchmark to track progress in context-aware translation and 2) allow for fine-grained analysis.", "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun." ], "extractive_spans": [], "free_form_answer": "It is automatically created from the OpenSubtitles corpus.", "highlighted_evidence": [ "We automatically create a test set from the OpenSubtitles corpus BIBREF22 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows to specifically measure a model's capability to correctly translate pronouns. 
The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "aec6d187943165f39d0ee8be814e01f5e659ae90", "b1f41fc35091704b8374874f0cb98bdde5ab5774", "ca17f7efdcf5006231f473a04ab8ad0b5e98474c" ], "answer": [ { "evidence": [ "We consider the following recurrent baselines:", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. This very simple method is inspired by BIBREF5 .", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 ." ], "extractive_spans": [ "bidirectional RNN model with attention", "concat22", "s-hier", "s-t-hier", "s-hier-to-2", "Transformer-base", "concat22", "concat21" ], "free_form_answer": "", "highlighted_evidence": [ "We consider the following recurrent baselines:\n\nbaseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. ", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. ", "s-hier A multi-encoder architecture with hierarchical attention. ", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22.", "baseline A standard context-agnostic Transformer. 
All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "All remaining models are based on the Transformer architecture BIBREF2 . A Transformer avoids recurrence completely: it follows an encoder-decoder architecture using stacked self-attention and fully connected layers for both the encoder and decoder.", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 ." ], "extractive_spans": [ " standard bidirectional RNN model with attention", "A standard context-agnostic Transformer" ], "free_form_answer": "", "highlighted_evidence": [ "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "All remaining models are based on the Transformer architecture BIBREF2 . A Transformer avoids recurrence completely: it follows an encoder-decoder architecture using stacked self-attention and fully connected layers for both the encoder and decoder.\n\nbaseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We consider the following recurrent baselines:", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. This very simple method is inspired by BIBREF5 .", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "baseline A standard context-agnostic Transformer. 
All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .", "BIBREF8 A more sophisticated context-aware Transformer that uses source context only. It has a separate encoder for source context, but all layers except the last one are shared between encoders. A source and context sentence are first encoded independently, and then a single attention layer and a gating function are used to produce a context-aware representation of the source sentence. Such restricted interaction with context is shown to be beneficial for analysis of contextual phenomena captured by the model. For details the reader is referred to their work." ], "extractive_spans": [ "standard bidirectional RNN model with attention", "concat22", "s-hier A multi-encoder architecture with hierarchical attention", "s-t-hier ", "s-hier-to-2 ", "A standard context-agnostic Transformer.", "concat22", "concat21", "BIBREF8" ], "free_form_answer": "", "highlighted_evidence": [ "We consider the following recurrent baselines:\n\nbaseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. ", "s-hier A multi-encoder architecture with hierarchical attention. ", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. ", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. ", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .\n\nconcat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.\n\nconcat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .\n\nBIBREF8 A more sophisticated context-aware Transformer that uses source context only. I" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "980d429255b89f5f06fe84b4b54c47e2fd513c60", "bfde49133be9ac7026682fa1ebe66592f9a797ac", "caa3c74e9d900efaeb883faaa0ce43e76fc1ac32" ], "answer": [ { "evidence": [ "We consider the following recurrent baselines:", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. This very simple method is inspired by BIBREF5 .", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. 
It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .", "BIBREF8 A more sophisticated context-aware Transformer that uses source context only. It has a separate encoder for source context, but all layers except the last one are shared between encoders. A source and context sentence are first encoded independently, and then a single attention layer and a gating function are used to produce a context-aware representation of the source sentence. Such restricted interaction with context is shown to be beneficial for analysis of contextual phenomena captured by the model. For details the reader is referred to their work." ], "extractive_spans": [ "standard bidirectional RNN model with attention", "concat22", "s-hier", "s-t-hier", "s-hier-to-2", "standard context-agnostic Transformer", "concat22", "concat21", "BIBREF8" ], "free_form_answer": "", "highlighted_evidence": [ "We consider the following recurrent baselines:\n\nbaseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus.", "s-hier A multi-encoder architecture with hierarchical attention. ", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. ", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. ", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .\n\nconcat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.\n\nconcat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .\n\nBIBREF8 A more sophisticated context-aware Transformer that uses source context only. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 .", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. 
It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. This very simple method is inspired by BIBREF5 .", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "baseline A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .", "BIBREF8 A more sophisticated context-aware Transformer that uses source context only. It has a separate encoder for source context, but all layers except the last one are shared between encoders. A source and context sentence are first encoded independently, and then a single attention layer and a gating function are used to produce a context-aware representation of the source sentence. Such restricted interaction with context is shown to be beneficial for analysis of contextual phenomena captured by the model. For details the reader is referred to their work." ], "extractive_spans": [ "bidirectional RNN", "concat22", "s-hier", "s-t-hier", "s-hier-to-2", "Transformer-base", "concat22", "concat21", "BIBREF8" ], "free_form_answer": "", "highlighted_evidence": [ " The context-aware models we use in our experiments are detailed in Section SECREF4 .", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. ", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. ", "s-hier A multi-encoder architecture with hierarchical attention.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. ", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22.", "A standard context-agnostic Transformer. All model parameters are identical to a Transformer-base in BIBREF2 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. ", "BIBREF8 A more sophisticated context-aware Transformer that uses source context only. 
" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "This section describes several context-aware NMT models that we use in our experiments. They fall into two major categories: models based on RNNs and models based on the Transformer architecture BIBREF2 . We experiment with additional context on the source side and target side.", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. This very simple method is inspired by BIBREF5 .", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.", "concat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .", "BIBREF8 A more sophisticated context-aware Transformer that uses source context only. It has a separate encoder for source context, but all layers except the last one are shared between encoders. A source and context sentence are first encoded independently, and then a single attention layer and a gating function are used to produce a context-aware representation of the source sentence. Such restricted interaction with context is shown to be beneficial for analysis of contextual phenomena captured by the model. For details the reader is referred to their work." ], "extractive_spans": [ "a standard bidirectional RNN model with attention", "concat22 ", "s-hier", "s-t-hier", "s-hier-to-2", "concat21 ", "BIBREF8 " ], "free_form_answer": "", "highlighted_evidence": [ "This section describes several context-aware NMT models that we use in our experiments. They fall into two major categories: models based on RNNs and models based on the Transformer architecture BIBREF2 . We experiment with additional context on the source side and target side.", "baseline Our baseline model is a standard bidirectional RNN model with attention, trained with Nematus. It operates on the sentence level and does not see any additional context. The input and output embeddings of the decoder are tied, encoder embeddings are not.", "concat22 We concatenate each sentence with one preceding sentence, for both the source and target side of the corpus. Then we train on this new data set without any changes to the model architecture. 
This very simple method is inspired by BIBREF5 .", "s-hier A multi-encoder architecture with hierarchical attention. This model has access to one additional context: the previous source sentence. It is read by a separate encoder, and attended to by an additional attention network. The output of the resulting two attention vectors is combined with yet another attention network.", "s-t-hier Identical to s-hier, except that it considers two additional contexts: the previous source sentence and previous target sentence. Both are read by separate encoders, and sequences from all encoders are combined with hierarchical attention.", "s-hier-to-2 The model has an additional encoder for source context, whereas the target side of the corpus is concatenated, in the same way as for concat22. This model achieved the best results in BIBREF9 .", "concat22 A simple concatentation model where only the training data is modified, in the same way as for the recurrent concat22 model.\n\nconcat21 Trained on data where the preceding sentence is concatenated to the current one only on the source side. This model is also taken from BIBREF5 .", "BIBREF8 A more sophisticated context-aware Transformer that uses source context only. It has a separate encoder for source context, but all layers except the last one are shared between encoders. A source and context sentence are first encoded independently, and then a single attention layer and a gating function are used to produce a context-aware representation of the source sentence. Such restricted interaction with context is shown to be beneficial for analysis of contextual phenomena captured by the model. For details the reader is referred to their work" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "3809d618c702a4dc46f8279beaee8b2d68436b63", "e6f4e9326c8da28d8a670d77a9e35b2d98b63037", "fa0d1a156175cde585c55681bdffd889487baee0" ], "answer": [ { "evidence": [ "We train all models on the data from the WMT 2017 English INLINEFORM0 German news translation shared task ( INLINEFORM1 5.8 million sentence pairs). These corpora do not have document boundaries, therefore a small fraction of sentences will be paired with wrong context, but we expect the model to be robust against occasional random context (see also BIBREF8 ). Experimental setups for the RNN and Transformer models are different, and we describe them separately." ], "extractive_spans": [ "English", "German" ], "free_form_answer": "", "highlighted_evidence": [ "We train all models on the data from the WMT 2017 English INLINEFORM0 German news translation shared task ( INLINEFORM1 5.8 million sentence pairs). " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun." ], "extractive_spans": [ "English", "German " ], "free_form_answer": "", "highlighted_evidence": [ "We automatically create a test set from the OpenSubtitles corpus BIBREF22 . The goal is to provide a large number of difficult test cases where an English pronoun has to be translated to a German pronoun." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "We train all models on the data from the WMT 2017 English INLINEFORM0 German news translation shared task ( INLINEFORM1 5.8 million sentence pairs). These corpora do not have document boundaries, therefore a small fraction of sentences will be paired with wrong context, but we expect the model to be robust against occasional random context (see also BIBREF8 ). Experimental setups for the RNN and Transformer models are different, and we describe them separately." ], "extractive_spans": [ "English ", "German " ], "free_form_answer": "", "highlighted_evidence": [ "We train all models on the data from the WMT 2017 English INLINEFORM0 German news translation shared task ( INLINEFORM1 5.8 million sentence pairs). These corpora do not have document boundaries, therefore a small fraction of sentences will be paired with wrong context, but we expect the model to be robust against occasional random context (see also BIBREF8 ). Experimental setups for the RNN and Transformer models are different, and we describe them separately." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "did they collect their own contrastive test set?", "what are the baselines?", "what context aware models were experimented?", "what languages did they experiment on?" ], "question_id": [ "ae2142ee9e093ce485025168f4bcb3da4602739d", "ebe1084a06abdabefffc66f029eeb0b69f114fd9", "cfdd583d01abaca923f5c466bb20e1d4b8c749ff", "554d798e4ce58fd30820200c474d7e796dc8ba89" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Table 1: Overview of context-aware translation models in related work.", "Table 2: Frequency and probability of alignments of it in the training data of our systems (all data from the WMT 2017 news translation task). Alignments are produced by a fast_align model.", "Table 3: Example sentence pair with contrastive translations. An antecedent distance of 1 means that the antecedent is in the immediately preceding sentence.", "Table 4: Test set frequencies of pronoun pairs and antecedent distance (measured in sentences).", "Table 5: English→German BLEU scores on newstest2017, newstest2018 and all sentence pairs from our pronoun test set. Case-sensitive and case-insensitive (uncased) scores are reported. Higher is better, and the best scores are marked in bold.", "Table 6: Accuracy on contrastive test set (N=4000 per pronoun) with regard to reference pronoun.", "Table 7: Accuracy on contrastive test set with regard to antecedent location (within segment vs. outside segment).", "Table 8: Accuracy on contrastive test set with regard to antecedent distance of antecedent (in sentences).", "Table 9: Example where 1) antecedent distance is >1 and 2) the context given contains another pronoun as an additional hint." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Table4-1.png", "9-Table5-1.png", "9-Table6-1.png", "9-Table7-1.png", "10-Table8-1.png", "10-Table9-1.png" ] }
[ "did they collect their own contrastive test set?" ]
[ [ "1810.02268-Test set with contrastive examples-0", "1810.02268-Introduction-3", "1810.02268-Automatic extraction of contrastive examples from corpora-0" ] ]
[ "It is automatically created from the OpenSubtitles corpus." ]
28
1909.12079
Improving Fine-grained Entity Typing with Entity Linking
Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require understanding the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrate the effectiveness of our approach. On both datasets, it achieves more than 5\% absolute strict accuracy improvement over the state of the art.
{ "paragraphs": [ [ "Given a piece of text and the span of an entity mention in this text, fine-grained entity typing (FET) is the task of assigning fine-grained type labels to the mention BIBREF0. The assigned labels should be context dependent BIBREF1. For example, in the sentence “Trump threatens to pull US out of World Trade Organization,” the mention “Trump” should be labeled as /person and /person/politician, although Donald Trump also had other occupations such as businessman, TV personality, etc.", "This task is challenging because it usually uses a relatively large tag set, and some mentions may require the understanding of the context to be correctly labeled. Moreover, since manual annotation is very labor-intensive, existing approaches have to rely on distant supervision to train models BIBREF0, BIBREF2.", "Thus, the use of extra information to help with the classification process becomes very important. In this paper, we improve FET with entity linking (EL). EL is helpful for a model to make typing decisions because if a mention is correctly linked to its target entity, we can directly obtain the type information about this entity in the knowledge base (KB). For example, in the sentence “There were some great discussions on a variety of issues facing Federal Way,” the mention “Federal Way” may be incorrectly labeled as a company by some FET models. Such a mistake can be avoided after linking it to the city Federal Way, Washington. For cases that require the understanding of the context, using entity linking results is also beneficial. In the aforementioned example where “Trump” is the mention, obtaining all the types of Donald Trump in the knowledge base (e.g., politician, businessman, TV personality, etc.) is still informative for inferring the correct type (i.e., politician) that fits the context, since they narrow the possible labels down.", "However, the information obtained through EL should not be fully trusted since it is not always accurate. Even when a mention is correctly linked to an entity, the type information of this entity in the KB may be incomplete or outdated. Thus, in this paper, we propose a deep neural fine-grained entity typing model that flexibly predicts labels based on the context, the mention string, and the type information from KB obtained with EL.", "Using EL also introduces a new problem for the training process. Currently, a widely used approach to create FET training samples is to use the anchor links in Wikipedia BIBREF0, BIBREF3. Each anchor link is regarded as a mention, and is weakly labeled with all the types of its referred entity (the Wikipedia page the anchor link points to) in KB. Our approach, when it links the mention correctly, also uses all the types of the referred entity in KB as extra information. This may cause the trained model to overfit the weakly labeled data. We design a variant of the hinge loss and introduce noise during training to address this problem.", "We conduct experiments on two commonly used FET datasets. Experimental results show that introducing information obtained through entity linking and having a deep neural model both help to improve FET performance. 
Our model achieves more than 5% absolute strict accuracy improvement over the state of the art on both datasets.", "Our contributions are summarized as follows:", "We propose a deep neural fine-grained entity typing model that utilizes type information from KB obtained through entity linking.", "We address the problem that our model may overfit the weakly labeled data by using a variant of the hinge-loss and introducing noise during training.", "We demonstrate the effectiveness of our approach with experimental results on commonly used FET datasets.", "Our code is available at https://github.com/HKUST-KnowComp/IFETEL." ], [ "An early effort of classifying named entities into fine-grained types can be found in BIBREF4, which only focuses on person names. Later, datasets with larger type sets are constructed BIBREF5, BIBREF0, BIBREF6. These datasets are preferred by recent studies BIBREF3, BIBREF7.", "Most of the existing approaches proposed for FET are learning-based. The features used by these approaches can either be hand-crafted BIBREF0, BIBREF1 or learned from neural network models BIBREF8, BIBREF9, BIBREF10. Since FET systems usually use distant supervision for training, the labels of the training samples can be noisy, erroneous, or overly specific. Several studies BIBREF11, BIBREF12, BIBREF9 address these problems by separating clean mentions and noisy mentions, modeling type correction BIBREF3, using a hierarchy-aware loss BIBREF9, etc.", "BIBREF13 and BIBREF14 are two studies that are most related to this paper. BIBREF13 propose an unsupervised FET system where EL is an important component. But they use EL to help with clustering and type name selection, which is very different from how we use it to improve the performance of a supervised FET model. BIBREF14 finds related entities based on the context instead of directly applying EL. The types of these entities are then used for inferring the type of the mention." ], [ "Let $T$ be a predefined tag set, which includes all the types we want to assign to mentions. Given a mention $m$ and its context, the task is to predict a set of types $\\mathbf {\\tau }\\subset T$ suitable for this mention. Thus, this is a multi-class, multi-label classification problem BIBREF0. Next, we will introduce our approach for this problem in detail, including the neural model, the training of the model, and the entity linking algorithm we use." ], [ "Each input sample to our FET system contains one mention and the sentence it belongs to. We denote $w_1,w_2,...,w_n$ as the words in the current sentence, $w_{p_1},w_{p_2},...,w_{p_l}$ as the words in the mention string, where $n$ is the number of words in the sentence, $p_1,...,p_l$ are the indices of the words in the mention string, $l$ is the number of words in the mention string. We also use a set of pretrained word embeddings.", "Our FET approach is illustrated in Figure FIGREF4. It first constructs three representations: context representation, mention string representation, and KB type representation. Note that the KB type representation is obtained from a knowledge base through entity linking and is independent of the context of the mention." ], [ "To obtain the context representation, we first use a special token $w_m$ to represent the mention (the token “[Mention]” in Figure FIGREF4). Then, the word sequence of the sentence becomes $w_1,...,w_{p_l-1},w_m,w_{p_l+1},...,w_n$. Their corresponding word embeddings are fed into two layers of BiLSTMs. 
Let $\\mathbf {h}_m^1$ and $\\mathbf {h}_m^2$ be the output of the first and the second layer of BiLSTMs for $w_m$, respectively. We use $\\mathbf {f}_c=\\mathbf {h}_m^1+\\mathbf {h}_m^2$ as the context representation vector." ], [ "Let $\\mathbf {x}_1,...,\\mathbf {x}_l$ be the word embeddings of the mention string words $w_{p_1},...,w_{p_l}$. Then the mention string representation $\\mathbf {f}_s=(\\sum _{i=1}^l \\mathbf {x}_i)/l$." ], [ "To obtain the KB type representation, we run an EL algorithm for the current mention. If the EL algorithm returns an entity, we retrieve the types of this entity from the KB. We use Freebase as our KB. Since the types in Freebase are different from $T$, the target type set, they are mapped to the types in $T$ with rules similar to those used in BIBREF14. Afterwards, we perform one-hot encoding on these types to get the KB Type Representation $\\mathbf {f}_e$. If the EL algorithm returns NIL (i.e., the mention cannot be linked to an entity), we simply one-hot encode the empty type set." ], [ "Apart from the three representations, we also obtain the score returned by our entity linking algorithm, which indicates its confidence on the linking result. We denote it as a one-dimensional vector $\\mathbf {g}$. Then, we get $\\mathbf {f}=\\mathbf {f}_c\\oplus \\mathbf {f}_s\\oplus \\mathbf {f}_e\\oplus \\mathbf {g}$, where $\\oplus $ means concatenation. $\\mathbf {f}$ is then fed into an MLP that contains three dense layers to obtain $\\mathbf {u}_m$, our final representation for the current mention sample $m$. Let $t_1,t_2,...,t_k$ be all the types in $T$, where $k=|T|$. We embed them into the same space as $\\mathbf {u}_m$ by assigning each of them a dense vector BIBREF15. These vectors are denoted as $\\mathbf {t}_1,...,\\mathbf {t}_k$. Then the score of the mention $m$ having the type $t_i\\in T$ is calculated as the dot product of $\\mathbf {u}_m$ and $\\mathbf {t}_i$: $s(m,t_i)=\\mathbf {u}_m\\cdot \\mathbf {t}_i$.", "We predict $t_i$ as a type of $m$ if $s(m,t_i)>0$." ], [ "Following existing studies, we also generate training data by using the anchor links in Wikipedia. Each anchor link can be used as a mention. These mentions are labeled by mapping the Freebase types of the target entries to the tag set $T$ BIBREF0.", "Since the KB type representations we use in our FET model are also obtained through mapping Freebase types, they will perfectly match the automatically generated labels for the mentions that are correctly linked (i.e., when the entity returned by the EL algorithm and the target entry of the anchor link are the same). For example, in Figure FIGREF4, suppose the example sentence is a training sample obtained from Wikipedia, where “Donald Trump” is an anchor link that points to the Wikipedia page of Donald Trump. After mapping the Freebase types of Donald Trump to the target tag set, this sample will be weakly annotated as /person/politician, /person/tv_personality, and /person/business, which is exactly the same as the type information (the “Types From KB” in Figure FIGREF4) obtained through EL. Thus, during training, when the EL system links the mention to the correct entity, the model only needs to output the types in the KB type representation. This may cause the trained model to overfit the weakly labeled training data. For most types of entities such as locations and organizations, it is fine since they usually have the same types in different contexts. 
But it is problematic for person mentions, as their types can be context dependent.", "To address this problem, during training, if a mention is linked to a person entity by our entity linking algorithm, we add a random fine-grained person type label that does not belong to this entity while generating the KB type representation. For example, if the mention is linked to a person with types /person/actor and /person/author, a random label /person/politician may be added. This will force the model to still infer the type labels from the context even when the mention is correctly linked, since the KB type representation no longer perfectly match the weak labels.", "To make it more flexible, we also propose to use a variant of the hinge loss used by BIBREF16 to train our model:", "where $\\tau _m$ is the correct type set for mention $m$, $\\bar{\\tau }_m$ is the incorrect type set. $\\lambda (t)\\in [1,+\\infty )$ is a predefined parameter to impose a larger penalty if the type $t$ is incorrectly predicted as positive. Since the problem of overfitting the weakly annotated labels is more severe for person mentions, we set $\\lambda (t)=\\lambda _P$ if $t$ is a fine-grained person type, and $\\lambda (t)=1$ for all other types.", "During training, we also randomly set the EL results of half of the training samples to be NIL. So that the model can perform well for mentions that cannot be linked to the KB at test time." ], [ "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. Commonness BIBREF17, BIBREF18 is calculated base on the anchor links in Wikipedia. It estimates the probability of an entity given only the mention string. In our FET approach, the commonness score is also used as the confidence on the linking result (i.e., the $\\mathbf {g}$ used in the prediction part of Subsection SECREF5). Within a same document, we also use the same heuristic used in BIBREF19 to find coreferences of generic mentions of persons (e.g., “Matt”) to more specific mentions (e.g., “Matt Damon”).", "We also tried other more advanced EL methods in our experiments. However, they do not improve the final performance of our model. Experimental results of using the EL system proposed in BIBREF19 is provided in Section SECREF4." ], [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. The sizes of their tag sets are 113 and 47, respectively. FIGER (GOLD) allows mentions to have multiple type paths, but BBN does not. Another commonly used dataset, OntoNotes BIBREF1, is not used since it contains many pronoun and common noun phrase mentions such as “it,” “he,” “a thrift institution,” which are not suitable to directly apply entity linking on.", "Following BIBREF0, we generate weakly labeled datasets for training with Wikipedia anchor links. Since the tag sets used by FIGER (GOLD) and BBN are different, we create a training set for each of them. For each dataset, $2,000$ weakly labeled samples are randomly picked to form a development set. We also manually annotated 50 person mentions collected from news articles for tuning the parameter $\\lambda _P$.", "We use the 300 dimensional pretrained GloVe word vectors provided by BIBREF20. The hidden layer sizes of the two layers of BiLSTMs are both set to 250. For the three-layer MLP, the size of the two hidden layers are both set to 500. The size of the type embeddings is 500. $\\lambda _P$ is set to 2.0. 
We also apply batch normalization and dropout to the input of each dense layer in our three-layer MLP during training.", "We use strict accuracy, Macro F1, and Micro F1 to evaluate fine-grained typing performance BIBREF0." ], [ "We compare with the following existing approaches: AFET BIBREF3, AAA BIBREF16, NFETC BIBREF9, and CLSC BIBREF21.", "We use Ours (Full) to represent our full model, and also compare with five variants of our own approach: Ours (DirectTrain) is trained without adding random person types while obtaining the KB type representation, and $\\lambda _P$ is set to 1; Ours (NoEL) does not use entity linking, i.e., the KB type representation and the entity linking confidence score are removed, and the model is trained in DirectTrain style; Ours (NonDeep) uses one BiLSTM layer and replaces the MLP with a dense layer; Ours (NonDeep NoEL) is the NoEL version of Ours (NonDeep); Ours (LocAttEL) uses the entity linking approach proposed in BIBREF19 instead of our own commonness-based approach. Ours (Full), Ours (DirectTrain), and Ours (NonDeep) all use our own commonness-based entity linking approach." ], [ "The experimental results are listed in Table TABREF16. As we can see, our approach performs much better than existing approaches on both datasets.", "The benefit of using entity linking in our approach can be verified by comparing Ours (Full) and Ours (NoEL). The performance on both datasets decreases if the entity linking part is removed. Especially on FIGER (GOLD), the strict accuracy drops from 75.5 to 69.8. Using entity linking improves less on BBN. We think this is because of three reasons: 1) BBN has a much smaller tag set than FIGER (GOLD); 2) BBN does not allow a mention to be annotated with multiple type paths (e.g., labeling a mention with both /building and /location is not allowed), thus the task is easier; 3) By making the model deep, the performance on BBN is already improved a lot, which makes further improvement harder.", "The improvement of our full approach over Ours (DirectTrain) on FIGER (GOLD) indicates that the techniques we use to avoid overfitting the weakly labeled data are also effective.", "Ours (LocAttEL), which uses a more advanced EL system, does not achieve better performance than Ours (Full), which uses our own EL approach. After manually checking the results of the two EL approaches and the predictions of our model on FIGER (GOLD), we think this is mainly because: 1) Our model also uses the context while making predictions. Sometimes, if it “thinks” that the type information provided by EL is incorrect, it may not use it. 2) The performances of different EL approaches also depend on the dataset and the types of entities used for evaluation. We find that on FIGER (GOLD), the approach in BIBREF19 is better at distinguishing locations and sports teams, but it may also make some mistakes that our simple EL method does not. For example, it may incorrectly link “March,” the month, to an entity whose Wikipedia description fits the context better. 3) For some mentions, although the EL system links them to an incorrect entity, the type of this entity is the same as that of the correct entity." ], [ "We propose a deep neural model to improve fine-grained entity typing with entity linking. The problem of overfitting the weakly labeled training data is addressed by using a variant of the hinge loss and introducing noise during training. We conduct experiments on two commonly used datasets. The experimental results demonstrate the effectiveness of our approach."
], [ "This paper was supported by the Early Career Scheme (ECS, No. 26206717) from Research Grants Council in Hong Kong and WeChat-HKUST WHAT Lab on Artificial Intelligence Technology." ] ], "section_name": [ "Introduction", "Related Work", "Method", "Method ::: Fine-grained Entity Typing Model ::: Input", "Method ::: Fine-grained Entity Typing Model ::: Context Representation", "Method ::: Fine-grained Entity Typing Model ::: Mention String Representation", "Method ::: Fine-grained Entity Typing Model ::: KB Type Representation", "Method ::: Fine-grained Entity Typing Model ::: Prediction", "Method ::: Model Training", "Method ::: Entity Linking Algorithm", "Experiments ::: Setup", "Experiments ::: Compared Methods", "Experiments ::: Results", "Conclusions", "Acknowledgments" ] }
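To make the prediction layer and the training objective described in the full text above more concrete, the following is a minimal, hypothetical PyTorch sketch. The three-dense-layer MLP producing u_m, the 500-dimensional type embeddings, the dot-product score s(m, t_i), the predict-if-positive rule, and a per-type penalty lambda(t) >= 1 (with lambda_P = 2.0 for fine-grained person types) come from the prose; the exact margin form of the loss, the initialization, and the feature dimension are assumptions, not the authors' released IFETEL code.

```python
# Hedged sketch of the scoring layer and weighted hinge loss described in the paper text.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TypeScorer(nn.Module):
    """Mention representation u_m and per-type scores s(m, t_i) = u_m . t_i."""
    def __init__(self, feat_dim, num_types, hidden=500, type_dim=500):
        super().__init__()
        # f = f_c (+) f_s (+) f_e (+) g is pushed through three dense layers to obtain u_m
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, type_dim),
        )
        # one dense embedding vector per type label t_1 ... t_k
        self.type_emb = nn.Parameter(torch.randn(num_types, type_dim) * 0.01)

    def forward(self, f):
        u_m = self.mlp(f)                      # (batch, type_dim)
        return u_m @ self.type_emb.t()         # (batch, num_types), dot-product scores

def weighted_hinge_loss(scores, gold, lam):
    """gold: 0/1 multi-label matrix; lam: per-type penalty weights, all >= 1 (assumed form)."""
    pos = gold * F.relu(1.0 - scores)                  # correct types should score above the margin
    neg = (1.0 - gold) * lam * F.relu(1.0 + scores)    # wrong types are pushed down, weighted by lam(t)
    return (pos + neg).sum(dim=1).mean()

# Prediction rule from the text: assign every type t_i with s(m, t_i) > 0.
# scorer = TypeScorer(feat_dim=1601, num_types=113)    # feat_dim is a placeholder, not from the paper
# predictions = scorer(f) > 0
```

Under this reading, passing a lam vector that is 2.0 on fine-grained person types and 1.0 elsewhere reproduces the asymmetric penalty the paper uses to discourage overfitting the weakly labeled person mentions.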
{ "answers": [ { "annotation_id": [ "284055bca74cacc96f5c0860374fd313dd871753", "98e303088dbd884baf4ffea8825465b51a05f61e", "f799692f42c93fadb7835ccfc15dd3558240f117" ], "answer": [ { "evidence": [ "Given a piece of text and the span of an entity mention in this text, fine-grained entity typing (FET) is the task of assigning fine-grained type labels to the mention BIBREF0. The assigned labels should be context dependent BIBREF1. For example, in the sentence “Trump threatens to pull US out of World Trade Organization,” the mention “Trump” should be labeled as /person and /person/politician, although Donald Trump also had other occupations such as businessman, TV personality, etc.", "Thus, the use of extra information to help with the classification process becomes very important. In this paper, we improve FET with entity linking (EL). EL is helpful for a model to make typing decisions because if a mention is correctly linked to its target entity, we can directly obtain the type information about this entity in the knowledge base (KB). For example, in the sentence “There were some great discussions on a variety of issues facing Federal Way,” the mention “Federal Way” may be incorrectly labeled as a company by some FET models. Such a mistake can be avoided after linking it to the city Federal Way, Washington. For cases that require the understanding of the context, using entity linking results is also beneficial. In the aforementioned example where “Trump” is the mention, obtaining all the types of Donald Trump in the knowledge base (e.g., politician, businessman, TV personality, etc.) is still informative for inferring the correct type (i.e., politician) that fits the context, since they narrows the possible labels down.", "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. Commonness BIBREF17, BIBREF18 is calculated base on the anchor links in Wikipedia. It estimates the probability of an entity given only the mention string. In our FET approach, the commonness score is also used as the confidence on the linking result (i.e., the $\\mathbf {g}$ used in the prediction part of Subsection SECREF5). Within a same document, we also use the same heuristic used in BIBREF19 to find coreferences of generic mentions of persons (e.g., “Matt”) to more specific mentions (e.g., “Matt Damon”)." ], "extractive_spans": [], "free_form_answer": "They use an EL algorithm that links the mention to the entity with the help of the greatest commonness score.", "highlighted_evidence": [ "Given a piece of text and the span of an entity mention in this text, fine-grained entity typing (FET) is the task of assigning fine-grained type labels to the mention BIBREF0. ", "In this paper, we improve FET with entity linking (EL). ", "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. Commonness BIBREF17, BIBREF18 is calculated base on the anchor links in Wikipedia. It estimates the probability of an entity given only the mention string. In our FET approach, the commonness score is also used as the confidence on the linking result (i.e., the $\\mathbf {g}$ used in the prediction part of Subsection SECREF5). 
Within a same document, we also use the same heuristic used in BIBREF19 to find coreferences of generic mentions of persons (e.g., “Matt”) to more specific mentions (e.g., “Matt Damon”)." ], "extractive_spans": [], "free_form_answer": "The mention is linked to the entity with the greatest commonness score.", "highlighted_evidence": [ "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. Commonness BIBREF17, BIBREF18 is calculated base on the anchor links in Wikipedia. It estimates the probability of an entity given only the mention string. In our FET approach, the commonness score is also used as the confidence on the linking result (i.e., the $\\mathbf {g}$ used in the prediction part of Subsection SECREF5). Within a same document, we also use the same heuristic used in BIBREF19 to find coreferences of generic mentions of persons (e.g., “Matt”) to more specific mentions (e.g., “Matt Damon”)." ], "extractive_spans": [ "we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. Commonness BIBREF17, BIBREF18 is calculated base on the anchor links in Wikipedia. It estimates the probability of an entity given only the mention string." ], "free_form_answer": "", "highlighted_evidence": [ "In this paper, we use a simple EL algorithm that directly links the mention to the entity with the greatest commonness score. Commonness BIBREF17, BIBREF18 is calculated base on the anchor links in Wikipedia. It estimates the probability of an entity given only the mention string. In our FET approach, the commonness score is also used as the confidence on the linking result (i.e., the $\\mathbf {g}$ used in the prediction part of Subsection SECREF5). Within a same document, we also use the same heuristic used in BIBREF19 to find coreferences of generic mentions of persons (e.g., “Matt”) to more specific mentions (e.g., “Matt Damon”)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "003db3e0483221cda6b9e26edd8a3287c6cc574a", "c91bb3bc56e1557bb6b60c91ddc32434447eb322", "df999bb28a94325ab3fcc0b03727d060c99132f7" ], "answer": [ { "evidence": [ "Our FET approach is illustrated in Figure FIGREF4. It first constructs three representations: context representation, mention string representation, and KB type representation. Note that the KB type representation is obtained from a knowledge base through entity linking and is independent of the context of the mention.", "FLOAT SELECTED: Figure 1: Our approach. The example sentence is “Earlier on Tuesday, Donald Trump pledged to help hard-hit U.S. farmers caught in the middle of the escalating trade war.” Here, the correct label for the mention Donald Trump should be /person, /person/politician. “[Mention]” is a special token that we use to represent the mention.", "To obtain the context representation, we first use a special token $w_m$ to represent the mention (the token “[Mention]” in Figure FIGREF4). Then, the word sequence of the sentence becomes $w_1,...,w_{p_l-1},w_m,w_{p_l+1},...,w_n$. Their corresponding word embeddings are fed into two layers of BiLSTMs. 
Let $\\mathbf {h}_m^1$ and $\\mathbf {h}_m^2$ be the output of the first and the second layer of BiLSTMs for $w_m$, respectively. We use $\\mathbf {f}_c=\\mathbf {h}_m^1+\\mathbf {h}_m^2$ as the context representation vector.", "Apart from the three representations, we also obtain the score returned by our entity linking algorithm, which indicates its confidence on the linking result. We denote it as a one dimensional vector $\\mathbf {g}$. Then, we get $\\mathbf {f}=\\mathbf {f}_c\\oplus \\mathbf {f}_s\\oplus \\mathbf {f}_e\\oplus \\mathbf {g}$, where $\\oplus $ means concatenation. $\\mathbf {f}$ is then fed into an MLP that contains three dense layers to obtain $\\mathbf {u}_m$, out final representation for the current mention sample $m$. Let $t_1,t_2,...,t_k$ be all the types in $T$, where $k=|T|$. We embed them into the same space as $\\mathbf {u}_m$ by assigning each of them a dense vector BIBREF15. These vectors are denoted as $\\mathbf {t}_1,...,\\mathbf {t}_k$. Then the score of the mention $m$ having the type $t_i\\in T$ is calculated as the dot product of $\\mathbf {u}_m$ and $\\mathbf {t}_i$:" ], "extractive_spans": [ "BiLSTMs ", "MLP " ], "free_form_answer": "", "highlighted_evidence": [ "Our FET approach is illustrated in Figure FIGREF4. It first constructs three representations: context representation, mention string representation, and KB type representation. Note that the KB type representation is obtained from a knowledge base through entity linking and is independent of the context of the mention.", "FLOAT SELECTED: Figure 1: Our approach. The example sentence is “Earlier on Tuesday, Donald Trump pledged to help hard-hit U.S. farmers caught in the middle of the escalating trade war.” Here, the correct label for the mention Donald Trump should be /person, /person/politician. “[Mention]” is a special token that we use to represent the mention.", "To obtain the context representation, we first use a special token $w_m$ to represent the mention (the token “[Mention]” in Figure FIGREF4). Then, the word sequence of the sentence becomes $w_1,...,w_{p_l-1},w_m,w_{p_l+1},...,w_n$. Their corresponding word embeddings are fed into two layers of BiLSTMs. Let $\\mathbf {h}_m^1$ and $\\mathbf {h}_m^2$ be the output of the first and the second layer of BiLSTMs for $w_m$, respectively. We use $\\mathbf {f}_c=\\mathbf {h}_m^1+\\mathbf {h}_m^2$ as the context representation vector.", "Apart from the three representations, we also obtain the score returned by our entity linking algorithm, which indicates its confidence on the linking result. We denote it as a one dimensional vector $\\mathbf {g}$. Then, we get $\\mathbf {f}=\\mathbf {f}_c\\oplus \\mathbf {f}_s\\oplus \\mathbf {f}_e\\oplus \\mathbf {g}$, where $\\oplus $ means concatenation. $\\mathbf {f}$ is then fed into an MLP that contains three dense layers to obtain $\\mathbf {u}_m$, out final representation for the current mention sample $m$. Let $t_1,t_2,...,t_k$ be all the types in $T$, where $k=|T|$. We embed them into the same space as $\\mathbf {u}_m$ by assigning each of them a dense vector BIBREF15." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Figure 1: Our approach. The example sentence is “Earlier on Tuesday, Donald Trump pledged to help hard-hit U.S. farmers caught in the middle of the escalating trade war.” Here, the correct label for the mention Donald Trump should be /person, /person/politician. “[Mention]” is a special token that we use to represent the mention." 
], "extractive_spans": [], "free_form_answer": "BiLSTM with a three-layer perceptron", "highlighted_evidence": [ "FLOAT SELECTED: Figure 1: Our approach. The example sentence is “Earlier on Tuesday, Donald Trump pledged to help hard-hit U.S. farmers caught in the middle of the escalating trade war.” Here, the correct label for the mention Donald Trump should be /person, /person/politician. “[Mention]” is a special token that we use to represent the mention." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To obtain the context representation, we first use a special token $w_m$ to represent the mention (the token “[Mention]” in Figure FIGREF4). Then, the word sequence of the sentence becomes $w_1,...,w_{p_l-1},w_m,w_{p_l+1},...,w_n$. Their corresponding word embeddings are fed into two layers of BiLSTMs. Let $\\mathbf {h}_m^1$ and $\\mathbf {h}_m^2$ be the output of the first and the second layer of BiLSTMs for $w_m$, respectively. We use $\\mathbf {f}_c=\\mathbf {h}_m^1+\\mathbf {h}_m^2$ as the context representation vector." ], "extractive_spans": [ "BiLSTM" ], "free_form_answer": "", "highlighted_evidence": [ "To obtain the context representation, we first use a special token $w_m$ to represent the mention (the token “[Mention]” in Figure FIGREF4).", "Their corresponding word embeddings are fed into two layers of BiLSTMs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "9c6bf458f86dcc3ef2ba3704775dedb45e538ff5", "a44611e5f3862f6c4f88aea96c43e42d8b84eb6f", "a82db6645ee3820fc45089c45ba396d64daace0a" ], "answer": [ { "evidence": [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. The sizes of their tag sets are 113 and 47, respectively. FIGER (GOLD) allows mentions to have multiple type paths, but BBN does not. Another commonly used dataset, OntoNotes BIBREF1, is not used since it contains many pronoun and common noun phrase mentions such as “it,” “he,” “a thrift institution,” which are not suitable to directly apply entity linking on." ], "extractive_spans": [ "FIGER (GOLD) BIBREF0", "BBN BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. The sizes of their tag sets are 113 and 47, respectively. FIGER (GOLD) allows mentions to have multiple type paths, but BBN does not. Another commonly used dataset, OntoNotes BIBREF1, is not used since it contains many pronoun and common noun phrase mentions such as “it,” “he,” “a thrift institution,” which are not suitable to directly apply entity linking on." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. The sizes of their tag sets are 113 and 47, respectively. FIGER (GOLD) allows mentions to have multiple type paths, but BBN does not. Another commonly used dataset, OntoNotes BIBREF1, is not used since it contains many pronoun and common noun phrase mentions such as “it,” “he,” “a thrift institution,” which are not suitable to directly apply entity linking on." ], "extractive_spans": [ "FIGER (GOLD) ", "BBN" ], "free_form_answer": "", "highlighted_evidence": [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. The sizes of their tag sets are 113 and 47, respectively. 
FIGER (GOLD) allows mentions to have multiple type paths, but BBN does not. Another commonly used dataset, OntoNotes BIBREF1, is not used since it contains many pronoun and common noun phrase mentions such as “it,” “he,” “a thrift institution,” which are not suitable to directly apply entity linking on." ], "extractive_spans": [ "FIGER (GOLD)", "BBN" ], "free_form_answer": "", "highlighted_evidence": [ "We use two datasets: FIGER (GOLD) BIBREF0 and BBN BIBREF5. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "How do they obtain the entity linking results in their model?", "Which model architecture do they use?", "Which datasets do they evaluate on?" ], "question_id": [ "91e361e85c6d3884694f3c747d61bfcef171bab0", "6295951fda0cfa2eb4259d544b00bc7dade7c01e", "3f717e6eceab0a066af65ddf782c1ebc502c28c0" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Our approach. The example sentence is “Earlier on Tuesday, Donald Trump pledged to help hard-hit U.S. farmers caught in the middle of the escalating trade war.” Here, the correct label for the mention Donald Trump should be /person, /person/politician. “[Mention]” is a special token that we use to represent the mention.", "Table 1: Fine-grained entity typing performance. The performance of “Ours (DirectTrain)” on BBN is omitted since this dataset does not have fine-grained types for person." ], "file": [ "3-Figure1-1.png", "5-Table1-1.png" ] }
[ "How do they obtain the entity linking results in their model?", "Which model architecture do they use?" ]
[ [ "1909.12079-Introduction-2", "1909.12079-Method ::: Entity Linking Algorithm-0", "1909.12079-Introduction-0" ], [ "1909.12079-Method ::: Fine-grained Entity Typing Model ::: Input-1", "1909.12079-Method ::: Fine-grained Entity Typing Model ::: Context Representation-0", "1909.12079-Method ::: Fine-grained Entity Typing Model ::: Prediction-0", "1909.12079-3-Figure1-1.png" ] ]
[ "The mention is linked to the entity with the greatest commonness score.", "BiLSTM with a three-layer perceptron" ]
29
2003.11687
Common-Knowledge Concept Recognition for SEVA
We build a common-knowledge concept recognition system for a Systems Engineer's Virtual Assistant (SEVA) which can be used for downstream tasks such as relation extraction, knowledge graph construction, and question-answering. The problem is formulated as a token classification task similar to named entity extraction. With the help of a domain expert and text processing methods, we construct a dataset annotated at the word-level by carefully defining a labelling scheme to train a sequence model to recognize systems engineering concepts. We use a pre-trained language model and fine-tune it with the labeled dataset of concepts. In addition, we also create some essential datasets for information such as abbreviations and definitions from the systems engineering domain. Finally, we construct a simple knowledge graph using these extracted concepts along with some hyponym relations.
{ "paragraphs": [ [ "The Systems Engineer's Virtual Assistant (SEVA) BIBREF0 was introduced with the goal to assist systems engineers (SE) in their problem-solving abilities by keeping track of large amounts of information of a NASA-specific project and using the information to answer queries from the user. In this work, we address a system element by constructing a common-knowledge concept recognition system for improving the performance of SEVA, using the static knowledge collected from the Systems Engineering Handbook BIBREF1 that is widely used in projects across the organization as domain-specific commonsense knowledge. At NASA, although there exists knowledge engines and ontologies for the SE domain such as MBSE BIBREF2, IMCE BIBREF3, and OpenCaesar BIBREF4, generic commonsense acquisition is rarely discussed; we aim to address this challenge. SE commonsense comes from years of experience and learning which involves background knowledge that goes beyond any handbook. Although constructing an assistant like SEVA system is the overarching objective, a key problem to first address is to extract elementary common-knowledge concepts using the SE handbook and domain experts. We use the term `common-knowledge' as the `commonsense' knowledge of a specific domain. This knowledge can be seen as a pivot that can be used later to collect `commonsense' knowledge for the SE domain. We propose a preliminary research study that can pave a path towards a comprehensive commonsense knowledge acquisition for an effective Artificial Intelligence (AI) application for the SE domain. Overall structure of this work is summarized in Figure 1. Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], [ "Creating commonsense AI still remains an important and challenging task in AI research today. Some of the inspiring works are the CYC project BIBREF5 that tries to serve as a foundational knowledge to all systems with millions of everyday life commonsense assertions, Mosaic Commonsense Knowledge Graphs and Reasoning BIBREF6 that addresses aspects like social situations, mental states, and causal relationships, and Aristo System BIBREF7 that focuses on basic science knowledge. In NASA's context, systems engineering combines several engineering disciplines requiring extreme coordination and is prone to human errors. This, in combination with the lack of efficient knowledge transfer of generic lessons-learned makes most technology-based missions risk-averse. Thus, a comprehensive commonsense engine can significantly enhance the productivity of any mission by letting the experts focus on what they do best.", "Concept Recognition (CR) is a task identical to the traditional Named Entity Recognition (NER) problem. A typical NER task seeks to identify entities like name of a person such as `Shakespeare', a geographical location such as `London', or name of an organisation such as `NASA' from unstructured text. A supervised NER dataset consists of the above mentioned entities annotated at the word-token level using labelling schemes such as BIO which provides beginning (B), continuation or inside (I), and outside (O) representation for each word of an entity. BIBREF8 is the current top-performing NER model for CoNLL-2003 shared task BIBREF9. 
Off-the-shelf named entity extractors do not suffice in the SE common-knowledge scenario because the entities we want to extract are domain-specific concepts such as `system architecture' or `functional requirements' rather than physical entities such as `Shakespeare' or `London'. This requires defining new labels and fine-tuning.", "Relation extraction tasks extract semantic relationships from text. These extractors aim to connect named entities such as `Shakespeare' and `England' using relations such as `born-in'. Relations can be as simple as using hand-built patterns or as challenging as using unsupervised methods like Open IE BIBREF10; with bootstrapping, supervised, and semi-supervised methods in between. BIBREF11 and BIBREF12 are some of the high performing models that extract relations from New York Times Corpus BIBREF13 and TACRED challenges BIBREF14 respectively. Hyponyms represent hierarchical connection between entities of a domain and represent important relationships. For instance, a well-known work by BIBREF15 uses syntactic patterns such as [Y such as A, B, C], [Y including X], or [Y, including X] to extract hyponyms. Our goal is to extract preliminary hyponym relations from the concepts extracted by the CR and to connect the entities through verb phrases." ], [ "SE concepts are less ambiguous as compared to generic natural language text. A word usually means one concept. For example, the word `system' usually means the same when referring to a `complex system', `system structure', or `management system' in the SE domain. In generic text, the meaning of terms like `evaluation', `requirement', or `analysis' may contextually differ. We would like domain specific phrases such as `system evaluation', `performance requirement', or `system analysis' to be single entities. Based on the operational and system concepts described in BIBREF0, we carefully construct a set of concept-labels for the SE handbook which is shown in the next section." ], [ "abb: represents abbreviations such as TRL representing Technology Readiness Level.", "grp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.", "syscon: represents any system concepts such as engineering unit, product, hardware, software, etc. They mostly represent physical concepts.", "opcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.", "seterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.", "event: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.", "org: represents an organization such as `NASA', `aerospace industry', etc.", "art: represents names of artifacts or instruments such as `AS1300'", "cardinal: represents numerical values such as `1', `100', 'one' etc.", "loc: represents location-like entities such as component facilities or centralized facility.", "mea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], [ "Abbreviations are used frequently in SE text. We automatically extract abbreviations using simple pattern-matching around parentheses. 
Given below is a sample regex that matches most abbreviations in the SE handbook.", "r\"\\([ ]*[A-Z][A-Za-z]*[ ]*\\)\"", "An iterative regex matching procedure using this pattern over the preceding words will produce the full phrase of the abbreviation. `A process to determine a system’s technological maturity based on Technology Readiness Levels (TRLs)' produces the abbreviation TRL which stands for Technology Readiness Levels. `Define one or more initial Concept of Operations (ConOps) scenarios' produces the abbreviation ConOps which stands for Concept of Operations. We pre-label these abbreviations as concept entities. Many of these abbreviations are also provided in the Appendix section of the handbook which is also extracted and used as concepts." ], [ "Various locations of the handbook and the glossary provide definitions of several SE concepts. We collect these and compile a comprehensive definitions document which is also used for the concept recognition task. An example definition and its description is shown below:", "Definition: Acceptable Risk", "Description: The risk that is understood and agreed to by the program/project, governing authority, mission directorate, and other customer(s) such that no further specific mitigating action is required." ], [ "Using python tools such as PyPDF2, NLTK, and RegEx we build a pipeline to convert PDF to raw text along with extensive pre-processing which includes joining sentences that are split, removing URLs, shortening duplicate non-alpha characters, and replacing full forms of abbreviations with their shortened forms. We assume that the SE text is free of spelling errors. For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. An example is shown in Figure 2 and the unique tag count is shown in Table 1." ], [ "Any language model can be used for the purpose of customizing an NER problem to CR. We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings.", "In the hand-labelled dataset, each word gets a label. The idea is to perform multi-class classification using BERT's pre-trained cased language model. We use pytorch transformers and hugging face as per the tutorial by BIBREF17 which uses $BertForTokenClassification$. The text is embedded as tokens and masks with a maximum token length. This embedded tokens are provided as the input to the pre-trained BERT model for a full fine-tuning. The model gives an F1-score of $0.89$ for the concept recognition task. An 80-20 data split is used for training and evaluation. Detailed performance of the CR is shown in Table 2 and 3. Additionally, we also implemented CR using spaCy BIBREF18 which also produced similar results." ], [ "In this work, for relation extraction, we focus on hyponyms and verb phrase chunking. Hyponyms are more specific concepts such as earth to planet or rose to flower. Verb phrase chunking connects the named entities recognized by the CR model through verbs." ], [ "The definition document consists of 241 SE definitions and their descriptions. We iteratively construct entities in increasing order of number of words in the definitions with the help of their parts-of-speech tags. This helps in creating subset-of relation between a lower-word entity and a higher-word entity. 
Each root entity is lemmatized such that entities like processes and process appear only once.", "" ], [ "Using the words (especially nouns) that surround an already identified named entity, more specific entities can be identified. This is performed on a few selected entity tags such as opcon and syscon. For example, consider the sentence `SE functions should be performed'. `SE' has tag NNP and `functions' has tag NNS. We create a relation called subset-of between `SE functions' and `SE'.", "" ], [ "", "Relations from abbreviations are simple direct connections between the abbreviation and its full form described in the abbreviations dataset. Figure FIGREF25 shows a snippet of knowledge graph constructed using stands-for and subset-of relationships. Larger graphs are shown in the demo." ], [ "Finally, we explore creating contextual triples from sentences using all the entities extracted using the CR model and entities from definitions. Only those phrases that connect two entities are selected for verb phrase extraction. Using NLTK's regex parser and chunker, a grammar such as", "VP: {(<MD>|<R.*>|<I.*>|<VB.*>|<JJ.*>|", "<TO>)*<VB.*>+(<MD>|<R.*>|<I.*>|<VB.*>|", "<JJ.*>|<TO>)*}", "with at least one verb, can extract relation-like phrases from the phrase that links two concepts. An example is shown in Figure FIGREF27. Further investigation of relation extraction from SE handbook is left as future work." ], [ "We presented a common-knowledge concept extractor for the Systems Engineer's Virtual Assistant (SEVA) system and showed how it can be beneficial for downstream tasks such as relation extraction and knowledge graph construction. We construct a word-level annotated dataset with the help of a domain expert by carefully defining a labelling scheme to train a sequence labelling task to recognize SE concepts. Further, we also construct some essential datasets from the SE domain which can be used for future research. Future directions include constructing a comprehensive common-knowledge relation extractor from SE handbook and incorporating such human knowledge into a more comprehensive machine-processable commonsense knowledge base for the SE domain." ] ], "section_name": [ "INTRODUCTION", "BACKGROUND AND MOTIVATION", "CONCEPT RECOGNITION", "CONCEPT RECOGNITION ::: BIO Labelling Scheme", "CONCEPT RECOGNITION ::: Abbreviations", "CONCEPT RECOGNITION ::: Common-Knowledge Definitions", "CONCEPT RECOGNITION ::: CR Dataset Construction and Pre-processing", "CONCEPT RECOGNITION ::: Fine tuning with BERT", "RELATION EXTRACTION", "RELATION EXTRACTION ::: Hyponyms from Definitions", "RELATION EXTRACTION ::: Hyponyms from POS tags", "RELATION EXTRACTION ::: Relations from Abbreviations", "RELATION EXTRACTION ::: Relation Extraction using Verb Phrase Chunking", "CONCLUSION AND FUTURE WORK" ] }
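The abbreviation-extraction step in the record above quotes the regex but not the expansion procedure over the preceding words. The following is a minimal Python sketch of how such a pattern-matching pass could look; the backward-scanning heuristic (one preceding word per capital letter of the abbreviation) and the example sentence are illustrative assumptions, not the authors' exact procedure.

import re

# Pattern quoted in the text above: a capitalised token enclosed in parentheses, e.g. "(TRLs)".
ABBREV_PATTERN = re.compile(r"\([ ]*[A-Z][A-Za-z]*[ ]*\)")

def extract_abbreviations(sentence):
    """Return (abbreviation, guessed full phrase) pairs found in a sentence."""
    results = []
    for match in ABBREV_PATTERN.finditer(sentence):
        abbrev = match.group(0).strip("() ")
        # Heuristic assumption: take one preceding word per capital letter of the abbreviation.
        n_words = sum(1 for ch in abbrev if ch.isupper())
        preceding = sentence[:match.start()].split()
        results.append((abbrev, " ".join(preceding[-n_words:])))
    return results

print(extract_abbreviations(
    "A process to determine a system's technological maturity "
    "based on Technology Readiness Levels (TRLs)"))
# -> [('TRLs', 'Technology Readiness Levels')]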
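For the verb-phrase chunking step described in the record above, a small NLTK sketch is given below. The grammar is the VP pattern quoted in the text, reformulated with the alternation moved inside the angle brackets to match NLTK's tag-pattern syntax; the sample sentence is an assumption for illustration, and the 'punkt' and 'averaged_perceptron_tagger' NLTK data packages must be downloaded beforehand.

import nltk

# VP grammar from the text, in NLTK tag-pattern syntax: optional modals/adverbs/
# prepositions/verbs/adjectives/"to" around at least one verb.
GRAMMAR = r"VP: {<MD|R.*|I.*|VB.*|JJ.*|TO>*<VB.*>+<MD|R.*|I.*|VB.*|JJ.*|TO>*}"
CHUNKER = nltk.RegexpParser(GRAMMAR)

def verb_phrases(sentence):
    """Return the VP chunks of a sentence; the pipeline above keeps only those
    phrases that connect two recognised concepts."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    tree = CHUNKER.parse(tagged)
    return [" ".join(word for word, _ in subtree.leaves())
            for subtree in tree.subtrees(filter=lambda t: t.label() == "VP")]

# Hypothetical SE-style sentence, used only to exercise the chunker.
print(verb_phrases("The decision analysis process should be performed before "
                   "the system requirements review."))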
{ "answers": [ { "annotation_id": [ "2a0fb8db6ed7d5917ad97a0925223bb750282ef3", "607ddc5824ef5a51a8136c2bb1e37ba5dc616eeb", "f515b392318535b7ab1fc338005a331f2a2f2508" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Using python tools such as PyPDF2, NLTK, and RegEx we build a pipeline to convert PDF to raw text along with extensive pre-processing which includes joining sentences that are split, removing URLs, shortening duplicate non-alpha characters, and replacing full forms of abbreviations with their shortened forms. We assume that the SE text is free of spelling errors. For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. An example is shown in Figure 2 and the unique tag count is shown in Table 1." ], "extractive_spans": [], "free_form_answer": "1", "highlighted_evidence": [ "Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Using python tools such as PyPDF2, NLTK, and RegEx we build a pipeline to convert PDF to raw text along with extensive pre-processing which includes joining sentences that are split, removing URLs, shortening duplicate non-alpha characters, and replacing full forms of abbreviations with their shortened forms. We assume that the SE text is free of spelling errors. For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. An example is shown in Figure 2 and the unique tag count is shown in Table 1." ], "extractive_spans": [], "free_form_answer": "One domain expert.", "highlighted_evidence": [ "Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "066914d0ab625d6185adf12ede12fc676fe50c4a", "f5b521061974a51970bb888a166fd84404a149f6", "4f5bba38bd2ac1eb23888823857cb2a0a3955c73" ], "answer": [ { "evidence": [ "In the hand-labelled dataset, each word gets a label. The idea is to perform multi-class classification using BERT's pre-trained cased language model. We use pytorch transformers and hugging face as per the tutorial by BIBREF17 which uses $BertForTokenClassification$. The text is embedded as tokens and masks with a maximum token length. This embedded tokens are provided as the input to the pre-trained BERT model for a full fine-tuning. The model gives an F1-score of $0.89$ for the concept recognition task. An 80-20 data split is used for training and evaluation. Detailed performance of the CR is shown in Table 2 and 3. Additionally, we also implemented CR using spaCy BIBREF18 which also produced similar results." ], "extractive_spans": [ "F1-score" ], "free_form_answer": "", "highlighted_evidence": [ "The model gives an F1-score of $0.89$ for the concept recognition task." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: Performance of different labels" ], "extractive_spans": [], "free_form_answer": "precision, recall, f1-score, and support", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Performance of different labels" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: Performance of different labels" ], "extractive_spans": [], "free_form_answer": "Precision, recall, f1-score, and support.", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Performance of different labels" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "06b0b2691e0dcaaf4a2f6bebfe6fb9b337c27313", "0703a2a623ad13cc104dd1175bc709f3e7482f69", "a324ab7dfc82af2eaf05531bee3f1818ca1f93c3" ], "answer": [ { "evidence": [ "In the hand-labelled dataset, each word gets a label. The idea is to perform multi-class classification using BERT's pre-trained cased language model. We use pytorch transformers and hugging face as per the tutorial by BIBREF17 which uses $BertForTokenClassification$. The text is embedded as tokens and masks with a maximum token length. This embedded tokens are provided as the input to the pre-trained BERT model for a full fine-tuning. The model gives an F1-score of $0.89$ for the concept recognition task. An 80-20 data split is used for training and evaluation. Detailed performance of the CR is shown in Table 2 and 3. Additionally, we also implemented CR using spaCy BIBREF18 which also produced similar results." ], "extractive_spans": [ "F1-score of $0.89$" ], "free_form_answer": "", "highlighted_evidence": [ "The model gives an F1-score of $0.89$ for the concept recognition task." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In the hand-labelled dataset, each word gets a label. The idea is to perform multi-class classification using BERT's pre-trained cased language model. We use pytorch transformers and hugging face as per the tutorial by BIBREF17 which uses $BertForTokenClassification$. The text is embedded as tokens and masks with a maximum token length. This embedded tokens are provided as the input to the pre-trained BERT model for a full fine-tuning. The model gives an F1-score of $0.89$ for the concept recognition task. An 80-20 data split is used for training and evaluation. Detailed performance of the CR is shown in Table 2 and 3. Additionally, we also implemented CR using spaCy BIBREF18 which also produced similar results." ], "extractive_spans": [ "The model gives an F1-score of $0.89$ for the concept recognition task." ], "free_form_answer": "", "highlighted_evidence": [ "The model gives an F1-score of $0.89$ for the concept recognition task." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In the hand-labelled dataset, each word gets a label. The idea is to perform multi-class classification using BERT's pre-trained cased language model. We use pytorch transformers and hugging face as per the tutorial by BIBREF17 which uses $BertForTokenClassification$. The text is embedded as tokens and masks with a maximum token length. This embedded tokens are provided as the input to the pre-trained BERT model for a full fine-tuning. The model gives an F1-score of $0.89$ for the concept recognition task. An 80-20 data split is used for training and evaluation. 
Detailed performance of the CR is shown in Table 2 and 3. Additionally, we also implemented CR using spaCy BIBREF18 which also produced similar results." ], "extractive_spans": [ " F1-score of $0.89$" ], "free_form_answer": "", "highlighted_evidence": [ " The model gives an F1-score of $0.89$ for the concept recognition task." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "ba7ea94532a4af950942aa8c403cb5eb1fb6d43a", "c76cfda631c3f46ba3f0d80950fa2e2d6fa0cdf1", "e3122daa6fc9705fbea240ac15e24f5b32d0eb50" ], "answer": [ { "evidence": [ "The Systems Engineer's Virtual Assistant (SEVA) BIBREF0 was introduced with the goal to assist systems engineers (SE) in their problem-solving abilities by keeping track of large amounts of information of a NASA-specific project and using the information to answer queries from the user. In this work, we address a system element by constructing a common-knowledge concept recognition system for improving the performance of SEVA, using the static knowledge collected from the Systems Engineering Handbook BIBREF1 that is widely used in projects across the organization as domain-specific commonsense knowledge. At NASA, although there exists knowledge engines and ontologies for the SE domain such as MBSE BIBREF2, IMCE BIBREF3, and OpenCaesar BIBREF4, generic commonsense acquisition is rarely discussed; we aim to address this challenge. SE commonsense comes from years of experience and learning which involves background knowledge that goes beyond any handbook. Although constructing an assistant like SEVA system is the overarching objective, a key problem to first address is to extract elementary common-knowledge concepts using the SE handbook and domain experts. We use the term `common-knowledge' as the `commonsense' knowledge of a specific domain. This knowledge can be seen as a pivot that can be used later to collect `commonsense' knowledge for the SE domain. We propose a preliminary research study that can pave a path towards a comprehensive commonsense knowledge acquisition for an effective Artificial Intelligence (AI) application for the SE domain. Overall structure of this work is summarized in Figure 1. Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The Systems Engineer's Virtual Assistant (SEVA) BIBREF0 was introduced with the goal to assist systems engineers (SE) in their problem-solving abilities by keeping track of large amounts of information of a NASA-specific project and using the information to answer queries from the user. In this work, we address a system element by constructing a common-knowledge concept recognition system for improving the performance of SEVA, using the static knowledge collected from the Systems Engineering Handbook BIBREF1 that is widely used in projects across the organization as domain-specific commonsense knowledge. At NASA, although there exists knowledge engines and ontologies for the SE domain such as MBSE BIBREF2, IMCE BIBREF3, and OpenCaesar BIBREF4, generic commonsense acquisition is rarely discussed; we aim to address this challenge. 
SE commonsense comes from years of experience and learning which involves background knowledge that goes beyond any handbook. Although constructing an assistant like SEVA system is the overarching objective, a key problem to first address is to extract elementary common-knowledge concepts using the SE handbook and domain experts. We use the term `common-knowledge' as the `commonsense' knowledge of a specific domain. This knowledge can be seen as a pivot that can be used later to collect `commonsense' knowledge for the SE domain. We propose a preliminary research study that can pave a path towards a comprehensive commonsense knowledge acquisition for an effective Artificial Intelligence (AI) application for the SE domain. Overall structure of this work is summarized in Figure 1. Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The Systems Engineer's Virtual Assistant (SEVA) BIBREF0 was introduced with the goal to assist systems engineers (SE) in their problem-solving abilities by keeping track of large amounts of information of a NASA-specific project and using the information to answer queries from the user. In this work, we address a system element by constructing a common-knowledge concept recognition system for improving the performance of SEVA, using the static knowledge collected from the Systems Engineering Handbook BIBREF1 that is widely used in projects across the organization as domain-specific commonsense knowledge. At NASA, although there exists knowledge engines and ontologies for the SE domain such as MBSE BIBREF2, IMCE BIBREF3, and OpenCaesar BIBREF4, generic commonsense acquisition is rarely discussed; we aim to address this challenge. SE commonsense comes from years of experience and learning which involves background knowledge that goes beyond any handbook. Although constructing an assistant like SEVA system is the overarching objective, a key problem to first address is to extract elementary common-knowledge concepts using the SE handbook and domain experts. We use the term `common-knowledge' as the `commonsense' knowledge of a specific domain. This knowledge can be seen as a pivot that can be used later to collect `commonsense' knowledge for the SE domain. We propose a preliminary research study that can pave a path towards a comprehensive commonsense knowledge acquisition for an effective Artificial Intelligence (AI) application for the SE domain. Overall structure of this work is summarized in Figure 1. Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Implementation with demo and dataset is available at: https://github.com/jitinkrishnan/NASA-SE ." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "09c2396e0a194ce33c43324ef69ed58ae3bceaff", "18c80200bd576020c07268c396bf275595f24272", "edf104525d7732daa7f252e286dd9cde00f38b49" ], "answer": [ { "evidence": [ "SE concepts are less ambiguous as compared to generic natural language text. A word usually means one concept. 
For example, the word `system' usually means the same when referring to a `complex system', `system structure', or `management system' in the SE domain. In generic text, the meaning of terms like `evaluation', `requirement', or `analysis' may contextually differ. We would like domain specific phrases such as `system evaluation', `performance requirement', or `system analysis' to be single entities. Based on the operational and system concepts described in BIBREF0, we carefully construct a set of concept-labels for the SE handbook which is shown in the next section.", "CONCEPT RECOGNITION ::: BIO Labelling Scheme", "abb: represents abbreviations such as TRL representing Technology Readiness Level.", "grp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.", "syscon: represents any system concepts such as engineering unit, product, hardware, software, etc. They mostly represent physical concepts.", "opcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.", "seterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.", "event: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.", "org: represents an organization such as `NASA', `aerospace industry', etc.", "art: represents names of artifacts or instruments such as `AS1300'", "cardinal: represents numerical values such as `1', `100', 'one' etc.", "loc: represents location-like entities such as component facilities or centralized facility.", "mea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], "extractive_spans": [], "free_form_answer": "Based on operation and system concepts, the labels are abb, grp, syscon, opcon, seterm, event, org, art, cardinal, loc and mea.", "highlighted_evidence": [ " Based on the operational and system concepts described in BIBREF0, we carefully construct a set of concept-labels for the SE handbook which is shown in the next section.\n\nCONCEPT RECOGNITION ::: BIO Labelling Scheme\nabb: represents abbreviations such as TRL representing Technology Readiness Level.\n\ngrp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.\n\nsyscon: represents any system concepts such as engineering unit, product, hardware, software, etc. They mostly represent physical concepts.\n\nopcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.\n\nseterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.\n\nevent: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.\n\norg: represents an organization such as `NASA', `aerospace industry', etc.\n\nart: represents names of artifacts or instruments such as `AS1300'\n\ncardinal: represents numerical values such as `1', `100', 'one' etc.\n\nloc: represents location-like entities such as component facilities or centralized facility.\n\nmea: represents measures, features, or behaviors such as cost, risk, or feasibility." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "CONCEPT RECOGNITION ::: BIO Labelling Scheme", "abb: represents abbreviations such as TRL representing Technology Readiness Level.", "grp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.", "syscon: represents any system concepts such as engineering unit, product, hardware, software, etc. They mostly represent physical concepts.", "opcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.", "seterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.", "event: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.", "org: represents an organization such as `NASA', `aerospace industry', etc.", "art: represents names of artifacts or instruments such as `AS1300'", "cardinal: represents numerical values such as `1', `100', 'one' etc.", "loc: represents location-like entities such as component facilities or centralized facility.", "mea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], "extractive_spans": [ "BIO Labelling Scheme\nabb: represents abbreviations such as TRL representing Technology Readiness Level.\n\ngrp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.\n\nsyscon: represents any system concepts such as engineering unit, product, hardware, software, etc. They mostly represent physical concepts.\n\nopcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.\n\nseterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.\n\nevent: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.\n\norg: represents an organization such as `NASA', `aerospace industry', etc.\n\nart: represents names of artifacts or instruments such as `AS1300'\n\ncardinal: represents numerical values such as `1', `100', 'one' etc.\n\nloc: represents location-like entities such as component facilities or centralized facility.\n\nmea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], "free_form_answer": "", "highlighted_evidence": [ "BIO Labelling Scheme\nabb: represents abbreviations such as TRL representing Technology Readiness Level.\n\ngrp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.\n\nsyscon: represents any system concepts such as engineering unit, product, hardware, software, etc. 
They mostly represent physical concepts.\n\nopcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.\n\nseterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.\n\nevent: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.\n\norg: represents an organization such as `NASA', `aerospace industry', etc.\n\nart: represents names of artifacts or instruments such as `AS1300'\n\ncardinal: represents numerical values such as `1', `100', 'one' etc.\n\nloc: represents location-like entities such as component facilities or centralized facility.\n\nmea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "CONCEPT RECOGNITION ::: BIO Labelling Scheme", "abb: represents abbreviations such as TRL representing Technology Readiness Level.", "grp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.", "syscon: represents any system concepts such as engineering unit, product, hardware, software, etc. They mostly represent physical concepts.", "opcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.", "seterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.", "event: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.", "org: represents an organization such as `NASA', `aerospace industry', etc.", "art: represents names of artifacts or instruments such as `AS1300'", "cardinal: represents numerical values such as `1', `100', 'one' etc.", "loc: represents location-like entities such as component facilities or centralized facility.", "mea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], "extractive_spans": [], "free_form_answer": "1. abb\n2. grp\n3. syscon\n4. opcon\n5. seterm\n6. event\n7. org\n8. art\n9. cardinal\n10. loc\n11. mea", "highlighted_evidence": [ "CONCEPT RECOGNITION ::: BIO Labelling Scheme\nabb: represents abbreviations such as TRL representing Technology Readiness Level.\n\ngrp: represents a group of people or an individual such as Electrical Engineers, Systems Engineers or a Project Manager.\n\nsyscon: represents any system concepts such as engineering unit, product, hardware, software, etc. 
They mostly represent physical concepts.\n\nopcon: represents operational concepts such as decision analysis process, technology maturity assessment, system requirements review, etc.\n\nseterm: represents generic terms that are frequently used in SE text and those that do not fall under syscon or opcon such as project, mission, key performance parameter, audit etc.\n\nevent: represents event-like information in SE text such as Pre-Phase A, Phase A, Phase B, etc.\n\norg: represents an organization such as `NASA', `aerospace industry', etc.\n\nart: represents names of artifacts or instruments such as `AS1300'\n\ncardinal: represents numerical values such as `1', `100', 'one' etc.\n\nloc: represents location-like entities such as component facilities or centralized facility.\n\nmea: represents measures, features, or behaviors such as cost, risk, or feasibility." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "031570ce1b9babce381b027869814d5994a8f2f0", "6805c91545574ff5f3c027d9c968b8506d8f332c", "e78afe1423dc38d438b2f829c518884a9d04871f" ], "answer": [ { "evidence": [ "Any language model can be used for the purpose of customizing an NER problem to CR. We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings." ], "extractive_spans": [ "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "Any language model can be used for the purpose of customizing an NER problem to CR. We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Any language model can be used for the purpose of customizing an NER problem to CR. We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings." ], "extractive_spans": [ "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Any language model can be used for the purpose of customizing an NER problem to CR. We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings." ], "extractive_spans": [ "BERT " ], "free_form_answer": "", "highlighted_evidence": [ "We choose to go with BERT BIBREF16 because of its general-purpose nature and usage of contextualized word embeddings." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "0b934d16c1556f17a48e87759539914defec10de", "e5cd635074809c8ed11c9ec32aa7b0d337eb16a0", "5c82c4ab86752bf6bee3782404359add5967412a" ], "answer": [ { "evidence": [ "Using python tools such as PyPDF2, NLTK, and RegEx we build a pipeline to convert PDF to raw text along with extensive pre-processing which includes joining sentences that are split, removing URLs, shortening duplicate non-alpha characters, and replacing full forms of abbreviations with their shortened forms. We assume that the SE text is free of spelling errors. For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. 
Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. An example is shown in Figure 2 and the unique tag count is shown in Table 1." ], "extractive_spans": [ "3700 sentences" ], "free_form_answer": "", "highlighted_evidence": [ "Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Using python tools such as PyPDF2, NLTK, and RegEx we build a pipeline to convert PDF to raw text along with extensive pre-processing which includes joining sentences that are split, removing URLs, shortening duplicate non-alpha characters, and replacing full forms of abbreviations with their shortened forms. We assume that the SE text is free of spelling errors. For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. An example is shown in Figure 2 and the unique tag count is shown in Table 1." ], "extractive_spans": [ "3700 sentences " ], "free_form_answer": "", "highlighted_evidence": [ "For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Using python tools such as PyPDF2, NLTK, and RegEx we build a pipeline to convert PDF to raw text along with extensive pre-processing which includes joining sentences that are split, removing URLs, shortening duplicate non-alpha characters, and replacing full forms of abbreviations with their shortened forms. We assume that the SE text is free of spelling errors. For the CR dataset, we select coherent paragraphs and full sentences by avoiding headers and short blurbs. Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level. An example is shown in Figure 2 and the unique tag count is shown in Table 1." ], "extractive_spans": [ "roughly 3700 sentences at the word-token level" ], "free_form_answer": "", "highlighted_evidence": [ "Using domain keywords and a domain expert, we annotate roughly 3700 sentences at the word-token level." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero", "zero", "zero" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "question": [ "How many domain experts were involved into creation of dataset?", "What metrics are used for evaluation?", "What is the performance of fine tuned model on this dataset?", "Are constructed datasets open sourced?", "How does labeling scheme look like?", "What pretrained language model is used?", "How big is constructed dataset?" 
], "question_id": [ "f5603271a04452cbdbb07697859bef2a2030d75c", "6575ffec1844e6fde5a668bce2afb16b67b65c1f", "77c3416578b52994227bae7f2529600f02183e12", "2abcff4fdedf9b17f76875cc338ba4ab8d1eccd3", "6df57a21ca875e63fb39adece6a9ace5bb2b2cfa", "b39b278aa1cf2f87ad4159725dff77b387f2df84", "814e945668e2b6f31b088918758b120fb00ada7d" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "", "", "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Common-knowledge concept recognition and simple relation extraction", "Table 2: Performance of different labels", "Table 1: Unique Tag Count from the CR dataset", "Figure 2: A Snippet of the concept-labelled dataset", "Table 3: Overall Performance of CR; For fairness, we also provide the accuracy when the most common ‘O’-tag is excluded from the analysis.", "Figure 3: A snippet of the knowledge graph generated", "Figure 4: Relation Extraction using Verb Phrase" ], "file": [ "1-Figure1-1.png", "3-Table2-1.png", "3-Table1-1.png", "3-Figure2-1.png", "3-Table3-1.png", "4-Figure3-1.png", "4-Figure4-1.png" ] }
[ "How many domain experts were involved into creation of dataset?", "What metrics are used for evaluation?", "How does labeling scheme look like?" ]
[ [ "2003.11687-CONCEPT RECOGNITION ::: CR Dataset Construction and Pre-processing-0" ], [ "2003.11687-3-Table2-1.png", "2003.11687-CONCEPT RECOGNITION ::: Fine tuning with BERT-1" ], [ "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-4", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-0", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-7", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-10", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-3", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-6", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-8", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-9", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-1", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-2", "2003.11687-CONCEPT RECOGNITION-0", "2003.11687-CONCEPT RECOGNITION ::: BIO Labelling Scheme-5" ] ]
[ "One domain expert.", "Precision, recall, f1-score, and support.", "1. abb\n2. grp\n3. syscon\n4. opcon\n5. seterm\n6. event\n7. org\n8. art\n9. cardinal\n10. loc\n11. mea" ]
30
1703.10152
Automatic Argumentative-Zoning Using Word2vec
In comparison with document summarization of articles from social media and newswire, argumentative zoning (AZ) is an important task in scientific paper analysis. Traditional methodology for this task relies on feature engineering at different levels. In this paper, three models for generating sentence vectors for the task of sentence classification are explored and compared. The proposed approach builds sentence representations from embeddings learned with neural networks. The learned word embeddings form a feature space onto which the examined sentence is mapped. Those features are input to classifiers for supervised classification. Using a 10-fold cross-validation scheme, evaluation was conducted on the Argumentative-Zoning (AZ) annotated articles. The results show that simply averaging the word vectors in a sentence works better than the paragraph-vector algorithm, and that integrating specific cuewords into the loss function of the neural network can improve classification performance. In comparison with hand-crafted features, the word2vec method wins for most of the categories, although the hand-crafted features show their strength in classifying some of them.
{ "paragraphs": [ [ "One of the crucial tasks for researchers to carry out scientific investigations is to detect existing ideas that are related to their research topics. Research ideas are usually documented in scientific publications. Normally, there is one main idea stated in the abstract, explicitly presenting the aim of the paper. There are also other sub-ideas distributed across the entire paper. As the growth rate of scientific publication has been rising dramatically, researchers are overwhelmed by the explosive information. It is almost impossible to digest the ideas contained in the documents emerged everyday. Therefore, computer assisted technologies such as document summarization are expected to play a role in condensing information and providing readers with more relevant short texts. Unlike document summarization from news circles, where the task is to identify centroid sentences BIBREF0 or to extract the first few sentences of the paragraphs BIBREF1 , summarization of scientific articles involves extra text processing stage BIBREF2 . After highest ranked texts are extracted, rhetorical status analysis will be conducted on the selected sentences. Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences. The results of AZ provide readers with general discourse context from which the scientific ideas could be better linked, compared and analyzed. For example, given a specific task, which sentences should be shown to the reader is related to the features of the sentences. For the task of identifying a paper's unique contribution, sentences expressing research purpose should be retrieved with higher priority. For comparing ideas, statements of comparison with other works would be more useful. Teufel et. al. BIBREF2 introduced their rhetorical annotation scheme which takes into account of the aspects of argumentation, metadiscourse and relatedness to other works. Their scheme resulted seven categories of rhetorical status and the categories are assigned to full sentences. Examples of human annotated sentences with their rhetorical status are shown in Table. TABREF2 . The seven categories are aim, contrast, own, background, other, basis and textual.", "Analyzing the rhetorical status of sentences manually requires huge amount of efforts, especially for structuring information from multiple documents. Fortunately, computer algorithms have been introduced to solve this problem. With the development of artificial intelligence, machine learning and computational linguistics, Natural Language Processing (NLP) has become a popular research area BIBREF4 , BIBREF5 . NLP covers the applications from document retrieval, text categorization BIBREF6 , document summarization BIBREF7 to sentiment analysis BIBREF8 , BIBREF9 . Those applications are targeting different types of text resources, such as articles from social media BIBREF10 and scientific publications BIBREF2 . There are several approaches to tackle these tasks. From machine learning prospective, text can be analysed via supervised BIBREF2 , semi-supervised BIBREF11 and unsupervised BIBREF12 algorithms.", "Document summarization from social media and news circles has received much attention for the past decades. Those problems have been addressed from many angles, one of which is feature extraction and representation. At the early stage of document summarization, features are usually engineered manually. 
Although the hand-crafted features have shown the ability for document summarization and sentiment analysis BIBREF13 , BIBREF9 , there are not enough efficient features to capture the semantic relations between words, phrases and sentences. Moreover, building a sufficient pool of features manually is difficult, because it requires expert knowledge and it is time-consuming. Teufel et. al. BIBREF2 have built feature pool of sixteen types of features to classify sentences, such as the position of sentence, sentence length and tense. Widyantoro et. al. used content features, qualifying adjectives and meta-discourse features BIBREF14 to explore AZ task. It took efforts to engineer these features and it is also time consuming to optimize the combination of the entire features. With the advent of neural networks BIBREF15 , it is possible for computers to learn feature representations automatically. Recently, word embedding technique BIBREF16 has been widely used in the NLP community. There are plenty of cases where word embedding and sentence representations have been applied to short text classification BIBREF17 and paraphrase detection BIBREF18 . However, the effectiveness of this technique on AZ needs further study. The research question is, is it possible to extract word embeddings as features to classify sentences into the seven categories mentioned above using supervised machine learning approach?" ], [ "The tool of word2vec proposed by Mikolov et al. BIBREF16 has gained a lot attention recently. With word2vec tool, word embeddings can be learnt from big amount of text corpus and the semantic relationships between words can be measured by the cosine distances between the vectors. The idea behind word embeddings is to use distributed representation BIBREF19 to map each word into k-dimension vector. How these vectors are generated using word2vec tool? The common method to derive the vectors is using neural probabilistic language model BIBREF20 . The underlying word representations for each word are obtained while training the language model. Similar to the mechanism in language model, Mikolov et al. BIBREF16 introduced two architectures: Skip-gram model and continuous bag of words (CBOW) model. Each of the model has two different training strategies, such as hierarchical softmax and negative sampling. Both these two models have three layers: input, projection and output layer. The word vectors are obtained once the models are optimized. Usually, this optimizing process is done using stochastic gradient descent method. It doesn't need labels when training the models, which makes word2vec algorithm more valuable compared with traditional supervised machine learning methods that require a big amount of annotated data. Given enough text corpus, the word2vec can generate meaningful representations.", "Word2vec has been applied to sentiment analysis BIBREF21 , BIBREF22 , BIBREF23 and text classification BIBREF24 . Sadeghian and Sharafat BIBREF25 explored averaging of the word vectors in a sentiment review statement. Their results indicated that word2vec models significantly outperform the vanilla bag-of-words model. Amongst the word2vec based models, softmax provides the best form of classification. Tang et al. BIBREF21 used the concatenation of vectors derived from different convolutional layers to analyze the sentiment statements. They also trained sentiment-specific word embeddings to improve the twitter sentiment classification results. 
This work aims at learning word embeddings for the task of AZ. The results were compared from three aspects: the impact of the training corpus, the effectiveness of specific word embeddings, and different ways of constructing sentence representations based on the learned word vectors.", "Le and Mikolov BIBREF26 introduced the concept of word vector representation in a formal way:", "Given a sequence of training words $w_1, w_2, \ldots, w_T$ , the objective of the word2vec model is to maximize the average log probability:", "$\frac{1}{T} \sum_{t=k}^{T-k} \log p(w_t \mid w_{t-k}, \ldots, w_{t+k})$ (1)", "Using the softmax technique, the prediction can be formalized as:", "$p(w_t \mid w_{t-k}, \ldots, w_{t+k}) = \frac{e^{y_{w_t}}}{\sum_{i} e^{y_i}}$ (2)", "Each $y_i$ is the un-normalized log probability for output word $i$:", "$y = b + U h(w_{t-k}, \ldots, w_{t+k}; W)$ (3)" ], [ "In this study, sentence embeddings were learned from a large text corpus as features to classify sentences into the seven categories of the AZ task. Three models were explored to obtain the sentence vectors: averaging the vectors of the words in one sentence, paragraph vectors, and specific word vectors.", "The first model, averaging word vectors (AVGWVEC), averages the vectors of the words in a sentence $w_1, w_2, \ldots, w_n$ . The main process in this model is to learn the word embedding matrix $W$ :", "$v_s = \frac{1}{n} \sum_{i=1}^{n} v_{w_i}$ (4)", "where $v_{w_i}$ is the word embedding for word $w_i$ , which is learned by the classical word2vec algorithm BIBREF16 .", "The second model, PARAVEC, aims at training paragraph vectors. It is also called the distributed memory model of paragraph vectors (PV-DM) BIBREF26 , which is an extension of word2vec. In comparison with the word2vec framework, the only change in PV-DM is in equation (3), where $h$ is constructed from $W$ and $D$ : matrix $W$ holds the word vectors and $D$ holds the paragraph vectors, in such a way that every paragraph is mapped to a unique vector represented by a column in matrix $D$ .", "The third model is constructed for the purpose of improving the classification results for a certain category. In this study specifically, the optimization task focused on identifying the category BAS, and BAS-specific word embeddings were trained (BSWE), inspired by the Sentiment-Specific Word Embedding model of Tang et al. BIBREF21 (unified model: SSWE$_u$). After obtaining the word vectors via BSWE, the same scheme was used to average the vectors in one sentence as in the AVGWVEC model." ], [ "The learned word embeddings are input into a classifier as features under a supervised machine learning framework. Similar to sentiment classification using word embeddings BIBREF21 , where each tweet is predicted to be either positive or negative, in the task of AZ the embeddings are used to classify each sentence into one of the seven categories mentioned above.", "To evaluate the classification performance, precision, recall and F-measure were computed." ], [ "ACL collection: the ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which 622,144 sentences were generated after filtering out sentences with lower quality.", "MixedAbs collection: contains 6,778 sentences, extracted from the titles and abstracts of publications provided by WEB OF SCIENCE."
], [ "The Argumentative Zoning Corpus (AZ corpus) consists of 80 AZ-annotated conference articles in computational linguistics, originally drawn from the Cmplg arXiv. After concatenating sub-sentences, 7,347 labeled sentences were obtained." ], [ "To compare the effectiveness of the three models on the AZ task, the three models were trained on the same ACL dataset (introduced in the dataset section). The word2vec models were also trained using different parameters, such as different feature dimensions. To evaluate the impact of different domains, the first model was trained on different corpora.", "The characteristics of the word embeddings based on the different models and datasets are listed in Table TABREF12 ." ], [ "Inspired by the work of Sadeghian and Sharafat BIBREF25 , the word2vec parameters were set as follows: the minimum word count is 40, the number of threads to run in parallel is 4, and the context window is 10." ], [ "In imbalanced data sets, some classes are significantly outnumbered by other classes BIBREF27 , which affects the classification results. In this experiment, the test dataset is an imbalanced data set. Table TABREF16 shows the distribution of rhetorical categories in the AZ test dataset. The categories OWN and OTH significantly outnumber the other categories.", "To deal with the problem of classification on unbalanced data, the Synthetic Minority Over-sampling TEchnique (SMOTE) BIBREF28 was applied to the original dataset. A 10-fold cross-validation scheme was adopted and the results were averaged over 10 iterations." ], [ "Tables TABREF19 and TABREF20 show the classification performance of the different methods.", "The results were examined from the following aspects:", "When the feature dimension is set to 100 and the training corpus is ACL, the results generated by the different models were compared (AVGWVEC, PARAVEC, and AVGWVEC+BSWE for the BAS category only). Looking at the F-measure, AVGWVEC performs better than PARAVEC, but PARAVEC gave better precision results on several categories, such as AIM, CTR, TXT and OWN. The results showed that the PARAVEC model is not robust; for example, it performs badly for the category BAS. For specific-category classification, taking the BAS category as an example, the BSWE model outperforms the others in terms of F-measure.", "When the model is fixed to AVGWVEC and the training corpus is ACL, the impact of feature size (300 and 100 dimensions) was investigated. From the F-measure, it can be seen that for some categories, such as CTR and BKG, 300-dimension features perform better than 100-dimension ones, but they are not as good as 100-dimension features for other categories, such as BAS.", "When the model is set to AVGWVEC and the feature dimension is 100, the results computed from different training corpora were compared (ACL+AZ, MixedAbs and the Brown corpus). ACL+AZ outperforms the others, and the Brown corpus is better than MixedAbs for most of the categories, but the Brown corpus is not as good as MixedAbs for the category OWN.", "Finally, the results were compared between word embeddings and the methods of cuewords, Teufel 2002 and the baseline. To evaluate word embeddings on AZ, the AVGWVEC model trained on ACL+AZ was used for the comparison. It can be seen from Table TABREF19 that the word embedding model is better than the method using cueword matching. It also outperforms Teufel 2002 for most of the cases, except AIM, BAS and OWN. It beats the baseline for most of the categories, except OWN."
], [ "The classification results showed that the type of word embeddings and the training corpus affect the AZ performance. As the simple model, INLINEFORM0 performs better than others, which indicate averaging the word vectors in a sentence can capture the semantic property of statements. By training specific argumentation word embeddings, the performance can be improved, which can be seen from the case of detecting BAS status using INLINEFORM1 model.", "Feature dimension doesn't dominate the results. There is no significant difference between the resutls generated by 300-dimension of features and 100 dimensions.", "Training corpus affects the results. ACL+AZ outperforming others indicates that the topics of the training corpus are important factors in argumentative zoning. Although Brown corpus has more vocabularies, it doesn't win ACL+AZ.", "In general, the classification performance of word embeddings is competitive in terms of F-measure for most of the categories. But for classifying the categories AIM, BAS and OWN, the manually crafted features proposed by Teufel et al. BIBREF2 gave better results." ], [ "In this paper, different word embedding models on the task of argumentative zoning were compared . The results showed that word embeddings are effective on sentence classification from scientific papers. Word embeddings trained on a relevant corpus can capture the semantic features of statements and they are easier to be obtained than hand engineered features.", "To improve the sentence classification for a specific category, integrating word specific embedding strategy helps. The size of the feature pool doesn't matter too much on the results, nor does the vocabulary size. In comparison, the domain of the training corpus affects the classification performance." ] ], "section_name": [ "Introduction", "Related Work", "Models", "Classification and evaluation", "Training Dataset", "Test Dataset", "Training strategy", "Parameters", "Strategy of dealing with unbalanced data", "Results of classification for per category", "Discussion", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "16e2b3ebfd9fad50e967d2ae386428d960a340c5", "19db334ffb8401f699e0c915f9c190bb0424a9f4", "dfe5363ec3ec2fccc972f5a79b5763a0702bdbbd" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4. Performance of sentence classification per category I (precision/recall/Fmeasure)" ], "extractive_spans": [], "free_form_answer": "Precision, recall and F-measure.", "highlighted_evidence": [ "FLOAT SELECTED: Table 4. Performance of sentence classification per category I (precision/recall/Fmeasure)" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate the classification performance, precision, recall and F-measure were computed." ], "extractive_spans": [ "precision", "recall", "F-measure" ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate the classification performance, precision, recall and F-measure were computed." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To evaluate the classification performance, precision, recall and F-measure were computed." ], "extractive_spans": [ "precision, recall and F-measure" ], "free_form_answer": "", "highlighted_evidence": [ "To evaluate the classification performance, precision, recall and F-measure were computed." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "010fd112c1b0d1cfb6591f20e73c752a33d97465", "98950854b3e2c4f1cdf1a54d8e472d2bb8455a60", "d4c5403306d7ae1e0394c9f1222dd326884851a5" ], "answer": [ { "evidence": [ "Document summarization from social media and news circles has received much attention for the past decades. Those problems have been addressed from many angles, one of which is feature extraction and representation. At the early stage of document summarization, features are usually engineered manually. Although the hand-crafted features have shown the ability for document summarization and sentiment analysis BIBREF13 , BIBREF9 , there are not enough efficient features to capture the semantic relations between words, phrases and sentences. Moreover, building a sufficient pool of features manually is difficult, because it requires expert knowledge and it is time-consuming. Teufel et. al. BIBREF2 have built feature pool of sixteen types of features to classify sentences, such as the position of sentence, sentence length and tense. Widyantoro et. al. used content features, qualifying adjectives and meta-discourse features BIBREF14 to explore AZ task. It took efforts to engineer these features and it is also time consuming to optimize the combination of the entire features. With the advent of neural networks BIBREF15 , it is possible for computers to learn feature representations automatically. Recently, word embedding technique BIBREF16 has been widely used in the NLP community. There are plenty of cases where word embedding and sentence representations have been applied to short text classification BIBREF17 and paraphrase detection BIBREF18 . However, the effectiveness of this technique on AZ needs further study. The research question is, is it possible to extract word embeddings as features to classify sentences into the seven categories mentioned above using supervised machine learning approach?" ], "extractive_spans": [ "position of sentence", "sentence length", "tense", "qualifying adjectives", "meta-discourse features" ], "free_form_answer": "", "highlighted_evidence": [ "Teufel et. al. 
BIBREF2 have built feature pool of sixteen types of features to classify sentences, such as the position of sentence, sentence length and tense. Widyantoro et. al. used content features, qualifying adjectives and meta-discourse features BIBREF14 to explore AZ task. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "One of the crucial tasks for researchers to carry out scientific investigations is to detect existing ideas that are related to their research topics. Research ideas are usually documented in scientific publications. Normally, there is one main idea stated in the abstract, explicitly presenting the aim of the paper. There are also other sub-ideas distributed across the entire paper. As the growth rate of scientific publication has been rising dramatically, researchers are overwhelmed by the explosive information. It is almost impossible to digest the ideas contained in the documents emerged everyday. Therefore, computer assisted technologies such as document summarization are expected to play a role in condensing information and providing readers with more relevant short texts. Unlike document summarization from news circles, where the task is to identify centroid sentences BIBREF0 or to extract the first few sentences of the paragraphs BIBREF1 , summarization of scientific articles involves extra text processing stage BIBREF2 . After highest ranked texts are extracted, rhetorical status analysis will be conducted on the selected sentences. Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences. The results of AZ provide readers with general discourse context from which the scientific ideas could be better linked, compared and analyzed. For example, given a specific task, which sentences should be shown to the reader is related to the features of the sentences. For the task of identifying a paper's unique contribution, sentences expressing research purpose should be retrieved with higher priority. For comparing ideas, statements of comparison with other works would be more useful. Teufel et. al. BIBREF2 introduced their rhetorical annotation scheme which takes into account of the aspects of argumentation, metadiscourse and relatedness to other works. Their scheme resulted seven categories of rhetorical status and the categories are assigned to full sentences. Examples of human annotated sentences with their rhetorical status are shown in Table. TABREF2 . The seven categories are aim, contrast, own, background, other, basis and textual." ], "extractive_spans": [ " sentences with their rhetorical status " ], "free_form_answer": "", "highlighted_evidence": [ " Examples of human annotated sentences with their rhetorical status are shown in Table. TABREF2 . The seven categories are aim, contrast, own, background, other, basis and textual." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "098c73cfe387e99c3154752ccb933808fa96eb54", "368aca9a0cc0e756b317b85fa96d2b2e48427aeb", "3ab007c3360915d311d38e14a1cbec71dbac5f7a" ], "answer": [ { "evidence": [ "The third model is constructed for the purpose of improving classification results for a certain category. 
In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). After obtaining the word vectors via INLINEFORM4 , the same scheme was used to average the vectors in one sentence as in the model INLINEFORM5 ." ], "extractive_spans": [ "INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 )" ], "free_form_answer": "", "highlighted_evidence": [ " In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The third model is constructed for the purpose of improving classification results for a certain category. In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). After obtaining the word vectors via INLINEFORM4 , the same scheme was used to average the vectors in one sentence as in the model INLINEFORM5 .", "To compare the three models effectiveness on the AZ task, the three models on a same ACL dataset (introduced int he dataset section) were trained. The word2vec were also trained using different parameters, such as different dimension of features. To evaluate the impact from different domains, the first model was trained on different corpus." ], "extractive_spans": [ "Sentiment-Specific Word Embedding", "word2vec" ], "free_form_answer": "", "highlighted_evidence": [ "In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). ", "The word2vec were also trained using different parameters, such as different dimension of features. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The first model, averaging word vectors ( INLINEFORM0 ), is to average the vectors in word sequence INLINEFORM1 . The main process in this model is to learn the word embedding matrix INLINEFORM2 :", "INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 (4)", "where INLINEFORM0 is the word embedding for word INLINEFORM1 , which is learned by the classical word2vec algorithm BIBREF16 .", "The third model is constructed for the purpose of improving classification results for a certain category. In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). After obtaining the word vectors via INLINEFORM4 , the same scheme was used to average the vectors in one sentence as in the model INLINEFORM5 ." 
], "extractive_spans": [ "word2vec", "Sentiment-Specific Word Embedding" ], "free_form_answer": "", "highlighted_evidence": [ "The main process in this model is to learn the word embedding matrix INLINEFORM2 :\n\nINLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 (4)\n\nwhere INLINEFORM0 is the word embedding for word INLINEFORM1 , which is learned by the classical word2vec algorithm BIBREF16 .", " In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 )." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "010da767b9ad52488cfa4a0319581159f794d159", "5e6f7ebe439345b36064b9148db323ba44ec4377", "f535b99ee28dc47b0f0c84bec4d579863bc6a296" ], "answer": [ { "evidence": [ "Argumentative Zoning Corpus ( INLINEFORM0 corpus) consists of 80 AZ INLINEFORM1 annotated conference articles in computational linguistics, originally drawn from the Cmplg arXiv. . After Concatenating sub-sentences, 7,347 labeled sentences were obtained." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Argumentative Zoning Corpus ( INLINEFORM0 corpus) consists of 80 AZ INLINEFORM1 annotated conference articles in computational linguistics, originally drawn from the Cmplg arXiv. . After Concatenating sub-sentences, 7,347 labeled sentences were obtained." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Training Dataset", "INLINEFORM0 collection. ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which 622,144 sentences were generated after filtering out sentences with lower quality.", "INLINEFORM0 collection contains 6,778 sentences, extracted from the titles and abstracts of publications provided by WEB OF SCIENCE .", "Test Dataset", "Argumentative Zoning Corpus ( INLINEFORM0 corpus) consists of 80 AZ INLINEFORM1 annotated conference articles in computational linguistics, originally drawn from the Cmplg arXiv. . After Concatenating sub-sentences, 7,347 labeled sentences were obtained." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Training Dataset\nINLINEFORM0 collection. ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which 622,144 sentences were generated after filtering out sentences with lower quality.\n\nINLINEFORM0 collection contains 6,778 sentences, extracted from the titles and abstracts of publications provided by WEB OF SCIENCE .\n\nTest Dataset\nArgumentative Zoning Corpus ( INLINEFORM0 corpus) consists of 80 AZ INLINEFORM1 annotated conference articles in computational linguistics, originally drawn from the Cmplg arXiv. . After Concatenating sub-sentences, 7,347 labeled sentences were obtained." 
], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "0d0e0246e5dda9087b43b3a69181602ddeb241d6", "1a59568e1bdc1d873498303e01f031a885cd3bfc", "7a1df0e99975c3667e7cbcfc843c1eadb0c6b48f" ], "answer": [ { "evidence": [ "In this study, sentence embeddings were learned from large text corpus as features to classify sentences into seven categories in the task of AZ. Three models were explored to obtain the sentence vectors: averaging the vectors of the words in one sentence, paragraph vectors and specific word vectors." ], "extractive_spans": [ "sentence embeddings were learned from large text corpus as features to classify sentences into seven categories in the task of AZ. Three models were explored to obtain the sentence vectors: averaging the vectors of the words in one sentence, paragraph vectors and specific word vectors" ], "free_form_answer": "", "highlighted_evidence": [ "In this study, sentence embeddings were learned from large text corpus as features to classify sentences into seven categories in the task of AZ. Three models were explored to obtain the sentence vectors: averaging the vectors of the words in one sentence, paragraph vectors and specific word vectors." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this study, sentence embeddings were learned from large text corpus as features to classify sentences into seven categories in the task of AZ. Three models were explored to obtain the sentence vectors: averaging the vectors of the words in one sentence, paragraph vectors and specific word vectors.", "The second model, INLINEFORM0 , is aiming at training paragraph vectors. It is also called distributed memory model of paragraph vectors (PV-DM) BIBREF26 , which is an extension of word2vec. In comparison with the word2vec framework, the only change in PV-DM is in the equation (3), where INLINEFORM1 is constructed from INLINEFORM2 and INLINEFORM3 , where matrix INLINEFORM4 is the word vector and INLINEFORM5 holds the paragraph vectors in such a way that every paragraph is mapped to a unique vector represented by a column in matrix INLINEFORM6 .", "The third model is constructed for the purpose of improving classification results for a certain category. In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). After obtaining the word vectors via INLINEFORM4 , the same scheme was used to average the vectors in one sentence as in the model INLINEFORM5 ." ], "extractive_spans": [], "free_form_answer": "Averaging the vectors of the words in a sentence, directly learning paragraph vectors using PV-DM, taking average of the SSWE of the words in a sentence.", "highlighted_evidence": [ "Three models were explored to obtain the sentence vectors: averaging the vectors of the words in one sentence, paragraph vectors and specific word vectors.", "The second model, INLINEFORM0 , is aiming at training paragraph vectors. It is also called distributed memory model of paragraph vectors (PV-DM) BIBREF26 , which is an extension of word2vec. 
In comparison with the word2vec framework, the only change in PV-DM is in the equation (3), where INLINEFORM1 is constructed from INLINEFORM2 and INLINEFORM3 , where matrix INLINEFORM4 is the word vector and INLINEFORM5 holds the paragraph vectors in such a way that every paragraph is mapped to a unique vector represented by a column in matrix INLINEFORM6 .", "The third model is constructed for the purpose of improving classification results for a certain category. In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). After obtaining the word vectors via INLINEFORM4 , the same scheme was used to average the vectors in one sentence as in the model INLINEFORM5 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The first model, averaging word vectors ( INLINEFORM0 ), is to average the vectors in word sequence INLINEFORM1 . The main process in this model is to learn the word embedding matrix INLINEFORM2 :", "The second model, INLINEFORM0 , is aiming at training paragraph vectors. It is also called distributed memory model of paragraph vectors (PV-DM) BIBREF26 , which is an extension of word2vec. In comparison with the word2vec framework, the only change in PV-DM is in the equation (3), where INLINEFORM1 is constructed from INLINEFORM2 and INLINEFORM3 , where matrix INLINEFORM4 is the word vector and INLINEFORM5 holds the paragraph vectors in such a way that every paragraph is mapped to a unique vector represented by a column in matrix INLINEFORM6 .", "The third model is constructed for the purpose of improving classification results for a certain category. In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 ). After obtaining the word vectors via INLINEFORM4 , the same scheme was used to average the vectors in one sentence as in the model INLINEFORM5 ." ], "extractive_spans": [ " average the vectors in word sequence", "training paragraph vectors", "Sentiment-Specific Word Embedding" ], "free_form_answer": "", "highlighted_evidence": [ "The first model, averaging word vectors ( INLINEFORM0 ), is to average the vectors in word sequence INLINEFORM1 .", "The second model, INLINEFORM0 , is aiming at training paragraph vectors. It is also called distributed memory model of paragraph vectors (PV-DM) BIBREF26 , which is an extension of word2vec.", "The third model is constructed for the purpose of improving classification results for a certain category. In this study specifically, the optimization task was focused on identifying the category INLINEFORM0 . In this study, INLINEFORM1 specific word embeddings were trained ( INLINEFORM2 ) inspired by Tang et al. BIBREF21 's model: Sentiment-Specific Word Embedding (unified model: INLINEFORM3 )." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "91093164af771628d5763e1c2eccba97f6bb1894", "aa8cb2e7b9649dc54176506db4ff52db527a748f", "d65fe76f539c34b8d7a9a1e49766ca69c953f04b" ], "answer": [ { "evidence": [ "One of the crucial tasks for researchers to carry out scientific investigations is to detect existing ideas that are related to their research topics. Research ideas are usually documented in scientific publications. Normally, there is one main idea stated in the abstract, explicitly presenting the aim of the paper. There are also other sub-ideas distributed across the entire paper. As the growth rate of scientific publication has been rising dramatically, researchers are overwhelmed by the explosive information. It is almost impossible to digest the ideas contained in the documents emerged everyday. Therefore, computer assisted technologies such as document summarization are expected to play a role in condensing information and providing readers with more relevant short texts. Unlike document summarization from news circles, where the task is to identify centroid sentences BIBREF0 or to extract the first few sentences of the paragraphs BIBREF1 , summarization of scientific articles involves extra text processing stage BIBREF2 . After highest ranked texts are extracted, rhetorical status analysis will be conducted on the selected sentences. Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences. The results of AZ provide readers with general discourse context from which the scientific ideas could be better linked, compared and analyzed. For example, given a specific task, which sentences should be shown to the reader is related to the features of the sentences. For the task of identifying a paper's unique contribution, sentences expressing research purpose should be retrieved with higher priority. For comparing ideas, statements of comparison with other works would be more useful. Teufel et. al. BIBREF2 introduced their rhetorical annotation scheme which takes into account of the aspects of argumentation, metadiscourse and relatedness to other works. Their scheme resulted seven categories of rhetorical status and the categories are assigned to full sentences. Examples of human annotated sentences with their rhetorical status are shown in Table. TABREF2 . The seven categories are aim, contrast, own, background, other, basis and textual." ], "extractive_spans": [ " Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences" ], "free_form_answer": "", "highlighted_evidence": [ "Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "One of the crucial tasks for researchers to carry out scientific investigations is to detect existing ideas that are related to their research topics. Research ideas are usually documented in scientific publications. Normally, there is one main idea stated in the abstract, explicitly presenting the aim of the paper. There are also other sub-ideas distributed across the entire paper. 
As the growth rate of scientific publication has been rising dramatically, researchers are overwhelmed by the explosive information. It is almost impossible to digest the ideas contained in the documents emerged everyday. Therefore, computer assisted technologies such as document summarization are expected to play a role in condensing information and providing readers with more relevant short texts. Unlike document summarization from news circles, where the task is to identify centroid sentences BIBREF0 or to extract the first few sentences of the paragraphs BIBREF1 , summarization of scientific articles involves extra text processing stage BIBREF2 . After highest ranked texts are extracted, rhetorical status analysis will be conducted on the selected sentences. Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences. The results of AZ provide readers with general discourse context from which the scientific ideas could be better linked, compared and analyzed. For example, given a specific task, which sentences should be shown to the reader is related to the features of the sentences. For the task of identifying a paper's unique contribution, sentences expressing research purpose should be retrieved with higher priority. For comparing ideas, statements of comparison with other works would be more useful. Teufel et. al. BIBREF2 introduced their rhetorical annotation scheme which takes into account of the aspects of argumentation, metadiscourse and relatedness to other works. Their scheme resulted seven categories of rhetorical status and the categories are assigned to full sentences. Examples of human annotated sentences with their rhetorical status are shown in Table. TABREF2 . The seven categories are aim, contrast, own, background, other, basis and textual." ], "extractive_spans": [ "process of assigning rhetorical status to the extracted sentences" ], "free_form_answer": "", "highlighted_evidence": [ "Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "One of the crucial tasks for researchers to carry out scientific investigations is to detect existing ideas that are related to their research topics. Research ideas are usually documented in scientific publications. Normally, there is one main idea stated in the abstract, explicitly presenting the aim of the paper. There are also other sub-ideas distributed across the entire paper. As the growth rate of scientific publication has been rising dramatically, researchers are overwhelmed by the explosive information. It is almost impossible to digest the ideas contained in the documents emerged everyday. Therefore, computer assisted technologies such as document summarization are expected to play a role in condensing information and providing readers with more relevant short texts. Unlike document summarization from news circles, where the task is to identify centroid sentences BIBREF0 or to extract the first few sentences of the paragraphs BIBREF1 , summarization of scientific articles involves extra text processing stage BIBREF2 . After highest ranked texts are extracted, rhetorical status analysis will be conducted on the selected sentences. 
Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences. The results of AZ provide readers with general discourse context from which the scientific ideas could be better linked, compared and analyzed. For example, given a specific task, which sentences should be shown to the reader is related to the features of the sentences. For the task of identifying a paper's unique contribution, sentences expressing research purpose should be retrieved with higher priority. For comparing ideas, statements of comparison with other works would be more useful. Teufel et. al. BIBREF2 introduced their rhetorical annotation scheme which takes into account of the aspects of argumentation, metadiscourse and relatedness to other works. Their scheme resulted seven categories of rhetorical status and the categories are assigned to full sentences. Examples of human annotated sentences with their rhetorical status are shown in Table. TABREF2 . The seven categories are aim, contrast, own, background, other, basis and textual." ], "extractive_spans": [ "a process of assigning rhetorical status to the extracted sentences" ], "free_form_answer": "", "highlighted_evidence": [ "Rhetorical sentence classification, also known as argumentative zoning (AZ) BIBREF3 , is a process of assigning rhetorical status to the extracted sentences." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "question": [ "What metric is considered?", "What hand-crafted features are used?", "What word embeddings are used?", "Do they annotate their own dataset?", "How are the sentence embeddings generated?", "What is argumentative zoning?" ], "question_id": [ "d4456e9029fcdcb6e0149dd8f57b77d16ead1bc4", "d0b967bfca2039c7fb05b931c8b9955f99a468dc", "31e6062ba45d8956791e1b86bad7efcb6d1b191a", "38b29b0dcb87868680f9934af71ef245ebb122e4", "6e134d51a795c385d72f38f36bca4259522bcf51", "0778cbbd093f8b779f7cf26302b2a8e081ccfb40" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "word2vec", "word2vec", "word2vec", "word2vec", "word2vec", "word2vec" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1. Examples of annotated sentences with their rhetorical status", "Table 2. Characteristics of word embeddings based on different model and dataset", "Table 3. Distribution of rhetorical categories", "Table 4. Performance of sentence classification per category I (precision/recall/Fmeasure)", "Table 5. Performance of sentence classification per category II (precision/recall/Fmeasure)" ], "file": [ "2-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png" ] }
[ "What metric is considered?", "How are the sentence embeddings generated?" ]
[ [ "1703.10152-8-Table4-1.png", "1703.10152-Classification and evaluation-1" ], [ "1703.10152-Models-1", "1703.10152-Models-4", "1703.10152-Models-5", "1703.10152-Models-0" ] ]
[ "Precision, recall and F-measure.", "Averaging the vectors of the words in a sentence, directly learning paragraph vectors using PV-DM, taking average of the SSWE of the words in a sentence." ]
31
1907.04072
Multitask Learning for Blackmarket Tweet Detection
Online social media platforms have made the world more connected than ever before, thereby making it easier for everyone to spread their content across a wide variety of audiences. Twitter is one such popular platform where people publish tweets to spread their messages to everyone. Twitter allows users to Retweet other users' tweets in order to broadcast it to their network. The more retweets a particular tweet gets, the faster it spreads. This creates incentives for people to obtain artificial growth in the reach of their tweets by using certain blackmarket services to gain inorganic appraisals for their content. In this paper, we attempt to detect such tweets that have been posted on these blackmarket services in order to gain artificially boosted retweets. We use a multitask learning framework to leverage soft parameter sharing between a classification and a regression based task on separate inputs. This allows us to effectively detect tweets that have been posted to these blackmarket services, achieving an F1-score of 0.89 when classifying tweets as blackmarket or genuine.
{ "paragraphs": [ [ "Twitter is an important medium for people and companies to promote their products, ideologies, or to reach out and connect with other people in the form of micro-conversations. Twitter provides users with multiple ways of showing their support towards a tweet in the form of Likes, Retweets and Quotes. These content-level appraisals help in spreading the content further and act as a measure of users' agreement on the value of the content. The count of these content-level appraisals therefore determines the influence of a particular tweet and its author. This has led to the creation of certain blackmarket services such as FreeFollowers (https://www.freefollowers.io/), Like4Like (https://like4like.org/), YouLikeHits (https://www.youlikehits.com/), JustRetweet (http://justretweet.com), which allow users to post their tweets in order to gain inorganic appraisals in the form of Likes, Retweets and Quotes BIBREF0 , BIBREF1 .", "There has been a lot of research on the detection of fraudulent activities on Twitter such as detection of bots BIBREF2 , fake followers BIBREF3 , collusive retweeters BIBREF0 , BIBREF1 , and social spam BIBREF4 . However, the problem of detecting tweets that are posted to these blackmarket services has not been tackled before. The tweets submitted to blackmarket services are not necessarily spam or promotional tweets. As we observe in our data, there is some intersection between spammers and blackmarket users since spammers may also try to gain more appraisals by using these services. However, existing spam tweet detection approaches do not work that well in identifying individual tweets as blackmarket tweets (as shown in Table TABREF29 ).", "Table TABREF1 shows a sample tweet that was posted on a blackmarket service and another sample tweet that was not. In this paper, we make the first attempt to detect tweets that are posted on blackmarket services. Our aim is to build a system that can flag tweets soon after they are posted, which is why we do not consider temporal features such as the number of retweets or likes that a tweet keeps gaining over time. Instead, we only rely on the features and representations extracted from the content of the tweets.", "We curate a novel dataset of tweets that have been posted to blackmarket services, and a corresponding set of tweets that haven't. We propose a multitask learning approach to combine properties from the characterization of blackmarket tweets via traditional feature extraction, with a deep learning based feature representation of the tweets. We train a neural network which takes as input both the traditional feature representation as well as the deep learning based representation generated using the Tweet2Vec model BIBREF5 , and utilizes cross-stitch units BIBREF6 to learn an optimal combination of shared and task-specific knowledge via soft parameter sharing.", "We show that our multitask learning approach outperforms Twitter spam detection approaches, as well as state-of-the-art classifiers by 14.1% (in terms of F1-score), achieving an F1-score of 0.89 on our dataset. In short, the contributions of the paper are threefold: a new dataset, characterization of blackmarket tweets, and a novel multitask learning framework to detect tweets posted on blackmarket services." ], [ "Several studies have focused on detecting malicious activities such as spam, fake content and blackmarket services. Here, we mention some of these studies which we deem as pertinent to our work. 
We also mention the prior usage of multitask learning in a similar context.", "Spam/Fake Tweet Detection: The problem of fake and spam tweets is not new. Many solutions have been proposed to tackle this problem. Yardi et al. BIBREF7 showed that the network structure of spammers and non-spammers is different, and also tracked the life cycle of endogenous Twitter content. Chen et al. BIBREF8 conducted a comprehensive evaluation of several machine learning algorithms for timely detection of spam. Fake tweets, on the other hand, are the tweets which spread misinformation. Serrano et al. BIBREF9 provided an extensive survey on fake tweet detection. Unlike spam tweets, fake tweets are mostly associated with major events, and the accounts that produce these fake contents are mostly created during these events BIBREF10 , BIBREF11 .", "Blackmarket Services: Blackmarket services have recently received considerable attention due to the increase in the number of users using them. Analysis of such underground services was first documented in BIBREF12 where the authors examined the properties of social networks formed for blackmarket services. Liu et al. BIBREF13 proposed DetectVC which incorporates graph structure and the prior knowledge from the collusive followers to solve a voluntary following problem. Motoyama et al. BIBREF12 provided a detailed analysis of six underground forms, examining the properties of those social network structures that are formed and services that are being exchanged. Dutta et al. BIBREF0 investigated the customers involved in gaining fake retweets. Chetan et al. BIBREF1 proposed CoReRank, an unsupervised model and CoReRank+, a semi-supervised model which extends CoReRank to detect collusive users involved in retweeting activities.", "Multitask Learning: Multitask learning is used whenever we have two or more similar tasks to optimise together. Most of the related studies on multitask learning are based on how the tasks can be better learned together. Zhang et al. BIBREF14 classified multitask learning models into five types and reported the characteristics of each approach. Cross-Stitch units were introduced by Misra et al. BIBREF6 , which can learn an optimal combination of shared and task-specific representations. Gupta et al. BIBREF15 proposed GIRNet, a unified position-sensitive multitask recurrent neural network architecture." ], [ "blackAs studied in BIBREF0 , there are two prevalent models of blackmarket services, namely premium and freemium. Premium services are only available upon payment from customers, whereas freemium services offer both paid and unpaid options. The unpaid services are available to the users when they contribute to the blackmarket by providing appraisals for other users' content. Here, we mainly concentrate on freemium services. The freemium services can be further divided into three categories: (i) social-share services (request customers to spread the content on social media), (ii) credit-based services (customers earn credits by providing appraisals, and can then use the credits earned to gain appraisals for their content), and (iii) auto-time retweet services (customers need to provide their Twitter access tokens, upon which their content is retweeted 10-20 times for each 15-minute window)." ], [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. 
We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." ], [ "In total, we collected INLINEFORM0 tweets posted on blackmarket sites. Out of these, we removed non-English tweets and tweets with a length of less than two characters. Finally, we were left with INLINEFORM1 blackmarket tweets. Then, from the timelines of the authors of these tweets, we randomly sampled INLINEFORM2 genuine tweets that were not posted on these blackmarket sites during the same period. Both the blackmarket and genuine tweets were also inspected manually." ], [ "To further understand the purpose of the collusive users behind the usage of blackmarket services, we annotated blackmarket tweets in our test set into a few discrete categories. The statistics of the categories are as follows: Promotional - 43.75%, Entertainment - 15.89%, Spam - 13.57%, News - 7.86%, Politics - 4.82%, and Others - 14.11%. We considered a tweet as Promotional only if the tweet attempts to promote a website/product. Most of the tweets in the Others category include personal tweets without any call to action or promotion, but this also can be considered as self-promotion. We further noticed that there were about 5% of normal tweets on concerning issues such as “pray for ...\", indicating that blackmarket services are also being used for non-business purposes. 99% of tweets other than the tweets from Others class included at least one URL, and 100% of the URLs in the blackmarket tweets were shortened." ], [ "This section describes the features and tweet representation methodology, and the proposed model to solve the problem." ], [ "We use the following features based on the tweet content:", " INLINEFORM0 : Number of user mentions in the tweet", " INLINEFORM0 : Number of hashtags in the tweet", " INLINEFORM0 : Number of URLs in the tweet", " INLINEFORM0 : Count of media content in the tweet", " INLINEFORM0 : Is the tweet a reply to another tweet?", " INLINEFORM0 : Number of special characters (non alpha-numeric) in the tweet", " INLINEFORM0 : Length of the content (number of characters) in the tweet", " INLINEFORM0 : Sentiment score of the tweet obtained using SentiWordNet, ranging from -1 (negative) to +1 (positive)", " INLINEFORM0 : Number of noun words in the tweet", " INLINEFORM0 : Number of adjective words in the tweet", " INLINEFORM0 : Number of pronoun words in the tweet", " INLINEFORM0 : Number of verbs in the tweet" ], [ "We use the Tweet2Vec model BIBREF5 to generate a vector-space representation of each of the tweets. Tweet2Vec is a character-level deep learning based encoder for social media posts trained on the task of predicting the associated hashtags. It considers the assumption that posts with the same hashtags should have similar representation. It uses a bi-directional Gated Recurrent Unit (Bi-GRU) for learning the tweet representation. To get the representation for a particular tweet, the model combines the final GRU states by going through a forward and backward pass over the entire sequence.", "We use the pre-trained model provided by Dhingra et al. BIBREF5 , which is trained on a dataset of 2 million tweets, to get the tweet representation. 
This gives us a 500-dimensional representation of each tweet, based on its content." ], [ "The architecture of our model is shown in Figure FIGREF21 . We adopt multitask learning to develop our model. The primary task is set as a binary classification problem, wherein the tweets are classified as blackmarket or genuine. The secondary task is set as a regression problem, wherein the number of likes and retweets that a tweet will gain after five days of being posted is predicted.", "The model takes a different input feature vector for each of the tasks.", "Primary Input: The primary task takes as input the tweet content representation generated by the Tweet2Vec model, which is a 500-dimensional vector for each of the tweets, as described above.", "Secondary Input: The secondary task takes as input the vector of tweet content features, which is a 12-dimensional vector, as described above.", "As shown in Figure FIGREF21 , the inputs are fed into separate fully connected (FC) layers with cross-stitch units stacked between successive layers. The cross-stitch units find the best shared representations using linear combinations, and learn the optimal linear combinations for a given set of tasks. The cross-stitch units essentially allow us to unify two separate networks for two separate tasks into a single network wherein each layer of the network shares the parameters from the other network using linear combinations. The network also employs batch-normalization and dropout to avoid overfitting.", "The output layer of the first task classifies tweets as blackmarket or genuine using a cross entropy loss function. The output layer of the second task predicts the numerical values for the number of retweets and likes that a tweet will gain after five days of being posted by using a Mean Squared Error (MSE) loss. Note that the performance of the secondary task is not of importance to us, however, the secondary task helps the primary task. Therefore, we focus on the performance of the model in the primary task during training and evaluation." ], [ "Since there is no prior work on blackmarket tweet detection, we chose state-of-the-art Twitter spam detection methods as baselines, along with training some state-of-the-art classifiers on the features we generated for our dataset.", "Spam Detection 1: We use the Twitter spam detection method proposed by Wu et al. BIBREF4 . It uses the Word2Vec and Doc2Vec models to encode the tweets into a vector representation, which is fed to a MLP classifier in order to classify the tweets as spam or not-spam. We use the same methodology to classify tweets in our dataset as blackmarket or genuine.", "Spam Detection 2: For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 . They proposed flat and hierarchical classifications approaches with few of the standard set of features which can classify spam, fake and legitimate tweets. We use their experimental setup with Random Forest classifier on our dataset.", "We generate a combined feature vector by concatenating the tweet content features and the encoding generated by Tweet2Vec. This feature vector is then fed to state-of-the-art machine learning classifiers - Random Forest (RF), Multi-layer Perceptron (MLP), and Support Vector Machine (SVM)." ], [ "We consider the problem as a binary classification problem, where the tweets are classified into two classes - blackmarket and genuine. The performance of each competing method is measured using the following metrics: Precision, Recall, and F1-score. 
The primary output of the multitask learning model gives us the classification result, which is what we use to evaluate our model. All hyperparameters of the models are appropriately tuned. The average results are reported after 5-fold cross-validation." ], [ "As shown in Table TABREF29 , we observe that the multitask learning based model which uses the Tweet2Vec encoding and the content features as inputs to two separate tasks outperforms all the baselines, achieving an F1-score of 0.89 for classification of tweets as Blackmarket or Genuine. The best baseline is Spam Detector 2 which achieves an F1-score of 0.77.", "We analyse the false negatives generated by our model to find which type of tweets the model finds difficult to classify. The percentage of each class in the false negatives is as follows: Promotional - 23.29%, Politics - 10.96%, Entertainment - 21.92%, News - 9.59%, Spam - 5.48%, and Others - 28.77%. We observe that the tweets belonging to the category Others are difficult to classify since they are similar to genuine tweets in terms of content. The results also indicate that our model is robust while classifying blackmarket tweets belonging to the following categories – News, Spam and Politics." ], [ "In this paper, we presented a novel multitask learning approach to solve the problem of identification of tweets that are submitted to blackmarket services, without the use of any temporal features. To sum up, our contributions are three-fold: (i) Characterization: We proposed 12 tweet content based features that are useful in the task of identifying blackmarket tweets, (ii) Classification: We developed a novel Multitask Learning based model to classify tweets as blackmarket tweets or genuine tweets, (iii) Dataset: We collected a dataset consisting of tweets that have been submitted to blackmarket services in order to gain inorganic appraisals." ], [ "The work was partially funded by DST (ECR/2017/00l691, DST/INT/UK/P158/2017), Ramanujan Fellowship, and the Infosys Centre of AI, IIIT-Delhi, India." ] ], "section_name": [ "Introduction", "Related Work", "Blackmarket Services", "Data Collection", "Dataset Description", "Analysis of Blackmarket Tweets", "Proposed Approach", "Tweet Content Features", "Tweet Content Representation", "Proposed Model", "Baseline Methods", "Evaluation Setup", "Experimental Results", "Conclusion", "Acknowledgements" ] }
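Editor's note: below is a minimal PyTorch sketch of the soft parameter sharing described in the Proposed Model section, using a cross-stitch unit between the classification branch (500-dimensional Tweet2Vec input) and the regression branch (12-dimensional content-feature input). This is a simplified scalar-coefficient variant of the cross-stitch formulation; the hidden width of 64, the initial alpha values and the batch size are illustrative assumptions, not values taken from the paper.

```python
# Simplified cross-stitch unit for soft parameter sharing between the two tasks.
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable 2x2 mixing matrix, initialised near identity so each task
        # starts out relying mostly on its own representation.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, x_cls, x_reg):
        out_cls = self.alpha[0, 0] * x_cls + self.alpha[0, 1] * x_reg
        out_reg = self.alpha[1, 0] * x_cls + self.alpha[1, 1] * x_reg
        return out_cls, out_reg

# Illustrative wiring: both task-specific layers project to the same hidden width
# so their activations can be linearly combined by the cross-stitch unit.
fc_cls = nn.Linear(500, 64)   # Tweet2Vec representation -> hidden (classification task)
fc_reg = nn.Linear(12, 64)    # tweet content features  -> hidden (regression task)
stitch = CrossStitchUnit()
h_cls, h_reg = stitch(torch.relu(fc_cls(torch.randn(8, 500))),
                      torch.relu(fc_reg(torch.randn(8, 12))))
```

In the full model, several fully connected layers would be stacked with a cross-stitch unit between successive layers, ending in a cross-entropy head for the blackmarket-vs-genuine label and an MSE head for the retweet and like counts.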
{ "answers": [ { "annotation_id": [ "1a80fc28d26e6e66529301781e8dcc3d803e8e53", "482050582df191b621b95d688531d15eb8e179ec", "6f93e639ad2b3bf539770ad33a9aaa8ecf7765b4" ], "answer": [ { "evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." ], "extractive_spans": [ "crawled two blackmarket sites", "used Twitter's REST API" ], "free_form_answer": "", "highlighted_evidence": [ "We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." ], "extractive_spans": [], "free_form_answer": "By crawling YouLikeHits and Like4Like sites and then using Twitter's REST API", "highlighted_evidence": [ "We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019.", "We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets.", "We used Twitter's REST API to collect the tweet objects of these tweets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." ], "extractive_spans": [ "We used Twitter's REST API" ], "free_form_answer": "", "highlighted_evidence": [ "We used Twitter's REST API to collect the tweet objects of these tweets." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "471bc581f837bc36e8337013a4181094a54536ff", "5d28537e058a6b4f31348ad57dc0d9876531507c", "a93b0d524ae09c72ff5996e845e544d9c4dc5085" ], "answer": [ { "evidence": [ "Since there is no prior work on blackmarket tweet detection, we chose state-of-the-art Twitter spam detection methods as baselines, along with training some state-of-the-art classifiers on the features we generated for our dataset.", "Spam Detection 1: We use the Twitter spam detection method proposed by Wu et al. BIBREF4 . It uses the Word2Vec and Doc2Vec models to encode the tweets into a vector representation, which is fed to a MLP classifier in order to classify the tweets as spam or not-spam. We use the same methodology to classify tweets in our dataset as blackmarket or genuine.", "Spam Detection 2: For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 . They proposed flat and hierarchical classifications approaches with few of the standard set of features which can classify spam, fake and legitimate tweets. We use their experimental setup with Random Forest classifier on our dataset.", "We generate a combined feature vector by concatenating the tweet content features and the encoding generated by Tweet2Vec. This feature vector is then fed to state-of-the-art machine learning classifiers - Random Forest (RF), Multi-layer Perceptron (MLP), and Support Vector Machine (SVM)." ], "extractive_spans": [], "free_form_answer": " spam detection method proposed by Wu et al. BIBREF4 , spam detection method proposed by Rajdev et. al. BIBREF11, feature vector by concatenating the tweet content features with Random Forest, feature vector by concatenating the tweet content features with Multi-layer Perception and feature vector by concatenating the tweet content features with Support Vector Machine.", "highlighted_evidence": [ "Since there is no prior work on blackmarket tweet detection, we chose state-of-the-art Twitter spam detection methods as baselines, along with training some state-of-the-art classifiers on the features we generated for our dataset.\n\nSpam Detection 1: We use the Twitter spam detection method proposed by Wu et al. BIBREF4 . It uses the Word2Vec and Doc2Vec models to encode the tweets into a vector representation, which is fed to a MLP classifier in order to classify the tweets as spam or not-spam. We use the same methodology to classify tweets in our dataset as blackmarket or genuine.\n\nSpam Detection 2: For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 . They proposed flat and hierarchical classifications approaches with few of the standard set of features which can classify spam, fake and legitimate tweets. We use their experimental setup with Random Forest classifier on our dataset.\n\nWe generate a combined feature vector by concatenating the tweet content features and the encoding generated by Tweet2Vec. This feature vector is then fed to state-of-the-art machine learning classifiers - Random Forest (RF), Multi-layer Perceptron (MLP), and Support Vector Machine (SVM)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Spam Detection 1: We use the Twitter spam detection method proposed by Wu et al. BIBREF4 . 
It uses the Word2Vec and Doc2Vec models to encode the tweets into a vector representation, which is fed to a MLP classifier in order to classify the tweets as spam or not-spam. We use the same methodology to classify tweets in our dataset as blackmarket or genuine.", "Spam Detection 2: For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 . They proposed flat and hierarchical classifications approaches with few of the standard set of features which can classify spam, fake and legitimate tweets. We use their experimental setup with Random Forest classifier on our dataset." ], "extractive_spans": [ "Wu et al. BIBREF4", "Rajdev et. al. BIBREF11" ], "free_form_answer": "", "highlighted_evidence": [ "We use the Twitter spam detection method proposed by Wu et al. BIBREF4 .", "For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Spam Detection 1: We use the Twitter spam detection method proposed by Wu et al. BIBREF4 . It uses the Word2Vec and Doc2Vec models to encode the tweets into a vector representation, which is fed to a MLP classifier in order to classify the tweets as spam or not-spam. We use the same methodology to classify tweets in our dataset as blackmarket or genuine.", "Spam Detection 2: For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 . They proposed flat and hierarchical classifications approaches with few of the standard set of features which can classify spam, fake and legitimate tweets. We use their experimental setup with Random Forest classifier on our dataset." ], "extractive_spans": [], "free_form_answer": "Word2Vec and Doc2Vec to encode the tweets, then MLP classifier; Random Forest classifier on a standard set of features", "highlighted_evidence": [ " We use the Twitter spam detection method proposed by Wu et al. BIBREF4 . It uses the Word2Vec and Doc2Vec models to encode the tweets into a vector representation, which is fed to a MLP classifier in order to classify the tweets as spam or not-spam. We use the same methodology to classify tweets in our dataset as blackmarket or genuine.", " For baseline 2, we consider the approach proposed by Rajdev et. al. BIBREF11 . They proposed flat and hierarchical classifications approaches with few of the standard set of features which can classify spam, fake and legitimate tweets. We use their experimental setup with Random Forest classifier on our dataset." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "01663b8bac6f5e3a366f344894f3188b79b8abe4", "18296e2714014b842e34274c4e087476e189303a", "589c88b8439e2af9913633b813a0b6fc1081f187" ], "answer": [ { "evidence": [ "In total, we collected INLINEFORM0 tweets posted on blackmarket sites. Out of these, we removed non-English tweets and tweets with a length of less than two characters. Finally, we were left with INLINEFORM1 blackmarket tweets. Then, from the timelines of the authors of these tweets, we randomly sampled INLINEFORM2 genuine tweets that were not posted on these blackmarket sites during the same period. Both the blackmarket and genuine tweets were also inspected manually." ], "extractive_spans": [], "free_form_answer": "English", "highlighted_evidence": [ "Out of these, we removed non-English tweets and tweets with a length of less than two characters." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "In total, we collected INLINEFORM0 tweets posted on blackmarket sites. Out of these, we removed non-English tweets and tweets with a length of less than two characters. Finally, we were left with INLINEFORM1 blackmarket tweets. Then, from the timelines of the authors of these tweets, we randomly sampled INLINEFORM2 genuine tweets that were not posted on these blackmarket sites during the same period. Both the blackmarket and genuine tweets were also inspected manually." ], "extractive_spans": [], "free_form_answer": "English", "highlighted_evidence": [ "In total, we collected INLINEFORM0 tweets posted on blackmarket sites. Out of these, we removed non-English tweets and tweets with a length of less than two characters." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In total, we collected INLINEFORM0 tweets posted on blackmarket sites. Out of these, we removed non-English tweets and tweets with a length of less than two characters. Finally, we were left with INLINEFORM1 blackmarket tweets. Then, from the timelines of the authors of these tweets, we randomly sampled INLINEFORM2 genuine tweets that were not posted on these blackmarket sites during the same period. Both the blackmarket and genuine tweets were also inspected manually." ], "extractive_spans": [], "free_form_answer": "English", "highlighted_evidence": [ "In total, we collected INLINEFORM0 tweets posted on blackmarket sites. Out of these, we removed non-English tweets and tweets with a length of less than two characters." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "2a550859df6666abbf3b185bb200271e4d8bf0ac", "75cc416faecfe0a0d49dfde16773bd4e445fb8ae", "dabf601c7b01ccdb18d1499f30c705f945da2a89" ], "answer": [ { "evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." ], "extractive_spans": [ "Credit-based Freemium services" ], "free_form_answer": "", "highlighted_evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." 
], "extractive_spans": [ "Credit-based Freemium services" ], "free_form_answer": "", "highlighted_evidence": [ "\nWe collected data from Credit-based Freemium services because their service model is easy to understand." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We collected data from Credit-based Freemium services because their service model is easy to understand. We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019. We created dummy accounts (after careful IRB approval) on these sites to participate in the platform and recorded Tweet IDs of the tweets that were posted for gaining retweets. We used Twitter's REST API to collect the tweet objects of these tweets. The timelines of the authors of these tweets were also collected, allowing us to find genuine tweets by the same users that have not been posted to these blackmarket sites." ], "extractive_spans": [ "YouLikeHits and Like4Like" ], "free_form_answer": "", "highlighted_evidence": [ " We crawled two blackmarket sites – YouLikeHits and Like4Like, between the period of February and April 2019." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "How did they obtain the tweets?", "What baseline do they compare to?", "What language is explored in this paper?", "What blackmarket services do they look at?" ], "question_id": [ "578add9d3dadf86cd0876d42b03bf0114f83d0e7", "4d5b74499804ea5bc5520beb88d0f9816f67205a", "baec99756b80eec7c0234a08bc2855e6770bcaeb", "46d051b8924ad0ef8cfba9c7b5b84707ee72f26a" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Fig. 1. Architecture of our proposed multitask learning model for the detection of blackmarket tweets.", "TABLE II PERFORMANCE OF THE COMPETING METHODS." ], "file": [ "3-Figure1-1.png", "4-TableII-1.png" ] }
[ "How did they obtain the tweets?", "What baseline do they compare to?", "What language is explored in this paper?" ]
[ [ "1907.04072-Data Collection-0" ], [ "1907.04072-Baseline Methods-3", "1907.04072-Baseline Methods-1", "1907.04072-Baseline Methods-2", "1907.04072-Baseline Methods-0" ], [ "1907.04072-Dataset Description-0" ] ]
[ "By crawling YouLikeHits and Like4Like sites and then using Twitter's REST API", "Word2Vec and Doc2Vec to encode the tweets, then MLP classifier; Random Forest classifier on a standard set of features", "English" ]
32
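A minimal sketch of the combined-feature classification step quoted in the record above: tweet content features are concatenated with a Tweet2Vec-style encoding and fed to the Random Forest, MLP, and SVM classifiers named there. This is not the paper's code; the `content_features` and `tweet2vec_encode` helpers, the feature choices, and the train/test split are illustrative assumptions.

```python
# Illustrative sketch only: placeholder features plus the RF/MLP/SVM classifiers
# named in the record above. Helper functions are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def content_features(tweet):
    # Hand-crafted content features (assumed): length, hashtag/mention/URL counts.
    return np.array([len(tweet), tweet.count("#"), tweet.count("@"), tweet.count("http")], dtype=float)

def tweet2vec_encode(tweet, dim=64):
    # Stand-in for a learned Tweet2Vec-style encoding of the tweet.
    rng = np.random.default_rng(abs(hash(tweet)) % (2 ** 32))
    return rng.normal(size=dim)

def build_features(tweets):
    # Concatenate content features with the tweet encoding, as described above.
    return np.stack([np.concatenate([content_features(t), tweet2vec_encode(t)]) for t in tweets])

def evaluate(tweets, labels):
    X, y = build_features(tweets), np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    for name, clf in [
        ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
        ("SVM", SVC(kernel="rbf")),
    ]:
        clf.fit(X_tr, y_tr)
        print(name, clf.score(X_te, y_te))
```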
1909.10481
Cross-Lingual Natural Language Generation via Pre-Training
In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages. We propose to pretrain the encoder and the decoder of a sequence-to-sequence model under both monolingual and cross-lingual settings. The pre-training objective encourages the model to represent different languages in the shared space, so that we can conduct zero-shot cross-lingual transfer. After the pre-training procedure, we use monolingual data to fine-tune the pre-trained model on downstream NLG tasks. Then the sequence-to-sequence model trained in a single language can be directly evaluated beyond that language (i.e., accepting multi-lingual input and producing multi-lingual output). Experimental results on question generation and abstractive summarization show that our model outperforms the machine-translation-based pipeline methods for zero-shot cross-lingual generation. Moreover, cross-lingual transfer improves NLG performance of low-resource languages by leveraging rich-resource language data. Our implementation and data are available at https://github.com/CZWin32768/xnlg.
{ "paragraphs": [ [ "Learning natural language generation (NLG) models heavily relies on annotated training data. However, most available datasets are collected in a single language (typically English), which restricts deploying the applications to other languages. In this work, we aim at transferring the supervision of a monolingual NLG dataset to unseen languages, so that we can boost performance for the low-resource settings.", "Various methods have been proposed over the years to learn universal cross-lingual word embeddings BIBREF0, BIBREF1, BIBREF2 or sentence encoders BIBREF3, BIBREF4, BIBREF5, which tries to encode multilingual texts into a single shared vector space. Despite achieving promising results on cross-lingual classification problems, cross-lingual pre-trained models purposed for NLG tasks remains relatively understudied.", "The cross-lingual generation problem is challenging due to the following reasons. First, it requires the models to understand multilingual input texts, and generate multilingual target sequences. So both encoder and decoder should be pre-trained together. Second, the many-to-many nature of cross-lingual NLG increases language pairs with the square of the number of languages. Third, the prediction space of cross-lingual NLG is much larger than classification tasks, which makes the knowledge transfer of decoders quite critical.", "Previous work mainly relies on machine translation (MT) systems to map texts to different languages. The first strand of research directly uses MT in a pipeline manner BIBREF6. For example, the input written in other languages is first translated to English, and fed into the NLG model that is trained by English data. Then the generated English text is translated back to the target language. Another strand of work employs MT to generate pseudo training data for other language pairs that are lack of annotations BIBREF7, BIBREF8. However, such methods have to use multiple MT systems, which renders them suffering from error propagation. Moreover, because the pipeline-based methods do not explicitly share the same parameter space across the languages, we can not directly transfer the task-specific supervision to other low-resource languages.", "In this paper, we propose a cross-lingual pre-trained model (named as Xnlg) in order to transfer monolingual NLG supervision to other pre-trained languages by fine-tuning. Specifically, Xnlg shares the same sequence-to-sequence model across languages, and is pre-trained with both monolingual and cross-lingual objectives. The model not only learns to understand multilingual input, but also is able to generate specific languages by conditioning on the encoded semantics. Figure FIGREF2 demonstrates how to use Xnlg to perform cross-lingual transfer for downstream tasks. The proposed model enables us to fine-tune the pre-trained model on monolingual NLG training data, and then evaluate it beyond a single language, including zero-shot cross-lingual generation. Besides, we explore several fine-tuning strategies to make a compromise between cross-lingual ability and task ability. In addition, we introduce two cross-lingual NLG datasets (i.e., question generation, and abstractive summarization) for evaluation, which includes three languages, namely English, Chinese, and French. Experimental results on the NLG tasks show that Xnlg achieves competitive performance compared with the machine-translation-based pipeline model in zero-shot cross-lingual settings." 
], [ "Several previous methods have been proposed for cross-lingual abstractive summarization. BIBREF7 xnhg and BIBREF8 xsummacl use translated documents or summaries as pseudo training data. BIBREF9 ncls incorporate monolingual summarization and machine translation in the training procedure to improve cross-lingual summarization. However, the systems only conduct experiments that generate summaries with different languages from the input language, rather than transferring supervision signals across all language pairs. BIBREF10 kumar2019cross introduce a cross-lingual model for question generation, which uses training data annotated in multiple languages to jointly train a sequence-to-sequence model. In contrast, our method can also be applied to zero-shot settings across languages." ], [ "Various training objectives are designed to pretrain text encoders used for general-purpose language representations, such as language modeling BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, auto-encoding BIBREF16, and machine translation BIBREF17. Apart from pre-training encoders, several pre-trained models BIBREF18, BIBREF19 are proposed for generation tasks. In comparison, our goal is to investigate a pre-training method for cross-lingual NLG tasks." ], [ "Cross-lingual pre-training aims at building universal cross-lingual encoders that can encode multilingual sentences to a shared embedding space. BIBREF20 artetxe2018massively use the sequence encoder of the multilingual translation model BIBREF3 to produce cross-lingual sentence embeddings. However, as shown in the experiments (Section SECREF4), it is difficult to control the target language by directly fine-tuning the pre-trained translation model on downstream NLG tasks. BIBREF4 xnli propose an alignment loss function to encourage parallel sentences to have similar representations. By pre-training BERT BIBREF13 on corpora of multiple languages, it shows a surprising ability to produce cross-lingual representations BIBREF21. More recently, BIBREF5 xlm extend mask language modeling pre-training to cross-lingual settings, which shows significant improvements on cross-lingual text classification and unsupervised machine translation. By comparison, we pretrain both encoder and decoder for cross-lingual generation tasks, rather than only focusing on encoder." ], [ "Xnlg is a pre-trained sequence-to-sequence model, which is based on Transformer BIBREF22. Both the encoder and the decoder are supposed to support multiple languages. Following BIBREF5, we use language tag embeddings to distinguish the source and target languages. Given a sentence and its corresponding language tag, Xnlg encodes the input into vector representations. By conditioning on the encoding vectors and a specific language tag, the decoder generates the output sequence in the target language. Figure FIGREF6 illustrates the pre-training objectives and the pre-training protocol designed for Xnlg." ], [ "The masked language modeling (MLM) BIBREF13 task, also known as the Cloze task BIBREF23, aims at predicting the randomly masked words according to their context. The objective pretrains the bidirectional encoder to obtain contextual representations. Following BIBREF13, we randomly mask 15% of the tokens in a monolingual sentence. For each masked token, we substitute it with a special token M, a random token, or the unchanged token with a probability of 0.8, 0.1, and 0.1, respectively. 
Let $x$ denote a sentence from the monolingual training corpus, and $M_{x}$ the set of randomly masked positions. The monolingual MLM loss is defined as: $\\mathcal{L}_{\\text{MLM}}(x) = -\\sum_{i \\in M_{x}} \\log p( x_i \\mid x_{\\setminus M_{x}})$ where $x_{\\setminus M_{x}}$ is the masked version of input $x$. Notice that language tags are fed into the model for all pre-training tasks." ], [ "We use the denoising auto-encoding (DAE) objective BIBREF24 to pretrain the encoder-decoder attention mechanism. Given sentence $x$ from the monolingual corpus, we use three types of noise to obtain the randomly perturbed text $\\hat{x}$. First, the word order is locally shuffled. Second, we randomly drop tokens of the sentence with a probability of $0.1$. Third, we substitute tokens with the special padding token P with a probability of $0.1$. The pre-training objective is to recover the original sentence $x$ by conditioning on $\\hat{x}$. The DAE loss is computed via: $\\mathcal{L}_{\\text{DAE}}(x) = -\\log p(x \\mid \\hat{x}) = -\\sum_{i=1}^{|x|} \\log p(x_i \\mid \\hat{x}, x_{<i})$ where $x_{<i}$ represents the tokens of previous time steps $x_1,\\cdots ,x_{i-1}$." ], [ "Similar to monolingual MLM, the masked token prediction task can be extended to cross-lingual settings BIBREF5. To be specific, given a parallel corpus, we concatenate the pair of bilingual sentences $(x,y)$ to a whole sequence, and use it as the input of MLM. The language tags are also fed into the model to indicate the languages of tokens. During training, we adopt the same masking strategy as monolingual MLM. Apart from using monolingual context to predict the masked tokens, XMLM encourages the model to utilize the alignment of bilingual sentences, so that the model learns to map cross-lingual texts into a shared vector space. Similar to the monolingual MLM loss, the cross-lingual MLM loss is: $\\mathcal{L}_{\\text{XMLM}}(x,y) = -\\sum_{i \\in M_{x}} \\log p( x_i \\mid x_{\\setminus M_{x}} , y_{\\setminus M_{y}}) -\\sum_{i \\in M_{y}} \\log p( y_i \\mid x_{\\setminus M_{x}} , y_{\\setminus M_{y}})$ where $M_x, M_y$ represent the masked positions of $x$ and $y$, respectively." ], [ "If only DAE is used as the pre-training task for the decoder, we found that the model ignores the target language tag while generating just the same language as the input, caused by the spurious correlation issue BIBREF25. In other words, the DAE loss captures the spurious correlation between the source language tag and the target sentences, but we expect the language of generated sentences can be controlled by the target language tag. To solve the above problem, we use machine translation as the cross-lingual auto-encoding (XAE) task, which decreases mutual information between the target sentences and the source language tag. XAE can be viewed as the multilingual-version DAE task in the sense that both of them recover the sentence by conditioning on the encoded representations. The cross-lingual auto-encoding loss is defined as: $\\mathcal{L}_{\\text{XAE}}(x,y) = -\\log p(y \\mid x) - \\log p(x \\mid y)$ where $(x,y)$ is a pair of sentences in the parallel corpus." ], [ "As shown in Figure FIGREF6(b), we propose a two-stage pre-training protocol for Xnlg. The first stage pretrains the encoding components, where the model learns to encode multilingual sentences to a shared embedding space. We consider using MLM and XMLM as the pre-training tasks. The objective of the first stage is to minimize: $\\mathcal{L}_{1} = \\sum_{(x,y) \\in \\mathcal{D}_{\\textnormal {p}}} \\mathcal{L}_{\\text{XMLM}}(x,y) + \\sum_{x \\in \\mathcal{D}_{\\textnormal {m}}} \\mathcal{L}_{\\text{MLM}}(x)$ where $\\mathcal{D}_{\\textnormal {p}}$ indicates the parallel corpus, and $\\mathcal{D}_{\\textnormal {m}}$ is the monolingual corpus.", "The pre-trained encoder in the first stage enables the model to encode multilingual sentences. 
However, it cannot directly be used in cross-lingual NLG because: 1) encoder-decoder attention is not pre-trained; 2) the decoding algorithm is different between masked language modeling and autoregressive decoding, resulting in the mismatch between pre-training and fine-tuning. Therefore, we conduct decoding pre-training in the second stage by using DAE and XAE as the tasks. Besides, we only update decoder parameters and keep the encoder fixed. The objective of the second stage is to minimize: $\\mathcal{L}_{2} = \\sum_{(x,y) \\in \\mathcal{D}_{\\textnormal {p}}} \\mathcal{L}_{\\text{XAE}}(x,y) + \\sum_{x \\in \\mathcal{D}_{\\textnormal {m}}} \\mathcal{L}_{\\text{DAE}}(x)$" ], [ "In the fine-tuning procedure, let us assume that we only have English training data for downstream NLG tasks. According to whether the target language is English, the directions of NLG can be categorized into two classes: any languages to non-English languages (Any-to-Others), and any languages to English (Any-to-English)." ], [ "Ideally, the model can be fine-tuned towards a new task without losing its cross-lingual ability. However, we observe the catastrophic forgetting phenomenon of target language controllability, if we fine-tune all the model parameters for Any-to-Others NLG. So we keep the decoder and word embeddings frozen and only update the encoder parameters during fine-tuning. In practice, we found that the proposed fine-tuning method prevents the model from only decoding English words for the Any-to-Others setting." ], [ "For the Any-to-English NLG transfer, the decoder always generates English. So we can freeze the encoder parameters, and update the decoder parameters to retain the cross-lingual ability. As an alternative way, we can also fine-tune all the parameters to obtain the best results on the English dataset while having a slight drop in performance." ], [ "We conduct experiments over two cross-lingual NLG downstream tasks, i.e., cross-lingual question generation, and cross-lingual abstractive summarization. We compare Xnlg with state-of-the-art cross-lingual pre-trained models, and machine-translation-based pipelines." ], [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." ], [ "For fine-tuning on downstream NLG tasks, we use Adam optimizer with a learning rate of $5\\times 10^{-6}$. We set the batch size as 16 and 32 for question generation and abstractive summarization, respectively. When the target language is the same as the language of training data, we fine-tune all parameters. 
When the target language is different from the language of training data, we fine-tune the Transformer layers of the encoder. We truncate the input sentences to the first 256 tokens. During decoding, we use beam search with beam size of 3, and limit the length of the target sequence to 80 tokens." ], [ "We evaluate our model on the zero-shot cross-lingual answer-aware question generation task. The goal of question generation (QG) is to generate a question that asks towards the answer with the given passage and the expected answer. In the following experiments, we extend the QG task to the cross-lingual setting. By only using English QG training data, our goal is to generate questions in English or Chinese with the given passage-answer pair in English or Chinese.", "We use SQuAD 1.1 BIBREF30 as the English QG dataset. It is a popular English question answering dataset containing over 100,000 questions and their corresponding annotated passages. Following BIBREF31, we regard the original development set as the test set, and sample 5000 examples from the training data of two datasets as the development sets. For Chinese QG, we follow the default data splits of WebQA BIBREF32. We regard the provided annotated evidence sentences as the input passages instead of entire documents. To construct the input sequence, we view the whole input passage as a single sentence, and concatenate the passage and the answer into one sequence with a special token S between them. During decoding Chinese, we utilize a subset of vocabulary, which is obtained from the passage sentences of the WebQA dataset." ], [ "We first conduct experiments on the supervised English-English QG setting. We compare our model to the following baselines:", "CorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.", "Mp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.", "Xlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM.", "We evaluate models with BLEU-4 (BL-4), ROUGE (RG) and METEOR (MTR) metrics. As shown in Table TABREF16, our model outperforms the baselines, which demonstrates that our pre-trained model provides a good initialization for NLG." ], [ "We conduct experiments on the zero-shot Chinese-Chinese QG task to evaluate the cross-lingual transfer ability. In this task, models are trained with English QG data but evaluated with Chinese QG examples. We include the following models as our baselines:", "Xlm Fine-tuning XLM with the English QG data.", "Pipeline (Xlm) The pipeline of translating input Chinese sentences into English first, then performing En-En-QG with the XLM model, and finally translating back to the Chinese. We use the Transformer as the translator, which is also trained on the MultiUN dataset.", "Pipeline (Xlm) with Google Translator Same to Pipeline (Xlm) but using Google Translator to translate the texts.", "We evaluate models by both automatic evaluation metrics and human experts. The automatic metrics scores are computed by regarding each Chinese character as a token. For human evaluation, we consider three metrics for the generated questions: relatedness, fluency, and correctness, which are represented as integers ranged from 1 to 3. We randomly select 100 passage-answer pairs from the English QG test set, and use the models to generate questions. 
Then we present these examples to three experts to ask for the above scores. In Table TABREF17 and Table TABREF18, we present the results for the zero-shot Zh-Zh-QG. The results of monolingual supervised models are also reported in Table TABREF16 as reference. In the automatic evaluation, our model consistently performs better than baselines in both zero-shot and monolingual supervised setting. In the human evaluation, our model also obtains significant improvements in terms of relatedness and correctness." ], [ "In the zero-shot English-Chinese question generation experiments, we use Xlm and Pipeline (Xlm) as our baselines. Pipeline (Xlm) is a pipeline method that uses En-En-QG with Xlm to generate questions, and then translates the results to Chinese. Because there is no annotations for En-Zh-QG, we perform human evaluation studies for this setting. Table TABREF19 shows the human evaluation results, where our model surpasses all the baselines especially in terms of relatedness and correctness." ], [ "We also conduct experiments for zero-shot Chinese-English question generation, and adopt the same evaluation procedure to En-Zh-QG. Pipeline (Xlm) first translates Chinese input to English, and then conduct En-En-QG with Xlm. As shown in Table TABREF20, human evaluation results indicate that Xnlg achieves significant improvements on the three metrics." ], [ "We conduct experiments on cross-lingual abstractive summarization (AS). AS is the task of converting the input sentences into summaries while preserving the key meanings. For evaluation, we use English/French/Chinese Gigaword to extract the first sentence and the headline of each article, and regard them as input document and predicted summaries, respectively. For each language, we sample 500k/5k/5k examples for training/validation/test." ], [ "In the zero-shot setting, we only use English data for training, and directly evaluate the model on other languages. In Table TABREF22 and Table TABREF23, we present the results for French/Chinese AS, which are evaluated by the ROUGE-1, ROUGE-2 and ROUGE-L metrics. We also report the results of supervised AS in Table TABREF21 for reference. We find that Xnlg outperforms all the baseline models on both French and Chinese AS. Comparing with French, there is a larger gap between baselines and our model on zero-shot Chinese AS, which indicates that the error propagation issue is more serious on distant language pairs." ], [ "We conduct ablation studies for pre-training objectives, and the results can be seen in Table TABREF40. We observe that our model greatly benefits from the DAE objective for the zero-shot Chinese question generation task. The results also demonstrate that combining DAE and XAE can alleviate the spurious correlation issue and improves cross-lingual NLG." ], [ "As shown in Table TABREF41, we use the En-En-QG and Zh-Zh-QG tasks to analyze the effects of using different fine-tuning strategies. It can be observed that fine-tuning encoder parameters, our model obtain an impressive performance for both English and Chinese QG, which shows the strong cross-lingual transfer ability of our model. When fine-tuning all the parameters, the model achieves the best score for English QG, but it suffers a performance drop when evaluating on Chinese QG. We find that fine-tuning decoder hurts cross-lingual decoding, and the model learns to only decodes English words. 
For only fine-tuning decoder, the performance degrades by a large margin for both languages because of the underfitting issue, which indicates the necessity of fine-tuning encoder." ], [ "We examine whether low-resource NLG can benefit from cross-lingual transfer. We consider English as the rich-resource language, and conduct experiments for few-shot French/Chinese AS. Specifically, we first fine-tune Xnlg on the English AS data, and then fine-tune it on the French or Chinese AS data. We compare with the monolingual supervised model that Xnlg is only fine-tuned on the dataset of the target language. As shown in Figure FIGREF49, we can observe that the cross-lingual supervision improves performance for few-shot abstractive summarization. As the training data size becomes larger, the performance of two models is getting closer." ], [ "As shown in Figure FIGREF42, we present some examples generated by Xnlg and the baselines in four directions (En-En, En-Zh, Zh-En, and Zh-Zh). When decoding on an unseen language, Xlm tends to generate random output, because it is not designed for cross-lingual NLG. In terms of the pipeline model, we can observe that it suffers from the error propagation issue, especially when the source and target languages are all different from the training data. For example, when the pipeline model performs Zh-Zh-QG, keywords are translated twice, increasing the risk of mistranslation. In the second example, “atomic bomb” is mistranslated to “nuclear bomb”, resulting in its low correctness. On the contrary, by directly transferring English supervision signals to the other generation directions, the generated questions of Xnlg match the references better than baselines." ], [ "In this paper, we propose a pre-training method for cross-lingual natural language generation (NLG) that can transfer monolingual NLG supervision signals to all pre-trained languages. With the pre-trained model, we achieve zero-shot cross-lingual NLG on several languages by only fine-tuning once. Experimental results show that our model outperforms the machine-translation-based pipeline model on several cross-lingual NLG tasks. For future work, we would like to improve our pre-training method towards the fully unsupervised setting." 
] ], "section_name": [ "Introduction", "Related Work ::: Cross-Lingual NLG", "Related Work ::: Monolingual Pre-Training", "Related Work ::: Cross-Lingual Pre-Training", "Methods", "Methods ::: Pre-Training Tasks ::: Monolingual MLM", "Methods ::: Pre-Training Tasks ::: Denoising Auto-Encoding (DAE)", "Methods ::: Pre-Training Tasks ::: Cross-Lingual MLM (XMLM)", "Methods ::: Pre-Training Tasks ::: Cross-Lingual Auto-Encoding (XAE)", "Methods ::: Pre-Training Protocol", "Methods ::: Fine-Tuning on Downstream NLG Tasks", "Methods ::: Fine-Tuning on Downstream NLG Tasks ::: Fine-Tuning for Any-to-Others NLG", "Methods ::: Fine-Tuning on Downstream NLG Tasks ::: Fine-Tuning for Any-to-English NLG", "Experiments", "Experiments ::: Training Details ::: Pre-Training", "Experiments ::: Training Details ::: Fine-Tuning", "Experiments ::: Question Generation", "Experiments ::: Question Generation ::: English-English Question Generation", "Experiments ::: Question Generation ::: Chinese-Chinese Question Generation", "Experiments ::: Question Generation ::: English-Chinese Question Generation", "Experiments ::: Question Generation ::: Chinese-English Question Generation", "Experiments ::: Abstractive Summarization", "Experiments ::: Abstractive Summarization ::: Zero-Shot Summarization", "Experiments ::: Ablation Studies ::: Effects of Pre-Training", "Experiments ::: Ablation Studies ::: Effects of Fine-Tuning Strategies", "Experiments ::: Ablation Studies ::: Effects of Cross-Lingual Transfer", "Experiments ::: Case Studies", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "2bb60ad5c14731590234adfc159ec26bfab571a4", "82e6e7cd1217db9fc590c3de14e1f1c77488ed57", "f62ac103d33c9b09217721a8f66043aab681325f" ], "answer": [ { "evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." ], "extractive_spans": [ "English", "French", "Chinese" ], "free_form_answer": "", "highlighted_evidence": [ " We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." ], "extractive_spans": [ "English", "Chinese", "French" ], "free_form_answer": "", "highlighted_evidence": [ "We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Related Work ::: Monolingual Pre-Training", "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. 
In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." ], "extractive_spans": [ "English/French/Chinese" ], "free_form_answer": "", "highlighted_evidence": [ "Pre-Training\nWe use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "cb8f0892c5b38d63df5966b2a96027487919c5a9", "d35a0e30ee2de656a825b2d776c10ee36ccc8efa", "df0889629b7a331f9b872784bd0a88093059b296" ], "answer": [ { "evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." 
], "extractive_spans": [ "pre-trained Xnlg", "6-layer decoder" ], "free_form_answer": "", "highlighted_evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." ], "extractive_spans": [], "free_form_answer": "6 transformer layers, each layer containing 1024 hidden units, 8 attention heads, and GELU activations.", "highlighted_evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use the denoising auto-encoding (DAE) objective BIBREF24 to pretrain the encoder-decoder attention mechanism. Given sentence $x$ from the monolingual corpus, we use three types of noise to obtain the randomly perturbed text $\\hat{x}$. First, the word order is locally shuffled. Second, we randomly drop tokens of the sentence with a probability of $0.1$. Third, we substitute tokens with the special padding token P with a probability of $0.1$. The pre-training objective is to recover the original sentence $x$ by conditioning on $\\hat{x}$. The DAE loss is computed via: DAE(x) = -p(x|x) = -i = 1|x|p(xi | x, x<i) where $x_{<i}$ represents the tokens of previous time steps $x_1,\\cdots ,x_{i-1}$." ], "extractive_spans": [ "denoising auto-encoding (DAE) objective BIBREF24" ], "free_form_answer": "", "highlighted_evidence": [ "We use the denoising auto-encoding (DAE) objective BIBREF24 to pretrain the encoder-decoder attention mechanism." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "23413f8ffea0d4d3bbe83b6c75b0e577f7f94c30", "9e449fdefbd330743f01d3c14dd2ad829c18e1d3", "ebea122cd441970e06379e03114bed7b8054a1a1" ], "answer": [ { "evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. 
In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." ], "extractive_spans": [ "pre-trained Xnlg with a 10-layer encoder" ], "free_form_answer": "", "highlighted_evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use the denoising auto-encoding (DAE) objective BIBREF24 to pretrain the encoder-decoder attention mechanism. Given sentence $x$ from the monolingual corpus, we use three types of noise to obtain the randomly perturbed text $\\hat{x}$. First, the word order is locally shuffled. Second, we randomly drop tokens of the sentence with a probability of $0.1$. Third, we substitute tokens with the special padding token P with a probability of $0.1$. The pre-training objective is to recover the original sentence $x$ by conditioning on $\\hat{x}$. The DAE loss is computed via: DAE(x) = -p(x|x) = -i = 1|x|p(xi | x, x<i) where $x_{<i}$ represents the tokens of previous time steps $x_1,\\cdots ,x_{i-1}$." ], "extractive_spans": [ "denoising auto-encoding (DAE) objective BIBREF24" ], "free_form_answer": "", "highlighted_evidence": [ "We use the denoising auto-encoding (DAE) objective BIBREF24 to pretrain the encoder-decoder attention mechanism." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. In the first pre-training stage, we directly use the 15-language pre-trained XLM BIBREF5 to initialize the parameters of our encoder and decoder. In the second stage, we use Wikipedia as the monolingual data for the DAE objective, and MultiUN BIBREF27 as the parallel data for the XAE objective. The DAE loss is trained with a weight of $0.5$. We train a two-language (English/Chinese) and a three-language (English/French/Chinese) Xnlg for two downstream NLG tasks, respectively. Following BIBREF5, we use the tokenizer provided by BIBREF28 for Chinese, and Moses for other languages, respectively. Then the words in all languages are split with a shared subword vocabulary learned by BPE BIBREF29. We use Adam optimizer with a linear warm-up over the first 4,000 steps and linear decay for later steps, and the learning rate is set to $10^{-4}$. The pre-training batch size is 64, and the sequence length is set to 256. It takes about 30 hours to run 23,000 steps for the pre-training procedure by using 4 Nvidia Telsa V100-16GB GPUs." 
], "extractive_spans": [], "free_form_answer": "10 transformer layers, each layer containing 1024 hidden units, 8 attentions heads, and GELU activations.", "highlighted_evidence": [ "We use a pre-trained Xnlg with a 10-layer encoder and a 6-layer decoder. For every Transformer layer, we use 1024 hidden units, 8 attention heads, and GELU activations BIBREF26. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "0180c2bb825251cb7f1997f2c9c0180afda08205", "1decea3d47430e70c1cbcffa670c0d596d05a7fb", "f636a3bbd490784b398651902ced1f81a20709f6" ], "answer": [ { "evidence": [ "We first conduct experiments on the supervised English-English QG setting. We compare our model to the following baselines:", "CorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.", "Mp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.", "Xlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM.", "We conduct experiments on the zero-shot Chinese-Chinese QG task to evaluate the cross-lingual transfer ability. In this task, models are trained with English QG data but evaluated with Chinese QG examples. We include the following models as our baselines:", "Xlm Fine-tuning XLM with the English QG data.", "Pipeline (Xlm) The pipeline of translating input Chinese sentences into English first, then performing En-En-QG with the XLM model, and finally translating back to the Chinese. We use the Transformer as the translator, which is also trained on the MultiUN dataset.", "Pipeline (Xlm) with Google Translator Same to Pipeline (Xlm) but using Google Translator to translate the texts." ], "extractive_spans": [ "CorefNqg BIBREF33", "Mp-Gsn BIBREF31", "Xlm BIBREF5", "Xlm Fine-tuning", "Pipeline (Xlm)", "Pipeline (Xlm) with Google Translator" ], "free_form_answer": "", "highlighted_evidence": [ "We first conduct experiments on the supervised English-English QG setting. We compare our model to the following baselines:\n\nCorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.\n\nMp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.\n\nXlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM.", "We include the following models as our baselines:\n\nXlm Fine-tuning XLM with the English QG data.\n\nPipeline (Xlm) The pipeline of translating input Chinese sentences into English first, then performing En-En-QG with the XLM model, and finally translating back to the Chinese. We use the Transformer as the translator, which is also trained on the MultiUN dataset.\n\nPipeline (Xlm) with Google Translator Same to Pipeline (Xlm) but using Google Translator to translate the texts." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We first conduct experiments on the supervised English-English QG setting. 
We compare our model to the following baselines:", "CorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.", "Mp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.", "Xlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM.", "We conduct experiments on the zero-shot Chinese-Chinese QG task to evaluate the cross-lingual transfer ability. In this task, models are trained with English QG data but evaluated with Chinese QG examples. We include the following models as our baselines:", "Xlm Fine-tuning XLM with the English QG data.", "Pipeline (Xlm) The pipeline of translating input Chinese sentences into English first, then performing En-En-QG with the XLM model, and finally translating back to the Chinese. We use the Transformer as the translator, which is also trained on the MultiUN dataset.", "Pipeline (Xlm) with Google Translator Same to Pipeline (Xlm) but using Google Translator to translate the texts." ], "extractive_spans": [ "CorefNqg", "Mp-Gsn", "Xlm", "Pipeline (Xlm)", "Pipeline (Xlm) with Google Translator" ], "free_form_answer": "", "highlighted_evidence": [ "We compare our model to the following baselines:\n\nCorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.\n\nMp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.\n\nXlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM.", "We include the following models as our baselines:\n\nXlm Fine-tuning XLM with the English QG data.\n\nPipeline (Xlm) The pipeline of translating input Chinese sentences into English first, then performing En-En-QG with the XLM model, and finally translating back to the Chinese. We use the Transformer as the translator, which is also trained on the MultiUN dataset.\n\nPipeline (Xlm) with Google Translator Same to Pipeline (Xlm) but using Google Translator to translate the texts." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We first conduct experiments on the supervised English-English QG setting. We compare our model to the following baselines:", "CorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.", "Mp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.", "Xlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM." ], "extractive_spans": [ "CorefNqg BIBREF33 ", "Mp-Gsn BIBREF31", "Xlm BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "We compare our model to the following baselines:\n\nCorefNqg BIBREF33 A sequence-to-sequence model with attention mechanism and a feature-rich encoder.\n\nMp-Gsn BIBREF31 A sequence-to-sequence model with gated self-attention and maxout pointer mechanism.\n\nXlm BIBREF5 The current state-of-the-art cross-lingual pre-training model. We initialize the Transformer-based sequence-to-sequence model with pre-trained XLM." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What languages do they use during pretraining?", "What is the architecture of the decoder?", "What is the architecture of the encoder?", "What is their baseline?" ], "question_id": [ "dae2f135e50d77867c3f57fc3cb0427b2443e126", "38055717edf833566d912f14137b92a1d9c4f65a", "b6aa5665c981e3b582db4760759217e2979d5626", "c0355afc7871bf2e12260592873ffdb5c0c4c919" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: We use a monolingual (such as English) NLG dataset to fine-tune the pre-trained model XNLG, and then evaluate it beyond the language for both source and target sides (e.g., Chinese, and French).", "Figure 2: Overview of the pre-training tasks and the pre-training protocol designed for XNLG.", "Table 3: Human evaluation results of zero-shot Chinese-Chinese question generation. Rel is short for relatedness, Flu for fluency, and Corr for correctness. “*” indicates the improvements are significant at p < 0.05.", "Table 1: Evaluation results of monolingual supervised question generation for English and Chinese. BL is short for BLEU, MTR for METEOR, and RG for ROUGE. The results with “†” are reported on different data splits.", "Table 2: Evaluation results of zero-shot ChineseChinese question generation. Same shorthands apply as in Table 1.", "Table 4: Human evaluation results of zero-shot English-Chinese question generation. “*” indicates the improvements are significant at p < 0.05. Same shorthands apply as in Table 3.", "Table 5: Human evaluation results of zero-shot Chinese-English question generation. “*” indicates the improvements are significant at p < 0.05. Same shorthands apply as in Table 3.", "Table 7: Evaluation results of zero-shot French abstractive summarization. Same shorthands apply as in Table 1.", "Table 6: Evaluation results of monolingual supervised abstractive summarization. Same shorthands apply as in Table 1.", "Table 8: Evaluation results of zero-shot Chinese abstractive summarization. Same shorthands apply as in Table 1.", "Table 9: Ablations for pre-training objectives, where models are evaluated on zero-shot Chinese-Chinese question generation. Same shorthands apply as in Table 1.", "Table 10: Effects of different fine-tuning strategies. Dec, Enc and ET represent fine-tuning the parameters of the decoder, encoder, and Transformer layers of encoder, respectively. Same shorthands apply as in Table 1.", "Figure 3: Examples of generated questions by XNLG and the baselines in four directions (En-En,En-Zh,Zh-En and Zh-Zh). “*”: Because XLM is not designed for cross-lingual NLG, it is hard to produce meaningful sentences for En-Zh-QG and Zh-Zh-QG.", "Figure 4: ROUGE-2 scores for few-shot French/Chinese abstractive summarization with different training data sizes." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "5-Table3-1.png", "5-Table1-1.png", "5-Table2-1.png", "5-Table4-1.png", "6-Table5-1.png", "6-Table7-1.png", "6-Table6-1.png", "6-Table8-1.png", "7-Table9-1.png", "7-Table10-1.png", "8-Figure3-1.png", "9-Figure4-1.png" ] }
[ "What is the architecture of the decoder?", "What is the architecture of the encoder?" ]
[ [ "1909.10481-Methods ::: Pre-Training Tasks ::: Denoising Auto-Encoding (DAE)-0", "1909.10481-Experiments ::: Training Details ::: Pre-Training-0" ], [ "1909.10481-Methods ::: Pre-Training Tasks ::: Denoising Auto-Encoding (DAE)-0", "1909.10481-Experiments ::: Training Details ::: Pre-Training-0" ] ]
[ "6 transformer layers, each layer containing 1024 hidden units, 8 attention heads, and GELU activations.", "10 transformer layers, each layer containing 1024 hidden units, 8 attentions heads, and GELU activations." ]
33
1805.04833
Hierarchical Neural Story Generation
We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and by adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.
{ "paragraphs": [ [ "Story-telling is on the frontier of current text generation technology: stories must remain thematically consistent across the complete document, requiring modeling very long range dependencies; stories require creativity; and stories need a high level plot, necessitating planning ahead rather than word-by-word generation BIBREF0 .", "We tackle the challenges of story-telling with a hierarchical model, which first generates a sentence called the prompt describing the topic for the story, and then conditions on this prompt when generating the story. Conditioning on the prompt or premise makes it easier to generate consistent stories because they provide grounding for the overall plot. It also reduces the tendency of standard sequence models to drift off topic.", "We find that standard sequence-to-sequence (seq2seq) models BIBREF1 applied to hierarchical story generation are prone to degenerating into language models that pay little attention to the writing prompt (a problem that has been noted in other domains, such as dialogue response generation BIBREF2 ). This failure is due to the complex and underspecified dependencies between the prompt and the story, which are much harder to model than the closer dependencies required for language modeling (for example, consider the subtle relationship between the first sentence and prompt in Figure FIGREF1 ).", "To improve the relevance of the generated story to its prompt, we introduce a fusion mechanism BIBREF3 where our model is trained on top of an pre-trained seq2seq model. To improve over the pre-trained model, the second model must focus on the link between the prompt and the story. For the first time, we show that fusion mechanisms can help seq2seq models build dependencies between their input and output.", "Another major challenge in story generation is the inefficiency of modeling long documents with standard recurrent architectures—stories contain 734 words on average in our dataset. We improve efficiency using a convolutional architecture, allowing whole stories to be encoded in parallel. Existing convolutional architectures only encode a bounded amount of context BIBREF4 , so we introduce a novel gated self-attention mechanism that allows the model to condition on its previous outputs at different time-scales.", "To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum. Evaluating free form text is challenging, so we also introduce new evaluation metrics which isolate different aspects of story generation.", "Experiments show that our fusion and self-attention mechanisms improve over existing techniques on both automated and human evaluation measures. Our new dataset and neural architectures allow for models which can creatively generate longer, more consistent and more fluent passages of text. Human judges prefer our hierarchical model's stories twice as often as those of a non-hierarchical baseline." ], [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum. WritingPrompts is a community where online users inspire each other to write by submitting story premises, or prompts, and other users freely respond. Each prompt can have multiple story responses. The prompts have a large diversity of topic, length, and detail. The stories must be at least 30 words, avoid general profanity and inappropriate content, and should be inspired by the prompt (but do not necessarily have to fulfill every requirement). 
Figure FIGREF1 shows an example.", "We scraped three years of prompts and their associated stories using the official Reddit API. We clean the dataset by removing automated bot posts, deleted posts, special announcements, comments from moderators, and stories shorter than 30 words. We use NLTK for tokenization. The dataset models full text to generate immediately human-readable stories. We reserve 5% of the prompts for a validation set and 5% for a test set, and present additional statistics about the dataset in Table TABREF4 .", "For our experiments, we limit the length of the stories to 1000 words maximum and limit the vocabulary size for the prompts and the stories to words appearing more than 10 times each. We model an unknown word token and an end of document token. This leads to a vocabulary size of 19,025 for the prompts and 104,960 for the stories. As the dataset is scraped from an online forum, the number of rare words and misspellings is quite large, so modeling the full vocabulary is challenging and computationally intensive." ], [ "The challenges of WritingPrompts are primarily in modeling long-range dependencies and conditioning on an abstract, high-level prompt. Recurrent and convolutional networks have successfully modeled sentences BIBREF5 , BIBREF4 , but accurately modeling several paragraphs is an open problem. While seq2seq networks have strong performance on a variety of problems, we find that they are unable to build stories that accurately reflect the prompts. We will evaluate strategies to address these challenges in the following sections." ], [ "High-level structure is integral to good stories, but language models generate on a strictly-word-by-word basis and so cannot explicitly make high-level plans. We introduce the ability to plan by decomposing the generation process into two levels. First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 . The prompt gives a sketch of the structure of the story. Second, we use a seq2seq model to generate a story that follows the premise. Conditioning on the prompt makes it easier for the story to remain consistent and also have structure at a level beyond single phrases." ], [ "The length of stories in our dataset is a challenge for RNNs, which process tokens sequentially. To transform prompts into stories, we instead build on the convolutional seq2seq model of BIBREF6 , which uses deep convolutional networks as the encoder and decoder. Convolutional models are ideally suited to modeling long sequences, because they allow parallelism of computation within the sequence. In the Conv seq2seq model, the encoder and decoder are connected with attention modules BIBREF7 that perform a weighted sum of encoder outputs, using attention at each layer of the decoder." ], [ "CNNs can only model a bounded context window, preventing the modeling of long-range dependencies within the output story. To enable modeling of unbounded context, we supplement the decoder with a self-attention mechanism BIBREF8 , BIBREF9 , which allows the model to refer to any previously generated words. The self-attention mechanism improves the model's ability to extract long-range context with limited computational impact due to parallelism.", "Gated Attention: Similar to BIBREF9 , we use multi-head attention to allow each head to attend to information at different positions. 
However, the queries, keys and values are not given by linear projections but by more expressive gated deep neural nets with Gated Linear Unit BIBREF4 activations. We show that gating lends the self-attention mechanism crucial capacity to make fine-grained selections.", "Multi-Scale Attention: Further, we propose to have each head operating at a different time scale, depicted in Figure FIGREF7 . Thus the input to each head is downsampled a different amount—the first head sees the full input, the second every other input timestep, the third every third input timestep, etc. The different scales encourage the heads to attend to different information. The downsampling operation limits the number of tokens in the attention maps, making them sharper.", "The output of a single attention head is given by DISPLAYFORM0 ", " where INLINEFORM0 contains the hidden states up to time INLINEFORM1 at layer INLINEFORM2 , and INLINEFORM3 are gated downsampling networks as shown in Figure FIGREF7 . Unlike BIBREF9 , we allow the model to optionally attend to a 0 vector at each timestep, if it chooses to ignore the information of past timesteps (see Figure FIGREF8 ). This mechanism allows the model to recover the non-self-attention architecture and avoid attending to the past if it provides only noise. Additionally, we do not allow the self-attention mechanism to attend to the current timestep, only the past." ], [ "Unlike tasks such as translation, where the semantics of the target are fully specified by the source, the generation of stories from prompts is far more open-ended. We find that seq2seq models ignore the prompt and focus solely on modeling the stories, because the local dependencies required for language modeling are easier to model than the subtle dependencies between prompt and story.", "We propose a fusion-based approach to encourage conditioning on the prompt. We train a seq2seq model that has access to the hidden states of a pretrained seq2seq model. Doing so can be seen as a type of boosting or residual learning that allows the second model to focus on what the first model failed to learn—such as conditioning on the prompt. To our knowledge, this paper is the first to show that fusion reduces the problem of seq2seq models degenerating into language models that capture primarily syntactic and grammatical information.", "The cold fusion mechanism of BIBREF3 pretrains a language model and subsequently trains a seq2seq model with a gating mechanism that learns to leverage the final hidden layer of the language model during seq2seq training. We modify this approach by combining two seq2seq models as follows (see Figure FIGREF13 ): DISPLAYFORM0 ", " where the hidden state of the pretrained seq2seq model and training seq2seq model (represented by INLINEFORM0 ) are concatenated to learn gates INLINEFORM1 . The gates are computed using a linear projection with the weight matrix INLINEFORM2 . The gated hidden layers are combined by concatenation and followed by more fully connected layers with GLU activations (see Appendix). We use layer normalization BIBREF10 after each fully connected layer." ], [ "Sequence-to-sequence neural networks BIBREF1 have achieved state of the art performance on a variety of text generation tasks, such as machine translation BIBREF1 and summarization BIBREF11 . 
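To make the gated multi-scale self-attention described above more concrete, here is a minimal PyTorch-style sketch of a single head. It is an illustrative reading rather than the paper's implementation: the single-layer GLU projections (the paper uses deeper gated downsampling networks), the dot-product scaling, and the tensor shapes are assumptions introduced for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleHead(nn.Module):
    """One self-attention head that only sees every `scale`-th past timestep."""
    def __init__(self, dim, scale=1):
        super().__init__()
        self.scale = scale
        # GLU halves its input, so project to 2*dim to keep width dim.
        self.q_proj = nn.Linear(dim, 2 * dim)
        self.k_proj = nn.Linear(dim, 2 * dim)
        self.v_proj = nn.Linear(dim, 2 * dim)

    def forward(self, h):                        # h: (t, dim) decoder states so far
        t, dim = h.shape
        past = h[: t - 1][:: self.scale]         # strictly past steps, downsampled
        q = F.glu(self.q_proj(h[-1:]), dim=-1)   # query from the current state
        k = F.glu(self.k_proj(past), dim=-1)
        v = F.glu(self.v_proj(past), dim=-1)
        k = torch.cat([torch.zeros(1, dim), k])  # zero slot: the head may ignore the past
        v = torch.cat([torch.zeros(1, dim), v])
        attn = torch.softmax(q @ k.t() / dim ** 0.5, dim=-1)
        return attn @ v                          # (1, dim) context for the current step
```

In the full mechanism, several such heads with scales 1, 2, 3, and so on would run in parallel and their outputs would be concatenated.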
Recent work has applied these models to more open-ended generation tasks, including writing Wikipedia articles BIBREF12 and poetry BIBREF13 .", "Previous work on story generation has explored seq2seq RNN architectures BIBREF14 , but has focused largely on using various content to inspire the stories. For instance, BIBREF15 uses photos to inspire short paragraphs trained on romance novels, and BIBREF16 chain a series of independent descriptions together into a short story. BIBREF17 decompose story generation into two steps, first converting text into event representations, then modeling stories as sequences of events before translating back to natural language. Similarly, BIBREF18 generate summaries of movies as sequences of events using an RNN, then sample event representations using MCMC. They find this technique can generate text of the desired genre, but the movie plots are not interpretable (as the model outputs events, not raw text). However, we are not aware of previous work that has used hierarchical generation from a textual premise to improve the coherence and structure of stories." ], [ "Previous work has proposed decomposing the challenge of generating long sequences of text into a hierarchical generation task. For instance, BIBREF19 use an LSTM to hierarchically learn word, then sentence, then paragraph embeddings, then transform the paragraph embeddings into text. BIBREF20 generate a discrete latent variable based on the context, then generates text conditioned upon it." ], [ "Previous work has investigated the integration of language models with seq2seq models. The two models can be leveraged together without architectural modifications: BIBREF21 use language models to initialize the encoder and decoder side of the seq2seq model independently, and BIBREF22 combine the predictions of the language model and seq2seq model solely at inference time. Recent work has also proposed deeper integration. BIBREF23 combined a trained language model with a trained seq2seq model to learn a gating function that joins them. BIBREF3 propose training the seq2seq model given the fixed language model then learning a gate to filter the information from the language model." ], [ "We evaluate a number of baselines:", "(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.", "(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.", "(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.", "(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search. The retrieved story from the training set is limited to 150 words to match the length of generated stories." ], [ "To train the fusion model, we first pretrain a Conv seq2seq with self-attention model on the WritingPrompts dataset. This pretrained model is fixed and provided to the second Conv seq2seq with self-attention model during training time. The two models are integrated with the fusion mechanism described in Section SECREF11 ." ], [ "We implement models with the fairseq-py library in PyTorch. Similar to BIBREF6 , we train using the Nesterov accelerated gradient method BIBREF26 using gradient clipping BIBREF27 . 
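In PyTorch terms, the training setup just described corresponds roughly to the sketch below. The learning rate and momentum follow the Conv seq2seq configuration in the appendix; the `model` placeholder and the clipping threshold are assumptions, since the actual training runs through the fairseq-py library rather than a hand-rolled loop.

```python
import torch
import torch.nn as nn

model = nn.Linear(256, 256)  # placeholder for the Conv seq2seq model
optimizer = torch.optim.SGD(model.parameters(), lr=0.25,
                            momentum=0.99, nesterov=True)  # Nesterov accelerated gradient

def training_step(loss):
    optimizer.zero_grad()
    loss.backward()
    # gradient clipping; the paper does not report the threshold, 0.1 is assumed
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
    optimizer.step()
```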
We perform hyperparameter optimization on each of our models by cross-validating with random search on a validation set. We provide model architectures in the appendix." ], [ "We generate stories from our models using a top-k random sampling scheme. At each timestep, the model generates the probability of each word in the vocabulary being the likely next word. We randomly sample from the INLINEFORM0 most likely candidates from this distribution. Then, subsequent timesteps generate words based on the previously selected words. We find this sampling strategy substantially more effective than beam search, which tends to produce common phrases and repetitive text from the training set BIBREF28 , BIBREF29 . Sentences produced by beam search tend to be short and generic. Completely random sampling can introduce very unlikely words, which can damage generation as the model has not seen such mistakes at training time. The restriction of sampling from the 10 most likely candidates reduces the risk of these low-probability samples.", "For each model, we tune a temperature parameter for the softmax at generation time. To ease human evaluation, we generate stories of 150 words and do not generate unknown word tokens.", "For prompt generation, we use a self-attentive GCNN language model trained with the same prompt-side vocabulary as the sequence-to-sequence story generation models. The language model to generate prompts has a validation perplexity of 63.06. Prompt generation is conducted using the top-k random sampling from the 10 most likely candidates, and the prompt is completed when the language model generates the end of prompt token." ], [ "We propose a number of evaluation metrics to quantify the performance of our models. Many commonly used metrics, such as BLEU for machine translation or ROUGE for summarization, compute an n-gram overlap between the generated text and the human text—however, in our open-ended generation setting, these are not useful. We do not aim to generate a specific story; we want to generate viable and novel stories. We focus on measuring both the fluency of our models and their ability to adhere to the prompt.", "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. We use prompt ranking to assess how strongly a model's output depends on its input. Stories are decoded under 10 different prompts—9 randomly sampled prompts and 1 true corresponding prompt—and the likelihood of the story given the various prompts is recorded. We measure the percentage of cases where the true prompt is the most likely to generate the story. In our evaluation, we examined 1000 stories from the test set for each model.", "For human evaluation, we use Amazon Mechanical Turk to conduct a triple pairing task. We use each model to generate stories based on held-out prompts from the test set. Then, groups of three stories are presented to the human judges. The stories and their corresponding prompts are shuffled, and human evaluators are asked to select the correct pairing for all three prompts. 105 stories per model are grouped into questions, and each question is evaluated by 15 judges.", "Lastly, we conduct human evaluation to evaluate the importance of hierarchical generation for story writing. 
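The top-k random sampling scheme described above for generation can be sketched in a few lines. This is a generic illustration with k = 10 and a softmax temperature, not the fairseq-py generation code; the 1-D `logits` shape is an assumption.

```python
import torch

def sample_next_token(logits, k=10, temperature=1.0):
    """Sample the next word id from the k most likely candidates."""
    probs = torch.softmax(logits / temperature, dim=-1)
    top_probs, top_idx = probs.topk(k)
    top_probs = top_probs / top_probs.sum()        # renormalise over the k candidates
    choice = torch.multinomial(top_probs, num_samples=1)
    return top_idx[choice]
```

Generation would call this repeatedly, feeding each sampled token back to the decoder, until 150 words or the end-of-document token is reached.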
We use Amazon Mechanical Turk to compare the stories from hierarchical generation from a prompt with generation without a prompt. 400 pairs of stories were evaluated by 5 judges each in a blind test." ], [ "We analyze the effect of our modeling improvements on the WritingPrompts dataset." ], [ "Our proposed fusion model is capable of generating unique text without copying directly from the training set. When analyzing 500 150-word generated stories from test-set prompts, the average longest common subsequence is 8.9. In contrast, the baseline Conv seq2seq model copies 10.2 words on average and the KNN baseline copies all 150 words from a story in the training set.", "Figure FIGREF27 shows the values of the fusion gates for an example story, averaged at each timestep. The pretrained seq2seq model acts similarly to a language model producing common words and punctuation. The second seq2seq model learns to focus on rare words, such as horned and robe.", "However, the fusion model has limitations. Using random sampling to generate can produce errors. For example, can't is tokenized to ca n't, and the model occasionally produces the first token but misses the second. A similar error is after one line of dialogue, the model may move to another line of dialogue without generating a newline token. A further obstacle is repetition. The model focuses frequently on what it has recently produced, which leads to the generation of similar text multiple times.", "In the generation of prompts using the GCNN language model, we find that prompts are fairly generic compared to human prompts. Language models often struggle to model rare words accurately, as the probability distribution over the next word is dominated by more common words. This tends to produce similar prompts, particularly at the start — we see many prompts that start with the man. In contrast, many of the human prompts are very unique (e.g. prompting stories in fantasy worlds such as Harry Potter and Game of Thrones) and the language model rarely produces the specific vocabulary required by these settings." ], [ "We analyze the encoder-decoder attention in the fusion model and find that unlike attention maps in machine translation, where each decoder timestep tends to attend to a different word on the encoder-side, the attention map for each decoder timestep looks similar and focuses mainly on salient words in the prompt. We further look at the usage of the self-attention layers within the decoder. While they could be leveraged to look at words generated very far in the past, at many timesteps the self-attention focuses on the recent past." ], [ "We have collected the first dataset for creative text generation based on short writing prompts. This new dataset pushes the boundaries of text generation by requiring longer range dependencies and conditioning on an abstract premise. Building on this dataset, we show through automatic and human evaluation that novel hierarchical models, self-attention mechanisms and model fusion significantly improves the fluency, topicality, and overall quality of the generated stories." ], [ "9 layers with hidden unit sizes INLINEFORM0 and convolutional kernel widths INLINEFORM1 . Learning rate 1, momentum 0.99, dropout 0.1, embedding size 300, l2 normalization INLINEFORM2 , 4 decoder self-attention heads." ], [ "3 layers in encoder with hidden unit sizes INLINEFORM0 and convolutional kernel widths INLINEFORM1 . 
8 layers in the decoder with hidden unit sizes INLINEFORM2 and convolutional kernel widths INLINEFORM3 . Learning rate 0.25, momentum 0.99, dropout 0.3, embedding size 256, output embedding size 256, l2 normalization INLINEFORM4 , 4 decoder self-attention heads." ], [ "Two different Conv seq2seq models were trained and ensembled together by averaging with equal weights." ], [ "The pretrained seq2seq model is the model in Section SECREF37 . The additional fused model has the following architecture:", "5 layers in the encoder with hidden unit sizes INLINEFORM0 and convolutional kernel widths INLINEFORM1 . 5 layers in the decoder with hidden unit sizes INLINEFORM2 and convolutional kernel widths INLINEFORM3 . Learning rate 0.25, momentum 0.99, dropout 0.3, embedding size 256, output embedding size 256, l2 normalization INLINEFORM4 , 4 decoder self-attention heads." ] ], "section_name": [ "Introduction", "Writing Prompts Dataset", "Approach", "Hierarchical Story Generation", "Efficient Learning with Convolutional Sequence-to-Sequence Model", "Modeling Unbounded Context with Gated Multi-Scale Self-attention", "Improving Relevance to Input Prompt with Model Fusion", "Story Generation", "Hierarchical Text Generation", "Fusion Models", "Baselines", "Fusion Training", "Training", "Generation", "Evaluation", "Results", "Generation Quality", "Use of Attention", "Conclusion", "GCNN Language Model + Self-Attention", "Conv seq2seq + self-attention", "Ensemble: Conv seq2seq + self-attention", "Fusion: Conv seq2seq + self-attention" ] }
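As a rough illustration of the fusion mechanism from the section on improving relevance to the input prompt, the sketch below gates the concatenated hidden states of the pretrained and trainable seq2seq models and passes the result through a GLU layer with layer normalization. It simplifies the wiring shown in Figure 4 of the paper: equal hidden sizes, a sigmoid gate, and a single fused layer are assumptions here, whereas the paper stacks additional fully connected GLU layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionLayer(nn.Module):
    """Combines the hidden state of a frozen pretrained seq2seq model with the
    hidden state of the seq2seq model being trained."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2 * dim)  # gates from a linear projection of [h_pre; h_train]
        self.proj = nn.Linear(2 * dim, 2 * dim)  # fully connected layer; GLU halves it
        self.norm = nn.LayerNorm(dim)

    def forward(self, h_pretrained, h_train):
        concat = torch.cat([h_pretrained, h_train], dim=-1)
        g = torch.sigmoid(self.gate(concat))
        fused = F.glu(self.proj(g * concat), dim=-1)
        return self.norm(fused)
```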
{ "answers": [ { "annotation_id": [ "5f330deb8154fc24ecbf356bf07100c46e5bb665", "7d8b95a5b12dbd566f6facf2a945b593682ee2cf", "cf28fcb2fd5a61a08899b6ef112aa5e59873268d" ], "answer": [ { "evidence": [ "Lastly, we conduct human evaluation to evaluate the importance of hierarchical generation for story writing. We use Amazon Mechanical Turk to compare the stories from hierarchical generation from a prompt with generation without a prompt. 400 pairs of stories were evaluated by 5 judges each in a blind test.", "FLOAT SELECTED: Table 4: Effect of Hierarchical Generation. Human judges prefer stories that were generated hierarchically by first creating a premise and creating a full story based on it with a seq2seq model." ], "extractive_spans": [], "free_form_answer": "human preference", "highlighted_evidence": [ "Lastly, we conduct human evaluation to evaluate the importance of hierarchical generation for story writing. We use Amazon Mechanical Turk to compare the stories from hierarchical generation from a prompt with generation without a prompt. 400 pairs of stories were evaluated by 5 judges each in a blind test.", "FLOAT SELECTED: Table 4: Effect of Hierarchical Generation. Human judges prefer stories that were generated hierarchically by first creating a premise and creating a full story based on it with a seq2seq model." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For human evaluation, we use Amazon Mechanical Turk to conduct a triple pairing task. We use each model to generate stories based on held-out prompts from the test set. Then, groups of three stories are presented to the human judges. The stories and their corresponding prompts are shuffled, and human evaluators are asked to select the correct pairing for all three prompts. 105 stories per model are grouped into questions, and each question is evaluated by 15 judges.", "Lastly, we conduct human evaluation to evaluate the importance of hierarchical generation for story writing. We use Amazon Mechanical Turk to compare the stories from hierarchical generation from a prompt with generation without a prompt. 400 pairs of stories were evaluated by 5 judges each in a blind test." ], "extractive_spans": [ "triple pairing task", "hierarchical generation" ], "free_form_answer": "", "highlighted_evidence": [ "For human evaluation, we use Amazon Mechanical Turk to conduct a triple pairing task. We use each model to generate stories based on held-out prompts from the test set. Then, groups of three stories are presented to the human judges. The stories and their corresponding prompts are shuffled, and human evaluators are asked to select the correct pairing for all three prompts. 105 stories per model are grouped into questions, and each question is evaluated by 15 judges.\n\nLastly, we conduct human evaluation to evaluate the importance of hierarchical generation for story writing. We use Amazon Mechanical Turk to compare the stories from hierarchical generation from a prompt with generation without a prompt. 400 pairs of stories were evaluated by 5 judges each in a blind test." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Figure 5: Human accuracy at pairing stories with the prompts used to generate them. People find that our fusion model significantly improves the link between the prompt and generated stories.", "FLOAT SELECTED: Figure 6: Accuracy of prompt ranking. The fusion model most accurately pairs prompt and stories." 
], "extractive_spans": [], "free_form_answer": "Accuracy at pairing stories with the prompts used to generate them; accuracy of prompt ranking", "highlighted_evidence": [ "FLOAT SELECTED: Figure 5: Human accuracy at pairing stories with the prompts used to generate them. People find that our fusion model significantly improves the link between the prompt and generated stories.", "FLOAT SELECTED: Figure 6: Accuracy of prompt ranking. The fusion model most accurately pairs prompt and stories." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "70e5e05a0ad195fba37951a119c468d36e6f2804", "72393a0daa46d624619c825d708ad44075f29ed0", "d36669a8ef221d603829d13d5fa3326951ac6d88" ], "answer": [ { "evidence": [ "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. We use prompt ranking to assess how strongly a model's output depends on its input. Stories are decoded under 10 different prompts—9 randomly sampled prompts and 1 true corresponding prompt—and the likelihood of the story given the various prompts is recorded. We measure the percentage of cases where the true prompt is the most likely to generate the story. In our evaluation, we examined 1000 stories from the test set for each model." ], "extractive_spans": [ "perplexity", "prompt ranking accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. We use prompt ranking to assess how strongly a model's output depends on its input. Stories are decoded under 10 different prompts—9 randomly sampled prompts and 1 true corresponding prompt—and the likelihood of the story given the various prompts is recorded. We measure the percentage of cases where the true prompt is the most likely to generate the story. In our evaluation, we examined 1000 stories from the test set for each model." ], "extractive_spans": [ "model perplexity on the test set ", "prompt ranking accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. We use prompt ranking to assess how strongly a model's output depends on its input." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. Perplexity is commonly used to evaluate the quality of language models, and it reflects how fluently the model can produce the correct next word given the preceding words. We use prompt ranking to assess how strongly a model's output depends on its input. 
Stories are decoded under 10 different prompts—9 randomly sampled prompts and 1 true corresponding prompt—and the likelihood of the story given the various prompts is recorded. We measure the percentage of cases where the true prompt is the most likely to generate the story. In our evaluation, we examined 1000 stories from the test set for each model." ], "extractive_spans": [ "perplexity ", "prompt ranking accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "For automatic evaluation, we measure model perplexity on the test set and prompt ranking accuracy. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "901752ff2b2ff6d495abbe08926dee16e41f2ec0", "a8897769e0400bbd261f4af975ca87b69d97f8ca", "ac4f2330010c3af22c13fdd10f754a83706b0f27" ], "answer": [ { "evidence": [ "We evaluate a number of baselines:", "(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.", "(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.", "(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.", "(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search. The retrieved story from the training set is limited to 150 words to match the length of generated stories." ], "extractive_spans": [ "gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism", "LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention", "an ensemble of two Conv seq2seq with self-attention models", "KNN model" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate a number of baselines:\n\n(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.\n\n(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.\n\n(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.\n\n(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate a number of baselines:", "(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.", "(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.", "(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.", "(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search. 
The retrieved story from the training set is limited to 150 words to match the length of generated stories." ], "extractive_spans": [ "Language Models", "seq2seq", "Ensemble", "KNN" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate a number of baselines:\n\n(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.\n\n(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.\n\n(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.\n\n(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search. The retrieved story from the training set is limited to 150 words to match the length of generated stories." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate a number of baselines:", "(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.", "(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.", "(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.", "(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search. The retrieved story from the training set is limited to 150 words to match the length of generated stories." ], "extractive_spans": [ "Language Models", "seq2seq: using LSTMs and convolutional seq2seq architectures", "Conv seq2seq with decoder self-attention", "an ensemble of two Conv seq2seq with self-attention models", "KNN model" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate a number of baselines:\n\n(1) Language Models: Non-hierarchical models for story generation, which do not condition on the prompt. We use both the gated convolutional language (GCNN) model of BIBREF4 and our additional self-attention mechanism.\n\n(2) seq2seq: using LSTMs and convolutional seq2seq architectures, and Conv seq2seq with decoder self-attention.\n\n(3) Ensemble: an ensemble of two Conv seq2seq with self-attention models.\n\n(4) KNN: we also compare with a KNN model to find the closest prompt in the training set for each prompt in the test set. A TF-IDF vector for each prompt was created using fasttext BIBREF24 and faiss BIBREF25 was used for KNN search. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "209d68f5cc3623049ac0871cbe7668cae8bd2a96", "8ef8f9fa89b05d59c695585cfae3f43ce46e58f7", "b55f7914cb77916231e47d6a5b5645fe18935df2" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "02d3d3299b0b2dfc0cdad825f3b8f48593f8d006", "48e9d7bfaf1eefd9d5ed6fe4b7fae7e594c67f5b", "bec9986f436915c73be498fe54ef0f1088ac41a1" ], "answer": [ { "evidence": [ "High-level structure is integral to good stories, but language models generate on a strictly-word-by-word basis and so cannot explicitly make high-level plans. We introduce the ability to plan by decomposing the generation process into two levels. First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 . The prompt gives a sketch of the structure of the story. Second, we use a seq2seq model to generate a story that follows the premise. Conditioning on the prompt makes it easier for the story to remain consistent and also have structure at a level beyond single phrases." ], "extractive_spans": [ "convolutional language model from BIBREF4" ], "free_form_answer": "", "highlighted_evidence": [ " First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "High-level structure is integral to good stories, but language models generate on a strictly-word-by-word basis and so cannot explicitly make high-level plans. We introduce the ability to plan by decomposing the generation process into two levels. First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 . The prompt gives a sketch of the structure of the story. Second, we use a seq2seq model to generate a story that follows the premise. Conditioning on the prompt makes it easier for the story to remain consistent and also have structure at a level beyond single phrases." ], "extractive_spans": [ " convolutional language model from BIBREF4" ], "free_form_answer": "", "highlighted_evidence": [ " First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 . " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "High-level structure is integral to good stories, but language models generate on a strictly-word-by-word basis and so cannot explicitly make high-level plans. We introduce the ability to plan by decomposing the generation process into two levels. First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 . The prompt gives a sketch of the structure of the story. Second, we use a seq2seq model to generate a story that follows the premise. 
Conditioning on the prompt makes it easier for the story to remain consistent and also have structure at a level beyond single phrases." ], "extractive_spans": [ "convolutional language model" ], "free_form_answer": "", "highlighted_evidence": [ "First, we generate the premise or prompt of the story using the convolutional language model from BIBREF4 . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "292ec85e3df3f626f22890da388c9f431155c9da", "88485f6f41f7ffd0bc1b36ad87955274ab1dd1ef", "89e309bc8570b40c5b1648b6cba1e09ffbaf9f5f" ], "answer": [ { "evidence": [ "We propose a number of evaluation metrics to quantify the performance of our models. Many commonly used metrics, such as BLEU for machine translation or ROUGE for summarization, compute an n-gram overlap between the generated text and the human text—however, in our open-ended generation setting, these are not useful. We do not aim to generate a specific story; we want to generate viable and novel stories. We focus on measuring both the fluency of our models and their ability to adhere to the prompt." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We do not aim to generate a specific story; we want to generate viable and novel stories. " ], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum. WritingPrompts is a community where online users inspire each other to write by submitting story premises, or prompts, and other users freely respond. Each prompt can have multiple story responses. The prompts have a large diversity of topic, length, and detail. The stories must be at least 30 words, avoid general profanity and inappropriate content, and should be inspired by the prompt (but do not necessarily have to fulfill every requirement). Figure FIGREF1 shows an example." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum. WritingPrompts is a community where online users inspire each other to write by submitting story premises, or prompts, and other users freely respond." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "084f1ee877860a1ac92bd1640e7f1c272377be4f", "26641c307ff92421b527958722814046cd315c66", "edf8a40873f530fba9c9b76e0c7abd3ac74dbb1e" ], "answer": [ { "evidence": [ "To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum. Evaluating free form text is challenging, so we also introduce new evaluation metrics which isolate different aspects of story generation." ], "extractive_spans": [ "online forum" ], "free_form_answer": "", "highlighted_evidence": [ "To train our models, we gathered a large dataset of 303,358 human generated stories paired with writing prompts from an online forum." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum. 
WritingPrompts is a community where online users inspire each other to write by submitting story premises, or prompts, and other users freely respond. Each prompt can have multiple story responses. The prompts have a large diversity of topic, length, and detail. The stories must be at least 30 words, avoid general profanity and inappropriate content, and should be inspired by the prompt (but do not necessarily have to fulfill every requirement). Figure FIGREF1 shows an example." ], "extractive_spans": [ "Reddit's WritingPrompts forum" ], "free_form_answer": "", "highlighted_evidence": [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum. WritingPrompts is a community where online users inspire each other to write by submitting story premises, or prompts, and other users freely respond. Each prompt can have multiple story responses. The prompts have a large diversity of topic, length, and detail. The stories must be at least 30 words, avoid general profanity and inappropriate content, and should be inspired by the prompt (but do not necessarily have to fulfill every requirement). Figure FIGREF1 shows an example." ], "extractive_spans": [ "Reddit's WritingPrompts forum" ], "free_form_answer": "", "highlighted_evidence": [ "We collect a hierarchical story generation dataset from Reddit's WritingPrompts forum." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "", "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "", "" ], "question": [ "What human evaluation metrics do they look at?", "Which automated evaluation metrics are used?", "What baselines do they compare against?", "Do they use pre-trained embeddings like BERT?", "What model is used to generate the premise?", "Are the stories in the dataset fictional stories?", "Where are the stories collected from?" ], "question_id": [ "afeceee343360d3fe715f405dac7760d9a6754a7", "cc3dd701f3a674618de95a4196e9c7f4c8fbf1e5", "d66550f65484696c1284903708b87809ea705786", "29ba93bcd99c2323d04d4692d3672967cca4915e", "804bf5adc6dc5dd52f8079cf041ed3a710e03f8a", "f2dba5bf75967407cce5d0a9c2618269225081f5", "b783ec5cb9ad595da7db2c0ddf871152ae382c5f" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "", "" ] }
{ "caption": [ "Figure 1: Example prompt and beginning of a story from our dataset. We train a hierarchical model that first generates a prompt, and then conditions on the prompt when generating a story.", "Table 1: Statistics of WRITINGPROMPTS dataset", "Figure 2: Self-Attention Mechanism of a single head, with GLU gating and downsampling. Multiple heads are concatenated, with each head using a separate downsampling function.", "Figure 3: Multihead self-attention mechanism. The decoder layer depicted attends with itself to gate the input of the subsequent decoder layer.", "Figure 4: Diagram of our fusion model, which learns a second seq2seq model to improve a pretrained model. The separate hidden states are combined after gating through concatenation.", "Table 2: Effect of new attention mechanism. Gated multi-scale attention significantly improves the perplexity on the WRITINGPROMPTS dataset.", "Table 3: Perplexity on WRITINGPROMPTS. We dramatically improve over standard seq2seq models.", "Figure 5: Human accuracy at pairing stories with the prompts used to generate them. People find that our fusion model significantly improves the link between the prompt and generated stories.", "Figure 6: Accuracy of prompt ranking. The fusion model most accurately pairs prompt and stories.", "Figure 7: Accuracy on the prompt/story pairing task vs. number of generated stories. Our generative fusion model can produce many stories without degraded performance, while the KNN can only produce a limited number relevant stories.", "Table 4: Effect of Hierarchical Generation. Human judges prefer stories that were generated hierarchically by first creating a premise and creating a full story based on it with a seq2seq model.", "Figure 8: Average weighting of each model in our Fusion model for the beginning of the generated story for the prompt Gates of Hell. The fused model (orange) is primarily used for words which are closely related to the prompt, whereas generic words are generated by the pre-trained model (green).", "Table 5: Example stories generated by the proposed hierarchical fusion approach compared to stories generated by a language model. Stories generated by the fusion model relate to the desired prompt and show increased coherence between sentences and ability to stay on one topic compared to the language modeling baseline." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Figure5-1.png", "6-Figure6-1.png", "6-Figure7-1.png", "6-Table4-1.png", "7-Figure8-1.png", "9-Table5-1.png" ] }
[ "What human evaluation metrics do they look at?" ]
[ [ "1805.04833-Evaluation-2", "1805.04833-Evaluation-3", "1805.04833-6-Figure6-1.png", "1805.04833-6-Table4-1.png", "1805.04833-6-Figure5-1.png" ] ]
[ "Accuracy at pairing stories with the prompts used to generate them; accuracy of prompt ranking" ]
34
1805.07882
Sentence Modeling via Multiple Word Embeddings and Multi-level Comparison for Semantic Textual Similarity
Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (M-MaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTM-CNN consistently shows strong performance in several tasks (i.e., measuring textual similarity, identifying paraphrases, recognizing textual entailment). According to the experimental results on the STS Benchmark dataset and the SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) and does not require pre-trained word embeddings to have the same dimension.
{ "paragraphs": [ [ "Measuring the semantic similarity/relation of two pieces of short text plays a fundamental role in a variety of language processing tasks (i.e., plagiarism detection, question answering, and machine translation). Semantic textual similarity (STS) task is challenging because of the diversity of linguistic expression. For example, two sentences with different lexicons could have a similar meaning. Moreover, the task requires to measure similarity at several levels (e.g., word level, phrase level, sentence level). These challenges give difficulties to conventional approaches using hand-crafted features.", "Recently, the emergence of word embedding techniques, which encode the semantic properties of a word into a low dimension vector, leads to the successes of many learning models in natural language processing (NLP). For example, BIBREF0 randomly initialize word vectors, then tunes them during the training phase of a sentence classification task. By contrast, BIBREF1 initialize word vectors via the pre-train word2vec model trained on Google News BIBREF2 . BIBREF3 train a word embedding model on the paraphrase dataset PPDB, then apply the word representation for word and bi-gram similarity tasks.", "Several pre-trained word embeddings are available, which are trained on various corpora under different models. BIBREF4 observed that different word embedding models capture different aspects of linguistic properties: a Bag-of-Words contexts based model tends to reflect the domain aspect (e.g., scientist and research) while a paraphrase-relationship based model captures semantic similarities of words (e.g., boy and kid). From experiments, we also observed that the performance of a word embedding model is usually inconsistent over different datasets. This inspired us to develop a model taking advantages of various pre-trained word embeddings for measuring textual similarity/relation.", "In this paper, we propose a convolutional neural network (CNN) to learn a multi-aspect word embedding from various pre-trained word embeddings. We then apply the max-pooling scheme and Long Short Term Memory (LSTM) on this embedding to form a sentence representation. In STS tasks, BIBREF5 shows the efficiency of the max-pooling scheme in modeling sentences from word embedding representations refined via CNN. However, the max-pooling scheme lacks the property of word order (e.g., sentence(“Bob likes Marry”) = sentence(“Marry likes Bob”)). To address this weakness, we use LSTM as an additional scheme for modeling sentences with word order characteristics. For measuring the similarity/relation between two sentence representations, we propose Multi-level comparison which consists of word-word level, sentence-sentence level, and word-sentence level. Through these levels, our model comprehensively evaluates the similarity/relation between two sentences.", "We evaluate our M-MaxLSTM-CNN model on three tasks: STS, textual entailment recognition, paraphrase identification. 
The advantages of M-MaxLSTM-CNN are: i) simple but efficient for combining various pre-trained word embeddings with different dimensions; ii) using Multi-level comparison shows better performances compared to using only sentence-sentence comparison; iii) does not require hand-crafted features (e.g., alignment features, Ngram overlaps, syntactic features, dependency features) compared to the state-of-the-art ECNU BIBREF6 on STS Benchmark dataset.", "Our main contributions are as follows:", "The remainder of this paper is organized as follows: Section 2 reviews the previous research, Section 3 introduces the architecture of our model, Section 4 describes the three tasks and datasets, Section 5 describes the experiment setting, Section 6 reports and discusses the results of the experiments, and Section 7 concludes our work." ], [ "Most prior research on modeling textual similarity relied on feature engineering. BIBREF7 extract INLINEFORM0 -gram overlap features and dependency-based features, while BIBREF8 employ features based on machine translation metrics. BIBREF9 propose a method using corpus-based and knowledge-based measures of similarity. BIBREF10 design a model which incorporates both syntax and lexical semantics using dependency grammars. BIBREF11 combine the fine-grained n-gram overlap features with the latent representation from matrix factorization. BIBREF12 develop a latent variable model which jointly learns paraphrase relations between word and sentence pairs. Using Dependency trees, BIBREF13 propose a robust monolingual aligner and successfully applied it for STS tasks.", "The recent emergence of deep learning models has provided an efficient way to learn continuous vectors representing words/sentences. By using a neural network in the context of a word prediction task, BIBREF14 and BIBREF15 generate word embedding vectors carrying semantic meanings. The embedding vectors of words which share similar meanings are close to each other. To capture the morphology of words, BIBREF16 enrich the word embedding with character n-grams information. Closest to this approach, BIBREF17 also propose to represent a word or sentence using a character n-gram count vector. However, the objective function for learning these embeddings is based on paraphrase pairs.", "For modeling sentences, composition approach attracted many studies. BIBREF18 model each word as a matrix and used iterated matrix multiplication to present a phrase. BIBREF19 design a Dependency Tree-Structured LSTM for modeling sentences. This model outperforms the linear chain LSTM in STS tasks. Convolutional neural network (CNN) has recently been applied efficiently for semantic composition BIBREF0 , BIBREF20 , BIBREF5 . This technique uses convolutional filters to capture local dependencies in term of context windows and applies a pooling layer to extract global features. BIBREF21 use CNN to extract features at multiple level of granularity. The authors then compare their sentence representations via multiple similarity metrics at several granularities. BIBREF22 propose a hierarchical CNN-LSTM architecture for modeling sentences. In this approach, CNN is used as an encoder to encode an sentence into a continuous representation, and LSTM is used as a decoder. BIBREF23 train a sentence encoder on a textual entailment recognition database using a BiLSTM-Maxpooling network. This encoder achieves competitive results on a wide range of transfer tasks.", "At SemEval-2017 STS task, hybrid approaches obtain strong performances. 
BIBREF24 train a linear regression model with WordNet, alignment features and the word embedding word2vec. BIBREF6 develop an ensemble model with multiple boosting techniques (i.e., Random Forest, Gradient Boosting, and XGBoost). This model incorporates traditional features (i.e., n-gram overlaps, syntactic features, alignment features, bag-of-words) and sentence modeling methods (i.e., Averaging Word Vectors, Projecting Averaging Word Vectors, LSTM).", "The MVCNN model BIBREF25 and the MGNC-CNN model BIBREF26 are close to our approach. In MVCNN, the authors use variable-size convolution filters on various pre-trained word embeddings for extracting features. However, MVCNN requires word embeddings to have the same size. In MGNC-CNN, the authors independently apply a CNN to each pre-trained word embedding for extracting features and then concatenate these features for sentence classification. By contrast, our M-MaxLSTM-CNN model jointly applies CNN on all pre-trained word embeddings to learn a multi-aspect word embedding. From this word representation, we encode sentences via max-pooling and LSTM. To learn the similarity/relation between two sentences, we employ Multi-level comparison." ], [ "Our model (shown in Figure FIGREF4 ) consists of three main components: i) learning a multi-aspect word embedding (Section 3.1); ii) modeling sentences from this embedding (Section 3.2); iii) measuring the similarity/relation between two sentences via Multi-level comparison (Section 3.3)." ], [ "Given a word INLINEFORM0 , we transform it into a word vector INLINEFORM1 via INLINEFORM2 pre-trained word embeddings as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the concatenation operator and INLINEFORM1 is the word embedding vector of INLINEFORM2 in the INLINEFORM3 th pre-trained embedding.", "To learn a multi-aspect word embedding INLINEFORM0 from the representation INLINEFORM1 , we design INLINEFORM2 convolutional filters. Each filter INLINEFORM3 is denoted as a weight vector with the same dimension as INLINEFORM4 and a bias value INLINEFORM5 . INLINEFORM6 is obtained by applying these filters to INLINEFORM7 as follows: DISPLAYFORM0 ", " where INLINEFORM0 denotes a logistic sigmoid function.", "The next section explains how to model a sentence from its multi-aspect word embeddings." ], [ "Given an input sentence INLINEFORM0 , we obtain a sequence of multi-aspect word embeddings INLINEFORM1 using Eq. (1-3). For modeling the sentence from the representation INLINEFORM2 , we use two schemes: max-pooling and LSTM.", "Max-pooling scheme: To construct a max-pooling sentence embedding INLINEFORM0 , the most salient features are extracted from the representation INLINEFORM1 as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the INLINEFORM1 th element of INLINEFORM2 .", "LSTM scheme: From Eq. (4), we find that the max-pooling scheme ignores the property of word order. Therefore, we construct an LSTM sentence embedding INLINEFORM0 to support the sentence embedding INLINEFORM1 . The representation INLINEFORM2 is transformed into a fixed-length vector by recursively applying an LSTM unit to each input INLINEFORM3 and the previous step INLINEFORM4 . 
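To make the multi-aspect word embedding and max-pooling steps above concrete, the following minimal NumPy sketch mirrors Eq. (1-4). It is not part of the original record: the toy vocabulary, dimensions, and variable names are illustrative assumptions, and the LSTM branch described next is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_aspect_embeddings(sentence, pretrained_embs, W, b):
    """Concatenate several pre-trained vectors per word (Eq. 1), then apply
    d filters, each a weight vector plus bias followed by a sigmoid (Eq. 2-3)."""
    rows = []
    for word in sentence:
        w_vec = np.concatenate([emb[word] for emb in pretrained_embs])
        rows.append(sigmoid(W @ w_vec + b))
    return np.vstack(rows)              # shape: (sentence length, d)

def max_pool_sentence(multi_aspect):
    """Eq. (4): element-wise max over word positions."""
    return multi_aspect.max(axis=0)     # shape: (d,)

# Toy usage with two tiny stand-ins for pre-trained embedding tables.
rng = np.random.default_rng(0)
vocab = ["bob", "likes", "mary"]
emb_a = {w: rng.normal(size=3) for w in vocab}    # e.g. a word2vec-style table
emb_b = {w: rng.normal(size=4) for w in vocab}    # e.g. a GloVe-style table
d = 5                                             # number of filters (1600 in the paper)
W, b = rng.normal(size=(d, 3 + 4)), rng.normal(size=d)

h = multi_aspect_embeddings(vocab, [emb_a, emb_b], W, b)
print(max_pool_sentence(h))
```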
At each time step INLINEFORM5 , the LSTM unit with memory dimension INLINEFORM6 defines six vectors in INLINEFORM7 : input gate INLINEFORM8 , forget gate INLINEFORM9 , output gate INLINEFORM10 , tanh layer INLINEFORM11 , memory cell INLINEFORM12 and hidden state INLINEFORM13 as follows (from BIBREF19 ): DISPLAYFORM0 DISPLAYFORM1 ", " where INLINEFORM0 respectively denote a logistic sigmoid function and element-wise multiplication; INLINEFORM1 are respectively two weight matrices and a bias vector for the input gate INLINEFORM2 . The notation is analogous for the forget gate INLINEFORM3 , output gate INLINEFORM4 , tanh layer INLINEFORM5 , memory cell INLINEFORM6 and hidden state INLINEFORM7 .", "Finally, the sentence embedding INLINEFORM0 is obtained by concatenating the two sentence embeddings INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0 " ], [ "In this section, we describe the process for evaluating the similarity/relation between two sentences. We compare two sentences via three levels: word-word, sentence-sentence, and word-sentence.", "Given two input sentences INLINEFORM0 and INLINEFORM1 , we encode them into two sequences of multi-aspect word embeddings INLINEFORM2 and INLINEFORM3 (Section 3.2). We then compute a word-word similarity vector INLINEFORM4 as follows: DISPLAYFORM0 ", " where INLINEFORM0 is the INLINEFORM1 th multi-aspect word embedding of sentence INLINEFORM2 ; INLINEFORM3 is a function to flatten a matrix into a vector; and INLINEFORM4 and INLINEFORM5 are respectively a weight matrix and a bias parameter.", "Given two input sentences INLINEFORM0 and INLINEFORM1 , we encode them into two sentence embeddings INLINEFORM2 and INLINEFORM3 (Sections 3.1 and 3.2). To compute the similarity/relation between the two embeddings, we introduce four comparison metrics:", "Cosine similarity: DISPLAYFORM0 ", "Multiplication vector & Absolute difference: DISPLAYFORM0 ", " where INLINEFORM0 is element-wise multiplication.", "Neural difference: DISPLAYFORM0 ", " where INLINEFORM0 and INLINEFORM1 are respectively a weight matrix and a bias parameter.", "As a result, we have a sentence-sentence similarity vector INLINEFORM0 as follows: DISPLAYFORM0 ", " where INLINEFORM0 and INLINEFORM1 are respectively a weight matrix and a bias parameter.", "Given a sentence embedding INLINEFORM0 and a sequence of multi-aspect word embeddings INLINEFORM1 , we compute a word-sentence similarity matrix INLINEFORM2 as follows: DISPLAYFORM0 ", " where INLINEFORM0 is the multi-aspect word embedding of the INLINEFORM1 th word in sentence INLINEFORM2 ; INLINEFORM3 and INLINEFORM4 are respectively a weight matrix and a bias parameter.", "As a result, we have a word-sentence similarity vector INLINEFORM0 for the two sentences as follows: DISPLAYFORM0 ", "where INLINEFORM0 is a function to flatten a matrix into a vector; INLINEFORM1 and INLINEFORM2 are respectively a weight matrix and a bias parameter.", "Finally, we compute a target score/label of a sentence pair as follows: DISPLAYFORM0 ", " where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are model parameters; INLINEFORM4 is a predicted target score/label." ], [ "We evaluate our model on three tasks:", "Table TABREF30 shows the statistics of the three datasets. Because it does not deal with named entities and multi-word idioms, the vocabulary size of SICK is quite small compared to the others." ], [ "We study five pre-trained word embeddings for our model:", "word2vec is trained on Google News dataset (100 billion tokens). 
The model contains 300-dimensional vectors for 3 million words and phrases.", "fastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.", "GloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).", "Baroni uses a context-predict approach to learn a 400-dimensional semantic embedding model. It is trained on 2.8 billion tokens constructed from ukWaC, the English Wikipedia and the British National Corpus.", "SL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], [ "In all of the tasks, we used the same model configuration as follows:", "Convolutional filters: we used 1600 filters. It is also the dimension of the word embedding concatenated from the five pre-trained word embeddings.", "LSTM dimension: we also selected 1600 for LSTM dimension.", "Neural similarity layers: the dimension of INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are respectively 50, 5, 5 and 100.", "Penultimate fully-connected layer: has the dimension of 250 and is followed by a drop-out layer ( INLINEFORM0 ).", "We conducted a grid search on 30% of STSB dataset to select these optimal hyper-parameters." ], [ "In these tasks, we use the cross-entropy objective function and employ AdaDelta as the stochastic gradient descent (SGD) update rule with mini-batch size as 30. Details of Adadelta method can be found in BIBREF28 . During the training phase, the pre-trained word embeddings are fixed.", "To compute a similarity score of a sentence pair in the range INLINEFORM0 , where INLINEFORM1 is an integer, we replace Eq. (27) with the equations in BIBREF19 as follows: DISPLAYFORM0 ", " where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are parameters; INLINEFORM4 ; INLINEFORM5 is a predicted similarity score.", "A sparse target distribution INLINEFORM0 which satisfies INLINEFORM1 is computed as: DISPLAYFORM0 ", "for INLINEFORM0 , and INLINEFORM1 is the similarity score.", "To train the model, we minimize the regularized KL-divergence between INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 ", "where INLINEFORM0 is the number of training pairs and INLINEFORM1 denotes the model parameters. The gradient descent optimization Adadelta is used to learn the model parameters. We also use mini-batch size as 30 and keep the pre-trained word embeddings fixed during the training phase. We evaluate our models through Pearson correlation INLINEFORM2 ." ], [ "This section describes two experiments: i) compare our model against recent systems; ii) evaluate the efficiency of using multiple pre-trained word embeddings." ], [ "Besides existing methods, we also compare our model with several sentence modeling approaches using multiple pre-trained word embeddings:", "Word Average: DISPLAYFORM0 ", "where INLINEFORM0 is the sentence embedding of a INLINEFORM1 -words sentence, and INLINEFORM2 is from Eq. (1)", "Project Average: DISPLAYFORM0 ", "where INLINEFORM0 is a INLINEFORM1 weight matrix, and INLINEFORM2 is a 1600 bias vector.", "LSTM: apply Eq. (5-11) on INLINEFORM0 to construct the 1600-dimension INLINEFORM1 sentence embedding.", "Max-CNN: apply Eq. (2-4) on INLINEFORM0 to construct the 1600-dimension INLINEFORM1 sentence embedding.", "We report the results of these methods in Table TABREF49 . 
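As a concrete illustration of the regression objective in the Training Setting above (the sparse target distribution and KL-divergence loss adopted from the Tree-LSTM formulation of BIBREF19), here is a small NumPy sketch. It is not drawn from the record: the helper names, the toy score, and the predicted distribution are assumptions, and the regularization term is omitted.

```python
import numpy as np

def sparse_target(y, K):
    """Encode a gold similarity score y in [1, K] as a sparse distribution p
    over integer ratings 1..K such that the expected rating equals y."""
    p = np.zeros(K)
    floor_y = int(np.floor(y))
    if floor_y == y:
        p[floor_y - 1] = 1.0
    else:
        p[floor_y - 1] = floor_y - y + 1   # mass on floor(y)
        p[floor_y] = y - floor_y           # mass on floor(y) + 1
    return p

def predicted_score(p_hat, K):
    """Expected rating under a predicted distribution p_hat."""
    return float(np.dot(np.arange(1, K + 1), p_hat))

def kl_loss(p, p_hat, eps=1e-12):
    """KL(p || p_hat) for one sentence pair."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (p_hat[mask] + eps))))

K = 5
p = sparse_target(3.7, K)                         # [0., 0., 0.3, 0.7, 0.]
p_hat = np.array([0.05, 0.05, 0.30, 0.50, 0.10])  # a model's predicted distribution
print(p, predicted_score(p_hat, K), kl_loss(p, p_hat))
```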
Overall, our M-MaxLSTM-CNN shows competitive performance on these tasks. Especially in the STS task, M-MaxLSTM-CNN outperforms the state-of-the-art methods on the two datasets. Because STSB includes more complicated samples than SICK, the performance of all methods on STSB is considerably lower. In STSB, the prior top-performing methods use ensemble approaches mixing hand-crafted features (word alignment, syntactic features, N-gram overlaps) and neural sentence representations, while our approach is only based on a neural sentence modeling architecture. In addition, we observed that InferSent shows strong performance on SICK-R but rather low performance on STSB, while our model consistently obtains strong performance on both datasets. InferSent uses knowledge transferred from textual entailment data; consequently, it obtains strong performance on this entailment task.", "According to BIBREF31 , using Word Average as the compositional architecture outperforms the other architectures (e.g., Project Average, LSTM) for STS tasks. In a setting with multiple word embeddings, however, Word Average is not as effective. Each word embedding model has its own architecture as well as objective function. These factors make the vector spaces of the word embeddings different. Therefore, we intuitively need a step to learn or refine a representation from a set of pre-trained word embeddings rather than only averaging them. Because the Project Average, LSTM, and Max-CNN models have their own parameters for learning sentence embeddings, they significantly outperform the Word Average model.", "We observed that MaxLSTM-CNN outperforms Max-CNN in both of the settings (i.e., sentence-sentence comparison, Multi-level comparison). As mentioned in Section 1, Max-CNN ignores the property of word order. Therefore, our model improves over Max-CNN by additionally employing LSTM for capturing this property.", "We only applied Multi-level comparison to Max-CNN and MaxLSTM-CNN because these encoders generate multi-aspect word embeddings. The experimental results demonstrate the effectiveness of using Multi-level comparison. In the textual entailment dataset SICK-E, the task mainly focuses on interpreting the meaning of a whole sentence pair rather than comparing word by word. Therefore, the performance of Multi-level comparison is quite similar to that of sentence-sentence comparison in the SICK-E task. This is also the reason why LSTM, which captures global relationships in sentences, performs strongly on this task." ], [ "In this section, we evaluate the efficiency of using multiple pre-trained word embeddings. We compare our multiple pre-trained word embeddings model against models using only one pre-trained word embedding. The same objective function and Multi-level comparison are applied to these models. When using one pre-trained word embedding, the dimension of LSTM and the number of convolutional filters are set to the length of the corresponding word embedding. Table TABREF57 shows the experimental results of this comparison. Because the approach using five word embeddings outperforms the approaches using two, three, or four word embeddings, we only report the performance of using five word embeddings. We also report INLINEFORM0 , which is the proportion of vocabulary available in a pre-trained word embedding. 
The SICK dataset excludes idiomatic multi-word expressions and named entities; consequently, the INLINEFORM1 of SICK is quite high.", "We observed that no word embedding has strong results on all the tasks. Although trained on the paraphrase database and having the highest INLINEFORM0 , the SL999 embedding could not outperform the GloVe embedding in SICK-R. HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the GloVe embedding. However, the performance of HCTI in STSB ( INLINEFORM1 ) is lower than that of our model using the GloVe embedding. In SICK-R, InferSent BIBREF23 achieves a strong performance ( INLINEFORM2 ) using the GloVe embedding with transfer knowledge, while our model with only the GloVe embedding achieves a performance close to that of InferSent. These results confirm the efficiency of Multi-level comparison.", "In STSB and MRPC, when employing the five pre-trained embeddings, the INLINEFORM0 increases. This limits the number of random values when initializing word embedding representations, because a word missing from a pre-trained word embedding is assigned a random embedding representation. In other words, a word outside a pre-trained embedding's vocabulary is assigned a random semantic meaning. Therefore, increasing the INLINEFORM1 improves the performance of measuring textual similarity. In STSB and MRPC, our multiple pre-trained word embedding approach achieves a significant improvement in performance compared to using one word embedding. In SICK-R and SICK-E, although the INLINEFORM2 is not increased when employing five pre-trained embeddings, the performance of our model is improved. This shows that our model learned an effective word embedding via these pre-trained word embeddings." ], [ "In this work, we study an approach employing multiple pre-trained word embeddings and Multi-level comparison for measuring semantic textual relation. The proposed M-MaxLSTM-CNN architecture consistently obtains strong performance on several tasks. Compared to the state-of-the-art methods in STS tasks, our model requires neither handcrafted features (e.g., word alignment, syntactic features) nor transfer learning knowledge. In addition, it allows using several pre-trained word embeddings with different dimensions.", "Future work could apply our multiple word embeddings approach to transfer learning tasks. This strategy allows making use of pre-trained word embeddings as well as other available resources." ], [ "This work was done while Nguyen Tien Huy was an intern at Toshiba Research Center." ] ], "section_name": [ "Introduction", "Related work", "Model description", "Multi-aspect word embedding", "Sentence modeling", "Multi-level comparison", "Tasks & Datasets", "Pre-trained word embeddings", "Model configuration", "Training Setting", "Experiments and Discussion", "Overall evaluation", "Evaluation of exploiting multiple pre-trained word embeddings", "Conclusion", "Acknowledgments" ] }
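For readers who want the sentence-sentence comparison metrics from the Multi-level comparison section in code, the minimal sketch below computes the cosine similarity, element-wise multiplication, absolute difference, and a small neural-difference layer. It is not part of the original record: the dimensions, the random projection, and the tanh nonlinearity are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def sentence_pair_features(s1, s2, W_nd, b_nd):
    """Comparison features for two sentence embeddings: cosine similarity,
    element-wise product, absolute difference, and a learned 'neural
    difference' computed over the concatenation of the two embeddings."""
    cos = float(s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-12))
    mult = s1 * s2
    absdiff = np.abs(s1 - s2)
    neural = np.tanh(W_nd @ np.concatenate([s1, s2]) + b_nd)
    return np.concatenate([[cos], mult, absdiff, neural])

rng = np.random.default_rng(1)
d, k = 8, 4                                   # embedding size, neural-difference size
s1, s2 = rng.normal(size=d), rng.normal(size=d)
W_nd, b_nd = rng.normal(size=(k, 2 * d)), rng.normal(size=k)
print(sentence_pair_features(s1, s2, W_nd, b_nd).shape)   # (1 + 8 + 8 + 4,)
```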
{ "answers": [ { "annotation_id": [ "517a767308accfd20fea719876112e44444c3d25", "8c9bb4e0f2193dee2e1b9a7c819b54935eca7328", "d3e44a72f50eb967a67c0864a90237cac33aa41b" ], "answer": [ { "evidence": [ "We study five pre-trained word embeddings for our model:", "word2vec is trained on Google News dataset (100 billion tokens). The model contains 300-dimensional vectors for 3 million words and phrases.", "fastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.", "GloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).", "Baroni uses a context-predict approach to learn a 400-dimensional semantic embedding model. It is trained on 2.8 billion tokens constructed from ukWaC, the English Wikipedia and the British National Corpus.", "SL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], "extractive_spans": [ "word2vec ", "fastText ", "GloVe ", "Baroni ", "SL999 " ], "free_form_answer": "", "highlighted_evidence": [ "We study five pre-trained word embeddings for our model:\n\nword2vec is trained on Google News dataset (100 billion tokens). The model contains 300-dimensional vectors for 3 million words and phrases.\n\nfastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.\n\nGloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).\n\nBaroni uses a context-predict approach to learn a 400-dimensional semantic embedding model. It is trained on 2.8 billion tokens constructed from ukWaC, the English Wikipedia and the British National Corpus.\n\nSL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We study five pre-trained word embeddings for our model:", "word2vec is trained on Google News dataset (100 billion tokens). The model contains 300-dimensional vectors for 3 million words and phrases.", "GloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).", "Baroni uses a context-predict approach to learn a 400-dimensional semantic embedding model. It is trained on 2.8 billion tokens constructed from ukWaC, the English Wikipedia and the British National Corpus.", "fastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.", "SL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], "extractive_spans": [ "word2vec", "fastText", "GloVe", "Baroni", "SL999" ], "free_form_answer": "", "highlighted_evidence": [ "We study five pre-trained word embeddings for our model:\n\nword2vec is trained on Google News dataset (100 billion tokens). 
", "GloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).", "Baroni uses a context-predict approach to learn a 400-dimensional semantic embedding model.", "fastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.", "SL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We study five pre-trained word embeddings for our model:", "word2vec is trained on Google News dataset (100 billion tokens). The model contains 300-dimensional vectors for 3 million words and phrases.", "fastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.", "GloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).", "Baroni uses a context-predict approach to learn a 400-dimensional semantic embedding model. It is trained on 2.8 billion tokens constructed from ukWaC, the English Wikipedia and the British National Corpus.", "SL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], "extractive_spans": [ "word2vec", "fastText", "GloVe", "Baroni", "SL999" ], "free_form_answer": "", "highlighted_evidence": [ "We study five pre-trained word embeddings for our model:\n\nword2vec is trained on Google News dataset (100 billion tokens). The model contains 300-dimensional vectors for 3 million words and phrases.\n\nfastText is learned via skip-gram with subword information on Wikipedia text. The embedding representations in fastText are 300-dimensional vectors.\n\nGloVe is a 300-dimensional word embedding model learned on aggregated global word-word co-occurrence statistics from Common Crawl (840 billion tokens).\n\nBaroni uses a context-predict approach to learn a 400-dimensional semantic embedding model. It is trained on 2.8 billion tokens constructed from ukWaC, the English Wikipedia and the British National Corpus.\n\nSL999 is trained under the skip-gram objective with negative sampling on word pairs from the paraphrase database PPDB. This 300-dimensional embedding model is tuned on SimLex-999 dataset BIBREF27 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "0e76f65dd995b954f90c8766ed0f586db4f857f2", "c186470bb1af4bab6f65cd659d42f932cbea70eb", "cd38f6f3dfa933e2435b1e8e2f1f2f7d91ccea13" ], "answer": [ { "evidence": [ "We report the results of these methods in Table TABREF49 . Overall, our M-MaxLSTM-CNN shows competitive performances in these tasks. Especially in the STS task, M-MaxLSTM-CNN outperforms the state-of-the-art methods on the two datasets. Because STSB includes complicated samples compared to SICK, the performances of methods on STSB are quite lower. 
In STSB, the prior top performance methods use ensemble approaches mixing hand-crafted features (word alignment, syntactic features, N-gram overlaps) and neural sentence representations, while our approach is only based on a neural sentence modeling architecture. In addition, we observed that InferSent shows the strong performance on SICK-R but quite low on STSB while our model consistently obtains the strong performances on both of the datasets. InferSent uses transfer knowledge on textual entailment data, consequently it obtains the strong performance on this entailment task.", "In STSB and MRPC, as employing the five pre-trained embeddings, the INLINEFORM0 is increased. This factor limits the number of random values when initializing word embedding representations because a word out of a pre-trained word embedding is assigned a random word embedding representation. In other words, a word out of a pre-trained word embedding is assigned a random semantic meaning. Therefore, the increase of the INLINEFORM1 improves the performance of measuring textual similarity. In STSB and MRPC, our multiple pre-trained word embedding achieves a significant improvement in performance compared against using one word embedding. In SICK-R and SICK-E, although the INLINEFORM2 is not increased when employing five pre-trained embeddings, the performance of our model is improved. This fact shows that our model learned an efficient word embedding via these pre-trained word embeddings." ], "extractive_spans": [ "STSB ", "SICK", "MRPC" ], "free_form_answer": "", "highlighted_evidence": [ "We report the results of these methods in Table TABREF49 . Overall, our M-MaxLSTM-CNN shows competitive performances in these tasks. Especially in the STS task, M-MaxLSTM-CNN outperforms the state-of-the-art methods on the two datasets. Because STSB includes complicated samples compared to SICK, the performances of methods on STSB are quite lower.", "In STSB and MRPC, as employing the five pre-trained embeddings, the INLINEFORM0 is increased. This factor limits the number of random values when initializing word embedding representations because a word out of a pre-trained word embedding is assigned a random word embedding representation. In other words, a word out of a pre-trained word embedding is assigned a random semantic meaning. Therefore, the increase of the INLINEFORM1 improves the performance of measuring textual similarity. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Tasks & Datasets", "We evaluate our model on three tasks:", "Table TABREF30 shows the statistic of the three datasets. Because of not dealing with name entities and multi-word idioms, the vocabulary size of SICK is quite small compared to the others.", "FLOAT SELECTED: Table 1: Statistic of datasets. |V |, l denote the vocabulary size, and the average length of sentences respectively." ], "extractive_spans": [], "free_form_answer": "STSB, SICK, MRPC", "highlighted_evidence": [ "Tasks & Datasets\nWe evaluate our model on three tasks:\n\nTable TABREF30 shows the statistic of the three datasets.", "FLOAT SELECTED: Table 1: Statistic of datasets. |V |, l denote the vocabulary size, and the average length of sentences respectively." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In STSB and MRPC, as employing the five pre-trained embeddings, the INLINEFORM0 is increased. 
This factor limits the number of random values when initializing word embedding representations because a word out of a pre-trained word embedding is assigned a random word embedding representation. In other words, a word out of a pre-trained word embedding is assigned a random semantic meaning. Therefore, the increase of the INLINEFORM1 improves the performance of measuring textual similarity. In STSB and MRPC, our multiple pre-trained word embedding achieves a significant improvement in performance compared against using one word embedding. In SICK-R and SICK-E, although the INLINEFORM2 is not increased when employing five pre-trained embeddings, the performance of our model is improved. This fact shows that our model learned an efficient word embedding via these pre-trained word embeddings.", "In this section, we evaluate the efficiency of using multiple pre-trained word embeddings. We compare our multiple pre-trained word embeddings model against models using only one pre-trained word embedding. The same objective function and Multi-level comparison are applied for these models. In case of using one pre-trained word embedding, the dimension of LSTM and the number of convolutional filters are set to the length of the corresponding word embedding. Table TABREF57 shows the experimental results of this comparison. Because the approach using five word embeddings outperforms the approaches using two, three, or four word embeddings, we only report the performance of using five word embeddings. We also report INLINEFORM0 which is the proportion of vocabulary available in a pre-trained word embedding. SICK dataset ignores idiomatic multi-word expressions, and named entities, consequently the INLINEFORM1 of SICK is quite high." ], "extractive_spans": [ "SICK", "STSB", "MRPC" ], "free_form_answer": "", "highlighted_evidence": [ "In STSB and MRPC, as employing the five pre-trained embeddings, the INLINEFORM0 is increased.", "SICK dataset ignores idiomatic multi-word expressions, and named entities, consequently the INLINEFORM1 of SICK is quite high." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "02e3979026d8102ef3d0d4dd86a01faf207124fe", "0bfdb55c66859631012ce4e66acbf15c7bad9e79", "9927b270af92190c8c6b4b697fbbc966460628c4" ], "answer": [ { "evidence": [ "We observed that no word embedding has strong results on all the tasks. Although trained on the paraphrase database and having the highest INLINEFORM0 , the SL999 embedding could not outperform the Glove embedding in SICK-R. HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the Glove embedding. However, the performance of HTCI in STSB ( INLINEFORM1 ) is lower than our model using the Glove embedding. In SICK-R, InferSent BIBREF23 achieves a strong performance ( INLINEFORM2 ) using the Glove embedding with transfer knowledge, while our model with only the Glove embedding achieves a performance close to the performance of InferSent. These results confirm the efficiency of Multi-level comparison.", "We evaluate our M-MaxLSTM-CNN model on three tasks: STS, textual entailment recognition, paraphrase identification. 
The advantages of M-MaxLSTM-CNN are: i) simple but efficient for combining various pre-trained word embeddings with different dimensions; ii) using Multi-level comparison shows better performances compared to using only sentence-sentence comparison; iii) does not require hand-crafted features (e.g., alignment features, Ngram overlaps, syntactic features, dependency features) compared to the state-of-the-art ECNU BIBREF6 on STS Benchmark dataset." ], "extractive_spans": [ "ECNU", "HCTI" ], "free_form_answer": "", "highlighted_evidence": [ "HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the Glove embedding. However, the performance of HTCI in STSB ( INLINEFORM1 ) is lower than our model using the Glove embedding. ", "We evaluate our M-MaxLSTM-CNN model on three tasks: STS, textual entailment recognition, paraphrase identification. The advantages of M-MaxLSTM-CNN are: i) simple but efficient for combining various pre-trained word embeddings with different dimensions; ii) using Multi-level comparison shows better performances compared to using only sentence-sentence comparison; iii) does not require hand-crafted features (e.g., alignment features, Ngram overlaps, syntactic features, dependency features) compared to the state-of-the-art ECNU BIBREF6 on STS Benchmark dataset." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We observed that no word embedding has strong results on all the tasks. Although trained on the paraphrase database and having the highest INLINEFORM0 , the SL999 embedding could not outperform the Glove embedding in SICK-R. HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the Glove embedding. However, the performance of HTCI in STSB ( INLINEFORM1 ) is lower than our model using the Glove embedding. In SICK-R, InferSent BIBREF23 achieves a strong performance ( INLINEFORM2 ) using the Glove embedding with transfer knowledge, while our model with only the Glove embedding achieves a performance close to the performance of InferSent. These results confirm the efficiency of Multi-level comparison." ], "extractive_spans": [ "HCTI BIBREF5", "InferSent BIBREF23 " ], "free_form_answer": "", "highlighted_evidence": [ "We observed that no word embedding has strong results on all the tasks. Although trained on the paraphrase database and having the highest INLINEFORM0 , the SL999 embedding could not outperform the Glove embedding in SICK-R. HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the Glove embedding. However, the performance of HTCI in STSB ( INLINEFORM1 ) is lower than our model using the Glove embedding. In SICK-R, InferSent BIBREF23 achieves a strong performance ( INLINEFORM2 ) using the Glove embedding with transfer knowledge, while our model with only the Glove embedding achieves a performance close to the performance of InferSent. These results confirm the efficiency of Multi-level comparison." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We evaluate our M-MaxLSTM-CNN model on three tasks: STS, textual entailment recognition, paraphrase identification. 
The advantages of M-MaxLSTM-CNN are: i) simple but efficient for combining various pre-trained word embeddings with different dimensions; ii) using Multi-level comparison shows better performances compared to using only sentence-sentence comparison; iii) does not require hand-crafted features (e.g., alignment features, Ngram overlaps, syntactic features, dependency features) compared to the state-of-the-art ECNU BIBREF6 on STS Benchmark dataset.", "We observed that no word embedding has strong results on all the tasks. Although trained on the paraphrase database and having the highest INLINEFORM0 , the SL999 embedding could not outperform the Glove embedding in SICK-R. HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the Glove embedding. However, the performance of HTCI in STSB ( INLINEFORM1 ) is lower than our model using the Glove embedding. In SICK-R, InferSent BIBREF23 achieves a strong performance ( INLINEFORM2 ) using the Glove embedding with transfer knowledge, while our model with only the Glove embedding achieves a performance close to the performance of InferSent. These results confirm the efficiency of Multi-level comparison." ], "extractive_spans": [ "ECNU BIBREF6", "HCTI BIBREF5" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluate our M-MaxLSTM-CNN model on three tasks: STS, textual entailment recognition, paraphrase identification. The advantages of M-MaxLSTM-CNN are: i) simple but efficient for combining various pre-trained word embeddings with different dimensions; ii) using Multi-level comparison shows better performances compared to using only sentence-sentence comparison; iii) does not require hand-crafted features (e.g., alignment features, Ngram overlaps, syntactic features, dependency features) compared to the state-of-the-art ECNU BIBREF6 on STS Benchmark dataset.", "HCTI BIBREF5 , which is the current state-of-the-art in the group of neural representation models on STSB, also used the Glove embedding." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "which pretrained embeddings were experimented with?", "what datasets where used?", "what are the state of the art methods they compare with?" ], "question_id": [ "3eb107f35f4f5f5f527a93ffb487aa2e3fe51efd", "47d54a6dd50cab8dab64bfa1f9a1947a8190080c", "67cb001f8ca122ea859724804b41529fea5faeef" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: The proposed M-MaxLSTM-CNN model: (a) MaxLSTM-CNN encoder; (b) Multi-level comparison.", "Table 1: Statistic of datasets. |V |, l denote the vocabulary size, and the average length of sentences respectively.", "Table 2: Test set results with Pearson’s r score×100 for STS tasks, and accuracy for other tasks. Boldface values show the highest scores in each dataset. SICK-R and SICK-E denote the STS task and the entailment task in SICK dataset respectively.", "Table 3: Evaluation of exploiting multiple pre-trained word embeddings. |V |avai is the proportion of vocabulary available in a word embedding. In case of using all word embeddings, |V |avai denotes the proportion of vocabulary where each word is available in at least one word embedding." ], "file": [ "4-Figure1-1.png", "5-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png" ] }
[ "what datasets where used?" ]
[ [ "1805.07882-Overall evaluation-7", "1805.07882-Evaluation of exploiting multiple pre-trained word embeddings-0", "1805.07882-Evaluation of exploiting multiple pre-trained word embeddings-2", "1805.07882-Tasks & Datasets-0", "1805.07882-5-Table1-1.png", "1805.07882-Tasks & Datasets-1" ] ]
[ "STSB, SICK, MRPC" ]
35
2004.01820
Aggressive, Repetitive, Intentional, Visible, and Imbalanced: Refining Representations for Cyberbullying Classification
Cyberbullying is a pervasive problem in online communities. To identify cyberbullying cases in large-scale social networks, content moderators depend on machine learning classifiers for automatic cyberbullying detection. However, existing models remain unfit for real-world applications, largely due to a shortage of publicly available training data and a lack of standard criteria for assigning ground truth labels. In this study, we address the need for reliable data using an original annotation framework. Inspired by social sciences research into bullying behavior, we characterize the nuanced problem of cyberbullying using five explicit factors to represent its social and linguistic aspects. We model this behavior using social network and language-based features, which improve classifier performance. These results demonstrate the importance of representing and modeling cyberbullying as a social phenomenon.
{ "paragraphs": [ [ "Cyberbullying poses a serious threat to the safety of online communities. The Centers for Disease Control and Prevention (CDC) identify cyberbullying as a “growing public health problem in need of additional research and prevention efforts” BIBREF0. Cyberbullying has been linked to negative mental health outcomes, including depression, anxiety, and other forms of self-harm, suicidal ideation, suicide attempts, and difficulties with social and emotional processing BIBREF1, BIBREF2, BIBREF3. Where traditional bullying was once limited to a specific time and place, cyberbullying can occur at any hour and from any location on earth BIBREF4. Once the first message has been sent, the attack can escalate rapidly as harmful content is spread across shared media, compounding these negative effects BIBREF5, BIBREF6.", "Internet users depend on content moderators to flag abusive text and ban cyberbullies from participating in online communities. However, due to the overwhelming volume of social media data produced every day, manual human moderation is often unfeasible. For this reason, social media platforms are beginning to rely instead on machine learning classifiers for automatic cyberbullying detection BIBREF7.", "The research community has developed increasingly competitive classifiers to detect harmful or aggressive content in text. Despite significant progress in recent years, however, existing models remain unfit for real-world applications. This is due, in part, to shortcomings in the training and testing data BIBREF8, BIBREF9, BIBREF10. Most annotation schemes have ignored the importance of social context, and researchers have neglected to provide annotators with objective criteria for distinguishing cyberbullying from other crude messages.", "To address the urgent need for reliable data, we provide an original annotation framework and an annotated Twitter dataset. The key advantages to our labeling approach are:", "[leftmargin=.2in]", "Contextually-informed ground truth. We provide annotators with the social context surrounding each message, including the contents of the reply thread and the account information of each user involved.", "Clear labeling criteria. We ask annotators to provide labels for five clear cyberbullying criteria. These criteria can be combined and adapted for revised definitions of cyberbullying.", "Using our new dataset, we experiment with existing NLP features and compare results with a newly-proposed set of features. We designed these features to encode the dynamic relationship between a potential bully and victim, using comparative measures from their relative linguistic and social network profiles. Additionally, our features have low computational complexity, so they can scale to internet-scale datasets, unlike expensive network centrality and clustering measurements.", "Results from our experiments suggest that, although existing NLP models can reliably detect aggressive language in text, these lexically-trained classifiers will fall short of the more subtle goal of cyberbullying detection. With $n$-grams and dictionary-based features, classifiers prove unable to detect harmful intent, visibility among peers, power imbalance, or the repetitive nature of aggression with sufficiently high precision and recall. However, our proposed feature set improves $F_1$ scores on all four of these social measures. 
Real-world detection systems can benefit from our proposed approach, incorporating the social aspects of cyberbullying into existing models and training these models on socially-informed ground truth labels." ], [ "Existing approaches to cyberbullying detection generally follow a common workflow. Data is collected from social networks or other online sources, and ground truth is established through manual human annotation. Machine learning algorithms are trained on the labeled data using the message text or hand-selected features. Then results are typically reported using precision, recall, and $F_1$ scores. Comparison across studies is difficult, however, because the definition of cyberbullying has not been standardized. Therefore, an important first step for the field is to establish an objective definition of cyberbullying." ], [ "Some researchers view cyberbullying as an extension of more “traditional” bullying behaviors BIBREF16, BIBREF17, BIBREF18. In one widely-cited book, the psychologist Dan Olweus defines schoolyard bullying in terms of three criteria: repetition, harmful intent, and an imbalance of power BIBREF19. He then identifies bullies by their intention to “inflict injury or discomfort” upon a weaker victim through repeated acts of aggression.", "Social scientists have extensively studied this form of bullying as it occurs among adolescents in school BIBREF20, BIBREF21. However, experts disagree whether cyberbullying should be studied as a form of traditional bullying or a fundamentally different phenomenon BIBREF20, BIBREF17. Some argue that, although cyberbullying might involve repeated acts of aggression, this condition might not necessarily hold in all cases, since a single message can be otherwise forwarded and publicly viewed without repeated actions from the author BIBREF22, BIBREF5. Similarly, the role of power imbalance is uncertain in online scenarios. Power imbalances of physical strength or numbers may be less relevant, whereas bully anonymity and the permanence of online messages may be sufficient to render the victim defenseless BIBREF23.", "The machine learning community has not reached a unanimous definition of cyberbullying either. They have instead echoed the uncertainty of the social scientists. Moreover, some authors have neglected to publish any objective cyberbullying criteria or even a working definition for their annotators, and among those who do, the formulation varies. This disagreement has slowed progress in the field, since classifiers and datasets cannot be as easily compared. Upon review, however, we found that all available definitions contained a strict subset of the following criteria: aggression (aggr), repetition (rep), harmful intent (harm), visibility among peers (peer), and power imbalance (power). The datasets built from these definitions are outlined in Table TABREF1." ], [ "According to BIBREF7, data collection is the most restrictive “bottleneck” in cyberbullying research. Because there are very few publicly available datasets, some researchers have turned to crowdsourcing using Amazon Mechanical Turk or similar platforms.", "In most studies to date, annotators labeled individual messages instead of message threads, ignoring social context altogether BIBREF11, BIBREF13, BIBREF24, BIBREF14, BIBREF25, BIBREF15. Only three of the papers that we reviewed incorporated social context in the annotation process. 
BIBREF4 considered batches of time-sorted tweets called sessions, which were grouped by user accounts, but they did not include message threads or any other form of context. BIBREF7 presented “original conversation[s] when possible,” but they did not explain when this information was available. BIBREF8 was the only study to label full message reply threads as they appeared in the original online source." ], [ "A large body of work has been published on cyberbullying detection and prediction, primarily through the use of natural language processing techniques. Most common approaches have relied on lexical features such as $n$-grams BIBREF8, BIBREF7, BIBREF26, TF-IDF vectors BIBREF27, BIBREF28, BIBREF15, word embeddings BIBREF29, or phonetic representations of messages BIBREF30, as well as dictionary-based counts on curse words, hateful or derogatory terms, pronouns, emoticons, and punctuation BIBREF11, BIBREF31, BIBREF14, BIBREF25. Some studies have also used message sentiment BIBREF25, BIBREF15, BIBREF7 or the age, gender, personality, and psychological state of the message author according to text from their timelines BIBREF11, BIBREF31. These methods have been reported with appreciable success as shown in Table TABREF8.", "Some researchers argue, however, that lexical features alone may not adequately represent the nuances of cyberbullying. BIBREF12 found that among Instagram media sessions containing profane or vulgar content, only 30% were acts of cyberbullying. They also found that while cyberbullying posts contained a moderate proportion of negative terms, the most negative posts were not considered cases of cyberbullying by the annotators. Instead, these negative posts referred to politics, sports, and other domestic matters between friends BIBREF12.", "The problem of cyberbullying cuts deeper than merely the exchange of aggressive language. The meaning and intent of an aggressive post is revealed through conversation and interaction between peers. Therefore, to properly distinguish cyberbullying from other uses of aggressive or profane language, future studies should incorporate key indicators from the social context of each message. Specifically, researchers can measure the author's status or social advantage, the author's harmful intent, the presence of repeated aggression in the thread, and the visibility of the thread among peers BIBREF12, BIBREF10, BIBREF9.", "Since cyberbullying is an inherently social phenomenon, some studies have naturally considered social network measures for classification tasks. Several features have been derived from the network representations of the message interactions. The degree and eigenvector centralities of nodes, the $k$-core scores, and clustering of communities, as well as the tie strength and betweenness centralities of mention edges have all been shown to improve text-based models BIBREF13, BIBREF25. Additionally, bullies and victims can be more accurately identified by their relative network positions. For example, the Jaccard coefficient between neighborhood sets in bully and victim networks has been found to be statistically significant BIBREF32. The ratio of all messages sent and received by each user was also significant.", "These findings show promising directions for future work. Social network features may provide the information necessary to reliably classify cyberbullying. However, it may be prohibitively expensive to build out social networks for each user due to time constraints and the limitations of API calls BIBREF33. 
For this reason, alternative measurements of online social relationships should be considered.", "In the present study, we leverage prior work by incorporating linguistic signals into our classifiers. We extend prior work by developing a dataset that better reflects the definitions of cyberbullying presented by social scientists, and by proposing and evaluating a feature set that represents information pertaining to the social processes that underlie cyberbullying behavior." ], [ "Here, we provide an original annotation framework and a new dataset for cyberbullying research, built to unify existing methods of ground truth annotation. In this dataset, we decompose the complex issue of cyberbullying into five key criteria, which were drawn from the social science and machine learning communities. These criteria can be combined and adapted for revised definitions of cyberbullying." ], [ "We collected a sample of 1.3 million unlabeled tweets from the Twitter Filter API. Since cyberbullying is a social phenomenon, we chose to filter for tweets containing at least one “@” mention. To restrict our investigation to original English content, we removed all non-English posts and retweets (RTs), narrowing the size of our sample to 280,301 tweets.", "Since aggressive language is a key component of cyberbullying BIBREF12, we ran the pre-trained classifier of BIBREF35 over our dataset to identify hate speech and aggressive language and increase the prevalence of cyberbullying examples . This gave us a filtered set of 9,803 aggressive tweets.", "We scraped both the user and timeline data for each author in the aggressive set, as well as any users who were mentioned in one of the aggressive tweets. In total, we collected data from 21,329 accounts. For each account, we saved the full user object, including profile name, description, location, verified status, and creation date. We also saved a complete list of the user's friends and followers, and a 6-month timeline of all their posts and mentions from January $1^\\text{st}$ through June $10^\\text{th}$, 2019. For author accounts, we extended our crawl to include up to four years of timeline content. Lastly, we collected metadata for all tweets belonging to the corresponding message thread for each aggressive message." ], [ "We presented each tweet in the dataset to three separate annotators as a Human Intelligence Task (HIT) on Amazon's Mechanical Turk (MTurk) platform. By the time of recruitment, 6,897 of the 9,803 aggressive tweets were accessible from the Twitter web page. The remainder of the tweets had been removed, or the Twitter account had been locked or suspended.", "We asked our annotators to consider the full message thread for each tweet as displayed on Twitter's web interface. We also gave them a list of up to 15 recent mentions by the author of the tweet, directed towards any of the other accounts mentioned in the original thread. Then we asked annotators to interpret each tweet in light of this social context, and had them provide us with labels for five key cyberbullying criteria. We defined these criteria in terms of the author account (“who posted the given tweet?”) and the target (“who was the tweet about?” – not necessarily the first mention). 
We also stated that “if the target is not on Twitter or their handle cannot be identified” the annotator should “please write OTHER.” With this framework established, we gave the definitions for our five cyberbullying criteria as follows.", "Aggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).", "Harmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.", "Power imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. Bullies can also derive power from peer support.", "Each of these criteria was represented as a binary label, except for power imbalance, which was ternary. We asked “Is there strong evidence that the author is more powerful than the target? Is the target more powerful? Or if there is not any good evidence, just mark equal.” We recognized that an imbalance of power might arise in a number of different circumstances. Therefore, we did not restrict our definition to just one form of power, such as follower count or popularity.", "For instructional purposes, we provided five sample threads to demonstrate both positive and negative examples for each of the five criteria. Two of these threads are shown here. The thread in Figure FIGREF18 displays bullying behavior that is targeted against the green user, with all five cyberbullying criteria displayed. The thread includes repeated use of aggressive language such as “she really fucking tried” and “she knows she lost.” The bully's harmful intent is evident in the victim's defensive responses. And lastly, the thread is visible among four peers as three gang up against one, creating a power imbalance.", "The final tweet in Figure FIGREF18 shows the importance of context in the annotation process. If we read only this individual message, we might decide that the post is cyberbullying, but given the social context here, we can confidently assert that this post is not cyberbullying. Although it contains the aggressive phrase “FUCK YOU TOO BITCH”, the author does not intend harm. The message is part of a joking exchange between two friends or equals, and no other peers have joined in the conversation or interacted with the thread.", "After asking workers to review these examples, we gave them a short 7-question quiz to test their knowledge. Workers were given only one quiz attempt, and they were expected to score at least 6 out of 7 questions correctly before they could proceed to the paid HIT. Workers were then paid $\\$0.12$ for each thread that they annotated.", "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. They labeled an average of 121.7 threads and a median of 7 threads each. 
They spent an average time of 3 minutes 50 seconds, and a median time of 61 seconds per thread. For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17.", "We determined ground truth for our data using a 2 out of 3 majority vote as in BIBREF12. If the message thread was missing or a target user could not be identified, we removed the entry from the dataset, since later we would need to draw our features from both the thread and the target profile. After filtering in this way, we were left with 5,537 labeled tweets." ], [ "As discussed earlier, some experts have argued that cyberbullying is different from online aggression BIBREF12, BIBREF10, BIBREF9. We asked our annotators to weigh in on this issue by asking them the subjective question for each thread: “Based on your own intuition, is this tweet an example of cyberbullying?” We did not use the cyberbullying label as ground truth for training models; we used this label to better understand worker perceptions of cyberbullying. We found that our workers believed cyberbullying will depend on a weighted combination of the five criteria presented in this paper, with the strongest correlate being harmful intent as shown in Table TABREF17.", "Furthermore, the annotators decided our dataset contained 74.8% aggressive messages as shown in the Positive Balance column of Table TABREF17. We found that a large majority of these aggressive tweets were not labeled as “cyberbullying.” Rather, only 10.5% were labeled by majority vote as cyberbullying, and only 21.5% were considered harmful. From this data, we propose that cyberbullying and cyberaggression are not equivalent classes. Instead, cyberbullying transcends cyberaggression." ], [ "We have established that cyberbullying is a complex social phenomenon, different from the simpler notion of cyberaggression. Standard Bag of Words (BoW) features based on single sentences, such as $n$-grams and word embeddings, may thus lead machine learning algorithms to incorrectly classify friendly or joking behavior as cyberbullying BIBREF12, BIBREF10, BIBREF9. To more reliably capture the nuances of repetition, harmful intent, visibility among peers, and power imbalance, we designed a new set of features from the social and linguistic traces of Twitter users. These measures allow our classifiers to encode the dynamic relationship between the message author and target, using network and timeline similarities, expectations from language models, and other signals taken from the message thread.", "For each feature and each cyberbullying criterion, we compare the cumulative distributions of the positive and negative class using the two-sample Kolmogorov-Smirnov test. We report the Kolmogorov-Smirnov statistic $D$ (a normalized distance between the CDF of the positive and negative class) as well as the $p$-value with $\\alpha = 0.05$ as our level for statistical significance." ], [ "To construct realistic and competitive baseline models, we consider a set of standard text-based features that have been used widely throughout the literature. Specifically, we use the NLTK library BIBREF36 to construct unigrams, bigrams, and trigrams for each labeled message. This parallels the work of BIBREF8, BIBREF7, and BIBREF26. 
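A small illustration (not from the study) of the unigram/bigram/trigram counting that the baseline text features rely on: NLTK is named in the text, but the tokenizer choice and the example message here are assumptions.

```python
from collections import Counter
from nltk.util import ngrams
from nltk.tokenize import word_tokenize   # requires nltk.download("punkt")

def ngram_counts(text, n_values=(1, 2, 3)):
    """Count unigrams, bigrams, and trigrams for one message."""
    tokens = word_tokenize(text.lower())
    counts = Counter()
    for n in n_values:
        counts.update(ngrams(tokens, n))
    return counts

example = "you really thought you won that argument"
print(ngram_counts(example).most_common(5))
```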
Following BIBREF30, we incorporate counts from the Linguistic Inquiry and Word Count (LIWC) dictionary to measure the linguistic and psychological processes that are represented in the text BIBREF37. We also use a modified version of the Flesch-Kincaid Grade Level and Flesch Reading Ease scores as computed in BIBREF35. Lastly, we encode the sentiment scores for each message using the Valence Aware Dictionary and sEntiment Reasoner (VADER) of BIBREF38." ], [ "Network features have been shown to improve text-based models BIBREF6, BIBREF25, and they can help classifiers distinguish between bullies and victims BIBREF32. These features may also capture some of the more social aspects of cyberbullying, such as power imbalance and visibility among peers. However, many centrality measures and clustering algorithms require detailed network representations. These features may not be scalable for real-world applications. We propose a set of low-complexity measurements that can be used to encode important higher-order relations at scale. Specifically, we measure the relative positions of the author and target accounts in the directed following network by computing modified versions of Jaccard's similarity index as we now explain." ], [ "Let $N^{+}(u)$ be the set of all accounts followed by user $u$ and let $N^{-}(u)$ be the set of all accounts that follow user $u$. Then $N(u) = N^{+}(u) \\cup N^{-}(u)$ is the neighborhood set of $u$. We consider five related measurements of neighborhood overlap for a given author $a$ and target $t$, listed here.", "Downward overlap measures the number of two-hop paths from the author to the target along following relationships; upward overlap measures two-hop paths in the opposite direction. Inward overlap measures the similarity between the two users' follower sets, and outward overlap measures the similarity between their sets of friends. Bidirectional overlap then is a more generalized measure of social network similarity. We provide a graphical depiction for each of these features on the right side of Figure FIGREF18.", "High downward overlap likely indicates that the target is socially relevant to the author, as high upward overlap indicates the author is relevant to the target. Therefore, when the author is more powerful, downward overlap is expected to be lower and upward overlap is expected to be higher. This trend is slight but visible in the cumulative distribution functions of Figure FIGREF26 (a): downward overlap is indeed lower when the author is more powerful than when the users are equals ($D=0.143$). However, there is not a significant difference for upward overlap ($p=0.85$). We also observe that, when the target is more powerful, downward and upward overlap are both significantly lower ($D=0.516$ and $D=0.540$ respectively). It is reasonable to assume that messages can be sent to celebrities and other powerful figures without the need for common social connections.", "Next, we consider inward and outward overlap. When the inward overlap is high, the author and target could have more common visibility. Similarly, if the outward overlap is high, then the author and target both follow similar accounts, so they might have similar interests or belong to the same social circles. Both inward and outward overlaps are expected to be higher when a post is visible among peers. This is true of both distributions in Figure FIGREF26.
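To make these overlap measurements concrete, the following is a minimal Python sketch of one plausible formulation. The exact normalizations of the five measures are not spelled out above, so the Jaccard-style ratios, the dictionary-based network representation, and the helper names here are assumptions rather than the implementation used in the study; the last function shows how the reported $D$ and $p$ values could be obtained with SciPy's two-sample Kolmogorov-Smirnov test.

```python
from scipy.stats import ks_2samp

def jaccard(x, y):
    """Jaccard index of two sets; defined as 0 when both sets are empty."""
    union = x | y
    return len(x & y) / len(union) if union else 0.0

def overlap_features(followed_by, followers, a, t):
    """Neighborhood-overlap features for author a and target t.

    followed_by[u] is N+(u), the accounts u follows;
    followers[u] is N-(u), the accounts that follow u.
    """
    return {
        # two-hop paths a -> x -> t: a follows x, and x follows t
        "downward": jaccard(followed_by[a], followers[t]),
        # two-hop paths t -> x -> a
        "upward": jaccard(followed_by[t], followers[a]),
        # similarity of the two users' follower sets
        "inward": jaccard(followers[a], followers[t]),
        # similarity of the two users' friend (followed-account) sets
        "outward": jaccard(followed_by[a], followed_by[t]),
        # similarity of the full neighborhoods N(u) = N+(u) | N-(u)
        "bidirectional": jaccard(followed_by[a] | followers[a],
                                 followed_by[t] | followers[t]),
    }

def ks_report(positive_values, negative_values):
    """Two-sample KS test between positive- and negative-class feature values."""
    result = ks_2samp(positive_values, negative_values)
    return result.statistic, result.pvalue  # D and p, compared against alpha = 0.05
```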
The difference in outward overlap is significant ($D=0.04$, $p=0.03$), and the difference for inward overlap is short of significant ($D=0.04$, $p=0.08$)." ], [ "We also use basic user account metrics drawn from the author and target profiles. Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." ], [ "Here, we consider linguistic features, drawn from both the author and target timelines. These are intended to capture the social relationship between each user, their common interests, and the surprise of a given message relative to the author's timeline history." ], [ "To more clearly represent the social relationship between the author and target users, we consider the messages sent between them as follows:", "Downward mention count: How many messages has the author sent to the target?", "Upward mention count: How many messages has the target sent to the author?", "Mention overlap: Let $M_a$ be the set of all accounts mentioned by author $a$, and let $M_t$ be the set of all accounts mentioned by target $t$. We compute the ratio $\\frac{|M_a \\cap M_t|}{|M_a \\cup M_t|}$.", "Multiset mention overlap: Let $\\hat{M}_a$ be the multiset of all accounts mentioned by author $a$ (with repeats for each mention), and let $\\hat{M}_t$ be the multiset of all accounts mentioned by target $t$. We measure $\\frac{|\\hat{M}_a \\cap ^{*} \\hat{M}_t|}{|\\hat{M}_a \\cup \\hat{M}_t|}$, where $\\cap ^{*}$ takes the multiplicity of each element to be the sum of its multiplicity from $\\hat{M}_a$ and its multiplicity from $\\hat{M}_t$.", "The direct mention counts measure the history of repeated communication between the author and the target. For harmful messages, the downward mention count is higher ($D=0.178$) and the upward mention count is lower ($D=0.374$) than for harmless messages, as shown in Figure FIGREF38. This means malicious authors tend to address the target repeatedly while the target responds with relatively few messages.", "Mention overlap is a measure of social similarity that is based on shared conversations between the author and the target. Multiset mention overlap measures the frequency of communication within this shared space. These features may help predict visibility among peers, or repeated aggression due to pile-on bullying situations. We see in Figure FIGREF38 that repeated aggression is linked to slightly greater mention overlap ($D=0.07$, $p=0.07$), but the trend is significant only for multiset mention overlap ($D=0.08$, $p=0.03$)." ], [ "Timeline similarity is used to indicate common interests and shared topics of conversation between the author and target timelines. High similarity scores might reflect users' familiarity with one another, or suggest that they occupy similar social positions. This can be used to distinguish cyberbullying from harmless banter between friends and associates. To compute this metric, we represent the author and target timelines as TF-IDF vectors $\\vec{A}$ and $\\vec{T}$. We then take the cosine similarity between the vectors as $\\frac{\\vec{A} \\cdot \\vec{T}}{\\Vert \\vec{A} \\Vert \\, \\Vert \\vec{T} \\Vert }$.", "A cosine similarity of 1 means that users' timelines had identical counts across all weighted terms; a cosine similarity of 0 means that their timelines did not contain any words in common.
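As an illustration, the timeline-similarity computation just described could be sketched with scikit-learn as follows; collapsing each timeline into a single document and relying on TfidfVectorizer's default tokenization and weighting are assumptions, not settings reported in this section.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def timeline_similarity(author_tweets, target_tweets):
    """Cosine similarity between TF-IDF vectors of two users' timelines.

    author_tweets and target_tweets are lists of tweet strings; each timeline
    is collapsed into one document before vectorization.
    """
    documents = [" ".join(author_tweets), " ".join(target_tweets)]
    vectors = TfidfVectorizer().fit_transform(documents)  # rows are A and T
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])  # value in [0, 1]
```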
We expect higher similarity scores between friends and associates.", "In Figure FIGREF44 (a), we see that the timelines were significantly less similar when the target was in a position of greater power ($D=0.294$). This is not surprising, since power can be derived from such differences between social groups. We do not observe the same dissimilarity when the author was more powerful ($p=0.58$). What we do observe is likely caused by noise from extreme class imbalance and low inter-annotator agreement on labels for author power.", "Turning to Figure FIGREF44 (b), we see that aggressive messages were less likely to harbor harmful intent if they were sent between users with similar timelines ($D=0.285$). Aggressive banter between friends is generally harmless, so again, this confirms our intuitions." ], [ "Harmful intent is difficult to measure in isolated messages because social context determines pragmatic meaning. We attempt to approximate the author's harmful intent by measuring the linguistic “surprise” of a given message relative to the author's timeline history. We do this in two ways: through a simple ratio of new words, and through the use of language models.", "To estimate historical language behavior, we count unigram and bigram frequencies from a 4-year snapshot of the author's timeline. Then, after removing all URLs, punctuation, stop words, mentions, and hashtags from the original post, we take the cardinality of the set of unigrams in the post having zero occurrences in the timeline. Lastly, we divide this count by the length of the processed message to arrive at our new words ratio. We can also build a language model from the bigram frequencies, using Kneser-Ney smoothing as implemented in NLTK BIBREF36. From the language model, we compute the surprise of the original message $m$ according to its cross-entropy, given by $H(m) = -\\frac{1}{N} \\sum _{i=1}^{N} \\log _2 P(b_i)$,", "where $m$ is composed of bigrams $b_1, b_2, \\dots , b_N$, and $P(b_i)$ is the probability of the $i$th bigram from the language model.", "We see in Figure FIGREF47 that harmfully intended messages have a greater density of new words ($D=0.06$). This is intuitive, since attacks may be staged around new topics of conversation. However, the cross-entropy of these harmful messages is slightly lower than for harmless messages ($D=0.06$). This may be due to harmless jokes, since joking messages might depart more from the standard syntax of the author's timeline." ], [ "Finally, we turn to the messages of the thread itself to compute measures of visibility and repeated aggression." ], [ "To determine the public visibility of the author's post, we collect basic measurements from the interactions of other users in the thread. They are as follows.", "Message count: Count the messages posted in the thread.", "Reply message count: Count the replies posted in the thread after the author's first comment.", "Reply user count: Count the users who posted a reply in the thread after the author's first comment.", "Maximum author favorites: The largest number of favorites the author received on a message in the thread.", "Maximum author retweets: The largest number of retweets the author received on a message in the thread." ], [ "To detect repeated aggression, we again employ the hate speech and offensive language classifier of BIBREF35. Each message is given a binary label according to the classifier-assigned class: aggressive (classified as hate speech or offensive language) or non-aggressive (classified as neither hate speech nor offensive language).
From these labels, we derive the following features.", "Aggressive message count: Count the messages in the thread classified as aggressive.", "Aggressive author message count: Count the author's messages that were classified as aggressive.", "Aggressive user count: Of the users who posted a reply in the thread after the author first commented, count how many had a message classified as aggressive." ], [ "Using our proposed features from the previous section and ground truth labels from our annotation task, we trained a separate Logistic Regression classifier for each of the five cyberbullying criteria, and we report precision, recall, and $F_1$ measures over each binary label independently. We averaged results using five-fold cross-validation, with 80% of the data allocated for training and 20% of the data allocated for testing at each iteration. To account for the class imbalance in the training data, we used the synthetic minority over-sampling technique (SMOTE) BIBREF39. We did not over-sample testing sets, however, to ensure that our tests better match the class distributions obtained by pre-filtering for aggressive directed Twitter messages.", "We compare our results across the five different feature combinations given in Table TABREF58. Note that because we do not include thread features in the User set, it can be used for cyberbullying prediction and early intervention. The Proposed set can be used for detection, since it is a collection of all newly proposed features, including thread features. The Combined set adds these to the baseline text features.", "The performance of the different classifiers is summarized in Tables TABREF59, TABREF64, and TABREF65. Here, we see that Bag of Words and text-based methods performed well on the aggressive language classification task, with an $F_1$ score of 83.5%. This was expected, and the score aligns well with the success of other published results in Table TABREF8. Cyberbullying detection is more complex than simply identifying aggressive text, however. We find that these same baseline methods fail to reliably detect repetition, harmful intent, visibility among peers, and power imbalance, as shown by the low recall scores in Table TABREF64. We conclude that our investigation of socially informed features was justified.", "Our proposed set of features beats recall scores for lexically trained baselines in all but the aggression criterion. We also improve precision scores for repetition, visibility among peers, and power imbalance. When we combine all features, we see our $F_1$ scores beat baselines for each criterion. This demonstrates the effectiveness of our approach, using linguistic similarity and community measurements to encode social characteristics for cyberbullying classification.", "Similar results were obtained by replacing our logistic regression model with a random forest model, a support vector machine (SVM), AdaBoost, or a Multilayer Perceptron (MLP). We report all precision, recall, and $F_1$ scores in Appendix 2, Tables TABREF69-TABREF77. We chose to highlight logistic regression because it can be more easily interpreted. As a result, we can identify the relative importance of our proposed features. The feature weights are also given in Appendix 2, Tables TABREF78-TABREF78. There we observe a trend.
The aggressive language and repetition criteria are dominated by lexical features; the harmful intent is split between lexical and historical communication features; and the visibility among peers and target power criteria are dominated by our proposed social features.", "Although we achieve moderately competitive scores in most categories, our classifiers are still over-classifying cyberbullying cases. Precision scores are generally much lower than recall scores across all models. To reduce our misclassification of false positives and better distinguish between joking or friendly banter and cyberbullying, it may be necessary to mine for additional social features. Overall, we should work to increase all $F_1$ scores to above 0.8 before we can consider our classifiers ready for real-world applications BIBREF10." ], [ "Our study focuses on the Twitter ecosystem and a small part of its network. The initial sampling of tweets was based on a machine learning classifier of aggressive English language. This classifier has an F1 score of 0.90 BIBREF35. Even with this filter, only 0.7% of tweets were deemed by a majority of MTurk workers as cyberbullying (Table TABREF17). This extreme class imbalance can disadvantage a wide range of machine learning models. Moreover, the MTurk workers exhibited only moderate inter-annotator agreement (Table TABREF17). We also acknowledge that notions of harmful intent and power imbalance can be subjective, since they may depend on the particular conventions or social structure of a given community. For these reasons, we recognize that cyberbullying still has not been unambiguously defined. Moreover, their underlying constructs are difficult to identify. In this study, we did not train workers to recognize subtle cues for interpersonal popularity, nor the role of anonymity in creating a power imbalance.", "Furthermore, because we lack the authority to define cyberbullying, we cannot assert a two-way implication between cyberbullying and the five criteria outlined here. It may be possible for cyberbullying to exist with only one criterion present, such as harmful intent. Our five criteria also might not span all of the dimensions of cyberbullying. However, they are representative of the literature in both the social science and machine learning communities, and they can be used in weighted combinations to accommodate new definitions.", "The main contribution of our paper is not that we solved the problem of cyberbullying detection. Instead, we have exposed the challenge of defining and measuring cyberbullying activity, which has been historically overlooked in the research community." ], [ "Cyberbullying detection is an increasingly important and yet challenging problem to tackle. A lack of detailed and appropriate real-world datasets stymies progress towards more reliable detection methods. With cyberbullying being a systemic issue across social media platforms, we urge the development of a methodology for data sharing with researchers that provides adequate access to rich data to improve on the early detection of cyberbullying while also addressing the sensitive privacy issues that accompany such instances." ], [ "In this study, we produced an original dataset for cyberbullying detection research and an approach that leverages this dataset to more accurately detect cyberbullying. Our labeling scheme was designed to accommodate the cyberbullying definitions that have been proposed throughout the literature. 
In order to more accurately represent the nature of cyberbullying, we decomposed this complex issue into five representative characteristics. Our classes distinguish cyberbullying from other related behaviors, such as isolated aggression or crude joking. To help annotators infer these distinctions, we provided them with the full context of each message's reply thread, along with a list of the author's most recent mentions. In this way, we secured a new set of labels for more reliable cyberbullying representations.", "From these ground truth labels, we designed a new set of features to quantify each of the five cyberbullying criteria. Unlike previous text-based or user-based features, our features measure the relationship between a message author and target. We show that these features improve the performance of standard text-based models. These results demonstrate the relevance of social-network and language-based measurements to account for the nuanced social characteristics of cyberbullying.", "Despite improvements over baseline methods, our classifiers have not attained the high levels of precision and recall that should be expected of real-world detection systems. For this reason, we argue that the challenging task of cyberbullying detection remains an open research problem." ], [ "This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR0011890019, and by the National Science Foundation (NSF) under Grant No. 1659886 and Grant No. 1553579." ], [ "To understand the real-world class distribution for the cyberbullying criteria, we randomly selected 222 directed English tweets from an unbiased sample drawn from the Twitter Decahose stream across the entire month of October 2016. Using the same methodology given in the paper, we had these tweets labeled three times each on Amazon Mechanical Turk. Again, ground truth was determined using a 2 out of 3 majority vote. Upon analysis, we found that the positive class balance was prohibitively small, especially for repetition, harmful intent, visibility among peers, and author power, which were all under 5%." ], [ "For the sake of comparison, we provide precision, recall, and $F_1$ scores for five different machine learning models: $k$-nearest neighbors (KNN), random forest, support vector machine (SVM), AdaBoost, and Multilayer Perceptron (MLP). Then we provide feature weights for our logistic regression model trained on each of the five cyberbullying criteria."
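To make the evaluation protocol from the Experimental Evaluation section and this appendix concrete, here is a hedged sketch of the per-criterion training loop. The feature matrix X and binary label vector y are placeholders, hyperparameters are left at library defaults, and only the choices stated above (five-fold cross-validation, SMOTE applied to training folds only, logistic regression, precision/recall/F1) are taken from the text.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_recall_fscore_support

def evaluate_criterion(X, y, seed=0):
    """Five-fold CV for one cyberbullying criterion, over-sampling training folds only.

    X is a NumPy feature matrix and y a binary label vector (placeholders here).
    """
    folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(X, y):
        # SMOTE is applied to the training fold only, as described in the text
        X_train, y_train = SMOTE(random_state=seed).fit_resample(X[train_idx], y[train_idx])
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        p, r, f1, _ = precision_recall_fscore_support(
            y[test_idx], clf.predict(X[test_idx]), average="binary")
        scores.append((p, r, f1))
    return np.mean(scores, axis=0)  # mean precision, recall, and F1 across folds
```

Swapping LogisticRegression for a random forest, SVM, AdaBoost, or MLP classifier in the same loop would correspond to the model comparison described for Appendix 2.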
] ], "section_name": [ "Introduction", "Background", "Background ::: Defining Cyberbullying", "Background ::: Existing Sources of Cyberbullying Data", "Background ::: Modeling Cyberbullying Behavior", "Curating a Comprehensive Cyberbullying Dataset", "Curating a Comprehensive Cyberbullying Dataset ::: Data Collection", "Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task", "Curating a Comprehensive Cyberbullying Dataset ::: Cyberbullying Transcends Cyberaggression", "Feature Engineering", "Feature Engineering ::: Text-based Features", "Feature Engineering ::: Social Network Features", "Feature Engineering ::: Social Network Features ::: Neighborhood Overlap", "Feature Engineering ::: Social Network Features ::: User-based features", "Feature Engineering ::: Timeline Features", "Feature Engineering ::: Timeline Features ::: Message Behavior", "Feature Engineering ::: Timeline Features ::: Timeline Similarity", "Feature Engineering ::: Timeline Features ::: Language Models", "Feature Engineering ::: Thread Features", "Feature Engineering ::: Thread Features ::: Visibility", "Feature Engineering ::: Thread Features ::: Aggression", "Experimental Evaluation", "Discussion ::: Limitations", "Discussion ::: Future Directions", "Conclusion", "Acknowledgements", "Appendix 1: Analysis of the Real-World Class Distribution for Cyberbullying Criteria", "Appendix 2: Model Evaluation" ] }
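The language-model "surprise" features described in the Timeline Features subsection can likewise be sketched in Python. This assumes NLTK's nltk.lm interface, simplifies the preprocessing, and leaves out-of-vocabulary handling at library defaults, so it is indicative only rather than the authors' implementation.

```python
from nltk import bigrams
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

def new_words_ratio(post_tokens, timeline_tokens):
    """Share of distinct post unigrams that never occur in the author's timeline."""
    seen = set(timeline_tokens)
    unseen = {token for token in post_tokens if token not in seen}
    return len(unseen) / max(len(post_tokens), 1)

def bigram_cross_entropy(post_tokens, timeline_sentences, order=2):
    """Cross-entropy (bits per bigram) of a post under a Kneser-Ney timeline model.

    timeline_sentences is a list of token lists from the author's history.
    """
    train_ngrams, vocab = padded_everygram_pipeline(order, timeline_sentences)
    model = KneserNeyInterpolated(order)
    model.fit(train_ngrams, vocab)
    pairs = list(bigrams(post_tokens))
    # logscore returns log2 P(word | context); tokens unseen in training may
    # receive -inf with default vocabulary settings
    return -sum(model.logscore(word, (prev,)) for prev, word in pairs) / max(len(pairs), 1)
```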
{ "answers": [ { "annotation_id": [ "67cee6de2a0dda2b1b0d4cad7c9e28369fe83f3d", "9f71c3d94a22f92fc337668123382564dd472e77", "a3948f5d15af0f47fbecb771fbc5fabbb05907ef" ], "answer": [ { "evidence": [ "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. They labeled an average of 121.7 threads and a median of 7 threads each. They spent an average time of 3 minutes 50 seconds, and a median time of 61 seconds per thread. For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "extractive_spans": [ "Fleiss's Kappa" ], "free_form_answer": "", "highlighted_evidence": [ "For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. They labeled an average of 121.7 threads and a median of 7 threads each. They spent an average time of 3 minutes 50 seconds, and a median time of 61 seconds per thread. For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "extractive_spans": [ "Fleiss's Kappa " ], "free_form_answer": "", "highlighted_evidence": [ " For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. They labeled an average of 121.7 threads and a median of 7 threads each. They spent an average time of 3 minutes 50 seconds, and a median time of 61 seconds per thread. For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "extractive_spans": [ "Fleiss's Kappa" ], "free_form_answer": "", "highlighted_evidence": [ "For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "1ec85962ede0447d69c9ebc3cf9aeb004678ddf5", "51db9f9aa57898a5539b6268a630ada780b7d899", "5252d16fd531546c1913ab80f14a724360b1fc87" ], "answer": [ { "evidence": [ "Our study focuses on the Twitter ecosystem and a small part of its network. The initial sampling of tweets was based on a machine learning classifier of aggressive English language. This classifier has an F1 score of 0.90 BIBREF35. Even with this filter, only 0.7% of tweets were deemed by a majority of MTurk workers as cyberbullying (Table TABREF17). This extreme class imbalance can disadvantage a wide range of machine learning models. Moreover, the MTurk workers exhibited only moderate inter-annotator agreement (Table TABREF17). 
We also acknowledge that notions of harmful intent and power imbalance can be subjective, since they may depend on the particular conventions or social structure of a given community. For these reasons, we recognize that cyberbullying still has not been unambiguously defined. Moreover, their underlying constructs are difficult to identify. In this study, we did not train workers to recognize subtle cues for interpersonal popularity, nor the role of anonymity in creating a power imbalance." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Moreover, the MTurk workers exhibited only moderate inter-annotator agreement (Table TABREF17)." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "FLOAT SELECTED: Table 1: Datasets built from different related definitions of cyberbullying. For each dataset, we report the size, positive class balance, inter-annotator agreement, and whether the study incorporated social context in the annotation process." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Datasets built from different related definitions of cyberbullying. For each dataset, we report the size, positive class balance, inter-annotator agreement, and whether the study incorporated social context in the annotation process." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. They labeled an average of 121.7 threads and a median of 7 threads each. They spent an average time of 3 minutes 50 seconds, and a median time of 61 seconds per thread. For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "1c8d8e098dbb163e9776399fbcc327364ecd33d1", "23cc7ca581e8c2bd34b4d52493ad49921cc9ee7f", "abc02ca1c9fd1e22eb8cdd27c4d5dfcbd8c440c3" ], "answer": [ { "evidence": [ "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. They labeled an average of 121.7 threads and a median of 7 threads each. They spent an average time of 3 minutes 50 seconds, and a median time of 61 seconds per thread. For each thread, we collected annotations from three different workers, and from this data we computed our reliability metrics using Fleiss's Kappa for inter-annotator agreement as shown in Table TABREF17." ], "extractive_spans": [ "170" ], "free_form_answer": "", "highlighted_evidence": [ "We successfully recruited 170 workers to label all 6,897 available threads in our dataset. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We presented each tweet in the dataset to three separate annotators as a Human Intelligence Task (HIT) on Amazon's Mechanical Turk (MTurk) platform. By the time of recruitment, 6,897 of the 9,803 aggressive tweets were accessible from the Twitter web page. 
The remainder of the tweets had been removed, or the Twitter account had been locked or suspended." ], "extractive_spans": [ "three " ], "free_form_answer": "", "highlighted_evidence": [ "We presented each tweet in the dataset to three separate annotators as a Human Intelligence Task (HIT) on Amazon's Mechanical Turk (MTurk) platform. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "56a2fee6a64d14c27ba69fc4848c98a52def3e3f", "71f7d5c99612c1b8e883b2eaac990581293427b8", "e1e147a996b9fec7be9d6fa1a12fd2c1729e2b3d" ], "answer": [ { "evidence": [ "Network features have been shown to improve text-based models BIBREF6, BIBREF25, and they can help classifiers distinguish between bullies and victims BIBREF32. These features may also capture some of the more social aspects of cyberbullying, such as power imbalance and visibility among peers. However, many centrality measures and clustering algorithms require detailed network representations. These features may not be scalable for real-world applications. We propose a set of low-complexity measurements that can be used to encode important higher-order relations at scale. Specifically, we measure the relative positions of the author and target accounts in the directed following network by computing modified versions of Jaccard's similarity index as we now explain.", "We also use basic user account metrics drawn from the author and target profiles. Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." ], "extractive_spans": [], "free_form_answer": "Relative positions of the author and target accounts in the directed following network by\ncomputing modified versions of Jaccard’s similarity index, friends count, followers count, verified status, number of tweets posted within 6 months.", "highlighted_evidence": [ "Specifically, we measure the relative positions of the author and target accounts in the directed following network by computing modified versions of Jaccard's similarity index as we now explain.", "Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Feature Engineering ::: Social Network Features ::: Neighborhood Overlap", "Let $N^{+}(u)$ be the set of all accounts followed by user $u$ and let $N^{-}(u)$ be the set of all accounts that follow user $u$. Then $N(u) = N^{+}(u) \\cup N^{-}(u)$ is the neighborhood set of $u$. We consider five related measurements of neighborhood overlap for a given author $a$ and target $t$, listed here.", "Downward overlap measures the number of two-hop paths from the author to the target along following relationships; upward overlap measures two-hop paths in the opposite direction. Inward overlap measures the similarity between the two users' follower sets, and outward overlap measures the similarity between their sets of friends. Bidirectional overlap then is a more generalized measure of social network similarity. 
We provide a graphical depiction for each of these features on the right side of Figure FIGREF18.", "Feature Engineering ::: Social Network Features ::: User-based features", "We also use basic user account metrics drawn from the author and target profiles. Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." ], "extractive_spans": [], "free_form_answer": "Downward overlap, upward overlap, inward overlap, outward overlap, bidirectional overlap, count of friends of each user, count of followers of each user, users verified status, number of tweets posted within six-month snapshots", "highlighted_evidence": [ "Feature Engineering ::: Social Network Features ::: Neighborhood Overlap", "We consider five related measurements of neighborhood overlap for a given author $a$ and target $t$, listed here.\n\nDownward overlap measures the number of two-hop paths from the author to the target along following relationships; upward overlap measures two-hop paths in the opposite direction. Inward overlap measures the similarity between the two users' follower sets, and outward overlap measures the similarity between their sets of friends. Bidirectional overlap then is a more generalized measure of social network similarity. ", "Feature Engineering ::: Social Network Features ::: User-based features\nWe also use basic user account metrics drawn from the author and target profiles. Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Network features have been shown to improve text-based models BIBREF6, BIBREF25, and they can help classifiers distinguish between bullies and victims BIBREF32. These features may also capture some of the more social aspects of cyberbullying, such as power imbalance and visibility among peers. However, many centrality measures and clustering algorithms require detailed network representations. These features may not be scalable for real-world applications. We propose a set of low-complexity measurements that can be used to encode important higher-order relations at scale. Specifically, we measure the relative positions of the author and target accounts in the directed following network by computing modified versions of Jaccard's similarity index as we now explain.", "Feature Engineering ::: Social Network Features ::: Neighborhood Overlap", "Feature Engineering ::: Social Network Features ::: User-based features", "We also use basic user account metrics drawn from the author and target profiles. Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." 
], "extractive_spans": [ "Neighborhood Overlap", " count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines" ], "free_form_answer": "", "highlighted_evidence": [ " Specifically, we measure the relative positions of the author and target accounts in the directed following network by computing modified versions of Jaccard's similarity index as we now explain.\n\nFeature Engineering ::: Social Network Features ::: Neighborhood Overlap", "User-based features\nWe also use basic user account metrics drawn from the author and target profiles. Specifically, we count the friends and followers of each user, their verified status, and the number of tweets posted within six-month snapshots of their timelines, as in BIBREF11, BIBREF4, and BIBREF8." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "032cd0f236d374c714226462c3f33d64fb72a665", "62cb38e6814a165cca3379e5d5ed9f77d80fc0fd", "752204874a96768ef35547ea3aede27e2d5ae2c3" ], "answer": [ { "evidence": [ "We asked our annotators to consider the full message thread for each tweet as displayed on Twitter's web interface. We also gave them a list of up to 15 recent mentions by the author of the tweet, directed towards any of the other accounts mentioned in the original thread. Then we asked annotators to interpret each tweet in light of this social context, and had them provide us with labels for five key cyberbullying criteria. We defined these criteria in terms of the author account (“who posted the given tweet?”) and the target (“who was the tweet about?” – not necessarily the first mention). We also stated that “if the target is not on Twitter or their handle cannot be identified” the annotator should “please write OTHER.” With this framework established, we gave the definitions for our five cyberbullying criteria as follows.", "Aggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).", "Harmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.", "Power imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. Bullies can also derive power from peer support." 
], "extractive_spans": [ "Aggressive language", "Repetition", "Harmful intent", "Visibility among peers", "Power imbalance" ], "free_form_answer": "", "highlighted_evidence": [ "With this framework established, we gave the definitions for our five cyberbullying criteria as follows.\n\nAggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. ", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).\n\nHarmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. ", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.\n\nPower imbalance: (power) Power is derived from authority and perceived social advantage. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We asked our annotators to consider the full message thread for each tweet as displayed on Twitter's web interface. We also gave them a list of up to 15 recent mentions by the author of the tweet, directed towards any of the other accounts mentioned in the original thread. Then we asked annotators to interpret each tweet in light of this social context, and had them provide us with labels for five key cyberbullying criteria. We defined these criteria in terms of the author account (“who posted the given tweet?”) and the target (“who was the tweet about?” – not necessarily the first mention). We also stated that “if the target is not on Twitter or their handle cannot be identified” the annotator should “please write OTHER.” With this framework established, we gave the definitions for our five cyberbullying criteria as follows.", "Aggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).", "Harmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.", "Power imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. Bullies can also derive power from peer support.", "Each of these criteria was represented as a binary label, except for power imbalance, which was ternary. We asked “Is there strong evidence that the author is more powerful than the target? Is the target more powerful? Or if there is not any good evidence, just mark equal.” We recognized that an imbalance of power might arise in a number of different circumstances. Therefore, we did not restrict our definition to just one form of power, such as follower count or popularity." 
], "extractive_spans": [ "Aggressive language", "Repetition", "Harmful intent", "Visibility among peers", "Power imbalance" ], "free_form_answer": "", "highlighted_evidence": [ "With this framework established, we gave the definitions for our five cyberbullying criteria as follows.\n\nAggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.\n\nRepetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).\n\nHarmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.\n\nVisibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.\n\nPower imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. Bullies can also derive power from peer support.\n\nEach of these criteria was represented as a binary" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We asked our annotators to consider the full message thread for each tweet as displayed on Twitter's web interface. We also gave them a list of up to 15 recent mentions by the author of the tweet, directed towards any of the other accounts mentioned in the original thread. Then we asked annotators to interpret each tweet in light of this social context, and had them provide us with labels for five key cyberbullying criteria. We defined these criteria in terms of the author account (“who posted the given tweet?”) and the target (“who was the tweet about?” – not necessarily the first mention). We also stated that “if the target is not on Twitter or their handle cannot be identified” the annotator should “please write OTHER.” With this framework established, we gave the definitions for our five cyberbullying criteria as follows.", "Aggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).", "Harmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.", "Power imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. 
Bullies can also derive power from peer support." ], "extractive_spans": [ "Aggressive language", "Repetition", "Harmful intent", "Visibility among peers", "Power imbalance" ], "free_form_answer": "", "highlighted_evidence": [ "With this framework established, we gave the definitions for our five cyberbullying criteria as follows.\n\nAggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive.", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).\n\nHarmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. ", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.\n\nPower imbalance: (power) Power is derived from authority and perceived social advantage. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "2e6daf1a03322896cc93dcb727acbddaa98a858d", "5f17675bea62218b7204c7f87909039ae88b8d58", "e9fa48c79383d9b60df06685aefefe7edd9332af" ], "answer": [ { "evidence": [ "We asked our annotators to consider the full message thread for each tweet as displayed on Twitter's web interface. We also gave them a list of up to 15 recent mentions by the author of the tweet, directed towards any of the other accounts mentioned in the original thread. Then we asked annotators to interpret each tweet in light of this social context, and had them provide us with labels for five key cyberbullying criteria. We defined these criteria in terms of the author account (“who posted the given tweet?”) and the target (“who was the tweet about?” – not necessarily the first mention). We also stated that “if the target is not on Twitter or their handle cannot be identified” the annotator should “please write OTHER.” With this framework established, we gave the definitions for our five cyberbullying criteria as follows.", "Aggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.", "Repetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).", "Harmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.", "Visibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.", "Power imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. Bullies can also derive power from peer support." 
], "extractive_spans": [], "free_form_answer": "They define cyberbullying as aggressive language, repetition, harmful intent, visibility among peers, and power imbalance", "highlighted_evidence": [ "With this framework established, we gave the definitions for our five cyberbullying criteria as follows.\n\nAggressive language: (aggr) Regardless of the author's intent, the language of the tweet could be seen as aggressive. The user either addresses a group or individual, and the message contains at least one phrase that could be described as confrontational, derogatory, insulting, threatening, hostile, violent, hateful, or sexually abusive.\n\nRepetition: (rep) The target user has received at least two aggressive messages in total (either from the author or from another user in the visible thread).\n\nHarmful intent: (harm) The tweet was designed to tear down or disadvantage the target user by causing them distress or by harming their public image. The target does not respond agreeably as to a joke or an otherwise lighthearted comment.\n\nVisibility among peers: (peer) At least one other user besides the target has liked, retweeted, or responded to at least one of the author's messages.\n\nPower imbalance: (power) Power is derived from authority and perceived social advantage. Celebrities and public figures are more powerful than common users. Minorities and disadvantaged groups have less power. Bullies can also derive power from peer support." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We have established that cyberbullying is a complex social phenomenon, different from the simpler notion of cyberaggression. Standard Bag of Words (BoW) features based on single sentences, such as $n$-grams and word embeddings, may thus lead machine learning algorithms to incorrectly classify friendly or joking behavior as cyberbullying BIBREF12, BIBREF10, BIBREF9. To more reliably capture the nuances of repetition, harmful intent, visibility among peers, and power imbalance, we designed a new set of features from the social and linguistic traces of Twitter users. These measures allow our classifiers to encode the dynamic relationship between the message author and target, using network and timeline similarities, expectations from language models, and other signals taken from the message thread." ], "extractive_spans": [ "cyberbullying is a complex social phenomenon, different from the simpler notion of cyberaggression" ], "free_form_answer": "", "highlighted_evidence": [ "We have established that cyberbullying is a complex social phenomenon, different from the simpler notion of cyberaggression." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Some researchers view cyberbullying as an extension of more “traditional” bullying behaviors BIBREF16, BIBREF17, BIBREF18. In one widely-cited book, the psychologist Dan Olweus defines schoolyard bullying in terms of three criteria: repetition, harmful intent, and an imbalance of power BIBREF19. He then identifies bullies by their intention to “inflict injury or discomfort” upon a weaker victim through repeated acts of aggression." ], "extractive_spans": [], "free_form_answer": "A public display of intention to “inflict injury or discomfort” upon a weaker victim through repeated acts of aggression.", "highlighted_evidence": [ " In one widely-cited book, the psychologist Dan Olweus defines schoolyard bullying in terms of three criteria: repetition, harmful intent, and an imbalance of power BIBREF19. 
He then identifies bullies by their intention to “inflict injury or discomfort” upon a weaker victim through repeated acts of aggression." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "question": [ "What agreement measure is used?", "Do they report the annotation agreement?", "How many annotators participated?", "What social-network features are used?", "What are the five factors considered?", "How is cyberbullying defined?" ], "question_id": [ "42eb7c5311fc1ac0344f0b38d3184ccd4faad3be", "8d14dd9c67d71494b4468000ff9683afdd11af7e", "b857f3e3f1dad5df55f69d062978967fe023ac6f", "5a473f86052cf7781dfe40943ddf99bc9fe8a4e4", "235c7c7ca719068136928b18e19f9661e0f72806", "c87966e7f497975b76a60f6be50c33d296a4a4e7" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "cyberbullying", "cyberbullying", "cyberbullying", "cyberbullying", "cyberbullying", "cyberbullying" ], "topic_background": [ "research", "research", "research", "research", "research", "research" ] }
{ "caption": [ "Table 1: Datasets built from different related definitions of cyberbullying. For each dataset, we report the size, positive class balance, inter-annotator agreement, and whether the study incorporated social context in the annotation process.", "Table 2: State of the Art in Cyberbullying Detection. Here, results are reported on either the Cyberbullying (CB) class exclusively or on the entire (total) dataset.", "Table 3: Analysis of Labeled Twitter Data", "Figure 1: Cyberbullying or not. The leftmost thread demonstrates all five cyberbullying criteria. Although the thread in the middle contains repeated use of aggressive language, there is no harmful intent, visibility among peers, or power imbalance. Overlap measures. (right) Graphical representation of the neighborhood overlap measures of author a and target t.", "Figure 2: Cumulative Distribution Functions for neighborhood overlap on relevant features. These measures are shown to be predictive of power imbalance and visibility among peers.", "Figure 3: Cumulative Distribution Functions for message behavior on relevant features. These measures are shown to be indicative of harmful intent and repetition.", "Figure 4: Cumulative Distribution Functions for timeline similarity on relevant features. These measures are shown to be predictive of power imbalance and harmful intent.", "Figure 5: Cumulative Distribution Functions for language models on relevant features. These measures are shown to be predictive of harmful intent.", "Table 4: Feature Combinations", "Table 6: Recall", "Table 5: Precision", "Table 7: F1 Scores", "Table 8: Analysis of Unfiltered Decahose Data", "Table 9: Random Forest Precision", "Table 10: AdaBoost Precision", "Table 11: MLP Precision", "Table 19: Top Absolute Weights for Repetition Features", "Table 20: Top Absolute Weights for Harmful Intent", "Table 12: Random Forest Recall", "Table 21: Top Absolute Weights for Visibility Among Peers" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "5-Figure1-1.png", "6-Figure2-1.png", "7-Figure3-1.png", "7-Figure4-1.png", "7-Figure5-1.png", "8-Table4-1.png", "8-Table6-1.png", "8-Table5-1.png", "8-Table7-1.png", "11-Table8-1.png", "11-Table9-1.png", "11-Table10-1.png", "11-Table11-1.png", "12-Table19-1.png", "12-Table20-1.png", "12-Table12-1.png", "12-Table21-1.png" ] }
[ "What social-network features are used?", "How is cyberbullying defined?" ]
[ [ "2004.01820-Feature Engineering ::: Social Network Features ::: Neighborhood Overlap-0", "2004.01820-Feature Engineering ::: Social Network Features ::: Neighborhood Overlap-1", "2004.01820-Feature Engineering ::: Social Network Features ::: User-based features-0", "2004.01820-Feature Engineering ::: Social Network Features-0" ], [ "2004.01820-Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task-3", "2004.01820-Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task-6", "2004.01820-Background ::: Defining Cyberbullying-0", "2004.01820-Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task-1", "2004.01820-Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task-5", "2004.01820-Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task-2", "2004.01820-Feature Engineering-0", "2004.01820-Curating a Comprehensive Cyberbullying Dataset ::: Annotation Task-4" ] ]
[ "Downward overlap, upward overlap, inward overlap, outward overlap, bidirectional overlap, count of friends of each user, count of followers of each user, users verified status, number of tweets posted within six-month snapshots", "A public display of intention to “inflict injury or discomfort” upon a weaker victim through repeated acts of aggression." ]
36
1808.04122
A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization
In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are reconstructed into corresponding capsules which are then routed to another capsule to produce a continuous vector. The length of this vector is used to measure the plausibility score of the triple. Our proposed CapsE obtains better performance than previous state-of-the-art embedding models for knowledge graph completion on two benchmark datasets WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17.
{ "paragraphs": [ [ "Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are the useful resources for many NLP and especially information retrieval applications such as semantic search and question answering BIBREF0 . However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1 . Therefore, much research efforts have focused on the knowledge graph completion task which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not BIBREF2 , BIBREF3 , BIBREF4 . To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6 . These embedding models score triples (s, r, o), such that valid triples have higher plausibility scores than invalid ones BIBREF2 , BIBREF3 , BIBREF4 . For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom).", "Triple modeling is applied not only to the KG completion, but also for other tasks which can be formulated as a triple-based prediction problem. An example is in search personalization, one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3 , as proposed by BIBREF12 . Previous studies have shown the effectiveness of modeling triple for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks.", "Conventional embedding models, such as TransE BIBREF3 , DISTMULT BIBREF13 and ComplEx BIBREF14 , use addition, subtraction or simple multiplication operators, thus only capture the linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB—a convolutional neural network (CNN)-based model for KG completion and achieved state-of-the-art results. Most of KG embedding models are constructed to modeling entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a “deep” architecture for modeling the entries in a triple at the same dimension.", " BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then uses a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole constituting viewpoint invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. 
Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although it is not straightforward to be visually examined.", "To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are unique INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not.", "In summary, our main contributions from this paper are as follows:", " INLINEFORM0 We propose an embedding model CapsE using the capsule network BIBREF16 for modeling relationship triples. To our best of knowledge, our work is the first consideration of exploring the capsule network to knowledge graph completion and search personalization.", " INLINEFORM0 We evaluate our CapsE for knowledge graph completion on two benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.", " INLINEFORM0 We restate the prospective strategy of expanding the triple embedding models to improve the ranking quality of the search personalization systems. We adapt our model to search personalization and evaluate on SEARCH17 BIBREF12 – a dataset of the web search query logs. Experimental results show that our CapsE achieves the new state-of-the-art results with significant improvements over strong baselines." ], [ "Let INLINEFORM0 be a collection of valid factual triples in the form of (subject, relation, object) denoted as (s, r, o). Embedding models aim to define a score function giving a score for each triple, such that valid triples receive higher scores than invalid triples.", "We denote INLINEFORM0 , INLINEFORM1 and INLINEFORM2 as the INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. In our proposed CapsE, we follow BIBREF15 to view each embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] as a matrix INLINEFORM10 , and denote INLINEFORM11 as the INLINEFORM12 -th row of INLINEFORM13 . We use a filter INLINEFORM14 operated on the convolution layer. This filter INLINEFORM15 is repeatedly operated over every row of INLINEFORM16 to generate a feature map INLINEFORM17 , in which INLINEFORM18 where INLINEFORM19 denotes a dot product, INLINEFORM20 is a bias term and INLINEFORM21 is a non-linear activation function such as ReLU. Our model uses multiple filters INLINEFORM22 to generate feature maps. 
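As an illustrative aside, the convolution step just described can be sketched in a few lines of PyTorch. This is a minimal sketch under stated assumptions: the embedding size d, the number of filters N and all tensor names are placeholders, and randomly drawn vectors stand in for the pre-trained embeddings.

```python
import torch
import torch.nn as nn

d, N = 4, 5                      # embedding size and number of 1x3 filters (illustrative values)
v_s, v_r, v_o = torch.randn(d), torch.randn(d), torch.randn(d)  # stand-ins for pre-trained embeddings

# View the triple as a d x 3 matrix whose three columns are the subject, relation and object embeddings.
A = torch.stack([v_s, v_r, v_o], dim=1)              # shape (d, 3)

# Each 1x3 filter slides over the d rows, so every filter yields one d-dimensional feature map.
conv = nn.Conv2d(in_channels=1, out_channels=N, kernel_size=(1, 3))
feature_maps = torch.relu(conv(A.view(1, 1, d, 3)))  # shape (1, N, d, 1)

# Entries sharing a dimension index across the N feature maps are grouped together,
# so the first capsule layer has d capsules, each holding an N-dimensional vector.
first_layer_capsules = feature_maps.view(N, d).t()   # shape (d, N)
print(first_layer_capsules.shape)                    # torch.Size([4, 5])
```

The reshaping at the end is the key point: grouping entries by dimension index across all feature maps is exactly how the first capsule layer is formed.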
We denote INLINEFORM23 as the set of filters and INLINEFORM24 as the number of filters, thus we have INLINEFORM25 INLINEFORM26 -dimensional feature maps, for which each feature map can capture one single characteristic among entries at the same dimension.", "We build our CapsE with two single capsule layers for a simplified architecture. In the first layer, we construct INLINEFORM0 capsules, wherein entries at the same dimension from all feature maps are encapsulated into a corresponding capsule. Therefore, each capsule can capture many characteristics among the entries at the corresponding dimension in the embedding triple. These characteristics are generalized into one capsule in the second layer which produces a vector output whose length is used as the score for the triple.", "The first capsule layer consists of INLINEFORM0 capsules, for which each capsule INLINEFORM1 has a vector output INLINEFORM2 . Vector outputs INLINEFORM3 are multiplied by weight matrices INLINEFORM4 to produce vectors INLINEFORM5 which are summed to produce a vector input INLINEFORM6 to the capsule in the second layer. The capsule then performs the non-linear squashing function to produce a vector output INLINEFORM7 : DISPLAYFORM0 ", "where INLINEFORM0 , and INLINEFORM1 are coupling coefficients determined by the routing process as presented in Algorithm SECREF2 . Because there is one capsule in the second layer, we make only one difference in the routing process proposed by BIBREF16 , for which we apply the INLINEFORM2 in a direction from all capsules in the previous layer to each of capsules in the next layer.", "[ht] 1.25", "all capsule i INLINEFORM0 the first layer INLINEFORM1 0 INLINEFORM2 = 1, 2, ..., m INLINEFORM3 INLINEFORM4 ", " INLINEFORM0 ", "all capsule i INLINEFORM0 the first layer INLINEFORM1 The routing process is extended from BIBREF16 .", "We illustrate our proposed model in Figure FIGREF1 where embedding size: INLINEFORM0 , the number of filters: INLINEFORM1 , the number of neurons within the capsules in the first layer is equal to INLINEFORM2 , and the number of neurons within the capsule in the second layer: INLINEFORM3 . The length of the vector output INLINEFORM4 is used as the score for the input triple.", "Formally, we define the score function INLINEFORM0 for the triple INLINEFORM1 as follows: DISPLAYFORM0 ", "where the set of filters INLINEFORM0 is shared parameters in the convolution layer; INLINEFORM1 denotes a convolution operator; and INLINEFORM2 denotes a capsule network operator. We use the Adam optimizer BIBREF19 to train CapsE by minimizing the loss function BIBREF14 , BIBREF15 as follows: DISPLAYFORM0 ", " INLINEFORM0 ", "here INLINEFORM0 and INLINEFORM1 are collections of valid and invalid triples, respectively. INLINEFORM2 is generated by corrupting valid triples in INLINEFORM3 ." ], [ "In the knowledge graph completion task BIBREF3 , the goal is to predict a missing entity given a relation and another entity, i.e, inferring a head entity INLINEFORM0 given INLINEFORM1 or inferring a tail entity INLINEFORM2 given INLINEFORM3 . The results are calculated based on ranking the scores produced by the score function INLINEFORM4 on test triples." ], [ "Datasets: We use two recent benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . These two datasets are created to avoid reversible relation problems, thus the prediction task becomes more realistic and hence more challenging BIBREF18 . 
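Continuing the aside, the squashing non-linearity and the length-based score used by the model are only referenced in the preceding section (their equations are elided as DISPLAYFORM0 here), so the sketch below assumes the standard squashing function from the CapsNet literature and replaces the iterative routing with uniform coupling coefficients for brevity; it is not the authors' implementation, and all shapes are illustrative.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    # Standard capsule squashing: preserves direction, maps the length into [0, 1).
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def triple_score(u, W):
    # u: (d, N)    one N-dimensional capsule per embedding dimension (first layer)
    # W: (d, N, k) weight matrices routing each first-layer capsule to the single second-layer capsule
    u_hat = torch.einsum('dn,dnk->dk', u, W)   # predictions from each first-layer capsule
    s = u_hat.sum(dim=0)                       # summed input (coupling coefficients taken as uniform here)
    v = squash(s)                              # k-dimensional output capsule
    return v.norm()                            # the vector length is the plausibility score

d, N, k = 4, 5, 10
u = torch.randn(d, N)
W = torch.randn(d, N, k)
print(float(triple_score(u, W)))               # a score in [0, 1)
```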
Table TABREF7 presents the statistics of WN18RR and FB15k-237.", "Evaluation protocol: Following BIBREF3 , for each valid test triple INLINEFORM0 , we replace either INLINEFORM1 or INLINEFORM2 by each of all other entities to create a set of corrupted triples. We use the “Filtered” setting protocol BIBREF3 , i.e., not taking any corrupted triples that appear in the KG into accounts. We rank the valid test triple and corrupted triples in descending order of their scores. We employ evaluation metrics: mean rank (MR), mean reciprocal rank (MRR) and Hits@10 (i.e., the proportion of the valid test triples ranking in top 10 predictions). Lower MR, higher MRR or higher Hits@10 indicate better performance. Final scores on the test set are reported for the model obtaining the highest Hits@10 on the validation set.", "Training protocol: We use the common Bernoulli strategy BIBREF20 , BIBREF21 when sampling invalid triples. For WN18RR, BIBREF22 found a strong evidence to support the necessity of a WordNet-related semantic setup, in which they averaged pre-trained word embeddings for word surface forms within the WordNet to create synset embeddings, and then used these synset embeddings to initialize entity embeddings for training their TransE association model. We follow this evidence in using the pre-trained 100-dimensional Glove word embeddings BIBREF23 to train a TransE model on WN18RR.", "We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 . For ConvKB, we use a new process of training up to 100 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters with the Adam initial learning rate in INLINEFORM0 and the number of filters INLINEFORM1 in INLINEFORM2 . We obtain the highest Hits@10 scores on the validation set when using N= 400 and the initial learning rate INLINEFORM3 on WN18RR; and N= 100 and the initial learning rate INLINEFORM4 on FB15k-237.", "Like in ConvKB, we use the same pre-trained entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our CapsE for both WN18RR and FB15k-237 ( INLINEFORM0 ). We set the batch size to 128, the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM1 ), and the number of iterations in the routing algorithm INLINEFORM2 in INLINEFORM3 . We run CapsE up to 50 epochs and monitor the Hits@10 score after each 10 training epochs to choose optimal hyper-parameters. The highest Hits@10 scores for our CapsE on the validation set are obtained when using INLINEFORM4 , INLINEFORM5 and the initial learning rate at INLINEFORM6 on WN18RR; and INLINEFORM7 , INLINEFORM8 and the initial learning rate at INLINEFORM9 on FB15k-237.", "Dataset: We use the SEARCH17 dataset BIBREF12 of query logs of 106 users collected by a large-scale web search engine. A log entity consists of a user identifier, a query, top-10 ranked documents returned by the search engine and clicked documents along with the user's dwell time. BIBREF12 constructed short-term (session-based) user profiles and used the profiles to personalize the returned results. They then employed the SAT criteria BIBREF26 to identify whether a returned document is relevant from the query logs as either a clicked document with a dwell time of at least 30 seconds or the last clicked document in a search session (i.e., a SAT click). 
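As a further aside on the evaluation protocol described earlier, the rank-based metrics (mean rank, mean reciprocal rank and Hits@10) can be sketched as below; the function name and the toy ranks are hypothetical, and the filtered setting is assumed to have already removed corrupted triples that appear in the KG before ranks are computed.

```python
def ranking_metrics(ranks, k=10):
    """ranks: 1-based rank of each valid test triple among its (filtered) candidates."""
    n = len(ranks)
    mr = sum(ranks) / n                             # mean rank (lower is better)
    mrr = sum(1.0 / r for r in ranks) / n           # mean reciprocal rank (higher is better)
    hits_k = sum(1 for r in ranks if r <= k) / n    # Hits@k (higher is better)
    return mr, mrr, hits_k

# Toy example: three test triples ranked 1st, 4th and 12th after filtering.
print(ranking_metrics([1, 4, 12]))                  # approximately (5.67, 0.444, 0.667)
```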
After that, they assigned a INLINEFORM0 label to a returned document if it is a SAT click and also assigned INLINEFORM1 labels to the remaining top-10 documents. The rank position of the INLINEFORM2 labeled documents is used as the ground truth to evaluate the search performance before and after re-ranking.", "The dataset was uniformly split into the training, validation and test sets. This split is for the purpose of using historical data in the training set to predict new data in the test set BIBREF12 . The training, validation and test sets consist of 5,658, 1,184 and 1,210 relevant (i.e., valid) triples; and 40,239, 7,882 and 8,540 irrelevant (i.e., invalid) triples, respectively.", "Evaluation protocol: Our CapsE is used to re-rank the original list of documents returned by a search engine as follows: (i) We train our model and employ the trained model to calculate the score for each INLINEFORM0 triple. (ii) We then sort the scores in the descending order to obtain a new ranked list. To evaluate the performance of our proposed model, we use two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1. For each metric, the higher value indicates better ranking performance.", "We compare CapsE with the following baselines using the same experimental setup: (1) SE: The original rank is returned by the search engine. (2) CI BIBREF27 : This baseline uses a personalized navigation method based on previously clicking returned documents. (3) SP BIBREF9 , BIBREF11 : A search personalization method makes use of the session-based user profiles. (4) Following BIBREF12 , we use TransE as a strong baseline model for the search personalization task. Previous work shows that the well-known embedding model TransE, despite its simplicity, obtains very competitive results for the knowledge graph completion BIBREF28 , BIBREF29 , BIBREF14 , BIBREF30 , BIBREF15 . (5) The CNN-based model ConvKB is the most closely related model to our CapsE.", "Embedding initialization: We follow BIBREF12 to initialize user profile, query and document embeddings for the baselines TransE and ConvKB, and our CapsE.", "We train a LDA topic model BIBREF31 with 200 topics only on the relevant documents (i.e., SAT clicks) extracted from the query logs. We then use the trained LDA model to infer the probability distribution over topics for every returned document. We use the topic proportion vector of each document as its document embedding (i.e. INLINEFORM0 ). In particular, the INLINEFORM1 element ( INLINEFORM2 ) of the vector embedding for document INLINEFORM3 is: INLINEFORM4 where INLINEFORM5 is the probability of the topic INLINEFORM6 given the document INLINEFORM7 .", "We also represent each query by a probability distribution vector over topics. Let INLINEFORM0 be the set of top INLINEFORM1 ranked documents returned for a query INLINEFORM2 (here, INLINEFORM3 ). The INLINEFORM4 element of the vector embedding for query INLINEFORM5 is defined as in BIBREF12 : INLINEFORM6 , where INLINEFORM7 is the exponential decay function of INLINEFORM8 which is the rank of INLINEFORM9 in INLINEFORM10 . And INLINEFORM11 is the decay hyper-parameter ( INLINEFORM12 ). Following BIBREF12 , we use INLINEFORM13 . Note that if we learn query and document embeddings during training, the models will overfit to the data and will not work for new queries and documents. 
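To make the initialization just described concrete, a small sketch of the query-embedding computation follows; since the exact exponential decay function is elided above, the form delta**(rank - 1) and the final normalization are assumptions chosen purely for illustration, as are all names in the sketch.

```python
import numpy as np

def query_embedding(doc_topic_vectors, delta=0.8):
    """doc_topic_vectors: LDA topic-proportion vectors of the top-n returned documents,
    ordered by rank (rank 1 first). delta is the decay hyper-parameter in (0, 1)."""
    q = np.zeros_like(doc_topic_vectors[0], dtype=float)
    weight_sum = 0.0
    for rank, d_vec in enumerate(doc_topic_vectors, start=1):
        w = delta ** (rank - 1)              # assumed exponential decay of the rank
        q += w * np.asarray(d_vec, dtype=float)
        weight_sum += w
    return q / weight_sum                    # keep the result a probability-like distribution

# Toy example with 200-topic vectors for the top-3 documents of one query.
docs = [np.random.dirichlet(np.ones(200)) for _ in range(3)]
q = query_embedding(docs)
print(q.shape, round(float(q.sum()), 4))     # (200,) 1.0
```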
Thus, after the initialization process, we fix (i.e., not updating) query and document embeddings during training for TransE, ConvKB and CapsE.", "In addition, as mentioned by BIBREF9 , the more recently clicked document expresses more about the user current search interest. Hence, we make use of the user clicked documents in the training set with the temporal weighting scheme proposed by BIBREF11 to initialize user profile embeddings for the three embedding models.", "Hyper-parameter tuning: For our CapsE model, we set batch size to 128, and also the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM0 ). The number of iterations in the routing algorithm is set to 1 ( INLINEFORM1 ). For the training model, we use the Adam optimizer with the initial learning rate INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 . We also use ReLU as the activation function INLINEFORM8 . We select the number of filters INLINEFORM9 . We run the model up to 200 epochs and perform a grid search to choose optimal hyper-parameters on the validation set. We monitor the MRR score after each training epoch and obtain the highest MRR score on the validation set when using INLINEFORM10 and the initial learning rate at INLINEFORM11 .", "We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 and then follow their training protocols to tune hyper-parameters for TransE and ConvKB, respectively. We also monitor the MRR score after each training epoch and attain the highest MRR score on the validation set when using margin = 5, INLINEFORM0 -norm and SGD learning rate at INLINEFORM1 for TransE; and INLINEFORM2 and the Adam initial learning rate at INLINEFORM3 for ConvKB." ], [ "Table TABREF10 compares the experimental results of our CapsE with previous state-of-the-art published results, using the same evaluation protocol. Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237 where our CapsE gains significant improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement), and INLINEFORM1 % absolute improvement in Hits@10. Table TABREF10 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237.", "Following BIBREF3 , for each relation INLINEFORM0 in FB15k-237, we calculate the averaged number INLINEFORM1 of head entities per tail entity and the averaged number INLINEFORM2 of tail entities per head entity. If INLINEFORM3 1.5 and INLINEFORM4 1.5, INLINEFORM5 is categorized one-to-one (1-1). If INLINEFORM6 1.5 and INLINEFORM7 1.5, INLINEFORM8 is categorized one-to-many (1-M). If INLINEFORM9 1.5 and INLINEFORM10 1.5, INLINEFORM11 is categorized many-to-one (M-1). If INLINEFORM12 1.5 and INLINEFORM13 1.5, INLINEFORM14 is categorized many-to-many (M-M). As a result, 17, 26, 81 and 113 relations are labelled 1-1, 1-M, M-1 and M-M, respectively. And 0.9%, 6.3%, 20.5% and 72.3% of the test triples in FB15k-237 contain 1-1, 1-M, M-1 and M-M relations, respectively.", "Figure FIGREF11 shows the Hits@10 and MRR results for predicting head and tail entities w.r.t each relation category on FB15k-237. 
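As an aside, the relation categorization rule described above (averaged heads-per-tail and tails-per-head compared against 1.5) can be sketched as follows; the direction of the elided threshold comparisons is assumed to follow the usual convention, with values below 1.5 mapped to '1' and the rest to 'M', and the data structures are illustrative.

```python
from collections import defaultdict

def categorize_relations(triples, threshold=1.5):
    """triples: iterable of (head, relation, tail). Returns {relation: '1-1'|'1-M'|'M-1'|'M-M'}."""
    heads_per_tail = defaultdict(lambda: defaultdict(set))  # r -> tail -> {heads}
    tails_per_head = defaultdict(lambda: defaultdict(set))  # r -> head -> {tails}
    for h, r, t in triples:
        heads_per_tail[r][t].add(h)
        tails_per_head[r][h].add(t)

    categories = {}
    for r in heads_per_tail:
        hpt = sum(len(s) for s in heads_per_tail[r].values()) / len(heads_per_tail[r])
        tph = sum(len(s) for s in tails_per_head[r].values()) / len(tails_per_head[r])
        head_side = '1' if hpt < threshold else 'M'
        tail_side = '1' if tph < threshold else 'M'
        categories[r] = f'{head_side}-{tail_side}'
    return categories

print(categorize_relations([('a', 'cityOf', 'x'), ('b', 'cityOf', 'x'), ('c', 'cityOf', 'x')]))
# {'cityOf': 'M-1'}
```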
CapsE works better than ConvKB in predicting entities on the “side M” of triples (e.g., predicting head entities in M-1 and M-M; and predicting tail entities in 1-M and M-M), while ConvKB performs better than CapsE in predicting entities on the “side 1” of triples (i.e., predicting head entities in 1-1 and 1-M; and predicting tail entities in 1-1 and M-1).", "Figure FIGREF12 shows the Hits@10 and MRR scores w.r.t each relation on WN18RR. INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are symmetric relations which can be considered as M-M relations. Our CapsE also performs better than ConvKB on these 4 M-M relations. Thus, results shown in Figures FIGREF11 and FIGREF12 are consistent. These also imply that our CapsE would be a potential candidate for applications which contain many M-M relations such as search personalization.", "We see that the length and orientation of each capsule in the first layer can also help to model the important entries in the corresponding dimension, thus CapsE can work well on the “side M” of triples where entities often appear less frequently than others appearing in the “side 1” of triples. Additionally, existing models such as DISTMULT, ComplEx and ConvE can perform well for entities with high frequency, but may not for rare entities with low frequency. These are reasons why our CapsE can be considered as the best one on FB15k-237 and it outperforms most existing models on WN18RR.", "Effects of routing iterations: We study how the number of routing iterations affect the performance. Table TABREF13 shows the Hits@10 scores on the WN18RR validation set for a comparison w.r.t each number value of the routing iterations and epochs with the number of filters INLINEFORM0 and the Adam initial learning rate at INLINEFORM1 . We see that the best performance for each setup over each 10 epochs is obtained by setting the number INLINEFORM2 of routing iterations to 1. This indicates the opposite side for knowledge graphs compared to images. In the image classification task, setting the number INLINEFORM3 of iterations in the routing process higher than 1 helps to capture the relative positions of entities in an image (e.g., eyes, nose and mouth) properly. In contrast, this property from images may be only right for the 1-1 relations, but not for the 1-M, M-1 and M-M relations in the KGs because of the high variant of each relation type (e.g., symmetric relations) among different entities." ], [ "Given a user, a submitted query and the documents returned by a search system for that query, our approach is to re-rank the returned documents so that the more relevant documents should be ranked higher. Following BIBREF12 , we represent the relationship between the submitted query, the user and the returned document as a (s, r, o)-like triple (query, user, document). The triple captures how much interest a user puts on a document given a query. Thus, we can evaluate the effectiveness of our CapsE for the search personalization task." ], [ "Table TABREF17 presents the experimental results of the baselines and our model. Embedding models TranE, ConvKB and CapsE produce better ranking performances than traditional learning-to-rank search personalization models CI and SP. This indicates a prospective strategy of expanding the triple embedding models to improve the ranking quality of the search personalization systems. In particular, our MRR and Hits@1 scores are higher than those of TransE (with relative improvements of 14.5% and 22% over TransE, respectively). 
Specifically, our CapsE achieves the highest performances in both MRR and Hits@1 (our improvements over all five baselines are statistically significant with INLINEFORM0 using the paired t-test).", "To illustrate our training progress, we plot performances of CapsE on the validation set over epochs in Figure FIGREF18 . We observe that the performance is improved with the increase in the number of filters since capsules can encode more useful properties for a large embedding size." ], [ "Other transition-based models extend TransE to additionally use projection vectors or matrices to translate embeddings of INLINEFORM0 and INLINEFORM1 into the vector space of INLINEFORM2 , such as: TransH BIBREF20 , TransR BIBREF21 , TransD BIBREF32 and STransE BIBREF24 . Furthermore, DISTMULT BIBREF13 and ComplEx BIBREF14 use a tri-linear dot product to compute the score for each triple. Moreover, ConvKB BIBREF15 applies convolutional neural network, in which feature maps are concatenated into a single feature vector which is then computed with a weight vector via a dot product to produce the score for the input triple. ConvKB is the most closely related model to our CapsE. See an overview of embedding models for KG completion in BIBREF6 .", "For search tasks, unlike classical methods, personalized search systems utilize the historical interactions between the user and the search system, such as submitted queries and clicked documents to tailor returned results to the need of that user BIBREF7 , BIBREF8 . That historical information can be used to build the user profile, which is crucial to an effective search personalization system. Widely used approaches consist of two separated steps: (1) building the user profile from the interactions between the user and the search system; and then (2) learning a ranking function to re-rank the search results using the user profile BIBREF9 , BIBREF33 , BIBREF10 , BIBREF11 . The general goal is to re-rank the documents returned by the search system in such a way that the more relevant documents are ranked higher. In this case, apart from the user profile, dozens of other features have been proposed as the input of a learning-to-rank algorithm BIBREF9 , BIBREF33 . Alternatively, BIBREF12 modeled the potential user-oriented relationship between the submitted query and the returned document by applying TransE to reward higher scores for more relevant documents (e.g., clicked documents). They achieved better performances than the standard ranker as well as competitive search personalization baselines BIBREF27 , BIBREF9 , BIBREF11 ." ], [ "We propose CapsE—a novel embedding model using the capsule network to model relationship triples for knowledge graph completion and search personalization. Experimental results show that our CapsE outperforms other state-of-the-art models on two benchmark datasets WN18RR and FB15k-237 for the knowledge graph completion. We then show the effectiveness of our CapsE for the search personalization, in which CapsE outperforms the competitive baselines on the dataset SEARCH17 of the web search query logs. In addition, our CapsE is capable to effectively model many-to-many relationships. Our code is available at: https://github.com/daiquocnguyen/CapsE." ], [ "This research was partially supported by the ARC Discovery Projects DP150100031 and DP160103934. The authors thank Yuval Pinter for assisting us in running his code." 
] ], "section_name": [ "Introduction", "The proposed CapsE", "Knowledge graph completion evaluation ", "Experimental setup", "Main experimental results", "Search personalization application", "Main results", "Related work", "Conclusion", "Acknowledgement" ] }
{ "answers": [ { "annotation_id": [ "9d7c2ee8e0dd25eea19c5e8e52459d827e7f39ba", "c80a15ae611a393b6f1d8cad7e028da04f6f960c", "cf7c1ec458b3344a9ede0c5fd7aee4c38f8caabc" ], "answer": [ { "evidence": [ "To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are unique INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not." ], "extractive_spans": [], "free_form_answer": "1x3 filter size is used in convolutional layers.", "highlighted_evidence": [ "The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are unique INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not." 
], "extractive_spans": [], "free_form_answer": "1x3", "highlighted_evidence": [ "The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "03c37249b1bf599dc6f5efaed4b7571524ed5f19", "2aa99d164e83d4da30458d5caaf85ca3fb9d5670", "9800fa1f9dde2b1577322c95b6a3e30554823e40" ], "answer": [ { "evidence": [ "Table TABREF10 compares the experimental results of our CapsE with previous state-of-the-art published results, using the same evaluation protocol. Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237 where our CapsE gains significant improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement), and INLINEFORM1 % absolute improvement in Hits@10. Table TABREF10 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237." ], "extractive_spans": [ " improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement)", "INLINEFORM1 % absolute improvement in Hits@10" ], "free_form_answer": "", "highlighted_evidence": [ "Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237 where our CapsE gains significant improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement), and INLINEFORM1 % absolute improvement in Hits@10. Table TABREF10 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF10 compares the experimental results of our CapsE with previous state-of-the-art published results, using the same evaluation protocol. Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237 where our CapsE gains significant improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement), and INLINEFORM1 % absolute improvement in Hits@10. Table TABREF10 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237." ], "extractive_spans": [], "free_form_answer": "0.105 in MRR and 6.1 percent points in Hits@10 on FB15k-237", "highlighted_evidence": [ "Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237 where our CapsE gains significant improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement), and INLINEFORM1 % absolute improvement in Hits@10. ", "25.1% relative improvement" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: Experimental results on the WN18RR and FB15k-237 test sets. Hits@10 (H@10) is reported in %. Results of DISTMULT, ComplEx and ConvE are taken from Dettmers et al. (2018). Results of TransE on FB15k237 are taken from Nguyen et al. (2018). 
Our CapsE Hits@1 scores are 33.7% on WN18RR and 48.9% on FB15k-237. Formulas of MRR and Hits@1 show a strong correlation, so using Hits@1 does not really reveal any additional information for this task. The best score is in bold, while the second best score is in underline. ? denotes our new results for TransE and ConvKB, which are better than those published by Nguyen et al. (2018).", "Table TABREF10 compares the experimental results of our CapsE with previous state-of-the-art published results, using the same evaluation protocol. Our CapsE performs better than its closely related CNN-based model ConvKB on both experimental datasets (except Hits@10 on WN18RR and MR on FB15k-237), especially on FB15k-237 where our CapsE gains significant improvements of INLINEFORM0 in MRR (which is about 25.1% relative improvement), and INLINEFORM1 % absolute improvement in Hits@10. Table TABREF10 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237." ], "extractive_spans": [], "free_form_answer": "On FB15k-237 dataset it outperforms 0.105 in MRR and 6.1% absolute improvement in Hits@10", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Experimental results on the WN18RR and FB15k-237 test sets. Hits@10 (H@10) is reported in %. Results of DISTMULT, ComplEx and ConvE are taken from Dettmers et al. (2018). Results of TransE on FB15k237 are taken from Nguyen et al. (2018). Our CapsE Hits@1 scores are 33.7% on WN18RR and 48.9% on FB15k-237. Formulas of MRR and Hits@1 show a strong correlation, so using Hits@1 does not really reveal any additional information for this task. The best score is in bold, while the second best score is in underline. ? denotes our new results for TransE and ConvKB, which are better than those published by Nguyen et al. (2018).", "Table TABREF10 also shows that our CapsE obtains the best MR score on WN18RR and the highest MRR and Hits@10 scores on FB15k-237." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "infinity", "infinity" ], "paper_read": [ "no", "no" ], "question": [ "What size filters do they use in the convolution layer?", "By how much do they outperform state-of-the-art models on knowledge graph completion?" ], "question_id": [ "1acfbdc34669cf19a778aceca941543f11b9a861", "864295caceb1e15144c1746ab5671d085d7ff7a1" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: An example illustration of our CapsE with k = 4, N = 5, and d = 2.", "Table 1: Statistics of the experimental datasets. #E is the number of entities. #R is the number of relations.", "Table 2: Experimental results on the WN18RR and FB15k-237 test sets. Hits@10 (H@10) is reported in %. Results of DISTMULT, ComplEx and ConvE are taken from Dettmers et al. (2018). Results of TransE on FB15k237 are taken from Nguyen et al. (2018). Our CapsE Hits@1 scores are 33.7% on WN18RR and 48.9% on FB15k-237. Formulas of MRR and Hits@1 show a strong correlation, so using Hits@1 does not really reveal any additional information for this task. The best score is in bold, while the second best score is in underline. ? denotes our new results for TransE and ConvKB, which are better than those published by Nguyen et al. (2018).", "Figure 2: Hits@10 (in %) and MRR on the FB15k-237 test set w.r.t each relation category.", "Figure 3: Hits@10 and MRR on the WN18RR test set w.r.t each relation. The right y-axis is the percentage of triples corresponding to relations.", "Table 3: Hits@10 on the WN18RR validation set with N = 50 and the initial learning rate at 1e−5 w.r.t each number of iterations in the routing algorithm m and each 10 training epochs.", "Table 4: Experimental results on the test set. [?] denotes the results reported in (Vu et al., 2017). Hits@1 (H@1) is reported in %. In information retrieval, Hits@1 is also referred to as P@1. The subscripts denote the relative improvement over our TransE results.", "Figure 4: Learning curves on the validation set with the initial learning rate at 5e−5." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "5-Figure2-1.png", "5-Figure3-1.png", "6-Table3-1.png", "8-Table4-1.png", "8-Figure4-1.png" ] }
[ "What size filters do they use in the convolution layer?", "By how much do they outperform state-of-the-art models on knowledge graph completion?" ]
[ [ "1808.04122-Introduction-4" ], [ "1808.04122-5-Table2-1.png", "1808.04122-Main experimental results-0" ] ]
[ "1x3", "On FB15k-237 dataset it outperforms 0.105 in MRR and 6.1% absolute improvement in Hits@10" ]
38
1907.05338
To Tune or Not To Tune? How About the Best of Both Worlds?
The introduction of pre-trained language models has revolutionized natural language research communities. However, researchers still know relatively little regarding their theoretical and empirical properties. In this regard, Peters et al. perform several experiments which demonstrate that it is better to adapt BERT with a light-weight task-specific head than to build a complex head on top of the pre-trained language model while freezing its parameters. However, there is another option to consider. In this paper, we propose a new adaptation method in which we first train the task model with the BERT parameters frozen and then fine-tune the entire model together. Our experimental results show that our model adaptation method achieves a 4.7% accuracy improvement in the semantic similarity task, a 0.99% accuracy improvement in the sequence labeling task and a 0.72% accuracy improvement in the text classification task.
{ "paragraphs": [ [ "The introduction of pre-trained language models, such as BERT BIBREF1 and Open-GPT BIBREF2 , among many others, has brought tremendous progress to the NLP research and industrial communities. The contribution of these models can be categorized into two aspects. First, pre-trained language models allow modelers to achieve reasonable accuracy without the need an excessive amount of manually labeled data. This strategy is in contrast with the classical deep learning methods, which requires a multitude more data to reach comparable results. Second, for many NLP tasks, including but not limited to, SQuAD BIBREF3 , CoQA BIBREF4 , named entity recognition BIBREF5 , Glue BIBREF6 , machine translation BIBREF7 , pre-trained model allows the creation of new state-of-art, given a reasonable amount of labelled data.", "In the post pre-trained language model era, to pursue new state-of-art, two directions can be followed. The first method, is to improve the pre-training process, such as in the work of ERNIE BIBREF8 , GPT2.0 BIBREF2 and MT-DNN BIBREF9 . The second method is to stand on the shoulder of the pre-trained language models. Among the many possibilities, one of them is to build new neural network structures on top of pre-trained language models.", "In principles, there are three ways to train the networks with stacked neural networks on top of pre-trained language models, as shown in Table TABREF1 . In Peters et al . BIBREF0 , the authors compare the possibility of option stack-only and finetune-only, and conclude that option finetune-only is better than option stack-only. More specifically, Peter et al. BIBREF0 argue that it is better to add a task-specific head on top of BERT than to freeze the weights of BERT and add more complex network structures. However, Peters et al. BIBREF0 did not compare option stack-and-finetune and finetune-only. On the other hand, before pre-trained deep language models became popular, researchers often use a strategy analog to option stack-and-finetune. That is, modelers first train the model until convergence, and then fine-tune the word embeddings with a few epochs. If pre-trained language models can be understood as at least partially resemblance of word embeddings, then it will be imprudent not to consider the possibility of option stack-and-finetune.", "In this study, we aim to compare the strategy stack-and-finetune and strategy finetune-only. More specifically, we perform three NLP tasks, sequence labeling, text classification, and question similarity. In the first tasks, we demonstrate that even without modifying the network structures, building networks on top of pre-trained language models might improve accuracy. In the second tasks, we show that by ensembling different neural networks, one can even improve the accuracy of fine-tuning only methods even further. Finally, in the last task, we demonstrate that if one can tailor-made a neural network that specifically fit the characteristics of the pre-trained language models, one can improve the accuracy even further. All the results indicate the strategy stack-and-finetune is superior to strategy finetune-only. This leads us to conclude that, at least, by overlooking the possibility strategy stack-and-finetune is imprudent.", "The contribution of this paper is two-fold. First, we propose a new strategy to improve the fine-tune-only strategy proposed by Peter et al. BIBREF0 , this allows us to achieve better results, at least on the selected tasks. 
More importantly, the results of this study demonstrate the importance of neural network design, even in the presence of all-powerful pre-trained language models. Second, during the experiments, we found that although simply using the proposed training strategy can result in higher accuracies compared to those of Peters et al. BIBREF0 , it is still a challenging task to find the appropriate methods to design and utilize pre-trained networks. In this regard, we find that pre-trained models differ significantly from word embeddings in terms of their training strategies. In particular, since word embeddings can be viewed as shallow transfer learning, while pre-trained models should be viewed as deep transfer learning, one must combat over-fitting problems with more care due to the enormous number of parameters present in the pre-trained models. Besides, we also find that in order to achieve maximal performance in the post-pre-trained language model era, one must design, either manually or via Auto ML, networks that best fit the structure, especially the depth, of the pre-trained language models.", "The rest of the paper is organized as follows. First, we review the relevant literature on pre-trained deep neural networks, the argument in Peters et al. BIBREF0 , as well as fine-tuning strategies with word embeddings. Second, we present three experiments and show the superiority of strategy stack-and-finetune compared to strategy finetune-only. Finally, we conclude with some remarks and future research possibilities." ], [ "Before the introduction of deep neural networks, researchers in the field of NLP had already been using pre-trained models. Among all of them, one of the most famous is word embeddings, which map each word into a continuous vector instead of a one-hot encoding BIBREF10 . By doing so, not only are we able to reduce the dimensionality of the input features, which helps to avoid over-fitting, but we are also able to capture, at least partially, the internal meaning of each word.", "However, since each word is only endowed with a fixed numerical vector in the methodology of word embeddings, word embeddings are unable to capture the contextual meaning in the text. For example, consider the word “bank” in the sentences “I am walking on the bank of the river.” and “I am going to rob the bank”. It is obvious that the word “bank” carries completely different meanings, which word embedding techniques fail to capture.", "The aforementioned deficiencies prompt researchers to propose deep neural networks that can be trained in an unsupervised fashion while capturing the contextual meaning of the words present in the texts. Some early attempts at pre-trained models include CoVe BIBREF11 , CVT BIBREF12 , BIBREF13 , ELMo BIBREF14 and ULMFiT BIBREF15 . However, the most successful ones are BERT BIBREF1 and Open-GPT BIBREF2 . Unlike standard NLP deep learning models, BERT and Open-GPT are built on top of transformer BIBREF16 structures, instead of LSTM BIBREF17 or GRU BIBREF18 . The difference between BERT and Open-GPT is that BERT uses bi-directional self-attention while Open-GPT uses only unidirectional self-attention, as shown in Figure FIGREF2 . The transformer structures differ from LSTMs in two important aspects. First, they allow for stacking of multiple layers with residual connections and batch normalizations, which allows for free gradient flow. 
Second, the core computational unit is matrix multiplication, which allows researchers to utilize the full computational potential of TPUs BIBREF19 . After training on a large corpus, both BERT and Open-GPT are able to renew the SOTA of many important natural language tasks, such as SQuAD BIBREF3 , CoQA BIBREF4 , named entity recognition BIBREF5 , Glue BIBREF6 and machine translation BIBREF7 .", "In the presence of the success of pre-trained language models, especially BERT BIBREF1 , it is natural to ask how to best utilize the pre-trained language models to achieve new state-of-the-art results. In this line of work, Liu et al. BIBREF20 investigated the linguistic knowledge and transferability of contextual representations by comparing BERT BIBREF1 with ELMo BIBREF14 , and concluded that while the higher levels of LSTM's are more task-specific, this trend is not exhibited in transformer-based models. Stickland and Murray BIBREF21 invented the projected attention layer for multi-task learning using BERT, which results in an improvement in various state-of-the-art results compared to the original work of Devlin et al. BIBREF1 . Xu et al. BIBREF22 propose a “post-training” algorithm, which does not directly fine-tune BERT, but rather first “post-trains” BERT on the task-related corpus using the masked language prediction task and the next sentence prediction task, which helps to reduce the bias in the training corpus. Finally, Sun et al. BIBREF23 added additional fine-tuning tasks based on multi-task training, which further improves the prediction power of BERT in the task of text classification.", "In this aspect, however, there is a simple yet crucial question that needs to be addressed. That is, whether it is possible to top BERT with commonly used or task-specific layers, and if this is possible, how to best utilize the pre-trained language models in this situation. In this regard, Peters et al. BIBREF0 investigated how to best adapt the pre-trained model to a specific task, and focused on two different adaptation methods, feature extraction and directly fine-tuning the pre-trained model, which correspond to the strategy stack-only and the strategy finetune-only in Table TABREF1 , respectively. To this end, Peters et al. BIBREF0 performed five experiments, including: (1) named entity recognition BIBREF5 ; (2) sentiment analysis BIBREF24 ; (3) natural language inference BIBREF25 ; (4) paraphrase detection BIBREF26 ; (5) semantic textual similarity BIBREF27 . Based on the results of these tasks, Peters et al. BIBREF0 conclude that adding a light task-specific head and performing fine-tuning on BERT is better than building a complex network on top without BERT fine-tuning." ], [ "Under our strategy stack-and-finetune, the model training process is divided into two phases, which are described in detail below. In the first phase, the parameters of the pre-trained model are fixed, and only the upper-level models added for a specific task are learned. In the second phase, we fine-tune the upper-level models together with the pre-trained language models. We choose this strategy for the following reasons. Pre-trained models have been used to obtain more effective word representations through the study of a large number of corpora. In the paradigm proposed in the original work by Devlin et al. BIBREF1 , the authors directly trained BERT along with a light-weighted task-specific head. In our case though, we top BERT with a more complex network structure, using Kaiming initialization BIBREF28 . 
If one were to directly fine-tune the top models along with the weights in BERT, one would be faced with the following dilemma: on the one hand, if the learning rate is too large, it is likely to disturb the structure innate to the pre-trained language models; on the other hand, if the learning rate is too small, since we top BERT with relatively complex models, the convergence of the top models might be impeded. Therefore, in the first phase we fix the weights in the pre-trained language models, and only train the model on top of them.", "Another aspect worth commenting on in the first phase is that it is most beneficial not to train the top model until it reaches the highest accuracy on the training or validation data sets, but rather only up to a point where the prediction accuracies on the training and validation data sets do not differ much. This is intuitively reasonable for the following reason. Unlike word embeddings, the pre-trained language models possess a large number of parameters compared to the task-specific models we build on top of them. Therefore, if one were to train the top models until they reach the highest prediction accuracy on the training or validation data sets, it would likely cause the models to over-fit. In our experiments, we found that stopping at this point leads to the highest performance increase in the fine-tuning stage." ], [ "We perform three different experiments to test our hypotheses. First, we perform a named entity recognition task, by adding a bi-LSTM on top of the BERT model. In this experiment, we hope to test whether, without any modification to the commonly used network structure, our proposed training strategy will improve the overall accuracy. Second, we perform a text classification experiment, in which we train three models and perform a model ensemble. We hope to show that even if the added network does not contribute significantly to improving the accuracy, it does provide opportunities for model ensembles. Finally, we perform the textual similarity tests, in which we show that if one can tailor-make a network that specifically fits the characteristics of the pre-trained language models, more significant improvement can be expected.", "Under the strategy finetune-only, we use only a single BERT model. In order to adapt to different tasks, we add a fully connected layer on top of BERT. In the sequence labeling task, the BERT word embedding of each word passes through two fully connected layers, and the prediction probability of the named entity can be obtained. In the next two tasks, we use “[CLS]” for prediction and add two fully connected layers subsequently. Under our strategy stack-and-finetune, we set different learning rates for the two phases. We tried setting the learning rate of the first stage to INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , and set it to a smaller number in the latter stage, such as INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 . After our experiments, we found that we get better results when the learning rate is set to 0.001 in the stage of training only the upper model and set to INLINEFORM9 in the later stage. Since BERT-Adam BIBREF1 has excellent performance, in our experiments, we use it as an optimizer with INLINEFORM10 , INLINEFORM11 -weight decay of INLINEFORM12 . We apply dropout on all layers and set the dropout probability to 0.1."
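As an illustrative aside, the two-phase schedule described in this section can be sketched as below, assuming PyTorch and the Hugging Face transformers BertModel; this is not the authors' released code. AdamW stands in for BERT-Adam, the bi-LSTM head, label count, weight decay and the second-stage learning rate are placeholders (the exact second-stage value is elided above), and only the first-stage rate of 0.001 is taken from the text.

```python
import torch
from torch import nn
from transformers import BertModel

class BertWithTaskHead(nn.Module):
    def __init__(self, num_labels, bert_name='bert-base-uncased'):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.head = nn.LSTM(768, 256, batch_first=True, bidirectional=True)  # task model stacked on BERT
        self.classifier = nn.Linear(512, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                           return_dict=True).last_hidden_state
        out, _ = self.head(hidden)
        return self.classifier(out)

model = BertWithTaskHead(num_labels=9)  # label count is purely illustrative

# Phase 1: freeze BERT and train only the task-specific model on top (stop early,
# before the head fully converges, to limit over-fitting as discussed above).
for p in model.bert.parameters():
    p.requires_grad = False
phase1_opt = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, weight_decay=0.01)
# ... run a few epochs of the usual training loop with phase1_opt ...

# Phase 2: unfreeze BERT and fine-tune the whole model with a much smaller learning rate
# (the exact value is elided in the text; 5e-5 is a placeholder in the searched range).
for p in model.bert.parameters():
    p.requires_grad = True
phase2_opt = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
# ... continue training with phase2_opt ...
```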
], [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "As is shown in Table 2, even without modifying the networks to specifically adapt to the pre-trained model, our training strategy still brought improvement towards overall accuracy of 0.99% for the accuracy and 0.068 on the F1 score, proving the success of our proposed methods." ], [ "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "The DenseNet structure contains four independent blocks and each block has four CNNs connected by residual. We initialize word embedding in the word representation layer with BERT. We initialize each character as a 768-dimension vector. In the experiment of training DenseNet,we concat the output vector of DenseNet with [CLS] for prediction.", "We find the ensembled model enjoys a 0.72% improvements compared to the fine-tune only model and 0.005 improvement for the F1 score." ], [ "We use “Quora-Question-Pair” dataset 1. This is a commonly used dataset containing 400k question pairs, annotated manually to be semantically equivalent or not. Due to its high quality, it is a standard dataset to test the success of various semantic similarity tasks. Various models which are tested on this data set are proposed, including but not limited to BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 .", "Apart from the BERT fine-tuning only model and BERT+ BIMPM model, we also devise two new network structures by modifying the BIMPM model. In the first model is to remove the first bi-LSTM of BIMPM, which is the input layer for the matching layer in BIMPM. In the second model, we combine the matching layer of BIMPM and with a transformer BIBREF16 , a model we call Sim-Transformer by replacing the output layer of the matching layer, originally a bi-LSTM model, with a transformer model. From the experimental results shown in Table 4, we can see that due to the strong expressive ability of the BERT, there is almost no difference in the experimental results of removing the first bi-LSTM and BIMPM. In addition, we also find that Sim-Transformer's performance without fine-tuning is nearly four percentage points lower than BIMPM, but it out-performs BIMPM after fine-tuning. In general, the results show that BERT + Sim-Transformer out-performs BERT-only model by 4.7%, thus confirming our hypotheses again." ], [ "In summary, we find that in all the three tasks, our proposed method out-performs the methods of simply tuning pre-trained language models, as is proposed in BIBREF0 . However, we would like to caution the readers in two aspects when reading the conclusion of this study. First, this study does not argue that our proposed methods are always superior to fine-tuning only methods. For example, all the experiments in our study are based on data sets of relatively large size. 
At the other end of the spectrum, if one is only given a limited data set, then building complex networks upon pre-trained language models might lead to disastrous over-fitting. If this is the case, then deep domain adaptation BIBREF39 might be a better choice if one desires to stack neural networks on top of pre-trained language models. However, most domain adaptation applications belong to the field of computer vision; we therefore call for more domain adaptation research in NLP.", "During the experimentation, we also discovered some tricks for obtaining higher-quality networks. The first is that, due to the enormous number of parameters present in pre-trained language models, it is vital to combat over-fitting in order to achieve generalizable results on the test data sets. In classical embedding-plus-network setups, the usual training method is to fix the word embeddings, train the top model until it converges, and finally fine-tune the word embeddings for a few epochs. This training strategy does not work well once the word embeddings are replaced with pre-trained language models. In our experiments, we first fix the pre-trained language model and train the top neural networks for only a few epochs, until they reach a reasonable accuracy, while closely monitoring the discrepancy between training accuracy and test accuracy. After that, we fine-tune the pre-trained language model together with the models on top. This allowed us to achieve better results in our experiments. However, it is not yet clear to us when to stop training the top neural networks. This poses an even more essential question for AutoML researchers in the following sense: in classical computer-vision-based AutoML approaches, one seldom builds networks on already-trained models, so there is no particular need for auxiliary measures against over-fitting. If AutoML is to be performed successfully on NLP tasks, however, it might be essential to incorporate the gap between training and test accuracy when evaluating a model.", "Finally, it is not yet clear what the most appropriate way is to build networks on top of pre-trained language models. However, there are several principles that we can follow when designing such networks. First, such networks must ensure gradient flow from the top of the model to the bottom. This is essential due to the depth of the pre-trained language model. Second, this also means that one does not need to build extremely complex networks on top of pre-trained language models unless they complement the mechanism of self-attention. Finally, a challenge remains as to how to use the depth of pre-trained language models. Our experiments suggest that utilizing deeper layers might be a fruitful way to achieve better accuracy." ] ], "section_name": [ "Introduction", "Related Studies", "Methodology", "Overview", "Experiment A: Sequence Labeling", "Experiment B: Text Classification", "Experiment C: Semantic Similarity Tasks", "Discussions and Conclusions" ] }
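The two-phase "stack-and-finetune" procedure described in the text above — freeze the pre-trained language model, briefly train only the task-specific model on top with a relatively large learning rate, then unfreeze and fine-tune everything with a much smaller learning rate — could be sketched in PyTorch roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the checkpoint name, layer sizes, label count, learning rates and the `train_one_epoch` helper are placeholders, and plain Adam stands in for the BERT-Adam variant mentioned in the text.

```python
# Minimal sketch of the two-phase "stack-and-finetune" strategy (illustrative only).
import torch
import torch.nn as nn
from transformers import BertModel

class BertBiLSTMTagger(nn.Module):
    """BERT encoder with a task-specific bi-LSTM + linear head stacked on top."""
    def __init__(self, num_labels, hidden=256, dropout=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-cased")  # assumed checkpoint
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.bert(input_ids, attention_mask=attention_mask)[0]
        lstm_out, _ = self.lstm(hidden_states)
        return self.classifier(self.dropout(lstm_out))

model = BertBiLSTMTagger(num_labels=9)

# Phase 1: freeze BERT and train only the top model with a relatively large
# learning rate, stopping once training and validation accuracy are close
# rather than at peak accuracy.
for p in model.bert.parameters():
    p.requires_grad = False
head_params = [p for p in model.parameters() if p.requires_grad]
opt_phase1 = torch.optim.Adam(head_params, lr=1e-3)
# for epoch in range(few_epochs): train_one_epoch(model, opt_phase1, ...)   # hypothetical helper

# Phase 2: unfreeze BERT and fine-tune everything with a much smaller learning rate.
for p in model.bert.parameters():
    p.requires_grad = True
opt_phase2 = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=0.01)
# for epoch in range(more_epochs): train_one_epoch(model, opt_phase2, ...)  # hypothetical helper
```

The point the sketch tries to capture is that phase 1 optimizes only the randomly initialized head, so the pre-trained weights are not disturbed before the head has become reasonable; only then is the whole stack fine-tuned gently.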
{ "answers": [ { "annotation_id": [ "043a190c8e474d7ca7ede7bbdc340409e0482d49", "63c58de6fb55db8552a052faf2da734a92cb18f8", "8903d5b7ac7c5ad82db6d3f3b15d33a5aa35837c" ], "answer": [ { "evidence": [ "We perform three different experiments to test our hypotheses. First, we perform a named entity recognition tasks, by adding a bi-LSTM on top of the BERT model. In this experiment, we hope to test whether, without any modification to the commonly used network structure, our proposed training strategy will improve the overall accuracy. Second, we perform a text classification experiments, in this experiments, we trained three models, and perform a model ensemble. We hope to show that even the added network has not contributed to significantly in improving the accuracy, it does provide opportunities for model ensembles. Finally, we perform the textual similarity tests, in which we show that if one can tailor make a network that specifically fit the characteristics of the pre-trained languages, more significant improvement can be expected.", "Under the strategy finetune-only, we use only single BERT.In order to adapt to different tasks, we will add a fully connected layer upon BERT. In the sequence labeling task, the BERT word embedding of each word passes through two fully connected layers, and the prediction probability of named entity can be obtained. In the next two verification tasks, we use “[CLS]” for prediction and add two fully connected layers subsequently. Under our strategy stack-and-finetune, we set different learning rates for the two phases. We tried to set the learning rate of the first stage to INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , and set it to a smaller number in the latter stage, such as INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 . After our experiments, we found that it gets better results while the learning rate is set to 0.001 in the stage of training only the upper model and set to INLINEFORM9 in the later stage. Since BERT-Adam BIBREF1 has excellent performance, in our experiments, we use it as an optimizer with INLINEFORM10 , INLINEFORM11 -weight decay of INLINEFORM12 .We apply a dropout trick on all layers and set the dropout probability as 0.1." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "First, we perform a named entity recognition tasks, by adding a bi-LSTM on top of the BERT model. ", "Under the strategy finetune-only, we use only single BERT." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "We perform three different experiments to test our hypotheses. First, we perform a named entity recognition tasks, by adding a bi-LSTM on top of the BERT model. In this experiment, we hope to test whether, without any modification to the commonly used network structure, our proposed training strategy will improve the overall accuracy. Second, we perform a text classification experiments, in this experiments, we trained three models, and perform a model ensemble. We hope to show that even the added network has not contributed to significantly in improving the accuracy, it does provide opportunities for model ensembles. Finally, we perform the textual similarity tests, in which we show that if one can tailor make a network that specifically fit the characteristics of the pre-trained languages, more significant improvement can be expected.", "Under the strategy finetune-only, we use only single BERT.In order to adapt to different tasks, we will add a fully connected layer upon BERT. 
In the sequence labeling task, the BERT word embedding of each word passes through two fully connected layers, and the prediction probability of named entity can be obtained. In the next two verification tasks, we use “[CLS]” for prediction and add two fully connected layers subsequently. Under our strategy stack-and-finetune, we set different learning rates for the two phases. We tried to set the learning rate of the first stage to INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , and set it to a smaller number in the latter stage, such as INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 . After our experiments, we found that it gets better results while the learning rate is set to 0.001 in the stage of training only the upper model and set to INLINEFORM9 in the later stage. Since BERT-Adam BIBREF1 has excellent performance, in our experiments, we use it as an optimizer with INLINEFORM10 , INLINEFORM11 -weight decay of INLINEFORM12 .We apply a dropout trick on all layers and set the dropout probability as 0.1." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "First, we perform a named entity recognition tasks, by adding a bi-LSTM on top of the BERT model.", "Under the strategy finetune-only, we use only single BERT.In order to adapt to different tasks, we will add a fully connected layer upon BERT. In the sequence labeling task, the BERT word embedding of each word passes through two fully connected layers, and the prediction probability of named entity can be obtained." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Under the strategy finetune-only, we use only single BERT.In order to adapt to different tasks, we will add a fully connected layer upon BERT. In the sequence labeling task, the BERT word embedding of each word passes through two fully connected layers, and the prediction probability of named entity can be obtained. In the next two verification tasks, we use “[CLS]” for prediction and add two fully connected layers subsequently. Under our strategy stack-and-finetune, we set different learning rates for the two phases. We tried to set the learning rate of the first stage to INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , and set it to a smaller number in the latter stage, such as INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 . After our experiments, we found that it gets better results while the learning rate is set to 0.001 in the stage of training only the upper model and set to INLINEFORM9 in the later stage. Since BERT-Adam BIBREF1 has excellent performance, in our experiments, we use it as an optimizer with INLINEFORM10 , INLINEFORM11 -weight decay of INLINEFORM12 .We apply a dropout trick on all layers and set the dropout probability as 0.1." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Under the strategy finetune-only, we use only single BERT." 
], "unanswerable": false, "yes_no": false } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "060252e0332ced916b40ce48d398cee2cb868068", "4eebbf4fdb37c5bfafaf79316e5a3afa9cdf16f0", "6aeae91599cf9286b393fbb82a93632bd483e20b" ], "answer": [ { "evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "Apart from the BERT fine-tuning only model and BERT+ BIMPM model, we also devise two new network structures by modifying the BIMPM model. In the first model is to remove the first bi-LSTM of BIMPM, which is the input layer for the matching layer in BIMPM. In the second model, we combine the matching layer of BIMPM and with a transformer BIBREF16 , a model we call Sim-Transformer by replacing the output layer of the matching layer, originally a bi-LSTM model, with a transformer model. From the experimental results shown in Table 4, we can see that due to the strong expressive ability of the BERT, there is almost no difference in the experimental results of removing the first bi-LSTM and BIMPM. In addition, we also find that Sim-Transformer's performance without fine-tuning is nearly four percentage points lower than BIMPM, but it out-performs BIMPM after fine-tuning. In general, the results show that BERT + Sim-Transformer out-performs BERT-only model by 4.7%, thus confirming our hypotheses again." ], "extractive_spans": [ "BERT", "BERT adding a Bi-LSTM on top", "DenseNet BIBREF33 and HighwayLSTM BIBREF34", "BERT+ BIMPM", "remove the first bi-LSTM of BIMPM", "Sim-Transformer" ], "free_form_answer": "", "highlighted_evidence": [ "For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "Apart from the BERT fine-tuning only model and BERT+ BIMPM model, we also devise two new network structures by modifying the BIMPM model. In the first model is to remove the first bi-LSTM of BIMPM, which is the input layer for the matching layer in BIMPM. In the second model, we combine the matching layer of BIMPM and with a transformer BIBREF16 , a model we call Sim-Transformer by replacing the output layer of the matching layer, originally a bi-LSTM model, with a transformer model." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . 
For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "The DenseNet structure contains four independent blocks and each block has four CNNs connected by residual. We initialize word embedding in the word representation layer with BERT. We initialize each character as a 768-dimension vector. In the experiment of training DenseNet,we concat the output vector of DenseNet with [CLS] for prediction.", "We find the ensembled model enjoys a 0.72% improvements compared to the fine-tune only model and 0.005 improvement for the F1 score.", "Apart from the BERT fine-tuning only model and BERT+ BIMPM model, we also devise two new network structures by modifying the BIMPM model. In the first model is to remove the first bi-LSTM of BIMPM, which is the input layer for the matching layer in BIMPM. In the second model, we combine the matching layer of BIMPM and with a transformer BIBREF16 , a model we call Sim-Transformer by replacing the output layer of the matching layer, originally a bi-LSTM model, with a transformer model. From the experimental results shown in Table 4, we can see that due to the strong expressive ability of the BERT, there is almost no difference in the experimental results of removing the first bi-LSTM and BIMPM. In addition, we also find that Sim-Transformer's performance without fine-tuning is nearly four percentage points lower than BIMPM, but it out-performs BIMPM after fine-tuning. In general, the results show that BERT + Sim-Transformer out-performs BERT-only model by 4.7%, thus confirming our hypotheses again." ], "extractive_spans": [], "free_form_answer": "BERT, BERT+ Bi-LSTM , BERT+ DenseNet, BERT+HighwayLSTM, Ensembled model, BERT+ BIMPM, BERT+ BIMPM(first bi-LSTM removed), BERT + Sim-Transformer .", "highlighted_evidence": [ "For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. ", "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .\n\nThe DenseNet structure contains four independent blocks and each block has four CNNs connected by residual. We initialize word embedding in the word representation layer with BERT. We initialize each character as a 768-dimension vector. In the experiment of training DenseNet,we concat the output vector of DenseNet with [CLS] for prediction.\n\nWe find the ensembled model enjoys a 0.72% improvements compared to the fine-tune only model and 0.005 improvement for the F1 score.", "Apart from the BERT fine-tuning only model and BERT+ BIMPM model, we also devise two new network structures by modifying the BIMPM model. In the first model is to remove the first bi-LSTM of BIMPM, which is the input layer for the matching layer in BIMPM. 
In the second model, we combine the matching layer of BIMPM and with a transformer BIBREF16 , a model we call Sim-Transformer by replacing the output layer of the matching layer, originally a bi-LSTM model, with a transformer model. From the experimental results shown in Table 4, we can see that due to the strong expressive ability of the BERT, there is almost no difference in the experimental results of removing the first bi-LSTM and BIMPM. In addition, we also find that Sim-Transformer's performance without fine-tuning is nearly four percentage points lower than BIMPM, but it out-performs BIMPM after fine-tuning. In general, the results show that BERT + Sim-Transformer out-performs BERT-only model by 4.7%, thus confirming our hypotheses again." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: Results for named entity recognition", "FLOAT SELECTED: Table 3: Results for text classification", "FLOAT SELECTED: Table 4: Results for semantic similarity task" ], "extractive_spans": [], "free_form_answer": "BERT, BERT + Bi-LSTM, BERT + HighwayLSTM, BERT + DenseNet, Ensembled Model, BERT + BIMPM, BERT + Sim-Transformer", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Results for named entity recognition", "FLOAT SELECTED: Table 3: Results for text classification", "FLOAT SELECTED: Table 4: Results for semantic similarity task" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "1da08a153a6fb629820631dc65526053a8a0867b", "bb716d825e33ac7283acbe90e790bb6cba3193b3", "e104f5b068ff555baccb85588ba622c69753bfd4" ], "answer": [ { "evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "We use “Quora-Question-Pair” dataset 1. This is a commonly used dataset containing 400k question pairs, annotated manually to be semantically equivalent or not. Due to its high quality, it is a standard dataset to test the success of various semantic similarity tasks. Various models which are tested on this data set are proposed, including but not limited to BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 ." ], "extractive_spans": [ "CoNLL03 ", "Yahoo Answer Classification Dataset", "“Quora-Question-Pair” dataset 1" ], "free_form_answer": "", "highlighted_evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . ", "In the task of text categorization, we used Yahoo Answer Classification Dataset. ", "We use “Quora-Question-Pair” dataset 1. 
" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "We use “Quora-Question-Pair” dataset 1. This is a commonly used dataset containing 400k question pairs, annotated manually to be semantically equivalent or not. Due to its high quality, it is a standard dataset to test the success of various semantic similarity tasks. Various models which are tested on this data set are proposed, including but not limited to BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 ." ], "extractive_spans": [ "CoNLL03", " Yahoo Answer Classification Dataset", "“Quora-Question-Pair” dataset 1" ], "free_form_answer": "", "highlighted_evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 .", "In the task of text categorization, we used Yahoo Answer Classification Dataset.", "We use “Quora-Question-Pair” dataset 1. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 . For strategy finetune-only and strategy stack-and-finetune, we implemented two models: one with BERT and the other with BERT adding a Bi-LSTM on top. Eval measure is accuracy and F1 score.", "In the task of text categorization, we used Yahoo Answer Classification Dataset. The Dataset is consists of 10 classes, but due to the huge amount of the dataset, we just select two class of them. As for the upper model,we choose DenseNet BIBREF33 and HighwayLSTM BIBREF34 .", "We use “Quora-Question-Pair” dataset 1. This is a commonly used dataset containing 400k question pairs, annotated manually to be semantically equivalent or not. Due to its high quality, it is a standard dataset to test the success of various semantic similarity tasks. Various models which are tested on this data set are proposed, including but not limited to BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 ." ], "extractive_spans": [ "CoNLL03 dataset BIBREF5", "Yahoo Answer Classification Dataset", " “Quora-Question-Pair” dataset" ], "free_form_answer": "", "highlighted_evidence": [ "In the sequence labeling task,we explore sub-task named entity recognition using CoNLL03 dataset BIBREF5 , which is a public available used in many studies to test the accuracy of their proposed methods BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF1 .", "In the task of text categorization, we used Yahoo Answer Classification Dataset.", "We use “Quora-Question-Pair” dataset 1." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "did they test with other pretrained models besides bert?", "what models did they compare with?", "what datasets were used for testing?" ], "question_id": [ "79e61134a6e29141cd19252571ffc92a0b4bc97f", "18fbfb1f88c5487f739aceffd23210a7d4057145", "5d3e87937ecebf0695bece08eccefb2f88ad4a0f" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Table 1: Methods to Stack Neural Networks on Top of Pre-trained Language Models", "Figure 1: The Difference Between BERT and Open-GPT, extracted from Devlin et al. [2], Figure 1", "Table 2: Results for named entity recognition", "Table 3: Results for text classification", "Table 4: Results for semantic similarity task" ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png" ] }
[ "what models did they compare with?" ]
[ [ "1907.05338-Experiment A: Sequence Labeling-0", "1907.05338-5-Table4-1.png", "1907.05338-5-Table3-1.png", "1907.05338-4-Table2-1.png", "1907.05338-Experiment B: Text Classification-0", "1907.05338-Experiment C: Semantic Similarity Tasks-1", "1907.05338-Experiment B: Text Classification-2", "1907.05338-Experiment B: Text Classification-1" ] ]
[ "BERT, BERT + Bi-LSTM, BERT + HighwayLSTM, BERT + DenseNet, Ensembled Model, BERT + BIMPM, BERT + Sim-Transformer" ]
39
2003.08437
A Corpus of Adpositional Supersenses for Mandarin Chinese
Adpositions are frequent markers of semantic relations, but they are highly ambiguous and vary significantly from language to language. Moreover, there is a dearth of annotated corpora for investigating the cross-linguistic variation of adposition semantics, or for building multilingual disambiguation systems. This paper presents a corpus in which all adpositions have been semantically annotated in Mandarin Chinese; to the best of our knowledge, this is the first Chinese corpus to be broadly annotated with adposition semantics. Our approach adapts a framework that defined a general set of supersenses according to ostensibly language-independent semantic criteria, though its development focused primarily on English prepositions (Schneider et al., 2018). We find that the supersense categories are well-suited to Chinese adpositions despite syntactic differences from English. On a Mandarin translation of The Little Prince, we achieve high inter-annotator agreement and analyze semantic correspondences of adposition tokens in bitext.
{ "paragraphs": [ [ "Adpositions (i.e. prepositions and postpositions) include some of the most frequent words in languages like Chinese and English, and help convey a myriad of semantic relations of space, time, causality, possession, and other domains of meaning. They are also a persistent thorn in the side of second language learners owing to their extreme idiosyncrasy BIBREF1, BIBREF2. For instance, the English word in has no exact parallel in another language; rather, for purposes of translation, its many different usages cluster differently depending on the second language. Semantically annotated corpora of adpositions in multiple languages, including parallel data, would facilitate broader empirical study of adposition variation than is possible today, and could also contribute to NLP applications such as machine translation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 and grammatical error correction BIBREF1, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14.", "This paper describes the first corpus with broad-coverage annotation of adpositions in Chinese. For this corpus we have adapted schneider-etal-2018-comprehensive Semantic Network of Adposition and Case Supersenses annotation scheme (SNACS; see sec:snacs) to Chinese. Though other languages were taken into consideration in designing SNACS, no serious annotation effort has been undertaken to confirm empirically that it generalizes to other languages. After developing new guidelines for syntactic phenomena in Chinese (subsec:adpositioncriteria), we apply the SNACS supersenses to a translation of The Little Prince (3 2 3), finding the supersenses to be robust and achieving high inter-annotator agreement (sec:corpus-annotation). We analyze the distribution of adpositions and supersenses in the corpus, and compare to adposition behavior in a separate English corpus (see sec:corpus-analysis). We also examine the predictions of a part-of-speech tagger in relation to our criteria for annotation targets (sec:adpositionidentification). The annotated corpus and the Chinese guidelines for SNACS will be made freely available online." ], [ "To date, most wide-coverage semantic annotation of prepositions has been dictionary-based, taking a word sense disambiguation perspective BIBREF16, BIBREF17, BIBREF18. BIBREF19 proposed a supersense-based (unlexicalized) semantic annotation scheme which would be applied to all tokens of prepositions in English text. We adopt a revised version of the approach, known as SNACS (see sec:snacs). Previous SNACS annotation efforts have been mostly focused on English—particularly STREUSLE BIBREF20, BIBREF0, the semantically annotated corpus of reviews from the English Web Treebank BIBREF21. We present the first adaptation of SNACS for Chinese by annotating an entire Chinese translation of The Little Prince." ], [ "In the computational literature for Chinese, apart from some focused studies (e.g., BIBREF22 on logical-semantic representation of temporal adpositions), there has been little work addressing adpositions specifically. Most previous semantic projects for Mandarin Chinese focused on content words and did not directly annotate the semantic relations signaled by functions words such as prepositions BIBREF23, BIBREF24, BIBREF25, BIBREF26. For example, in Chinese PropBank, BIBREF27 argued that the head word and its part of speech are clearly informative for labeling the semantic role of a phrase, but the preposition is not always the most informative element. 
BIBREF28 annotated the Tsinghua Corpus BIBREF29 from People’s Daily where the content words were selected as the headwords, i.e., the object is the headword of the prepositional phrase. In these prepositional phrases, the nominal headwords were labeled with one of the 59 semantic relations (e.g. Location, LocationIni, Kernel word) whereas the prepositions and postpositions were respectively labeled with syntactic relations Preposition and LocationPreposition. Similarly, in Semantic Dependency Relations (SDR, BIBREF30, BIBREF31), prepositions and localizers were labeled as semantic markers mPrep and mRange, whereas semantic roles, e.g., Location, Patient, are assigned to the governed nominal phrases.", "BIBREF32 compared PropBank parsing performance on Chinese and English, and showed that four Chinese prepositions (4, 2, 3, and 4) are among the top 20 lexicalized syntactic head words in Chinese PropBank, bridging the connections between verbs and their arguments. The high frequency of prepositions as head words in PropBank reflects their importance in context. However, very few annotation scheme attempted to directly label the semantics of these adposition words.", "BIBREF33 is the most relevant adposition annotation effort, categorizing Chinese prepositions into 66 types of senses grouped by lexical items. However, these lexicalized semantic categories are constrained to a given language and a closed set of adpositions. For semantic labeling of Chinese adpositions in a multilingual context, we turn to the SNACS framework, described below." ], [ "BIBREF0 proposed the Semantic Network of Adposition and Case Supersenses (SNACS), a hierarchical inventory of 50 semantic labels, i.e., supersenses, that characterize the use of adpositions, as shown in fig:supersenses. Since the meaning of adpositions is highly affected by the context, SNACS can help distinguish different usages of adpositions. For instance, single-label presents an example of the supersense Topic for the adposition about which emphasizes the subject matter of urbanization that the speaker discussed. In single-label-amb, however, the same preposition about takes a measurement in the context, expressing an approximation.", ". I gave a presentation about:Topic urbanization.", ". We have about:Approximator 3 eggs left.", "Though assigning a single label to each adposition can help capture its lexical contribution to the sentence meaning as well as disambiguate its uses in different scenarios, the canonical lexical semantics of adpositions are often stretched to fit the needs of the scene in actual language use.", ". I care about:StimulusTopic you.", "For instance, eg:stimulustopic blends the domains of emotion (principally reflected in care, which licenses a Stimulus), and cognition (principally reflected in about, which often marks non-emotional Topics). Thus, SNACS incorporates the construal analysis BIBREF34 wherein the lexical semantic contribution of an adposition (its function) is distinguished and may diverge from the underlying relation in the surrounding context (its scene role). Construal is notated by SceneRoleFunction, as StimulusTopic in eg:stimulustopic.", "Another motivation for incorporating the construal analysis, as pointed out by BIBREF34, is its capability to adapt the English-centric supersense labels to other languages, which is the main contribution of this paper. The construal analysis can give us insights into the similarities and differences of function and scene roles of adpositions across languages." 
], [ "Our first challenge is to determine which tokens qualify as adpositions in Mandarin Chinese and merit supersense annotations. The English SNACS guidelines (we use version 2.3) broadly define the set of SNACS annotation targets to include canonical prepositions (taking an noun phrase (NP) complement) and their subordinating (clausal complement) uses. Possessives, intransitive particles, and certain uses of the infinitive marker to are also included BIBREF35.", "In Chinese, the difficulty lies in two areas, which we discuss below. Firstly, prepositional words are widely attested. However, since no overt derivational morphology occurs on these prepositional tokens (previously referred to as coverbs), we need to filter non-prepositional uses of these words. Secondly, post-nominal particles, i.e., localizers, though not always considered adpositions in Chinese, deliver rich semantic information." ], [ "Tokens that are considered generic prepositions can co-occur with the main predicate of the clause and introduce an NP argument to the clause BIBREF36 as in zho:shangtopic. These tokens are referred to as coverbs. In some cases, coverbs can also occur as the main predicate. For example, the coverb 4 heads the predicate phrase in zho:pred.", ". 1 4:Locus 24 4:TopicLocus 3342.", "3sg p:at academia lc:on-top-of successful", "`He succeeded in academia.’", ". 3 4 de 2 4 4 34.", "2sg want de sheep res at inside", "`The sheep you wanted is in the box.' (zh_lpp_1943.92)", "In this project, we only annotate coverbs when they do not function as the main predicate in the sentence, echoing the view that coverbs modify events introduced by the predicates, rather than establishing multiple events in a clause BIBREF37. Therefore, lexical items such as 4 are annotated when functioning as a modifier as in zho:shangtopic, but not when as the main predicate as in zho:pred." ], [ "Localizers are words that follow a noun phrase to refine its semantic relation. For example, 4 in zho:shangtopic denotes a contextual meaning, `in a particular area,' whereas the co-occurring coverb 4 only conveys a generic location. It is unclear whether localizers are syntactically postpositions, but we annotate all localizers because of their semantic significance. Though coverbs frequently co-occur with localizers and the combination of coverbs and localizers is very productive, there is no strong evidence to suggest that they are circumpositions. As a result, we treat them as separate targets for SNACS annotation: for example, 4 and 4 receive Locus and TopicLocus respectively in zho:shangtopic.", "Setting aside the syntactic controversies of coverbs and localizers in Mandarin Chinese, we regard both of them as adpositions that merit supersense annotations. As in zho:shangtopic, both the coverb 4 and the localizer 4 surround an NP argument 24 (`academia') and they as a whole modify the main predicate 3342 (`successful'). In this paper, we take the stance that coverbs co-occur with the main predicate and precede an NP, whereas localizers follow a noun phrase and add semantic information to the clause." ], [ "We chose to annotate the novella The Little Prince because it has been translated into hundreds of languages and dialects, which enables comparisons of linguistic phenomena across languages on bitexts. This is the first Chinese corpus to undergo SNACS annotation. Ongoing adpositional supersense projects on The Little Prince include English, German, French, and Korean. 
In addition, The Little Prince has received large attention from other semantic frameworks and corpora, including the English BIBREF38 and Chinese BIBREF26 AMR corpora." ], [ "We use the same Chinese translation of The Little Prince as the Chinese AMR corpus BIBREF26, which is also sentence-aligned with the English AMR corpus BIBREF38. These bitext annotations in multiple languages and annotation semantic frameworks can facilitate cross-framework comparisons.", "Prior to supersense annotation, we conducted the following preprocessing steps in order to identify the adposition targets that merit supersense annotation." ], [ "After automatic tokenization using Jieba, we conducted manual corrections to ensure that all potential adpositions occur as separate tokens, closely following the Chinese Penn Treebank segmentation guidelines BIBREF39. The final corpus includes all 27 chapters of The Little Prince, with a total of 20k tokens." ], [ "All annotators jointly identified adposition targets according to the criteria discussed in subsec:adpositioncriteria. Manual identification of adpositions was necessary as an automatic POS tagger was found unsuitable for our criteria (sec:adpositionidentification)." ], [ "Though parsing is not essential to this annotation project, we ran the StanfordNLP BIBREF40 dependency parser to obtain POS tags and dependency trees. These are stored alongside supersense annotations in the CoNLL-U-Lex format BIBREF41, BIBREF0. CoNLL-U-Lex extends the CoNLL-U format used by the Universal Dependencies BIBREF42 project to add additional columns for lexical semantic annotations." ], [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese.", "", "tab:iaa-results shows raw agreement and Cohen's kappa across three annotators computed by averaging three pairwise comparisons. Agreement levels on scene role, function, and full construal are high for both phases, attesting to the validity of the annotation framework in Chinese. However, there is a slight decrease from Phase 1 to Phase 2, possibly due to the seven newly attested adpositions in Phase 2 and the 1-year interval between the two annotation phases." ], [ "Our corpus contains 933 manually identified adpositions. Of these, 70 distinct adpositions, 28 distinct scene roles, 26 distinct functions, and 41 distinct full construals are attested in annotation. Full statistics of token and type frequencies are shown in tab:stats. This section presents the most frequent adpositions in Mandarin Chinese, as well as quantitative and qualitative comparisons of scene roles, functions, and construals between Chinese and English annotations." ], [ "We analyze semantic and distributional properties of adpositions in Mandarin Chinese. The top 5 most frequent prepositions and postpositions are shown in tab:statstoptoks. Prepositions include canonical adpositions such as 14 and coverbs such as 4. Postpositions are localizers such as 4 and 1. We observe that prepositions 4 and 4 are dominant in the corpus (greater than 10%). 
Other top adpositions are distributed quite evenly between prepositions and postpositions. On the low end, 27 out of the 70 attested adposition types occur only once in the corpus." ], [ "The distribution of scene role and function types in Chinese and English reflects the differences and similarities of adposition semantics in both languages. In tab:statssupersensezhen we compare this corpus with the largest English adposition supersense corpus, STREUSLE version 4.1 BIBREF0, which consists of web reviews. We note that the Chinese corpus is proportionally smaller than the English one in terms of token and adposition counts. Moreover, there are fewer scene role, function and construal types attested in Chinese. The proportion of construals in which the scene role differs from the function (scene$\\ne $fxn) is also halved in Chinese. In this section, we delve into comparisons regarding scene roles, functions, and full construals between the two corpora both quantitatively and qualitatively." ], [ "fig:barscenezhen,fig:barfunctionzhen present the top 10 scene roles and functions in Mandarin Chinese and their distributions in English. It is worth noting that since more scene role and function types are attested in the larger STREUSLE dataset, the percentages of these supersenses in English are in general lower than the ones in Chinese.", "", "There are a few observations in these distributions that are of particular interest. For some of the examples, we use an annotated subset of the English Little Prince corpus for qualitative comparisons, whereas all quantitative results in English refer to the larger STREUSLE corpus of English Web Treebank reviews BIBREF0." ], [ "As shown in tab:statssupersensezhen, the percentage of adposition targets over tokens in Chinese is only half of that in English. This is due to the fact that Chinese has a stronger preference to convey semantic information via verbal or nominal forms. Examples eg:enmoreadpositions,eg:zhlessadpositions show that the prepositions used in English, of and in, are translated as copula verbs (4) and progressives (44) in Chinese. Corresponding to fig:barscenezhen,fig:barfunctionzhen, the proportion of the supersense label Topic in English is higher than that in Chinese; and similarly, the supersense label Identity is not attested in Chinese for either scene role or function.", ". It was a picture of:Topic a boa constrictor in:Manner the act of:Identity swallowing an animal . (en_lpp_1943.3)", ". [4 de] 4 [[4 2 32] 44 12 [4 1 4 34]]", "draw de cop one cl boa prog swallow one cl big animal", "`The drawing is a boa swallowing a big animal'. (en_lpp_1943.3)" ], [ "In both fig:barscenezhen and fig:barfunctionzhen, the percentages of Locus as scene role and function are twice that of the English corpus respectively. This corresponds to the fact that fewer supersense types occur in Mandarin Chinese than in English. As a result, generic locative and temporal adpositions, as well as adpositions tied to thematic roles, have larger proportions in Chinese than in English." ], [ "Despite the fact that there are fewer supersense types attested in Chinese, Experiencer as a function is specific to Chinese as it does not have any prototypical adpositions in English BIBREF35. In eg:enexperiencergoal, the scene role Experiencer is expressed through the preposition to and construed as Goal, which highlights the abstract destination of the `air of truth'. This reflects the basic meaning of to, which denotes a path towards a goal BIBREF43. 
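The inter-annotator agreement reported earlier for this corpus is described as raw agreement and Cohen's kappa averaged over the three pairwise annotator comparisons. A toy sketch of that computation using scikit-learn is shown below; the label sequences are made-up stand-ins, not the corpus' actual annotations.

```python
# Toy sketch: raw agreement and Cohen's kappa averaged over three pairwise comparisons.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def average_pairwise_agreement(annotations):
    """annotations: list of equal-length label sequences, one per annotator."""
    raw_scores, kappa_scores = [], []
    for a, b in combinations(annotations, 2):
        raw_scores.append(sum(x == y for x, y in zip(a, b)) / len(a))
        kappa_scores.append(cohen_kappa_score(a, b))
    n = len(raw_scores)
    return sum(raw_scores) / n, sum(kappa_scores) / n

# Hypothetical scene-role labels from three annotators for five adposition tokens.
ann1 = ["Locus", "Topic", "Recipient", "Locus", "Beneficiary"]
ann2 = ["Locus", "Topic", "Direction", "Locus", "Beneficiary"]
ann3 = ["Locus", "Topic", "Recipient", "Locus", "Experiencer"]
print(average_pairwise_agreement([ann1, ann2, ann3]))
```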
In contrast, the lexicalized combination of the preposition 4 and the localizer 21 in eg:zhexperiencershenghuo are a characteristic way to introduce the mental state of the experiencer, denoting the meaning `to someone's regard'. The high frequency of 21 and the semantic role of Experiencer (6.3%) underscore its status as a prototypical adposition usage in Chinese.", ". To:ExperiencerGoal those who understand life, that would have given a much greater air of truth to my story. (en_lpp_1943.185)", ". [4:Experiencer [32 12 de 2] 21:Experiencer], 44 1 4 32 12", "p:to know-about life de people lc:one's-regard this-way tell res seems real", "`It looks real to those who know about life.' (zh_lpp_1943.185)" ], [ "Among all possible types of construals between scene role and function, here we are only concerned with construals where the scene role differs from the function (scene$\\ne $fxn). The basis of hwang-etal-2017-double construal analysis is that a scene role is construed as a function to express the contexual meaning of the adposition that is different from its lexical one. fig:barconstrualzhen presents the top 10 divergent (scene$\\ne $fxn) construals in Chinese and their corresponding proportions in English. Strikingly fewer types of construals are formed in Chinese. Nevertheless, Chinese is replete with RecipientDirection adpositions, which constitute nearly half of the construals.", "", "The 2 adpositions annotated with RecipientDirection are 4 and 4, both meaning `towards' in Chinese. In eg:enrecipient,eg:zhrecipientdirection, both English to and Chinese 4 have Recipient as the scene role. In eg:enrecipient, Goal is labelled as the function of to because it indicates the completion of the “saying” event. In Chinese, 4 has the function label Direction provided that 4 highlights the orientation of the message uttered by the speaker as in eg:zhrecipientdirection. Even though they express the same scene role in the parallel corpus, their lexical semantics still requires them to have different functions in English versus Chinese.", ". You would have to say to:RecipientGoal them: “I saw a house that costs $$20,000$.” (en_lpp_1943.172).", ". (3) 41 [4:RecipientDirection 1men] 1: “3 44 le 2 4 24 32 de 2zi.”", "2sg must P:to 3pl say 1sg see asp one CL $10,000$ franc de house", "`You must tell them: “I see a house that costs 10,000 francs.” ' (zh_lpp_1943.172)." ], [ "Similar to the distinction between RecipientGoal and RecipientDirection in English versus Chinese, language-specific lexical semantics contribute to unique construals in Chinese, i.e. semantic uses of adpositions that are unattested in the STREUSLE corpus. Six construals are newly attested in the Chinese corpus:", "[noitemsep,topsep=0pt]", "BeneficiaryExperiencer", "CircumstanceTime", "PartPortionLocus", "TopicLocus", "CircumstanceAccompanier", "DurationInstrument", "Of these new construals, BeneficiaryExperiencer has the highest frequency in the corpus. The novelty of this construal lies in the possibility of Experiencer as function in Chinese, shown by the parallel examples in eg:enbenibeni,eg:zhbeniexpe, where 4 receives the construal annotation BeneficiaryExperiencer.", ". One must not hold it against:Beneficiary them . (en_lpp_1943.180)", ". 33zimen 4:BeneficiaryExperiencer 42men 41 14 xie", "children P:to adults should lenient comp", "`Children should not hold it against adults.' 
(zh_lpp_1943.180)", "Similarly, other new construals in Chinese resulted from the lexical meaning of the adpositions that are not equivalent to those in English. For instance, the combination of 1 ... 2 (during the time of) denotes the circumstance of an event that is grounded by the time (2) of the event. Different lexical semantics of adpositions necessarily creates new construals when adapting the same supersense scheme into a new language, inducing newly found associations between scene and function roles of these adpositions. Fortunately, though combinations of scene and function require innovation when adapting SNACS into Chinese, the 50 supersense labels are sufficient to account for the semantic diversity of adpositions in the corpus." ], [ "We conduct a post-annotation comparison between manually identified adposition targets and automatically POS-tagged adpositions in the Chinese SNACS corpus. Among the 933 manually identified adposition targets that merit supersense annotation, only 385 (41.3%) are tagged as adp (adposition) by StanfordNLP BIBREF40. fig:piegoldpos shows that gold targets are more frequently tagged as verb than adp in automatic parses, as well as a small portion that are tagged as noun. The inclusion of targets with pos=verb reflects our discussion in subsec:adpositioncriteria that coverbs co-occurring with a main predicate are included in our annotation. The automatic POS tagger also wrongly predicts some non-coverb adpositions, such as 12, to be verbs.", "The StanfordNLP POS tagger also suffers from low precision (72.6%). Most false positives resulted from the discrepancies in adposition criteria between theoretical studies on Chinese adpositions and the tagset used in Universal Dependencies (UD) corpora such as the Chinese-GSD corpus. For instance, the Chinese-GSD corpus considers subordinating conjunctions (such as 23, 24, 42, 34) adpositions; however, theoretical research on Chinese adpositions such as BIBREF44 differentiates them from adpositions, since they can never syntactically precede a noun phrase.", "Hence, further SNACS annotation and disambiguation efforts on Chinese adpositions cannot rely on the StanfordNLP adp category to identify annotation targets. Since adpositions mostly belong to a closed set of tokens, we apply a simple rule to identify all attested adpositions which are not functioning as the main predicate of a sentence, i.e., not the root of the dependency tree. As shown in Table TABREF43, our heuristic results in an $F_1$ of 82.4%, outperforming the strategy of using the StanfordNLP POS tagger." ], [ "In this paper, we presented the first corpus annotated with adposition supersenses in Mandarin Chinese. The corpus is a valuable resource for examining similarities and differences between adpositions in different languages with parallel corpora and can further support automatic disambiguation of adpositions in Chinese. We intend to annotate additional genres—including native (non-translated) Chinese and learner corpora—in order to more fully capture the semantic behavior of adpositions in Chinese as compared to other languages." ], [ "We thank anonymous reviewers for their feedback. This research was supported in part by NSF award IIS-1812778 and grant 2016375 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel." 
] ], "section_name": [ "Introduction", "Related Work", "Related Work ::: Chinese Adpositions and Roles", "Related Work ::: SNACS: Adposition Supersenses", "Adposition Criteria in Mandarin Chinese", "Adposition Criteria in Mandarin Chinese ::: Coverbs", "Adposition Criteria in Mandarin Chinese ::: Localizers", "Corpus Annotation", "Corpus Annotation ::: Preprocessing", "Corpus Annotation ::: Preprocessing ::: Tokenization", "Corpus Annotation ::: Preprocessing ::: Adposition Targets", "Corpus Annotation ::: Preprocessing ::: Data Format", "Corpus Annotation ::: Reliability of Annotation", "Corpus Analysis", "Corpus Analysis ::: Adpositions in Chinese", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English ::: Overall Distribution of Supersenses", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English ::: Fewer Adpositions in Chinese", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English ::: Larger Proportion of Locus in Chinese", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English ::: Experiencer as Function in Chinese", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English ::: Divergence of Functions across Languages", "Corpus Analysis ::: Supersense & Construal Distributions in Chinese versus English ::: New Construals in Chinese", "POS Tagging of Adposition Targets", "Conclusion", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "047f5f828136b659282cd2645e66129be3870f12", "2247105159bf171338b7e7f4c572631aaa3b9726", "a467cf3e49187252146873ead4393ebf78fce1e4" ], "answer": [ { "evidence": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese.", "tab:iaa-results shows raw agreement and Cohen's kappa across three annotators computed by averaging three pairwise comparisons. Agreement levels on scene role, function, and full construal are high for both phases, attesting to the validity of the annotation framework in Chinese. However, there is a slight decrease from Phase 1 to Phase 2, possibly due to the seven newly attested adpositions in Phase 2 and the 1-year interval between the two annotation phases." ], "extractive_spans": [ " two inter-annotator agreement ", "aw agreement and Cohen's kappa across three annotators computed by averaging three pairwise comparisons" ], "free_form_answer": "", "highlighted_evidence": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese.", "tab:iaa-results shows raw agreement and Cohen's kappa across three annotators computed by averaging three pairwise comparisons. Agreement levels on scene role, function, and full construal are high for both phases, attesting to the validity of the annotation framework in Chinese. However, there is a slight decrease from Phase 1 to Phase 2, possibly due to the seven newly attested adpositions in Phase 2 and the 1-year interval between the two annotation phases." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "tab:iaa-results shows raw agreement and Cohen's kappa across three annotators computed by averaging three pairwise comparisons. Agreement levels on scene role, function, and full construal are high for both phases, attesting to the validity of the annotation framework in Chinese. However, there is a slight decrease from Phase 1 to Phase 2, possibly due to the seven newly attested adpositions in Phase 2 and the 1-year interval between the two annotation phases.", "FLOAT SELECTED: Table 1: Inter-annotator agreement (IAA) results on two samples from different phases of the project." ], "extractive_spans": [], "free_form_answer": "Raw agreement is around .90 for this dataset.", "highlighted_evidence": [ "tab:iaa-results shows raw agreement and Cohen's kappa across three annotators computed by averaging three pairwise comparisons.", "FLOAT SELECTED: Table 1: Inter-annotator agreement (IAA) results on two samples from different phases of the project." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Inter-annotator agreement (IAA) results on two samples from different phases of the project." ], "extractive_spans": [], "free_form_answer": "The average agreement on scene, function and construal is 0.915", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Inter-annotator agreement (IAA) results on two samples from different phases of the project." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "7ea12273d12469f2d2ae1ad140294a631124a129", "bef761841c0e536c8fa770757b97279d8537b67f", "ea7ca55b65ac3ec46c8763afe8e1a55cab7310ff" ], "answer": [ { "evidence": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese." ], "extractive_spans": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese." ], "free_form_answer": "", "highlighted_evidence": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Corpus Annotation ::: Preprocessing ::: Tokenization", "After automatic tokenization using Jieba, we conducted manual corrections to ensure that all potential adpositions occur as separate tokens, closely following the Chinese Penn Treebank segmentation guidelines BIBREF39. The final corpus includes all 27 chapters of The Little Prince, with a total of 20k tokens.", "Corpus Annotation ::: Preprocessing ::: Adposition Targets", "All annotators jointly identified adposition targets according to the criteria discussed in subsec:adpositioncriteria. 
Manual identification of adpositions was necessary as an automatic POS tagger was found unsuitable for our criteria (sec:adpositionidentification).", "Corpus Annotation ::: Preprocessing ::: Data Format", "Though parsing is not essential to this annotation project, we ran the StanfordNLP BIBREF40 dependency parser to obtain POS tags and dependency trees. These are stored alongside supersense annotations in the CoNLL-U-Lex format BIBREF41, BIBREF0. CoNLL-U-Lex extends the CoNLL-U format used by the Universal Dependencies BIBREF42 project to add additional columns for lexical semantic annotations.", "Corpus Annotation ::: Reliability of Annotation", "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese." ], "extractive_spans": [ "Tokenization", "Adposition Targets", "Data Format", "Reliability of Annotation" ], "free_form_answer": "", "highlighted_evidence": [ "Tokenization\nAfter automatic tokenization using Jieba, we conducted manual corrections to ensure that all potential adpositions occur as separate tokens, closely following the Chinese Penn Treebank segmentation guidelines BIBREF39.", "Tokenization\nAfter automatic tokenization using Jieba, we conducted manual corrections to ensure that all potential adpositions occur as separate tokens, closely following the Chinese Penn Treebank segmentation guidelines BIBREF39.", "Adposition Targets\nAll annotators jointly identified adposition targets according to the criteria discussed in subsec:adpositioncriteria.", "Data Format\nThough parsing is not essential to this annotation project, we ran the StanfordNLP BIBREF40 dependency parser to obtain POS tags and dependency trees.", "Reliability of Annotation\nThe corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese." 
], "extractive_spans": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers", "Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication", "Annotation was conducted in two phases" ], "free_form_answer": "", "highlighted_evidence": [ "The corpus is jointly annotated by three native Mandarin Chinese speakers, all of whom have received advanced training in theoretical and computational linguistics. Supersense labeling was performed cooperatively by 3 annotators for 25% (235/933) of the adposition targets, and for the remainder, independently by the 3 annotators, followed by cooperative adjudication. Annotation was conducted in two phases, and therefore we present two inter-annotator agreement studies to demonstrate the reproducibility of SNACS and the reliability of the adapted scheme for Chinese." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "457ef8baa97036c1f22d981ea82c4c1c8ebd1332", "9e7f13927d54e0545b9dbd23e0ea3ea40cfc2bcf", "ee85df3f0648f4ebe196c303293337a75b5a5e67" ], "answer": [ { "evidence": [ "Our corpus contains 933 manually identified adpositions. Of these, 70 distinct adpositions, 28 distinct scene roles, 26 distinct functions, and 41 distinct full construals are attested in annotation. Full statistics of token and type frequencies are shown in tab:stats. This section presents the most frequent adpositions in Mandarin Chinese, as well as quantitative and qualitative comparisons of scene roles, functions, and construals between Chinese and English annotations." ], "extractive_spans": [ "933 manually identified adpositions" ], "free_form_answer": "", "highlighted_evidence": [ "Our corpus contains 933 manually identified adpositions." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: Statistics of the final Mandarin The Little Prince Corpus (the Chinese SNACS Corpus). Tokenization, identification of adposition targets, and supersense labeling were performed manually." ], "extractive_spans": [], "free_form_answer": "20287", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Statistics of the final Mandarin The Little Prince Corpus (the Chinese SNACS Corpus). Tokenization, identification of adposition targets, and supersense labeling were performed manually." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our corpus contains 933 manually identified adpositions. Of these, 70 distinct adpositions, 28 distinct scene roles, 26 distinct functions, and 41 distinct full construals are attested in annotation. Full statistics of token and type frequencies are shown in tab:stats. This section presents the most frequent adpositions in Mandarin Chinese, as well as quantitative and qualitative comparisons of scene roles, functions, and construals between Chinese and English annotations." ], "extractive_spans": [ "933 manually identified adpositions" ], "free_form_answer": "", "highlighted_evidence": [ "Our corpus contains 933 manually identified adpositions. Of these, 70 distinct adpositions, 28 distinct scene roles, 26 distinct functions, and 41 distinct full construals are attested in annotation. Full statistics of token and type frequencies are shown in tab:stats. 
This section presents the most frequent adpositions in Mandarin Chinese, as well as quantitative and qualitative comparisons of scene roles, functions, and construals between Chinese and English annotations." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "What inter-annotator agreement did they obtain?", "How did they annotate the corpus?", "What is the size of the corpus?" ], "question_id": [ "7d539258b948cd5b5ad1230a15e4b739f29ed947", "9c1f70affc87024b4280f0876839309b8dddd579", "2694a679a703ccd6139897e4d9ff8e053dabd0f2" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: SNACS hierarchy of 50 supersenses.", "Table 1: Inter-annotator agreement (IAA) results on two samples from different phases of the project.", "Table 3: Percentages and counts of the top 5 prepositions and postpositions in Chinese Little Prince. The percentages are out of all adpositions.", "Table 2: Statistics of the final Mandarin The Little Prince Corpus (the Chinese SNACS Corpus). Tokenization, identification of adposition targets, and supersense labeling were performed manually.", "Table 4: Statistics of Adpositional Supersenses in Chinese versus English. % adps presents the proportion of adposition targets over all token counts; uniq adps/scene/fxn/cons demonstrates the type frequency of adposition tokens, scene role and function supersense and construals; scene≠fxn and % scene≠fxn shows the type frequency and proportion of divergent construals.", "Figure 2: Top 10 most frequent scene roles in Chinese versus English.", "Figure 3: Top 10 most frequent functions in Chinese versus English.", "Figure 4: Top 10 Construals where scene≠function in Chinese versus English.", "Figure 5: POS Distribution of Gold Adposition Tokens.", "Table 5: Adposition identification performance on Chinese SNACS corpus." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "4-Table3-1.png", "4-Table2-1.png", "5-Table4-1.png", "5-Figure2-1.png", "5-Figure3-1.png", "6-Figure4-1.png", "6-Figure5-1.png", "7-Table5-1.png" ] }
[ "What inter-annotator agreement did they obtain?", "What is the size of the corpus?" ]
[ [ "2003.08437-4-Table1-1.png", "2003.08437-Corpus Annotation ::: Reliability of Annotation-0", "2003.08437-Corpus Annotation ::: Reliability of Annotation-2" ], [ "2003.08437-Corpus Analysis-0", "2003.08437-4-Table2-1.png" ] ]
[ "The average agreement on scene, function and construal is 0.915", "20287" ]
40
1809.08935
Lexical Bias In Essay Level Prediction
Automatically predicting the level of non-native English speakers given their written essays is an interesting machine learning problem. In this work I present the system "balikasg" that achieved the state-of-the-art performance in the CAp 2018 data science challenge among 14 systems. I detail the feature extraction, feature engineering and model selection steps and I evaluate how these decisions impact the system's performance. The paper concludes with remarks for future work.
{ "paragraphs": [ [ "Automatically predicting the level of English of non-native speakers from their written text is an interesting text mining task. Systems that perform well in the task can be useful components for online, second-language learning platforms as well as for organisations that tutor students for this purpose. In this paper I present the system balikasg that achieved the state-of-the-art performance in the CAp 2018 data science challenge among 14 systems. In order to achieve the best performance in the challenge, I decided to use a variety of features that describe an essay's readability and syntactic complexity as well as its content. For the prediction step, I found Gradient Boosted Trees, whose efficiency is proven in several data science challenges, to be the most efficient across a variety of classifiers.", "The rest of the paper is organized as follows: in Section 2 I frame the problem of language level as an ordinal classification problem and describe the available data. Section 3 presents the feature extaction and engineering techniques used. Section 4 describes the machine learning algorithms for prediction as well as the achieved results. Finally, Section 5 concludes with discussion and avenues for future research." ], [ "In order to approach the language-level prediction task as a supervised classification problem, I frame it as an ordinal classification problem. In particular, given a written essay INLINEFORM0 from a candidate, the goal is to associate the essay with the level INLINEFORM1 of English according to the Common European Framework of Reference for languages (CEFR) system. Under CEFR there are six language levels INLINEFORM2 , such that INLINEFORM3 . In this notation, INLINEFORM4 is the beginner level while INLINEFORM5 is the most advanced level. Notice that the levels of INLINEFORM6 are ordered, thus defining an ordered classification problem. In this sense, care must be taken both during the phase of model selection and during the phase of evaluation. In the latter, predicting a class far from the true should incur a higher penalty. In other words, given a INLINEFORM7 essay, predicting INLINEFORM8 is worse than predicting INLINEFORM9 , and this difference must be captured by the evaluation metrics.", "In order to capture this explicit ordering of INLINEFORM0 , the organisers proposed a cost measure that uses the confusion matrix of the prediction and prior knowledge in order to evaluate the performance of the system. In particular, the meaures uses writes as: DISPLAYFORM0 ", "where INLINEFORM0 is a cost matrix that uses prior knowledge to calculate the misclassification errors and INLINEFORM1 is the number of observations of class INLINEFORM2 classified with category INLINEFORM3 . The cost matrix INLINEFORM4 is given in Table TABREF3 . Notice that, as expected, moving away from the diagonal (correct classification) the misclassification costs are higher. The biggest error (44) occurs when a INLINEFORM5 essay is classified as INLINEFORM6 . On the contrary, the classification error is lower (6) when the opposite happens and an INLINEFORM7 essay is classified as INLINEFORM8 . Since INLINEFORM9 is not symmetric and the costs of the lower diagonal are higher, the penalties for misclassification are worse when essays of upper languages levels (e.g., INLINEFORM10 ) are classified as essays of lower levels." ], [ "In this section I present the extracted features partitioned in six groups and detail each of them separately." 
], [ "As the class distribution in the training data is not balanced, I have used stratified cross-validation for validation purposes and for hyper-parameter selection. As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft.. The depth of the trees was set to 3, the learning rate to 0.06 and the number of trees to 4,000. Also, to combat the class imbalance in the training labels I assigned class weights at each class so that errors in the frequent classes incur less penalties than error in the infrequent." ], [ "In this work I presented the feature extraction, feature engineering and model evaluation steps I followed while developing balikasg for CAp 2018 that was ranked first among 14 other systems. I evaluated the efficiency of the different feature groups and found that readbility and complexity scores as well as topic models to be effective predictors. Further, I evaluated the the effectiveness of different classification algorithms and found that Gradient Boosted Trees outperform the rest of the models in this problem.", "While in terms of accuracy the system performed excellent achieving 98.2% in the test data, the question raised is whether there are any types of biases in the process. For instance, topic distributions learned with LDA were valuable features. One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], [ "I would like to thank the organisers of the challenge and NVidia for sponsoring the prize of the challenge. The views expressed in this paper belong solely to the author, and not necessarily to the author's employer." ] ], "section_name": [ "Introduction", "Problem Definition", "Feature Extaction", "Model Selection and Evaluation", "Conclusion", "Acknoledgements" ] }
{ "answers": [ { "annotation_id": [ "04f6edd117112fc7814dd49aa907f171e72d56dc", "46ee0d226146c7065bc6762d9f6e268174df944c", "b6e3b6bbc6ba6cee70501ab2b464c506762a8704" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3: Stratified 3-fold cross-validation scores for the official measure of the challenge." ], "extractive_spans": [], "free_form_answer": "Following groups of features are extracted:\n- Numerical Features\n- Language Models\n- Clusters\n- Latent Dirichlet Allocation\n- Part-Of-Speech\n- Bag-of-words", "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Stratified 3-fold cross-validation scores for the official measure of the challenge." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 4: Ablation study to explore the importance of different feature families." ], "extractive_spans": [], "free_form_answer": "Numerical features, language models features, clusters, latent Dirichlet allocation, Part-of-Speech tags, Bag-of-words.", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Ablation study to explore the importance of different feature families." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 4: Ablation study to explore the importance of different feature families." ], "extractive_spans": [], "free_form_answer": "Numerical features, Language Models, Clusters, Latent Dirichlet Allocation, Part-Of-Speech tags, Bag-of-words", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Ablation study to explore the importance of different feature families." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "4677e0f1b93209b28d8f7069dca82f123636edb4", "68d0960464f2c41b01926b331866dd4aca290131", "a5ef79fd0c521c4b7e21c4b1a8b9bf691b6c02fc" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 4: Ablation study to explore the importance of different feature families." ], "extractive_spans": [], "free_form_answer": "Accuracy metric", "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Ablation study to explore the importance of different feature families." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "While in terms of accuracy the system performed excellent achieving 98.2% in the test data, the question raised is whether there are any types of biases in the process. For instance, topic distributions learned with LDA were valuable features. One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], "extractive_spans": [ "accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "While in terms of accuracy the system performed excellent achieving 98.2% in the test data, the question raised is whether there are any types of biases in the process." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Figure 2: The accuracy scores of each feature set using 3-fold cross validation on the training data." ], "extractive_spans": [], "free_form_answer": "Accuracy", "highlighted_evidence": [ "FLOAT SELECTED: Figure 2: The accuracy scores of each feature set using 3-fold cross validation on the training data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "0795104f1eeaeb2e053ae16d53dac7cb9f071173", "65bf2956e5637913c920cb292b4f1557e07b5930", "d74edc45d9dec6312fa012f6e17723b2f0f9f88f" ], "answer": [ { "evidence": [ "As the class distribution in the training data is not balanced, I have used stratified cross-validation for validation purposes and for hyper-parameter selection. As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft.. The depth of the trees was set to 3, the learning rate to 0.06 and the number of trees to 4,000. Also, to combat the class imbalance in the training labels I assigned class weights at each class so that errors in the frequent classes incur less penalties than error in the infrequent." ], "extractive_spans": [ "gradient boosted trees" ], "free_form_answer": "", "highlighted_evidence": [ "As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft.." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "As the class distribution in the training data is not balanced, I have used stratified cross-validation for validation purposes and for hyper-parameter selection. As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft.. The depth of the trees was set to 3, the learning rate to 0.06 and the number of trees to 4,000. Also, to combat the class imbalance in the training labels I assigned class weights at each class so that errors in the frequent classes incur less penalties than error in the infrequent." ], "extractive_spans": [ "Light Gradient Boosting Machine" ], "free_form_answer": "", "highlighted_evidence": [ "As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "As the class distribution in the training data is not balanced, I have used stratified cross-validation for validation purposes and for hyper-parameter selection. As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft.. The depth of the trees was set to 3, the learning rate to 0.06 and the number of trees to 4,000. Also, to combat the class imbalance in the training labels I assigned class weights at each class so that errors in the frequent classes incur less penalties than error in the infrequent." 
], "extractive_spans": [ "gradient boosted trees" ], "free_form_answer": "", "highlighted_evidence": [ "As a classification1 algorithm, I have used gradient boosted trees trained with gradient-based one-side sampling as implemented in the Light Gradient Boosting Machine toolkit released by Microsoft.." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "398494f2ad0cb962f08e2f1329a7326516fd0ec8", "5dd16986ef6c0949d3a3af829c8e6cfbdadb6b7a", "dc25e2479c404fc1c950063e58d6662ef1f4995a" ], "answer": [ { "evidence": [ "While in terms of accuracy the system performed excellent achieving 98.2% in the test data, the question raised is whether there are any types of biases in the process. For instance, topic distributions learned with LDA were valuable features. One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], "extractive_spans": [ "the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not" ], "free_form_answer": "", "highlighted_evidence": [ " In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "While in terms of accuracy the system performed excellent achieving 98.2% in the test data, the question raised is whether there are any types of biases in the process. For instance, topic distributions learned with LDA were valuable features. One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], "extractive_spans": [], "free_form_answer": "Investigate the effectiveness of LDA to capture the subject of the essay.", "highlighted_evidence": [ "One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. 
In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "While in terms of accuracy the system performed excellent achieving 98.2% in the test data, the question raised is whether there are any types of biases in the process. For instance, topic distributions learned with LDA were valuable features. One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], "extractive_spans": [ "investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used" ], "free_form_answer": "", "highlighted_evidence": [ "For instance, topic distributions learned with LDA were valuable features. One, however, needs to deeply investigate whether this is due to the expressiveness and modeling power of LDA or an artifact of the dataset used. In the latter case, given that the candidates are asked to write an essay given a subject BIBREF0 that depends on their level, the hypothesis that needs be studied is whether LDA was just a clever way to model this information leak in the given data or not. I believe that further analysis and validation can answer this question if the topics of the essays are released so that validation splits can be done on the basis of these topics." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "3da2eedb860c98e7f626ccb0c24f76c9e282cf11", "96d4e7ea63470ca7fd703a2e9d2e5043a227047a", "a155ec01c10eaa5f1a0fae995855a6fe82fce4a2" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "question": [ "what features of the essays are extracted?", "what were the evaluation metrics?", "what model is used?", "what future work is described?", "what was the baseline?" 
], "question_id": [ "c728fe6137f114c02e921f9be4a02a5bd83ae787", "50bda708293532f07a3193aaea0519d433fcc040", "46e660becd727c994a2a35c6587e15ea8bf8272d", "d1a4529ea32aaab5ca3b9d9ae5c16f146c23af6b", "7fba61426737394304e307cdc7537225f6253150" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Table 1: Cost matrix used to calculate the miscalssification error described in Eq. (1).", "Figure 1: The distribution of essays according to the CERF levels in the training data.", "Table 2: Basic statistics for the released essays.", "Figure 2: The accuracy scores of each feature set using 3-fold cross validation on the training data.", "Table 4: Ablation study to explore the importance of different feature families.", "Table 3: Stratified 3-fold cross-validation scores for the official measure of the challenge.", "Table 5: Confusion matrix of the 3-fold stratified cross validation. The Ci,j value is the number of predictions known to be in group i and predicted to be in group j. Notice how most of the mis-classification errors occur between close categories." ], "file": [ "2-Table1-1.png", "2-Figure1-1.png", "2-Table2-1.png", "4-Figure2-1.png", "4-Table4-1.png", "4-Table3-1.png", "4-Table5-1.png" ] }
[ "what features of the essays are extracted?", "what were the evaluation metrics?", "what future work is described?" ]
[ [ "1809.08935-4-Table3-1.png", "1809.08935-4-Table4-1.png" ], [ "1809.08935-Conclusion-1", "1809.08935-4-Table4-1.png", "1809.08935-4-Figure2-1.png" ], [ "1809.08935-Conclusion-1" ] ]
[ "Numerical features, Language Models, Clusters, Latent Dirichlet Allocation, Part-Of-Speech tags, Bag-of-words", "Accuracy", "Investigate the effectiveness of LDA to capture the subject of the essay." ]
42
1910.07924
LibriVoxDeEn: A Corpus for German-to-English Speech Translation and German Speech Recognition
We present a corpus of sentence-aligned triples of German audio, German text, and English translation, based on German audio books. The corpus consists of over 100 hours of audio material and over 50k parallel sentences. The audio data is read speech and thus low in disfluencies. The quality of audio and sentence alignments has been checked by a manual evaluation, showing that speech alignment quality is in general very high. The sentence alignment quality is comparable to well-used parallel translation data and can be adjusted by cutoffs on the automatic alignment score. To our knowledge, this corpus is to date the largest resource for end-to-end speech translation for German.
{ "paragraphs": [ [ "Direct speech translation has recently been shown to be feasible using a single sequence-to-sequence neural model, trained on parallel data consisting of source audio, source text and target text. The crucial advantage of such end-to-end approaches is the avoidance of error propagation as in a pipeline approaches of speech recognition and text translation. While cascaded approaches have an advantage in that they can straightforwardly use large independent datasets for speech recognition and text translation, clever sharing of sub-networks via multi-task learning and two-stage modeling BIBREF0, BIBREF1, BIBREF2 has closed the performance gap between end-to-end and pipeline approaches. However, end-to-end neural speech translation is very data hungry while available datatsets must be considered large if they exceed 100 hours of audio. For example, the widely used Fisher and Call-home Spanish-English corpus BIBREF3 comprises 162 hours of audio and $138,819$ parallel sentences. Larger corpora for end-to-end speech translation have only recently become available for speech translation from English sources. For example, 236 hours of audio and $131,395$ parallel sentences are available for English-French speech translation based on audio books BIBREF4, BIBREF5. For speech translation of English TED talks, 400-500 hours of audio aligned to around $250,000$ parallel sentences depending on the language pair have been provided for eight target languages by DiGangiETAL:19. Pure speech recognition data are available in amounts of $1,000$ hours of read English speech and their transcriptions in the LibriSpeech corpus provided by PanayotovETAL:15.", "When it comes to German sources, the situation regarding corpora for end-to-end speech translation as well as for speech recognition is dire. To our knowledge, the largest freely available corpora for German-English speech translation comprise triples for 37 hours of German audio, German transcription, and English translation BIBREF6. Pure speech recognition data are available from 36 hours BIBREF7 to around 200 hours BIBREF8.", "We present a corpus of sentence-aligned triples of German audio, German text, and English translation, based on German audio books. The corpus consists of over 100 hours of audio material aligned to over 50k parallel sentences. Our approach mirrors that of KocabiyikogluETAL:18 in that we start from freely available audio books. The fact that the audio data is read speech keeps the number of disfluencies low. Furthermore, we use state-of-the art tools for audio-text and text-text alignment, and show in a manual evaluation that the speech alignment quality is in general very high, while the sentence alignment quality is comparable to widely used corpora such as that of KocabiyikogluETAL:18 and can be adjusted by cutoffs on the automatic alignment score. To our knowledge, the presented corpus is to data the largest resource for end-to-end speech translation for German." ], [ "In the following, we will give an overview over our corpus creation methodology. More details will be given in the following sections.", "Creation of German corpus (see Section sourcecorpus. )", "Data download", "Download German audio books from LibriVox web platform", "Collect corresponding text files by crawling public domain web pages", "Audio preprocessing", "Manual filtering of audio pre- and postfixes", "Text preprocessing", "Noise removal, e.g. 
special symbols, advertisements, hyperlinks", "Sentence segmentation using spaCy", "Speech-to-text alignments", "Manual chapter segmentation of audio files", "Audio-to-text alignments using forced aligner aeneas", "Split audio according to obtained timestamps using SoX", "Creation of German-English Speech Translation Corpus (see Sections targetcorpus. and corpusfiltering. )", "Download English translations for German texts", "Text preprocessing (same procedure as for German texts)", "Bilingual text-to-text alignments", "Manual text-to-text alignments of chapters", "Dictionary creation using parallel DE-EN WikiMatrix corpus BIBREF9", "German-English sentence alignments using hunalign BIBREF10", "Data filtering based on hunalign alignment scores" ], [ "We acquired pairs of German books and their corresponding audio files starting from LibriVox, an open source platform for people to publish audio recordings of themselves reading books which are available open source on the platform Project Gutenberg. Source data were gathered in a semi-automatic way: The URL links were collected manually by using queries containing metadata descriptions to find German books with LibriVox audio and possible German transcripts. These were later automatically scraped using BeautifulSoup4 and Scrapy, and saved for further processing and cleaning. Public domain web pages crawled include https://gutenberg.spiegel.de, http://www.zeno.org, and https://archive.org." ], [ "We processed the audio data in a semi-automatic manner which included manual splitting and alignment of audio files into chapters, while also saving timestamps for start and end of chapters. We removed boilerplate intros and outros as well as noise at the beginning and end of the recordings.", "Preprocessing the text included removal of several items, including special symbols like *, advertisements, hyperlinks in [], <>, empty lines, quotes, - preceding sentences, indentations, and noisy OCR output.", "German sentence segmentation was done using spaCy based on a medium-sized German corpus that contains the TIGER corpus and the WikiNER dataset. Furthermore, we added rules to adjust the segmenting behavior for direct speech and for semicolon-separated sentences." ], [ "To align sentences to onsets and endings of corresponding audio segments we made use of aeneas – a tool for automatic synchronization of text and audio. In contrast to most forced aligners, aeneas does not use automatic speech recognition (ASR) to compare an obtained transcript with the original text. Instead, it works in the opposite direction by using dynamic time warping to align the mel-frequency cepstral coefficients extracted from the real audio to the audio representation synthesized from the text, thus aligning the text file to a time interval in the real audio.", "Furthermore, we used the maps pointing to the beginning and the end of each text row in the audio file together with SoX to split the audio into sentence level chunks. The timestamps were also used to filter boilerplate information about the book, author, speaker at the beginning and end of the audio file.", "Statistics on the resulting corpus are given in Table TABREF36." ], [ "In collecting and preprocessing the English texts we followed the same procedure as for the source language corpus, i.e., we manually created queries containing metadata descriptions of English books (e.g. author names) corresponding to German books which were then scraped.
The spaCy model for sentence segmentation used a large English web corpus. See Section sourcecorpus. for more information." ], [ "To produce text-to-text alignments, we used hunalign with a custom dictionary of parallel sentences, generated from the WikiMatrix corpus. Using this additional dictionary improved our alignment scores. Furthermore, we availed ourselves of a realign option that enables saving a dictionary generated in a first pass and profiting from it in a second pass. The final dictionary we used for the alignments consisted of a combination of entries of our corpora as well as the parallel corpus WikiMatrix. For further completeness, we reversed the arguments in hunalign to not only obtain German to English alignments, but also English to German. These tables were merged to build the union by dropping duplicate entries and keeping those with a higher confidence score, while also appending alignments that may only have been produced when aligning in a specific direction.", "Statistics on the resulting text alignments are given in Table TABREF37." ], [ "A last step in our corpus creation procedure consisted of filtering out empty and incomplete alignments, i.e., alignments that did not consist of a DE-EN sentence pair. This was achieved by dropping all entries with a hunalign score of -0.3 or below. Table TABREF38 shows the resulting corpus after this filtering step.", "Moreover, many-to-many alignments by hunalign were re-segmented to source-audio sentence level for German, while keeping the merged English sentence to provide a complete audio lookup. The corresponding English sentences were duplicated and tagged with <MERGE> to mark that the German sentence was involved in a many-to-many alignment.", "The size of our final cleaned and filtered corpus is thus comparable to the cleaned Augmented LibriSpeech corpus that has been used in speech translation experiments by BerardETAL:18.", "Statistics on the resulting filtered text alignments are given in Table TABREF38." ], [ "Our corpus is structured in the following folders:", "contains German text files for each book", "contains English text files for each book", "alignment maps produced by aeneas", "sentence level audio files", "text2speech, a lookup table for speech alignments", "text2text, a lookup table for text-to-text alignments", "Further information about the corpus and a download link can be found here: https://www.cl.uni-heidelberg.de/statnlpgroup/librivoxdeen/."
], [ "For a manual evaluation of our dataset, we split the corpus into three bins according to ranges $(-0.3,0.3]$, $(0.3,0.8]$ and $(0.8,\\infty )$ of the hunalign confidence score (see Table TABREF56).", "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:", "Wrong alignment", "Partial alignment with slightly compositional translational equivalence", "Partial alignment with compositional translation and additional or missing information", "Correct alignment with compositional translation and few additional or missing information", "Correct alignment and fully compositional translation", "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:", "Wrong alignment", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end.", "The evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." ], [ "Table TABREF54 shows the results of our manual evaluation. The audio-text alignment was rated as in general as high quality. The text-text alignment rating increases corresponding to increasing hunalign confidence score which shows that the latter can be safely used to find a threshold for corpus filtering. Overall, the audio-text and text-text alignment scores are very similar to those reported by KocabiyikogluETAL:18.", "The inter-annotator agreement between two raters was measured by Krippendorff's $\\alpha $-reliability score BIBREF11 for ordinal ratings. The inter-annotator reliability for text-to-text alignment quality ratings scored 0.77, while for audio-text alignment quality ratings it scored 1.00." ], [ "In the following, we present selected examples for text-text alignments for each bin. A closer inspection reveals properties and shortcomings of hunalign scores which are based on a combination of dictionary-based alignments and sentence-length information.", "Shorter sentence pairs are in general aligned correctly, irrespective of the score (compare examples with score $0.30$. $0.78$ and $1.57$, $2.44$ below). Longer sentences can include exact matches of longer substrings, however, they are scored based on a bag-of-words overlap (see the examples with scores $0.41$ and $0.84$ below).", "Schigolch Yes, yes; und mir träumte von einem Stück Christmas Pudding.", "She only does that to revive old memories. LULU.", "Und hätten dreißigtausend Helfer sich ersehn.", "And feardefying Folker shall our companion be; He shall bear our banner; better none than he.", "Kakambo verlor nie den Kopf.", "Cacambo never lost his head.", "Es befindet sich gar keine junge Dame an Bord, versetzte der Proviantmeister.", "He is a tall gentleman, quiet, and not very talkative, and has with him a young lady — There is no young lady on board, interrupted the AROUND THE WORLD IN EIGPITY DAYS. purser..", "Ottilie, getragen durch das Gefühl ihrer Unschuld, auf dem Wege zu dem erwünschtesten Glück, lebt nur für Eduard.", "Ottilie, led by the sense of her own innocence along the road to the happiness for which she longed, only lived for Edward.", "Was ist geschehen? fragte er.", "What has happened ? 
he asked.", "Es sind nun drei Monate verflossen, daß wir Charleston auf dem Chancellor verlassen, und zwanzig Tage, die wir schon auf dem Flosse, von der Gnade der Winde und Strömungen abhängig, verbracht haben!", "JANUARY st to th.More than three months had elapsed since we left Charleston in the Chancellor, and for no less than twenty days had we now been borne along on our raft at the mercy of the wind and waves.", "Charlotte stieg weiter, und Ottilie trug das Kind.", "Charlotte went on up the cliff, and Ottilie carried the child.", "Fin de siecle, murmelte Lord Henry.", "Fin de siecle, murmured Lord Henry." ], [ "We presented a corpus of aligned triples of German audio, German text, and English translations for speech translation from German to English. The audio data in our corpus are read speech, based on German audio books, ensuring a low amount of speech disfluencies. The audio-text alignment and text-to-text sentence alignment was done with state-of-the-art alignment tools and checked to be of high quality in a manual evaluation. The audio-text alignment was generally rated very high. The text-text sentence alignment quality is comparable to widely used corpora such as that of KocabiyikogluETAL:18. A cutoff on a sentence alignment quality score allows to filter the text alignments further for speech translation, resulting in a clean corpus of $50,427$ German-English sentence pairs aligned to 110 hours of German speech. A larger version of the corpus, comprising 133 hours of German speech and high-quality alignments to German transcriptions is available for speech recognition." ], [ "The research reported in this paper was supported in part by the German research foundation (DFG) under grant RI-2221/4-1." ] ], "section_name": [ "Introduction", "Overview", "Source Corpus Creation ::: Data Collection", "Source Corpus Creation ::: Data Preprocessing", "Source Corpus Creation ::: Text-to-Speech Alignment", "Target Corpus Creation ::: Data Collection and Preprocessing", "Target Corpus Creation ::: Text-to-Text Alignment", "Data Filtering and Corpus Structure ::: Corpus Filtering", "Data Filtering and Corpus Structure ::: Corpus Structure", "Corpus Evaluation ::: Human Evaluation", "Corpus Evaluation ::: Evaluation Results", "Corpus Evaluation ::: Examples", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "7aafc607e9232c2212c0bb0b694568f3b768a81e", "1f92ab9b953860b775d303d243f3b07e6352d870", "722480c3c6e2fc1d13450d20fa2bcb73b22be4bd" ], "answer": [ { "evidence": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:", "Wrong alignment", "Partial alignment with slightly compositional translational equivalence", "Partial alignment with compositional translation and additional or missing information", "Correct alignment with compositional translation and few additional or missing information", "Correct alignment and fully compositional translation", "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end.", "The evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." ], "extractive_spans": [], "free_form_answer": "Through human evaluation on a 5-point scale for text alignment and 3-point scale for audio-text", "highlighted_evidence": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:\n\nWrong alignment\n\nPartial alignment with slightly compositional translational equivalence\n\nPartial alignment with compositional translation and additional or missing information\n\nCorrect alignment with compositional translation and few additional or missing information\n\nCorrect alignment and fully compositional translation\n\nThe evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end.\n\nThe evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:", "Wrong alignment", "Partial alignment with slightly compositional translational equivalence", "Partial alignment with compositional translation and additional or missing information", "Correct alignment with compositional translation and few additional or missing information", "Correct alignment and fully compositional translation", "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end.", "The evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." 
], "extractive_spans": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:\n\nWrong alignment\n\nPartial alignment with slightly compositional translational equivalence\n\nPartial alignment with compositional translation and additional or missing information\n\nCorrect alignment with compositional translation and few additional or missing information\n\nCorrect alignment and fully compositional translation\n\nThe evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end.\n\nThe evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." ], "free_form_answer": "", "highlighted_evidence": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:\n\nWrong alignment\n\nPartial alignment with slightly compositional translational equivalence\n\nPartial alignment with compositional translation and additional or missing information\n\nCorrect alignment with compositional translation and few additional or missing information\n\nCorrect alignment and fully compositional translation\n\nThe evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end.\n\nThe evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:", "Wrong alignment", "Partial alignment with slightly compositional translational equivalence", "Partial alignment with compositional translation and additional or missing information", "Correct alignment with compositional translation and few additional or missing information", "Correct alignment and fully compositional translation" ], "extractive_spans": [ "5-point scale used in KocabiyikogluETAL:18" ], "free_form_answer": "", "highlighted_evidence": [ "The evaluation of the text alignment quality was conducted according to the 5-point scale used in KocabiyikogluETAL:18:\n\nWrong alignment\n\nPartial alignment with slightly compositional translational equivalence\n\nPartial alignment with compositional translation and additional or missing information\n\nCorrect alignment with compositional translation and few additional or missing information\n\nCorrect alignment and fully compositional translation" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "b5d4a4d4aa0e132eaa0c1e2e5d2997697e343b96", "a87becdfca24c48e5139a7e3a09c57d1729b4970", "b21fcd4cc902e13a5bb5c2a662adaf62064452ed" ], "answer": [ { "evidence": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:", "Wrong alignment", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end." ], "extractive_spans": [], "free_form_answer": "Through a 3-point scale by annotators.", "highlighted_evidence": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:", "Wrong alignment", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end." ], "extractive_spans": [ "Wrong alignment", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end." ], "free_form_answer": "", "highlighted_evidence": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:", "Wrong alignment", "Partial alignment, some words or sentences may be missing", "Correct alignment, allowing non-spoken syllables at start or end.", "The evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." 
], "extractive_spans": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end.\n\nThe evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." ], "free_form_answer": "", "highlighted_evidence": [ "The evaluation of the audio-text alignment quality was conducted according to the following 3-point scale:\n\nWrong alignment\n\nPartial alignment, some words or sentences may be missing\n\nCorrect alignment, allowing non-spoken syllables at start or end.\n\nThe evaluation experiment was performed by two annotators who each rated 30 items from each bin, where 10 items were the same for both annotators in order to calculate inter-annotator reliability." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] } ], "nlp_background": [ "two", "two" ], "paper_read": [ "no", "no" ], "question": [ "How is the sentence alignment quality evaluated?", "How is the speech alignment quality evaluated?" ], "question_id": [ "46aa61557c8d20b1223a30366a0704d7af68bbbe", "b3b9d7c8722e8ec41cbbae40e68458485a5ba25c" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "German", "German" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Source corpus description", "Table 3: DE-EN text-to-text alignment data after filtering", "Table 4: Manual evaluation of for audio-text and text-text alignments, averaged over 90 items and two raters", "Table 5: Bins of text alignment quality according to hunalign confidence score" ], "file": [ "3-Table1-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png" ] }
[ "How is the sentence alignment quality evaluated?", "How is the speech alignment quality evaluated?" ]
[ [ "1910.07924-Corpus Evaluation ::: Human Evaluation-10", "1910.07924-Corpus Evaluation ::: Human Evaluation-2", "1910.07924-Corpus Evaluation ::: Human Evaluation-6", "1910.07924-Corpus Evaluation ::: Human Evaluation-11", "1910.07924-Corpus Evaluation ::: Human Evaluation-3", "1910.07924-Corpus Evaluation ::: Human Evaluation-4", "1910.07924-Corpus Evaluation ::: Human Evaluation-7", "1910.07924-Corpus Evaluation ::: Human Evaluation-5", "1910.07924-Corpus Evaluation ::: Human Evaluation-9", "1910.07924-Corpus Evaluation ::: Human Evaluation-1" ], [ "1910.07924-Corpus Evaluation ::: Human Evaluation-10", "1910.07924-Corpus Evaluation ::: Human Evaluation-2", "1910.07924-Corpus Evaluation ::: Human Evaluation-11", "1910.07924-Corpus Evaluation ::: Human Evaluation-7", "1910.07924-Corpus Evaluation ::: Human Evaluation-9" ] ]
[ "Through human evaluation on a 5-point scale for text alignment and 3-point scale for audio-text", "Through a 3-point scale by annotators." ]
43
1911.11899
Self-Attention Enhanced Selective Gate with Entity-Aware Embedding for Distantly Supervised Relation Extraction
Distantly supervised relation extraction intrinsically suffers from noisy labels due to the strong assumption of distant supervision. Most prior works adopt a selective attention mechanism over sentences in a bag to denoise from wrongly labeled data, which however could be incompetent when there is only one sentence in a bag. In this paper, we propose a brand-new light-weight neural framework to address the distantly supervised relation extraction problem and alleviate the defects in previous selective attention framework. Specifically, in the proposed framework, 1) we use an entity-aware word embedding method to integrate both relative position information and head/tail entity embeddings, aiming to highlight the essence of entities for this task; 2) we develop a self-attention mechanism to capture the rich contextual dependencies as a complement for local dependencies captured by piecewise CNN; and 3) instead of using selective attention, we design a pooling-equipped gate, which is based on rich contextual representations, as an aggregator to generate bag-level representation for final relation classification. Compared to selective attention, one major advantage of the proposed gating mechanism is that, it performs stably and promisingly even if only one sentence appears in a bag and thus keeps the consistency across all training examples. The experiments on NYT dataset demonstrate that our approach achieves a new state-of-the-art performance in terms of both AUC and top-n precision metrics.
{ "paragraphs": [ [ "Relation extraction (RE) is one of the most fundamental tasks in natural language processing, and its goal is to identify the relationship between a given pair of entities in a sentence. Typically, a large-scale training dataset with clean labels is required to train a reliable relation extraction model. However, it is time-consuming and labor-intensive to annotate such data by crowdsourcing. To overcome the lack of labeled training data, BIBREF0 mintz2009distant presents a distant supervision approach that automatically generates a large-scale, labeled training set by aligning entities in knowledge graph (e.g. Freebase BIBREF1) to corresponding entity mentions in natural language sentences. This approach is based on a strong assumption that, any sentence containing two entities should be labeled according to the relationship of the two entities on the given knowledge graph. However, this assumption does not always hold. Sometimes the same two entities in different sentences with various contexts cannot express a consistent relationship as described in the knowledge graph, which certainly results in wrongly labeled problem.", "To alleviate the aformentioned problem, BIBREF2 riedel2010modeling proposes a multi-instance learning framework, which relaxes the strong assumption to expressed-at-least-one assumption. In plainer terms, this means any possible relation between two entities hold true in at least one distantly-labeled sentence rather than all of the them that contains those two entities. In particular, instead of generating a sentence-level label, this framework assigns a label to a bag of sentences containing a common entity pair, and the label is a relationship of the entity pair on knowledge graph. Recently, based on the labeled data at bag level, a line of works BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 under selective attention framework BIBREF5 let model implicitly focus on the correctly labeled sentence(s) by an attention mechanism and thus learn a stable and robust model from the noisy data.", "However, such selective attention framework is vulnerable to situations where a bag is merely comprised of one single sentence labeled; and what is worse, the only one sentence possibly expresses inconsistent relation information with the bag-level label. This scenario is not uncommon. For a popular distantly supervised relation extraction benchmark, e.g., NYT dataset BIBREF2, up to $80\\%$ of its training examples (i.e., bags) are one-sentence bags. From our data inspection, we randomly sample 100 one-sentence bags and find $35\\%$ of them is incorrectly labeled. Two examples of one-sentence bag are shown in Table TABREF1. These results indicate that, in training phrase the selective attention module is enforced to output a single-valued scalar for $80\\%$ examples, leading to an ill-trained attention module and thus hurting the performance.", "Motivated by aforementioned observations, in this paper, we propose a novel Selective Gate (SeG) framework for distantly supervised relation extraction. 
In the proposed framework, 1) we employ both the entity embeddings and relative position embeddings BIBREF8 for relation extraction, and an entity-aware embedding approach is proposed to dynamically integrate entity information into each word embedding, yielding more expressively-powerful representations for downstream modules; 2) to strengthen the capability of widely-used piecewise CNN (PCNN) BIBREF3 on capturing long-term dependency BIBREF9, we develop a light-weight self-attention BIBREF10, BIBREF11 mechanism to capture rich dependency information and consequently enhance the capability of neural network via producing complementary representation for PCNN; and 3) based on preceding versatile features, we design a selective gate to aggregate sentence-level representations into bag-level one and alleviate intrinsic issues appearing in selective attention.", "Compared to the baseline framework (i.e., selective attention for multi-instance learning), SeG is able to produce entity-aware embeddings and rich-contextual representations to facilitate downstream aggregation modules that stably learn from noisy training data. Moreover, SeG uses gate mechanism with pooling to overcome problem occurring in selective attention, which is caused by one-sentence bags. In addition, it still keeps a light-weight structure to ensure the scalability of this model.", "The experiments and extensive ablation studies on New York Time dataset BIBREF2 show that our proposed framework achieves a new state-of-the-art performance regarding both AUC and top-n precision metrics for distantly supervised relation extraction task, and also verify the significance of each proposed module. Particularly, the proposed framework can achieve AUC of 0.51, which outperforms selective attention baseline by 0.14 and improves previous state-of-the-art approach by 0.09." ], [ "As illustrated in Figure FIGREF2, we propose a novel neural network, i.e., SeG, for distantly supervised relation extraction, which is composed of following neural components." ], [ "Given a bag of sentences $B^k = \\lbrace s^k_1, \\dots , s^k_{m^k}\\rbrace $ where each sentence contains common entity pair (i.e., head entity $e^k_h,$ and tail entity $e^k_t$), the target of relation extraction is to predict the relation $y^k$ between the two entities. For a clear demonstration, we omit indices of example and sentence in remainder if no confusion caused. Each sentence is a sequence of tokens, i.e., $s = [w_1, \\dots , w_n]$, where $n$ is the length of the sentence. In addition, each token has a low-dimensional dense-vector representation, i.e., $[\\mathbf {v}_1, \\cdots , \\mathbf {v}_n] \\in \\mathbb {R}^{d_w \\times n}$, where $d_w$ denotes the dimension of word embedding.", "In addition to the typical word embedding, relative position is a crucial feature for relation extraction, which can provide downstream neural model with rich positional information BIBREF8, BIBREF3. Relative positions explicitly describe the relative distances between each word $w_i$ and the two targeted entities $e_h$ and $e_t$. For $i$-th word, a randomly initialized weight matrix projects the relative position features into a two dense-vector representations w.r.t the head and tail entities, i.e., $\\mathbf {r}^{e_h}_i$ and $\\mathbf {r}^{e_t}_i\\in \\mathbb {R}^{d_r}$ respectively. 
The final low-level representations for all tokens are a concatenation of the aforementioned embeddings, i.e., $\mathbf {X}^{(p)} = [\mathbf {x}^{(p)}_1, \cdots , \mathbf {x}^{(p)}_n] \in \mathbb {R}^{d_p \times n}$ in which $\mathbf {x}^{(p)}_i = [\mathbf {v_i}; \mathbf {r}^{e_h}_i; \mathbf {r}^{e_t}_i]$ and $d_p = d_w + 2\times d_r$.", "However, aside from the relative position features, we argue that the embeddings of both the head entity $e_h$ and tail entity $e_t$ are also vitally significant for the relation extraction task, since the ultimate goal of this task is to predict the relationship between these two entities. This hypothesis is further verified by our quantitative and qualitative analyses in later experiments (Section SECREF35 and SECREF39). The empirical results show that our proposed embedding can outperform the widely-used way in prior works BIBREF12.", "In particular, we propose a novel entity-aware word embedding approach to enrich the traditional word embeddings with features of the head and tail entities. To this end, a position-wise gate mechanism is naturally leveraged to dynamically select features between relative position embedding and entity embeddings. Formally, the embeddings of head and tail entities are denoted as $\mathbf {v}^{(h)}$ and $\mathbf {v}^{(t)}$ respectively. The position-wise gating procedure is formulated as", "in which $\mathbf {W}^{(g1)}\in \mathbb {R}^{d_h \times 3d_w}$ and $\mathbf {W}^{(g2)}\in \mathbb {R}^{d_h \times d_p}$ are learnable parameters, $\lambda $ is a hyper-parameter to control smoothness, and $\mathbf {X} = [\mathbf {x}_1, \dots , \mathbf {x}_n] \in \mathbb {R}^{d_h \times n}$ contains the entity-aware embeddings of all tokens from the sentence." ], [ "Previous works on relation extraction mainly employ a piecewise convolutional neural network (PCNN) BIBREF3 to obtain contextual representations of sentences, owing to its capability of capturing local features, low computational cost and light-weight structure. However, some previous works BIBREF13 find that CNNs cannot reach state-of-the-art performance on a majority of natural language processing benchmarks due to their inability to capture long-term dependencies, even when stacking multiple modules. This motivates us to enhance the PCNN with another neural module, which is capable of capturing long-term or global dependencies to produce a complementary and more powerful sentence representation.", "Hence, we employ a self-attention mechanism in our model due to its parallelizable computation and state-of-the-art performance. Unlike existing approaches that sequentially stack self-attention and CNN layers in a cascade form BIBREF9, BIBREF14, we arrange these two modules in parallel so they can generate features describing both local and long-term relations for the same input sequence. Since each bag may contain many sentences (up to 20), a light-weight network that can efficiently process these sentences simultaneously is preferable, such as PCNN, which is the most popular module for relation extraction. For this reason, there is only one light-weight self-attention layer in our model. This is in contrast to BIBREF9 yu2018qanet and BIBREF14 wu2019pay, who stack both modules repeatedly. Our experiments show that the two modules arranged in a parallel manner consistently outperform stacking architectures, even when the latter are equipped with additional residual connections BIBREF15. The comparative experiments will be elaborated in Section SECREF34 and SECREF35."
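To make the parallel arrangement concrete, the sketch below is a schematic illustration rather than the authors' exact configuration: the layer sizes, the small compatibility network, and the plain max pooling that stands in for piecewise pooling are all assumptions. It runs a convolutional branch and a light-weight self-attention branch over the same entity-aware token representations and returns the two complementary sentence vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelEncoder(nn.Module):
    """Sketch: a PCNN-style convolution and a light self-attention pooling
    run in parallel, not stacked, over the same entity-aware token embeddings."""
    def __init__(self, d_h=150, d_c=230, kernel=3):
        super().__init__()
        # Local-context branch (plain max pooling stands in for piecewise pooling).
        self.conv = nn.Conv1d(d_h, d_c, kernel_size=kernel, padding=kernel // 2)
        # Global-context branch: token-wise scores from a small compatibility network.
        self.attn = nn.Sequential(nn.Linear(d_h, d_h), nn.Tanh(), nn.Linear(d_h, d_h))

    def forward(self, x):                                  # x: (batch, seq_len, d_h)
        h = torch.tanh(self.conv(x.transpose(1, 2)))       # (batch, d_c, seq_len)
        s = h.max(dim=2).values                            # local sentence vector
        p = F.softmax(self.attn(x), dim=1)                 # attention probs over the sequence
        u = (p * x).sum(dim=1)                             # global sentence vector
        return s, u                                        # complementary representations
```

A stacked variant would instead feed the convolution output into the attention branch; keeping the branches parallel lets each one describe the same tokens from a local and a global perspective, which is the design choice argued for above.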
], [ "This section provides a brief introduction to PCNN as a background for further integration with our model, and we refer readers to BIBREF3 zeng2015distant for more details. Each sentence is divided into three segments w.r.t. the head and tail entities. Compared to the typical 1D-CNN with max-pooling BIBREF8, piecewise pooling has the capability to capture the structure information between two entities. Therefore, instead of using word embeddings with relative position features $\\mathbf {X}^{(p)}$ as the input, we here employ our entity-aware embedding $\\mathbf {X}$ as described in Section SECREF3 to enrich the input features. First, 1D-CNN is invoked over the input, which can be formally represented as", "where, $\\mathbf {W}^{(c)} \\in \\mathbb {R}^{d_c \\times m \\times d_h}$ is convolution kernel with window size of $m$ (i.e., $m$-gram). Then, to obtain sentence-level representation, a piecewise pooling performs over the output sequence, i.e., $\\mathbf {H}^{(c)} = [\\mathbf {h}_1, \\dots , \\mathbf {h}_n]$, which is formulated as", "In particular, $\\mathbf {H}^{(1)}$, $\\mathbf {H}^{(2)}$ and $\\mathbf {H}^{(3)}$ are three consecutive parts of $\\mathbf {H}$, obtained by dividing $\\mathbf {H}$ according to the positions of head and tail entities. Consequently, $\\mathbf {s} \\in \\mathbb {R}^{3d_c}$ is the resulting sentence vector representation." ], [ "To maintain efficiency of proposed approach, we adopt the recently-promoted self-attention mechanism BIBREF16, BIBREF10, BIBREF17, BIBREF18, BIBREF19 for compressing a sequence of token representations into a sentence-level vector representation by exploiting global dependency, rather than computation-consuming pairwise ones BIBREF13. It is used to measure the contribution or importance of each token to relation extraction task w.r.t. the global dependency. Formally, given the entity-aware embedding $\\mathbf {X}$, we first calculate attention probabilities by a parameterized compatibility function, i.e.,", "where, $\\mathbf {W}^{(a1)}, \\mathbf {W}^{(a2)} \\in \\mathbb {R}^{d_h \\times d_h}$ are learnable parameters, $\\operatornamewithlimits{softmax}(\\cdot )$ is invoked over sequence, and $\\mathbf {P}^{(A)}$ is resulting attention probability matrix. Then, the result of self-attention mechanism can be calculated as", "in which, $\\sum $ is performed along sequential dimension and $\\odot $ stands for element-wise multiplication. And, $\\mathbf {u} \\in \\mathbb {R}^{d_h}$ is also a sentence-level vector representation which is a complement to PCNN-resulting one, i.e., $\\mathbf {s}$ from Eq.(DISPLAY_FORM9)." ], [ "Given a sentence bag $B = [s_1, \\dots , s_m]$ with common entity pair, where $m$ is the number of sentences. As elaborated in Section SECREF6, we can obtain $\\mathbf {S} = [\\mathbf {s}_1, \\dots , \\mathbf {s}_m]$ and $\\mathbf {U} = [\\mathbf {u}_1, \\dots , \\mathbf {u}_m]$ for each sentence in the bag, which are derived from PCNN and self-attention respectively.", "Unlike previous works under multi-instance framework that frequently use a selective attention module to aggregate sentence-level representations into bag-level one, we propose a innovative selective gate mechanism to perform this aggregation. The selective gate can mitigate problems existing in distantly supervised relation extraction and achieve a satisfactory empirical effectiveness. 
Specifically, when handling the noisy instance problem, selective attention tries to produce a distribution over all sentences in a bag; but if there is only one sentence in the bag, even if the only sentence is wrongly labeled, the selective attention mechanism will be barely effective or even completely useless. Note that almost $80\%$ of bags from the popular relation extraction benchmark consist of only one sentence, and many of them suffer from the wrong label problem. In contrast, our proposed gate mechanism is competent to tackle such cases by directly and dynamically assigning low gating values to the wrongly labeled instances, thus preventing noisy representations from being propagated.", "Particularly, a two-layer feed forward network is applied to each $\mathbf {u}_j$ to produce a gating value for each sentence, which is formally denoted as", "where, $\mathbf {W}^{(g1)} \in \mathbb {R}^{3d_c \times d_h}$, $\mathbf {W}^{(g2)} \in \mathbb {R}^{d_h \times d_h}$, $\sigma (\cdot )$ denotes an activation function and $g_j \in (0, 1)$. Then, given the calculated gating value, a mean aggregation is performed over the sentence embeddings $[\mathbf {s}_j]_{j=1}^m$ in the bag, which produces the bag-level vector representation for further relation classification. This procedure is formalized as", "Finally, $\mathbf {c}$ is fed into a multi-layer perceptron followed by a $|C|$-way $\operatornamewithlimits{softmax}$ function (i.e., an $\operatornamewithlimits{MLP}$ classifier) to judge the relation between head and tail entities, where $|C|$ is the number of distinct relation categories. This can be regarded as a classification task BIBREF20. Formally," ], [ "We minimize the negative log-likelihood loss plus an $L_2$ regularization penalty to train the model, which is written as", "where $\mathbf {p}^k$ is the predicted distribution from Eq.(DISPLAY_FORM16) for the $k$-th example in dataset $\mathcal {D}$ and $y^k$ is its corresponding distant supervision label." ], [ "To evaluate our proposed framework, and to compare the framework with baselines and competitive approaches, we conduct experiments on a popular benchmark dataset for distantly supervised relation extraction. We also conduct an ablation study to separately verify the effectiveness of each proposed component, and finally, a case study and error analysis are provided for an insight into our model." ], [ "In order to accurately compare the performance of our model, we adopt the New York Times (NYT) dataset BIBREF2, a widely-used standard benchmark for distantly supervised relation extraction in most previous works BIBREF5, BIBREF3, BIBREF6, BIBREF4, which contains 53 distinct relations including a null class NA relation. This dataset is generated by automatically aligning Freebase with the New York Times corpus. The NA class indicates that the relation of an entity pair is unavailable. There are 570K and 172K sentences in the training and test sets, respectively." ], [ "Following previous works BIBREF3, BIBREF5, BIBREF6, BIBREF4, we use precision-recall (PR) curves, area under curve (AUC) and top-N precision (P@N) as metrics in our experiments on the held-out test set from the NYT dataset. To directly show the performance on one-sentence bags, we also calculate the accuracy of classification (Acc.) on non-NA sentences."
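As a recap of the aggregation just described, the following sketch gates each sentence by a two-layer feed-forward network applied to its self-attention vector $u_j$, averages the gated PCNN vectors $s_j$ into a bag vector, and classifies the relation. The dimensions follow values stated in this paper (e.g., 53 NYT relation classes, $d_c = 230$ so $3d_c = 690$, $d_h = 150$), while the sigmoid activations and the single linear classifier are simplifying assumptions.

```python
import torch
import torch.nn as nn

class SelectiveGate(nn.Module):
    """Sketch: per-sentence gates from the self-attention vectors u_j,
    mean aggregation of gated PCNN vectors s_j, then relation classification."""
    def __init__(self, d_s=690, d_u=150, d_hidden=150, num_classes=53):
        super().__init__()
        self.gate_mlp = nn.Sequential(
            nn.Linear(d_u, d_hidden), nn.Sigmoid(),
            nn.Linear(d_hidden, d_s), nn.Sigmoid())      # gating values in (0, 1)
        self.classifier = nn.Linear(d_s, num_classes)

    def forward(self, s, u):           # s: (m, d_s), u: (m, d_u) for one bag of m sentences
        g = self.gate_mlp(u)           # (m, d_s) gating values
        c = (g * s).mean(dim=0)        # bag-level representation
        return self.classifier(c)      # logits over relation classes
        # The model is trained with negative log-likelihood plus an L2 penalty
        # (see the Model Learning paragraph above).
```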
], [ "For a fair and rational comparison with baselines and competitive approaches, we set most of the hyper-parameters by following prior works BIBREF10, BIBREF6, and also use 50D word embedding and 5D position embedding released by BIBREF5, BIBREF6 for initialization, where the dimension of $d_h$ equals to 150. The filters number of CNN $d_c$ equals to 230 and the kernel size $m$ in CNN equals to 3. In output layer, we employ dropout BIBREF22 for regularization, where the drop probability is set to $0.5$. To minimize the loss function defined in Eq.DISPLAY_FORM18, we use stochastic gradient descent with initial learning rate of $0.1$, and decay the learning rate to one tenth every 100K steps." ], [ "We compare our proposed approach with extensive previous ones, including feature-engineering, competitive and state-of-the-art approaches, which are briefly summarized in the following.", "Mintz BIBREF0 is the original distantly supervised approach to solve relation extraction problems with distantly supervised data.", "MultiR BIBREF23 is a graphical model within a multi-instance learning framework that is able to handle problems with overlapping relations.", "MIML BIBREF24 is a multi-instance, multi-label learning framework that jointly models both multiple instances and multiple relations.", "PCNN+ATT BIBREF5 employs a selective attention over multiple instances to alleviate the wrongly labeled problem, which is the principal baseline of our work.", "PCNN+ATT+SL BIBREF21 introduces an entity-pair level denoising method, namely employing a soft label to alleviate the impact of wrongly labeled problem.", "PCNN+HATT BIBREF6 employs hierarchical attention to exploit correlations among relations.", "PCNN+BAG-ATT BIBREF7 uses an intra-bag to deal with the noise at sentence-level and an inter-bag attention to deal with noise at the bag-level." ], [ "We first compare our proposed SeG with aforementioned approaches in Table TABREF19 for top-N precision (i.e., P@N). As shown in the top panel of the table, our proposed model SeG can consistently and significantly outperform baseline (i.e., PCNN+ATT) and all recently-promoted works in terms of all P@N metric. Compared to PCNN with selective attention (i.e., PCNN+ATT), our proposed SeG can significantly improve the performance by 23.6% in terms of P@N mean for all sentences; even if a soft label technique is applied (i.e., PCNN+ATT+SL) to alleviate wrongly labeled problem, our performance improvement is also very significant, i.e., 7.8%.", "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data. These verify the effectiveness of our approach over previous works when solving the wrongly labeled problem that frequently appears in distantly supervised relation extraction.", "Moreover, for proposed approach and comparative ones, we also show AUC curves and available numerical values in Figure FIGREF31 and Table TABREF32 respectively. The empirical results for AUC are coherent with those of P@N, which shows that, our proposed approach can significantly improve previous ones and reach a new state-of-the-art performance by handling wrongly labeled problem using context-aware selective gate mechanism. Specifically, our approach substantially improves both PCNN+HATT and PCNN+BAG-ATT by 21.4% in aspect of AUC for precision-recall." 
], [ "To further verify the effectiveness of each module in the proposed framework, we conduct an extensive ablation study in this section. In particular, SeG w/o Ent denotes removing entity-aware embedding, SeG w/o Gate denotes removing selective gate and concatenating two representations from PCNN and self-attention, SeG w/o Gate w/o Self-Attn denotes removing self-attention enhanced selective gate. In addition, we also replace the some parts of the proposed framework with baseline module for an in-depth comparison. SeG+ATT denotes replacing mean-pooing with selective attention, and SeG w/ stack denotes using stacked PCNN and self-attention rather than in parallel.", "The P@N results are listed in the bottom panel of Table TABREF19, and corresponding AUC results are shown in Table TABREF36 and Figure FIGREF37. According to the results, we find that our proposed modules perform substantially better than those of the baseline in terms of both metrics. Particularly, by removing entity-aware embedding (i.e, SeG w/o Ent) and self-attention enhanced selective gate (i.e., SeG w/o Gate w/o Self-Attn), it shows 11.5% and 1.8% decreases respectively in terms of P@N mean for all sentences. Note that, when dropping both modules above (i.e., SeG w/o ALL), the framework will be degenerated as selective attention baseline BIBREF5, which again demonstrates that our proposed framework is superior than the baseline by 15% in terms of P@N mean for all sentences.", "To verify the performance of selective gate modul when handling wrongly labeled problem, we simply replace the selective gate module introduced in Eq.(DISPLAY_FORM15) with selective attention module, namely, SeG+Attn w/o Gate, and instead of mean pooling in Eq.(DISPLAY_FORM15), we couple selective gate with selective attention to fulfill aggregation instead mean-pooling, namely, SeG+Attn. Across the board, the proposed SeG still deliver the best results in terms of both metrics even if extra selective attention module is applied.", "Lastly, to explore the influence of the way to combine PCNN with self-attention mechanism, we stack them by following the previous works BIBREF9, i.e., SeG w/ Stack. And we observe a notable performance drop after stacking PCNN and self-attention in Table TABREF36. This verifies that our model combining self-attention mechanism and PCNN in parallel can achieve a satisfactory result.", "To further empirically evaluate the performance of our method in solving one-sentence bag problem, we extract only the one-sentence bags from NYT's training and test sets, which occupy 80% of the original dataset. The evaluation and comparison results in Table TABREF33 show that compared to PCNN+ATT, the AUC improvement (+0.13) between our model and PCNN+ATT on one-sentence bags is higher than the improvement of full NYT dataset, which verifies SeG's effectiveness on one-sentence bags. In addition, PCNN+ATT shows a light decrease compared with PCNN, which can also support the claim that selective attention is vulnerable to one-sentence bags." ], [ "In this section, we conduct a case study to qualitatively analyze the effects of entity-aware embedding and self-attention enhanced selective gate. The case study of four examples is shown in Table TABREF38.", "First, comparing Bag 1 and 2, we find that, without the support of the self-attention enhanced selective gate, the model will misclassify both bags into NA, leading to a degraded performance. 
Further, as shown in Bag 2, even if entity-aware embedding module is absent, proposed framework merely depending on selective gate can also make a correct prediction. This finding warrants more investigation into the power of the self-attention enhanced selective gate; hence, the two error cases are shown in Bags 3 and 4.", "Then, to further consider the necessity of entity-aware embedding, we show two error cases for SeG w/o Ent whose labels are /location/location/contains and NA respectively in Bag 3 and 4. One possible reason for the misclassification of both cases is that, due to a lack of entity-aware embedding, the remaining position features cannot provide strong information to distinguish complex context with similar relation position pattern w.r.t the two entities." ], [ "To investigate the possible reasons for misclassification, we randomly sample 50 error examples from the test set and manually analyze them. After human evaluation, we find the errors can be roughly categorized into following two classes." ], [ "We observe that, our approach is likely to mistakenly classify relation of almost all the sentences containing two place entities to /location/location/contains. However, the correct relation is /location/country/capital or /location/country/administrative_divisions. This suggests that we can incorporate external knowledge to alleviate this problem possibly caused by a lack of background." ], [ "Each sentence in a bag can be regarded as independent individual and do not have any relationship with other sentences in the bag, which possibly leads to information loss among the multiple sentences in the bag when considering classification over bag level." ], [ "In this paper, we propose a brand-new framework for distantly supervised relation extraction, i.e., selective gate (SeG) framework, as a new alternative to previous ones. It incorporates an entity-aware embedding module and a self-attention enhanced selective gate mechanism to integrate task-specific entity information into word embedding and then generates a complementary context-enriched representation for PCNN. The proposed framework has certain merits over previously prevalent selective attention when handling wrongly labeled data, especially for a usual case that there are only one sentence in the most of bags. The experiments conduct on popular NYT dataset show that our model SeG can consistently deliver a new benchmark in state-of-the-art performance in terms of all P@N and precision-recall AUC. And further ablation study and case study also demonstrate the significance of the proposed modules to handle wrongly labeled data and thus set a new state-of-the-art performance for the benchmark dataset. In the future, we plan to incorporate an external knowledge base into our framework, which may further boost the prediction quality by overcoming the problems with a lack of background information as discussed in our error analysis." ], [ "This research was funded by the Australian Government through the Australian Research Council (ARC) under grants LP180100654 partnership with KS computer. We also acknowledge the support of NVIDIA Corporation and Google Cloud with the donation of GPUs and computation credits respectively." ], [ "Recently, many works BIBREF21, BIBREF4 employed selective attention BIBREF5 to alleviate wrongly labeled problem existing in distantly supervised RE. For example, BIBREF6 han2018hierarchical propose a hierarchical relation structure attention based on the insight of selective attention. 
And, BIBREF7 ye2019distant extend the sentence-level selective attention to the bag level, where the bags share the same relation label. Unlike these works, which suffer from the one-sentence bag problem due to the defect of selective attention, our proposed approach employs a gate mechanism as an aggregator to handle this problem.", "Several works have recently been proposed to couple CNN with self-attention BIBREF14, BIBREF27, BIBREF26 for either natural language processing or computer vision. For example, BIBREF9 yu2018qanet enrich CNN's representation with self-attention for machine reading comprehension. Unlike these works, which stack the two modules many times, we arrange them in parallel to ensure the model's scalability. In addition, some previous approaches explore the importance of entity embeddings for relation extraction BIBREF12, BIBREF25, which usually need the support of an external knowledge graph and learn the entity embeddings over the graph. In contrast, our approach considers the entity embeddings within a sentence and incorporates them with relative position features without any external support." ] ], "section_name": [ "Introduction", "Proposed Approach", "Proposed Approach ::: Entity-Aware Embedding", "Proposed Approach ::: Self-Attention Enhanced Neural Network", "Proposed Approach ::: Self-Attention Enhanced Neural Network ::: Piecewise Convolutional Neural Network", "Proposed Approach ::: Self-Attention Enhanced Neural Network ::: Self-Attention Mechanism", "Proposed Approach ::: Selective Gate", "Proposed Approach ::: Model Learning", "Experiments", "Experiments ::: Dataset", "Experiments ::: Metrics", "Experiments ::: Training Setup", "Experiments ::: Baselines and Competitive Approaches", "Experiments ::: Relation Extraction Performance", "Experiments ::: Ablation Study", "Experiments ::: Case Study", "Experiments ::: Error Analysis", "Experiments ::: Error Analysis ::: Lack of background", "Experiments ::: Error Analysis ::: Isolated Sentence in Bag", "Conclusion", "Acknowledgements", "Related Work" ] }
{ "answers": [ { "annotation_id": [ "470815c0d2f3150344d89f5f955d20dbdf15f7c9", "4b5424873f27472533edaeee7ee4501ef9748201", "a7fda41def392fed8e8ffbd722000374809f829b" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": true }, { "evidence": [ "Unlike previous works under multi-instance framework that frequently use a selective attention module to aggregate sentence-level representations into bag-level one, we propose a innovative selective gate mechanism to perform this aggregation. The selective gate can mitigate problems existing in distantly supervised relation extraction and achieve a satisfactory empirical effectiveness. Specifically, when handling the noisy instance problem, selective attention tries to produce a distribution over all sentence in a bag; but if there is only one sentence in the bag, even the only sentence is wrongly labeled, the selective attention mechanism will be low-effective or even completely useless. Note that almost $80\\%$ of bags from popular relation extraction benchmark consist of only one sentence, and many of them suffer from the wrong label problem. In contrast, our proposed gate mechanism is competent to tackle such case by directly and dynamically aligning low gating value to the wrongly labeled instances and thus preventing noise representation being propagated." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The selective gate can mitigate problems existing in distantly supervised relation extraction and achieve a satisfactory empirical effectiveness. Specifically, when handling the noisy instance problem, selective attention tries to produce a distribution over all sentence in a bag; but if there is only one sentence in the bag, even the only sentence is wrongly labeled, the selective attention mechanism will be low-effective or even completely useless." ], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "0f02f32fa1b2aa2c7f2710ec54ae5d5e53584015", "14dabf76cac7b253684b670335ec231437389efb", "e5dad875c75175f089f858364a3935fce848ce0f" ], "answer": [ { "evidence": [ "In this section, we conduct a case study to qualitatively analyze the effects of entity-aware embedding and self-attention enhanced selective gate. The case study of four examples is shown in Table TABREF38.", "First, comparing Bag 1 and 2, we find that, without the support of the self-attention enhanced selective gate, the model will misclassify both bags into NA, leading to a degraded performance. Further, as shown in Bag 2, even if entity-aware embedding module is absent, proposed framework merely depending on selective gate can also make a correct prediction. This finding warrants more investigation into the power of the self-attention enhanced selective gate; hence, the two error cases are shown in Bags 3 and 4.", "Then, to further consider the necessity of entity-aware embedding, we show two error cases for SeG w/o Ent whose labels are /location/location/contains and NA respectively in Bag 3 and 4. 
One possible reason for the misclassification of both cases is that, due to a lack of entity-aware embedding, the remaining position features cannot provide strong information to distinguish complex context with similar relation position pattern w.r.t the two entities." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In this section, we conduct a case study to qualitatively analyze the effects of entity-aware embedding and self-attention enhanced selective gate. The case study of four examples is shown in Table TABREF38.\n\nFirst, comparing Bag 1 and 2, we find that, without the support of the self-attention enhanced selective gate, the model will misclassify both bags into NA, leading to a degraded performance. Further, as shown in Bag 2, even if entity-aware embedding module is absent, proposed framework merely depending on selective gate can also make a correct prediction. This finding warrants more investigation into the power of the self-attention enhanced selective gate; hence, the two error cases are shown in Bags 3 and 4.\n\nThen, to further consider the necessity of entity-aware embedding, we show two error cases for SeG w/o Ent whose labels are /location/location/contains and NA respectively in Bag 3 and 4. One possible reason for the misclassification of both cases is that, due to a lack of entity-aware embedding, the remaining position features cannot provide strong information to distinguish complex context with similar relation position pattern w.r.t the two entities." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "FLOAT SELECTED: Table 6: A case study where each bag contains one sentence. SeG w/o GSA is an abbreviation of SeG w/o Gate w/o Self-Attn." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 6: A case study where each bag contains one sentence. SeG w/o GSA is an abbreviation of SeG w/o Gate w/o Self-Attn." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "However, such selective attention framework is vulnerable to situations where a bag is merely comprised of one single sentence labeled; and what is worse, the only one sentence possibly expresses inconsistent relation information with the bag-level label. This scenario is not uncommon. For a popular distantly supervised relation extraction benchmark, e.g., NYT dataset BIBREF2, up to $80\\%$ of its training examples (i.e., bags) are one-sentence bags. From our data inspection, we randomly sample 100 one-sentence bags and find $35\\%$ of them is incorrectly labeled. Two examples of one-sentence bag are shown in Table TABREF1. These results indicate that, in training phrase the selective attention module is enforced to output a single-valued scalar for $80\\%$ examples, leading to an ill-trained attention module and thus hurting the performance.", "FLOAT SELECTED: Table 1: Two examples of one-sentence bag, which are correctly and wrongly labeled by distant supervision respectively." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Two examples of one-sentence bag are shown in Table TABREF1. These results indicate that, in training phrase the selective attention module is enforced to output a single-valued scalar for $80\\%$ examples, leading to an ill-trained attention module and thus hurting the performance.", "FLOAT SELECTED: Table 1: Two examples of one-sentence bag, which are correctly and wrongly labeled by distant supervision respectively." 
], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "0665bef1c59e25ad8ed8758f684ace7aa7bf4692", "615513bdb57f6c7debe2a731d8af734e49ee6e22", "e8a6a320775311f3fe03704c62eeacdfebb17a86" ], "answer": [ { "evidence": [ "We first compare our proposed SeG with aforementioned approaches in Table TABREF19 for top-N precision (i.e., P@N). As shown in the top panel of the table, our proposed model SeG can consistently and significantly outperform baseline (i.e., PCNN+ATT) and all recently-promoted works in terms of all P@N metric. Compared to PCNN with selective attention (i.e., PCNN+ATT), our proposed SeG can significantly improve the performance by 23.6% in terms of P@N mean for all sentences; even if a soft label technique is applied (i.e., PCNN+ATT+SL) to alleviate wrongly labeled problem, our performance improvement is also very significant, i.e., 7.8%.", "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data. These verify the effectiveness of our approach over previous works when solving the wrongly labeled problem that frequently appears in distantly supervised relation extraction." ], "extractive_spans": [], "free_form_answer": "Outperforms PCNN+HATT by 10.3% and PCNN+BAG-ATT by 5.3%", "highlighted_evidence": [ "We first compare our proposed SeG with aforementioned approaches in Table TABREF19 for top-N precision (i.e., P@N). ", "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data. These verify the effectiveness of our approach over previous works when solving the wrongly labeled problem that frequently appears in distantly supervised relation extraction." ], "extractive_spans": [], "free_form_answer": "5.3 percent points", "highlighted_evidence": [ "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We first compare our proposed SeG with aforementioned approaches in Table TABREF19 for top-N precision (i.e., P@N). As shown in the top panel of the table, our proposed model SeG can consistently and significantly outperform baseline (i.e., PCNN+ATT) and all recently-promoted works in terms of all P@N metric. 
Compared to PCNN with selective attention (i.e., PCNN+ATT), our proposed SeG can significantly improve the performance by 23.6% in terms of P@N mean for all sentences; even if a soft label technique is applied (i.e., PCNN+ATT+SL) to alleviate wrongly labeled problem, our performance improvement is also very significant, i.e., 7.8%.", "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data. These verify the effectiveness of our approach over previous works when solving the wrongly labeled problem that frequently appears in distantly supervised relation extraction." ], "extractive_spans": [ "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3%" ], "free_form_answer": "", "highlighted_evidence": [ "We first compare our proposed SeG with aforementioned approaches in Table TABREF19 for top-N precision (i.e., P@N).", "Compared to previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model can also outperform them by a large margin, i.e., 10.3% and 5.3% , even if they propose sophisticated techniques to handle the noisy training data." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Is their gating mechanism specially designed to handle one sentence bags?", "Do they show examples where only one sentence appears in a bag and their method works, as opposed to using selective attention?", "By how much do they outperform previous state-of-the-art in terms of top-n precision?" ], "question_id": [ "b569827ecd04ae8757dc3c9523ab97e3f47a6e00", "0d42bd759c84cbf3a293ab58283a3d0d5e27d290", "9f1e60ee86a5c46abe75b67ef369bf92a5090568" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Two examples of one-sentence bag, which are correctly and wrongly labeled by distant supervision respectively.", "Figure 1: The framework of our approach (i.e. SeG) that consisting of three components: 1) entity-aware embedding 2) self-attention enhanced neural network and 3) a selective gate. Note, tokens eh and et with gray background mean the head entity and tail entity of this sentence.", "Table 2: Precision values for the top-100, -200 and -300 relation instances that are randomly selected in terms of one/two/all sentence(s).", "Figure 2: Performance comparison for proposed model and previous baselines in terms of precision-recall curves", "Table 3: Model comparison regarding the AUC value. The comparative results are reported by Han et al. (2018) and Ye and Ling (2019) respectively.", "Figure 3: Performance comparison for ablation study under precision-recall curves", "Table 4: Model that is trained and tested on extracted one sentence bags from NYT dataset comparison regarding the AUC value and Acc., where Acc. is accuracy on non-NA sentences.", "Table 5: Ablation study regarding precision-recall AUC value.", "Table 6: A case study where each bag contains one sentence. SeG w/o GSA is an abbreviation of SeG w/o Gate w/o Self-Attn." ], "file": [ "1-Table1-1.png", "3-Figure1-1.png", "5-Table2-1.png", "5-Figure2-1.png", "6-Table3-1.png", "6-Figure3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png" ] }
[ "By how much do they outperform previous state-of-the-art in terms of top-n precision?" ]
[ [ "1911.11899-Experiments ::: Relation Extraction Performance-0", "1911.11899-Experiments ::: Relation Extraction Performance-1" ] ]
[ "5.3 percent points" ]
44
1603.09405
Enhancing Sentence Relation Modeling with Auxiliary Character-level Embedding
Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs. However, the quality of the matching feature representation may not be satisfactory due to complex semantic relations such as entailment or contradiction. To address this challenge, we propose a new deep neural network architecture that jointly leverages pre-trained word embeddings and auxiliary character embeddings to learn sentence meanings. The two kinds of word sequence representations are fed as inputs into a multi-layer bidirectional LSTM to learn an enhanced sentence representation. After that, we construct matching features followed by another temporal CNN to learn high-level hidden matching feature representations. Experimental results demonstrate that our approach consistently outperforms the existing methods on standard evaluation datasets.
{ "paragraphs": [ [ "Traditional approaches BIBREF0 , BIBREF1 , BIBREF2 for sentence relation modeling tasks such as paraphrase identification, question answering, recognized textual entailment and semantic textual similarity prediction usually build the supervised model using a variety of hand crafted features. Hundreds of features generated at different linguistic levels are exploited to boost classification. With the success of deep learning, there has been much interest in applying deep neural network based techniques to further improve the prediction performances BIBREF3 , BIBREF4 , BIBREF5 .", "A key component of deep neural network is word embedding which serve as an lookup table to get word representations. From low level NLP tasks such as language modeling, POS tagging, name entity recognition, and semantic role labeling BIBREF6 , BIBREF7 , to high level tasks such as machine translation, information retrieval and semantic analysis BIBREF8 , BIBREF9 , BIBREF10 . Deep word representation learning has demonstrated its importance for these tasks. All the tasks get performance improvement via further learning either word level representations or sentence level representations. On the other hand, some researchers have found character-level convolutional networks BIBREF11 , BIBREF12 are useful in extracting information from raw signals for the task such as language modeling or text classification.", "In this work, we focus on deep neural network based sentence relation modeling tasks. We explore treating each sentence as a kind of raw signal at character level, and applying temporal (one-dimensional) Convolution Neural Network (CNN) BIBREF6 , Highway Multilayer Perceptron (HMLP) and multi-layer bidirectional LSTM (Long Short Term Memory) BIBREF13 to learn sentence representations. We propose a new deep neural network architecture that jointly leverage pre-trained word embedding and character embedding to represent the meaning sentences. More specifically, our new approach first generates two kinds of word sequence representations. One kind of sequence representations are the composition of pre-trained word vectors. The other kind of sequence representation comprise word vectors that generating from character-level convolutional network. We then inject the two sequence representations into bidirectional LSTM, which means forward directional LSTM accept pre-trained word embedding output and backward directional LSTM accept auxiliary character CNN embedding output. The final sentence representation is the concatenation of the two direction. After that, we construct matching features followed by another temporal CNN to learn high-level hidden matching feature representations. Figure FIGREF1 shows the neural network architecture for general sentence relation modeling.", "Our model shows that when trained on small size datasets, combining pre-trained word embeddings with auxiliary character-level embedding can improve the sentence representation. Word embeddings can help capturing general word semantic meanings, whereas char-level embedding can help modeling task specific word meanings. Note that auxiliary character-level embedding based sentence representation do not require the knowledge of words or even syntactic structure of a language. The enhanced sentence representation generated by multi-layer bidirectional LSTM will encapsulate the character and word levels informations. Furthermore, it may enhance matching features that generated by computing similarity measures on sentence pairs. 
Quantitative evaluations on a standard dataset demonstrate the effectiveness and advantages of our method." ], [ "Besides pre-trained word vectors, we are also interested in generating word vectors from characters. To achieve that, we leverage deep convolutional neural networks (ConvNets). The model accepts a sequence of encoded characters as input. The encoding is done by prescribing an alphabet of size INLINEFORM0 for the input language, and then quantizing each character using one-hot encoding. Then, the sequence of characters is transformed to a sequence of such INLINEFORM1 sized vectors with fixed length INLINEFORM2 . Any character exceeding length INLINEFORM3 is ignored, and any characters that are not in the alphabet are quantized as all-zero vectors. The alphabet used in our model consists of 36 characters, including 26 English letters and 10 digits. Below, we will introduce the character-level temporal convolutional neural network." ], [ "Temporal Convolution applies one-dimensional convolution over an input sequence. The one-dimensional convolution is an operation between a vector of weights INLINEFORM0 and a vector of inputs viewed as a sequence INLINEFORM1 . The vector INLINEFORM2 is the filter of the convolution. Concretely, we think of INLINEFORM3 as the input token and INLINEFORM4 as a single feature value associated with the INLINEFORM5 -th character in this token. The idea behind the one-dimensional convolution is to take the dot product of the vector INLINEFORM6 with each INLINEFORM7 -gram in the token INLINEFORM8 to obtain another sequence INLINEFORM9 : DISPLAYFORM0 ", "Usually, INLINEFORM0 is not a single value, but a INLINEFORM1 -dimensional vector so that INLINEFORM2 . There exist two types of 1d convolution operations. One is called Time Delay Neural Networks (TDNNs). The other one was introduced by BIBREF6 . In TDNN, the weights INLINEFORM3 form a matrix. Each row of INLINEFORM4 is convolved with the corresponding row of INLINEFORM5 . In the BIBREF6 architecture, a sequence of length INLINEFORM6 is represented as: DISPLAYFORM0 ", "where INLINEFORM0 is the concatenation operation. In general, let INLINEFORM1 refer to the concatenation of characters INLINEFORM2 . A convolution operation involves a filter INLINEFORM3 , which is applied to a window of INLINEFORM4 characters to produce a new feature. For example, a feature INLINEFORM5 is generated from a window of characters INLINEFORM6 by: DISPLAYFORM0 ", "Here INLINEFORM0 is a bias term and INLINEFORM1 is a non-linear function such as the thresholding function INLINEFORM2 . This filter is applied to each possible window of characters in the sequence INLINEFORM3 to produce a feature map: DISPLAYFORM0 ", "with INLINEFORM0 ." ], [ "On top of the convolutional neural network layers, we build another Highway Multilayer Perceptron (HMLP) layer to further enhance the character-level word embeddings. A conventional MLP applies an affine transformation followed by a nonlinearity to obtain a new set of features: DISPLAYFORM0 ", "One layer of a highway network does the following: DISPLAYFORM0 ", "where INLINEFORM0 is a nonlinearity, INLINEFORM1 is called the transform gate, and INLINEFORM2 is called the carry gate. Similar to the memory cells in LSTM networks, highway layers allow adaptively carrying some dimensions of the input directly to the output for training deep networks." ], [ "Now we have two kinds of word sequence representations. One kind is the composition of pre-trained word vectors. 
The other kind comprises word vectors generated from the character-level convolutional network. We can inject the two sequence representations into a bidirectional LSTM to learn the sentence representation. More specifically, the forward LSTM accepts the pre-trained word embedding output and the backward LSTM accepts the character CNN embedding output. The final sentence representation is the concatenation of the two directions." ], [ "Recurrent neural networks (RNNs) are capable of modeling sequences of varying lengths via the recursive application of a transition function on a hidden state. For example, at each time step INLINEFORM0 , an RNN takes the input vector INLINEFORM1 and the hidden state vector INLINEFORM2 , then applies an affine transformation followed by an element-wise nonlinearity such as the hyperbolic tangent function to produce the next hidden state vector INLINEFORM3 : DISPLAYFORM0 ", "A major issue of RNNs using these transition functions is that it is difficult to learn long-range dependencies during training because the components of the gradient vector can grow or decay exponentially BIBREF14 .", "The LSTM architecture BIBREF15 addresses the problem of learning long-range dependencies by introducing a memory cell that is able to preserve state over long periods of time. Concretely, at each time step INLINEFORM0 , the LSTM unit can be defined as a collection of vectors in INLINEFORM1 : an input gate INLINEFORM2 , a forget gate INLINEFORM3 , an output gate INLINEFORM4 , a memory cell INLINEFORM5 and a hidden state INLINEFORM6 . We refer to INLINEFORM7 as the memory dimensionality of the LSTM. One step of an LSTM takes as input INLINEFORM8 , INLINEFORM9 , INLINEFORM10 and produces INLINEFORM11 , INLINEFORM12 via the following transition equations: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are the element-wise sigmoid and hyperbolic tangent functions, and INLINEFORM2 is the element-wise multiplication operator." ], [ "One shortcoming of conventional RNNs is that they are only able to make use of previous context. In text entailment, the decision is made after the whole sentence pair is digested. Therefore, exploring future context would be better for sequence meaning representation. The bidirectional RNN architecture BIBREF13 provides a solution that makes predictions based on future words. At each time step INLINEFORM0 , the model maintains two hidden states, one for the left-to-right propagation INLINEFORM1 and the other for the right-to-left propagation INLINEFORM2 . The hidden state of the bidirectional LSTM is the concatenation of the forward and backward hidden states. The following equations illustrate the main ideas: DISPLAYFORM0 ", "Deep RNNs can be created by stacking multiple RNN hidden layers on top of each other, with the output sequence of one layer forming the input sequence for the next. Assuming the same hidden layer function is used for all INLINEFORM0 layers in the stack, the hidden vectors INLINEFORM1 are iteratively computed from INLINEFORM2 to INLINEFORM3 and INLINEFORM4 to INLINEFORM5 : DISPLAYFORM0 ", "Multilayer bidirectional RNNs can be implemented by replacing each hidden vector INLINEFORM0 with the forward and backward vectors INLINEFORM1 and INLINEFORM2 , and ensuring that every hidden layer receives input from both the forward and backward layers at the level below. 
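Putting the pieces together, the asymmetric two-stream feeding described at the beginning of this section can be sketched as follows (a PyTorch-style illustration; the dimensions follow the experimental setup reported later: 300-dimensional GloVe word vectors, 100-dimensional character-CNN vectors, and an LSTM memory dimension of 100); the multilayer extension with LSTM memory cells is discussed next.

```python
import torch
import torch.nn as nn

class TwoStreamBiLSTM(nn.Module):
    """Sketch: the forward LSTM reads pre-trained word vectors, the backward LSTM
    reads character-CNN word vectors; their hidden states are concatenated per token."""
    def __init__(self, d_word=300, d_char=100, d_mem=100):
        super().__init__()
        self.fwd = nn.LSTM(d_word, d_mem, batch_first=True)
        self.bwd = nn.LSTM(d_char, d_mem, batch_first=True)

    def forward(self, word_seq, char_seq):                   # (batch, seq_len, d_word/d_char)
        h_fwd, _ = self.fwd(word_seq)                         # left-to-right pass
        h_bwd, _ = self.bwd(torch.flip(char_seq, dims=[1]))   # right-to-left pass
        h_bwd = torch.flip(h_bwd, dims=[1])                   # re-align to the original order
        return torch.cat([h_fwd, h_bwd], dim=-1)              # (batch, seq_len, 2 * d_mem)
```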
Furthermore, we can apply LSTM memory cells to the hidden layers to construct a multilayer bidirectional LSTM.", "Finally, we can concatenate the sequence hidden matrix INLINEFORM0 and the reversed sequence hidden matrix INLINEFORM1 to form the sentence representation. We refer to INLINEFORM2 as the number of layers and INLINEFORM3 as the memory dimensionality of the LSTM. In the next section, we will use the two matrices to generate matching feature planes via linear algebra operations." ], [ "Inspired by BIBREF10, we apply element-wise merging to the first sentence matrix INLINEFORM0 and the second sentence matrix INLINEFORM1. Similar to the previous method, we can define two simple matching feature planes (FPs) with the equations below: DISPLAYFORM0 ", "where INLINEFORM0 is the element-wise multiplication. The INLINEFORM1 measure can be interpreted as an element-wise comparison of the signs of the input representations. The INLINEFORM2 measure can be interpreted as the distance between the input representations.", "In addition to the above measures, we also found that the following feature plane can improve the performance: DISPLAYFORM0 ", "In INLINEFORM0, INLINEFORM1 denotes one-dimensional convolution and Join denotes the concatenation of the two representations. The intuition behind INLINEFORM2 is to let the one-dimensional convolution preserve the common information between sentence pairs." ], [ "Recall that the multi-layer bidirectional LSTM generates the sentence representation matrix INLINEFORM0 by concatenating the sentence hidden matrix INLINEFORM1 and the reversed sentence hidden matrix INLINEFORM2. Then we conduct element-wise merging to form the feature plane INLINEFORM3. Therefore, the final input into the temporal convolution layer is a 3D tensor INLINEFORM4, where INLINEFORM5 is the number of matching feature planes, INLINEFORM6 is the number of layers, and INLINEFORM7 is the memory dimensionality of the LSTM. Note that the 3D tensor input INLINEFORM8 to the convolutional layer can be viewed as an image where each feature plane is a channel. In the computer vision and image processing communities, spatial 2D convolution is often used over an input image composed of several input planes. In the experiment section, we will compare 2D convolution with 1D convolution. In order to facilitate temporal convolution, we need to reshape INLINEFORM9 into a 2D tensor." ], [ "The matching feature planes can be viewed as channels of an image in image processing. In our scenario, these feature planes hold the matching information. We use a temporal convolutional neural network to learn hidden matching features. The mechanism of the temporal CNN here is the same as that of the character-level temporal CNN; however, the kernels are completely different.", "It is important to design a good topology for the CNN to learn hidden features from heterogeneous feature planes. After several experiments, we found two topological graphs that can be deployed in the architecture. Figure FIGREF20 and Figure FIGREF20 show the two CNN graphs. In Topology i@, we stack a temporal convolution with kernel width 1 and tanh activation on top of each feature plane; after that, we deploy another temporal convolution and tanh activation with kernel width 2. In Topology ii@, in contrast, we first stack a temporal convolution and tanh activation with kernel width 2, and then deploy another temporal convolution and tanh activation with kernel width 1. Experimental results demonstrate that Topology i@ is slightly better than Topology ii@. This conclusion is reasonable.
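The matching feature planes described above can be sketched in a few lines of numpy; the names and the convolution filter below are illustrative assumptions rather than the exact formulation of the paper.

```python
# Sketch of the matching feature planes: element-wise product, absolute difference,
# and a 1D convolution over the joined sentence matrices.
import numpy as np

def feature_planes(H1, H2, W_conv, b_conv):
    """H1, H2: (T, d) sentence representation matrices from the bidirectional LSTM."""
    fp_mul = H1 * H2                                # element-wise product (sign agreement)
    fp_dist = np.abs(H1 - H2)                       # element-wise distance
    joined = np.concatenate([H1, H2], axis=0)       # (2T, d) join of the two matrices
    k = W_conv.shape[1] // joined.shape[1]          # kernel width encoded in W_conv
    fp_conv = []
    for t in range(joined.shape[0] - k + 1):
        window = joined[t:t + k].reshape(-1)
        fp_conv.append(np.tanh(W_conv @ window + b_conv))
    return fp_mul, fp_dist, np.stack(fp_conv)

rng = np.random.default_rng(0)
T, d = 6, 8
H1, H2 = rng.standard_normal((T, d)), rng.standard_normal((T, d))
W_conv = 0.1 * rng.standard_normal((d, 2 * d))      # kernel width 2, d output filters
mul, dist, conv = feature_planes(H1, H2, W_conv, np.zeros(d))
print(mul.shape, dist.shape, conv.shape)            # (6, 8) (6, 8) (11, 8)
```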
The feature planes are heterogeneous. After conducting convolution and tanh activation transformation, it makes sense to compare values across different feature planes." ], [ "We selected two related sentence relation modeling tasks: semantic relatedness task, which measures the degree of semantic relatedness of a sentence pair by assigning a relatedness score ranging from 1 (completely unrelated) to 5 ( very related); and textual entailment task, which determines whether the truth of a text entails the truth of another text called hypothesis. We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation. It consists of about 10,000 English sentence pairs annotated for relatedness in meaning and entailment." ], [ "We first initialize our word representations using publicly available 300-dimensional Glove word vectors . LSTM memory dimension is 100, the number of layers is 2. On the other hand, for CharCNN model we use threshold activation function on top of each temporal convolution and max pooling pairs . The CharCNN input frame size equals alphabet size, output frame size is 100. The maximum sentence length is 37. The kernel width of each temporal convolution is set to 3, the step is 1, the hidden units of HighwayMLP is 50. Training is done through stochastic gradient descent over shuffled mini-batches with the AdaGrad update rule BIBREF16 . The learning rate is set to 0.05. The mini-batch size is 25. The model parameters were regularized with a per-minibatch L2 regularization strength of INLINEFORM0 . Note that word embeddings were fixed during training." ], [ "The task of semantic relatedness prediction tries to measure the degree of semantic relatedness of a sentence pair by assigning a relatedness score ranging from 1 (completely unrelated) to 5 (very related). More formally, given a sentence pair, we wish to predict a real-valued similarity score in a range of INLINEFORM0 , where INLINEFORM1 is an integer. The sequence INLINEFORM2 is the ordinal scale of similarity, where higher scores indicate greater degrees of similarity. We can predict the similarity score INLINEFORM3 by predicting the probability that the learned hidden representation INLINEFORM4 belongs to the ordinal scale. This is done by projecting an input representation onto a set of hyperplanes, each of which corresponds to a class. The distance from the input to a hyperplane reflects the probability that the input will located in corresponding scale.", "Mathematically, the similarity score INLINEFORM0 can be written as: DISPLAYFORM0 ", "where INLINEFORM0 and the weight matrix INLINEFORM1 and INLINEFORM2 are parameters.", "In order to introduce the task objective function, we define a sparse target distribution INLINEFORM0 that satisfies INLINEFORM1 : DISPLAYFORM0 ", "where INLINEFORM0 . The objective function then can be defined as the regularized KL-divergence between INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0 ", "where INLINEFORM0 is the number of training pairs and the superscript INLINEFORM1 indicates the INLINEFORM2 -th sentence pair BIBREF10 .", "Referring to textual entailment recognition task, we want to maximize the likelihood of the correct class. This is equivalent to minimizing the negative log-likelihood (NLL). 
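As an aside, the sparse target distribution and KL-divergence objective for the relatedness task described above can be sketched as follows; the variable names and the toy logits are assumptions for illustration only.

```python
# Sketch of the relatedness objective: a sparse target distribution p over the
# ordinal scale 1..K built from the gold score y, and KL(p || q) to the prediction.
import numpy as np

def sparse_target(y, K=5):
    """Place the gold score y in [1, K] on its two neighbouring integer classes."""
    p = np.zeros(K)
    lo = int(np.floor(y))
    if lo >= K:
        p[K - 1] = 1.0
    else:
        p[lo - 1] = lo + 1 - y                      # weight on class floor(y)
        p[lo] = y - lo                              # weight on class floor(y) + 1
    return p

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl_loss(p, q, eps=1e-12):
    """KL divergence between the sparse target and the predicted class distribution."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

r = np.arange(1, 6)                                 # ordinal scale 1..5
logits = np.array([0.1, 0.3, 1.2, 2.0, 0.5])        # e.g. from the matching-feature network
q = softmax(logits)
p = sparse_target(3.7)
print(kl_loss(p, q), float(r @ q))                  # loss and predicted score y = r^T q
```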
More specifically, the label INLINEFORM0 given the inputs INLINEFORM1 is predicted by a softmax classifier that takes the hidden state INLINEFORM2 at the node as input: DISPLAYFORM0 ", "After that, the objective function is the negative log-likelihood of the true class labels INLINEFORM0 : DISPLAYFORM0 ", "where INLINEFORM0 is the number of training pairs and the superscript INLINEFORM1 indicates the INLINEFORM2 th sentence pair." ], [ "Table TABREF31 and TABREF32 show the Pearson correlation and accuracy comparison results of semantic relatedness and text entailment tasks. We can see that combining CharCNN with multi-layer bidirectional LSTM yields better performance compared with other traditional machine learning methods such as SVM and MaxEnt approach BIBREF17 , BIBREF0 that served with many handcraft features. Note that our method doesn't need extra handcrafted feature extraction procedure. Also our method doesn't leverage external linguistic resources such as wordnet or parsing which get best results in BIBREF10 . More importantly, both task prediction results close to the state-of-the-art results. It proved that our approaches successfully simultaneously predict heterogeneous tasks. Note that for semantic relatedness task, the latest research BIBREF10 proposed a tree-structure based LSTM, the Pearson correlation score of their system can reach 0.863. Compared with their approach, our method didn't use dependency parsing and can be used to predict tasks contains multiple languages.", "We hope to point out that we implemented the method in BIBREF10 , but the results are not as good as our method. Here we use the results reported in their paper. Based on our experiments, we believe the method in BIBREF10 is very sensitive to the initializations, thus it may not achieve the good performance in different settings. However, our method is pretty stable which may benefit from the joint tasks training." ], [ "In this experiment, we will compare tree LSTM with sequential LSTM. A limitation of the sequence LSTM architectures is that they only allow for strictly sequential information propagation. However, tree LSTMs allow richer network topologies where each LSTM unit is able to incorporate information from multiple child units. As in standard LSTM units, each Tree-LSTM unit (indexed by INLINEFORM0 ) contains input and output gates INLINEFORM1 and INLINEFORM2 , a memory cell INLINEFORM3 and hidden state INLINEFORM4 . The difference between the standard LSTM unit and tree LSTM units is that gating vectors and memory cell updates are dependent on the states of possibly many child units. Additionally, instead of a single forget gate, the tree LSTM unit contains one forget gate INLINEFORM5 for each child INLINEFORM6 . This allows the tree LSTM unit to selectively incorporate information from each child.", "We use dependency tree child-sum tree LSTM proposed by BIBREF10 as our baseline. Given a tree, let INLINEFORM0 denote the set of children of node INLINEFORM1 . The child-sum tree LSTM transition equations are the following: DISPLAYFORM0 ", "Table TABREF35 show the comparisons between tree and sequential based methods. We can see that, if we don't deploy CNN, simple Tree LSTM yields better result than traditional LSTM, but worse than Bidirectional LSTM. This is reasonable due to the fact that Bidirectional LSTM can enhance sentence representation by concatenating forward and backward representations. We found that adding CNN layer will decrease the accuracy in this scenario. 
Because when feeding into CNN, we have to reshape the feature planes otherwise convolution will not work. For example, we set convolution kernel width as 2, the input 2D tensor will have the shape lager than 2. To boost performance with CNN, we need more matching features. We found Multi-layer Bidirectional LSTM can incorporate more features and achieve best performance compared with single-layer Bidirectional LSTM." ], [ "Existing neural sentence models mainly fall into two groups: convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In regular 1D CNNs BIBREF6 , BIBREF8 , BIBREF19 , a fixed-size window slides over time (successive words in sequence) to extract local features of a sentence; then they pool these features to a vector, usually taking the maximum value in each dimension, for supervised learning. The convolutional unit, when combined with max-pooling, can act as the compositional operator with local selection mechanism as in the recursive autoencoder BIBREF3 . However, semantically related words that are not in one filter can't be captured effectively by this shallow architecture. BIBREF20 built deep convolutional models so that local features can mix at high-level layers. However, deep convolutional models may result in worse performance BIBREF19 .", "On the other hand, RNN can take advantage of the parsing or dependency tree of sentence structure information BIBREF3 , BIBREF21 . BIBREF4 used dependency-tree recursive neural network to map text descriptions to quiz answers. Each node in the tree is represented as a vector; information is propagated recursively along the tree by some elaborate semantic composition. One major drawback of RNNs is the long propagation path of information near leaf nodes. As gradient may vanish when propagated through a deep path, such long dependency buries illuminating information under a complicated neural architecture, leading to the difficulty of training. To address this issue, BIBREF10 proposed a Tree-Structured Long Short-Term Memory Networks. This motivates us to investigate multi-layer bidirectional LSTM that directly models sentence meanings without parsing for RTE task." ], [ "In this paper, we propose a new deep neural network architecture that jointly leverage pre-trained word embedding and character embedding to learn sentence meanings. Our new approach first generates two kinds of word sequence representations as inputs into bidirectional LSTM to learn sentence representation. After that, we construct matching features followed by another temporal CNN to learn high-level hidden matching feature representations. Our model shows that combining pre-trained word embeddings with auxiliary character-level embedding can improve the sentence representation. The enhanced sentence representation generated by multi-layer bidirectional LSTM will encapsulate the character and word levels informations. Furthermore, it may enhance matching features that generated by computing similarity measures on sentence pairs. Experimental results on benchmark datasets demonstrate that our new framework achieved the state-of-the-art performance compared with other deep neural networks based approaches." 
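For reference, a minimal numpy sketch of the child-sum Tree-LSTM unit used as the baseline in the Tree LSTM vs Sequence LSTM comparison above; parameter names and shapes are illustrative assumptions, not the reference implementation.

```python
# Child-sum Tree-LSTM node update: one forget gate per child, summed child hidden states.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_tree_lstm(x_j, child_h, child_c, P):
    """x_j: input at node j; child_h, child_c: lists of child hidden/cell states."""
    h_tilde = np.sum(child_h, axis=0) if child_h else np.zeros_like(P["b_i"])
    i = sigmoid(P["W_i"] @ x_j + P["U_i"] @ h_tilde + P["b_i"])
    o = sigmoid(P["W_o"] @ x_j + P["U_o"] @ h_tilde + P["b_o"])
    u = np.tanh(P["W_u"] @ x_j + P["U_u"] @ h_tilde + P["b_u"])
    c = i * u
    for h_k, c_k in zip(child_h, child_c):          # one forget gate per child
        f_k = sigmoid(P["W_f"] @ x_j + P["U_f"] @ h_k + P["b_f"])
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
d, m = 50, 30
P = {}
for g in "ifou":
    P[f"W_{g}"] = 0.1 * rng.standard_normal((m, d))
    P[f"U_{g}"] = 0.1 * rng.standard_normal((m, m))
    P[f"b_{g}"] = np.zeros(m)
x = rng.standard_normal(d)
kids_h = [rng.standard_normal(m) for _ in range(2)]
kids_c = [rng.standard_normal(m) for _ in range(2)]
h, c = child_sum_tree_lstm(x, kids_h, kids_c, P)
print(h.shape, c.shape)  # (30,) (30,)
```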
] ], "section_name": [ "Introduction", "Character-level Convolutional Neural Network", "Temporal Convolution", "Highway MLP", "Multi-Layer Bidirectional LSTM", "RNN vs LSTM", "Model Description", "Learning from Matching Features", "Reshape Feature Planes", "CNN Topology", "Experiments", "Hyperparameters and Training Details", "Objective Functions", "Results and Discussions", "Tree LSTM vs Sequence LSTM", "Related Work", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "06930ae03a908d48c6a3adceaee5e078a55ffa4c", "25f65a65692c2d8616d85578a363707cdb88a488", "e51f4f62cbf9dc742da40a880abb4aa7152eafd4" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Semantic Relatedness Task Comparison.", "FLOAT SELECTED: Table 2: Textual Entailment Task Comparison.", "Table TABREF31 and TABREF32 show the Pearson correlation and accuracy comparison results of semantic relatedness and text entailment tasks. We can see that combining CharCNN with multi-layer bidirectional LSTM yields better performance compared with other traditional machine learning methods such as SVM and MaxEnt approach BIBREF17 , BIBREF0 that served with many handcraft features. Note that our method doesn't need extra handcrafted feature extraction procedure. Also our method doesn't leverage external linguistic resources such as wordnet or parsing which get best results in BIBREF10 . More importantly, both task prediction results close to the state-of-the-art results. It proved that our approaches successfully simultaneously predict heterogeneous tasks. Note that for semantic relatedness task, the latest research BIBREF10 proposed a tree-structure based LSTM, the Pearson correlation score of their system can reach 0.863. Compared with their approach, our method didn't use dependency parsing and can be used to predict tasks contains multiple languages." ], "extractive_spans": [], "free_form_answer": "In Semantic Relatedness task their model outperforms existing methods by more than 0.023 Pearson Correlation. In Textual Entailment task their model scores 0.004 accuracy lesser than MaxEnt", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Semantic Relatedness Task Comparison.", "FLOAT SELECTED: Table 2: Textual Entailment Task Comparison.", "Table TABREF31 and TABREF32 show the Pearson correlation and accuracy comparison results of semantic relatedness and text entailment tasks." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Semantic Relatedness Task Comparison.", "FLOAT SELECTED: Table 2: Textual Entailment Task Comparison." ], "extractive_spans": [], "free_form_answer": "Their best implementation for semantic relatedness task comparison outperforms standard MaxEnt by 0,052 Pearson Correlation.\nTheir best implementation for Textual Entailment task comparison (84,2 accuracy) DOES NOT outperform standard SVM (84,6 accuracy).\n", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Semantic Relatedness Task Comparison.", "FLOAT SELECTED: Table 2: Textual Entailment Task Comparison." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF31 and TABREF32 show the Pearson correlation and accuracy comparison results of semantic relatedness and text entailment tasks. We can see that combining CharCNN with multi-layer bidirectional LSTM yields better performance compared with other traditional machine learning methods such as SVM and MaxEnt approach BIBREF17 , BIBREF0 that served with many handcraft features. Note that our method doesn't need extra handcrafted feature extraction procedure. Also our method doesn't leverage external linguistic resources such as wordnet or parsing which get best results in BIBREF10 . More importantly, both task prediction results close to the state-of-the-art results. It proved that our approaches successfully simultaneously predict heterogeneous tasks. 
Note that for semantic relatedness task, the latest research BIBREF10 proposed a tree-structure based LSTM, the Pearson correlation score of their system can reach 0.863. Compared with their approach, our method didn't use dependency parsing and can be used to predict tasks contains multiple languages.", "FLOAT SELECTED: Table 1: Semantic Relatedness Task Comparison.", "FLOAT SELECTED: Table 2: Textual Entailment Task Comparison." ], "extractive_spans": [], "free_form_answer": "Best proposed result had 0.851 and 0.842 compared to best previous result of 0.828 and 0.846 on person correlation and accuracy respectively.", "highlighted_evidence": [ "Table TABREF31 and TABREF32 show the Pearson correlation and accuracy comparison results of semantic relatedness and text entailment tasks.", "Note that for semantic relatedness task, the latest research BIBREF10 proposed a tree-structure based LSTM, the Pearson correlation score of their system can reach 0.863. Compared with their approach, our method didn't use dependency parsing and can be used to predict tasks contains multiple languages.", "FLOAT SELECTED: Table 1: Semantic Relatedness Task Comparison.", "FLOAT SELECTED: Table 2: Textual Entailment Task Comparison." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "d9135203a92ded14d260a7d551b7a447c8b7c910", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "18ee05e63e95cf30da12e45bca7f39214b45adb0", "5f2db275a581887ab9d4d380f0d44277f1cc2aa0", "cefa35262312398148a7a571ab29b05dc6a0a070" ], "answer": [ { "evidence": [ "We selected two related sentence relation modeling tasks: semantic relatedness task, which measures the degree of semantic relatedness of a sentence pair by assigning a relatedness score ranging from 1 (completely unrelated) to 5 ( very related); and textual entailment task, which determines whether the truth of a text entails the truth of another text called hypothesis. We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation. It consists of about 10,000 English sentence pairs annotated for relatedness in meaning and entailment." ], "extractive_spans": [ "SICK (Sentences Involving Compositional Knowledge) dataset " ], "free_form_answer": "", "highlighted_evidence": [ "We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We selected two related sentence relation modeling tasks: semantic relatedness task, which measures the degree of semantic relatedness of a sentence pair by assigning a relatedness score ranging from 1 (completely unrelated) to 5 ( very related); and textual entailment task, which determines whether the truth of a text entails the truth of another text called hypothesis. We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation. It consists of about 10,000 English sentence pairs annotated for relatedness in meaning and entailment." ], "extractive_spans": [ "SICK (Sentences Involving Compositional Knowledge) dataset" ], "free_form_answer": "", "highlighted_evidence": [ "We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "We selected two related sentence relation modeling tasks: semantic relatedness task, which measures the degree of semantic relatedness of a sentence pair by assigning a relatedness score ranging from 1 (completely unrelated) to 5 ( very related); and textual entailment task, which determines whether the truth of a text entails the truth of another text called hypothesis. We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation. It consists of about 10,000 English sentence pairs annotated for relatedness in meaning and entailment." ], "extractive_spans": [ "SICK (Sentences Involving Compositional Knowledge) dataset" ], "free_form_answer": "", "highlighted_evidence": [ "We use standard SICK (Sentences Involving Compositional Knowledge) dataset for evaluation. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "d9135203a92ded14d260a7d551b7a447c8b7c910" ] }, { "annotation_id": [ "57b5daf1eabdd3b3e60e0ddc82b91040818c5105", "8601d0c400789d64b2d5016546e1ff67178c945b", "d072ddc0a231279cc6c1a9e36e5279374832b2ba" ], "answer": [ { "evidence": [ "Table TABREF35 show the comparisons between tree and sequential based methods. We can see that, if we don't deploy CNN, simple Tree LSTM yields better result than traditional LSTM, but worse than Bidirectional LSTM. This is reasonable due to the fact that Bidirectional LSTM can enhance sentence representation by concatenating forward and backward representations. We found that adding CNN layer will decrease the accuracy in this scenario. Because when feeding into CNN, we have to reshape the feature planes otherwise convolution will not work. For example, we set convolution kernel width as 2, the input 2D tensor will have the shape lager than 2. To boost performance with CNN, we need more matching features. We found Multi-layer Bidirectional LSTM can incorporate more features and achieve best performance compared with single-layer Bidirectional LSTM." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Table TABREF35 show the comparisons between tree and sequential based methods. We can see that, if we don't deploy CNN, simple Tree LSTM yields better result than traditional LSTM, but worse than Bidirectional LSTM." ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "d9135203a92ded14d260a7d551b7a447c8b7c910", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "By how much do they outperform existing methods?", "Which datasets do they evaluate on?", "Do they separately evaluate performance of their learned representations (before forwarding them to the CNN layer)?" 
], "question_id": [ "4dc4180127761e987c1043d5f8b94512bbe74d4f", "420862798054f736128a6f0c4393c7f9cc648b40", "ad8411edf11d3429c9bdd08b3e07ee671464d73c" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Neural Network Architecture for Deep Matching Feature Learning. M-BLSTM is Multilayer Bidirectional LSTM. Orange color represents sequence representations that concatenating pre-trained word vectors. Purple color represents sequence representation concatenating word vectors that generating from character-level convolutional network and HMLP.", "Figure 2: CNN Topology I Figure 3: CNN Topology II", "Table 1: Semantic Relatedness Task Comparison.", "Table 2: Textual Entailment Task Comparison.", "Table 3: Results of Tree LSTM vs Sequence LSTM on auxiliary char embedding." ], "file": [ "2-Figure1-1.png", "6-Figure2-1.png", "7-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png" ] }
[ "By how much do they outperform existing methods?" ]
[ [ "1603.09405-7-Table1-1.png", "1603.09405-7-Table2-1.png", "1603.09405-Results and Discussions-0" ] ]
[ "Best proposed result had 0.851 and 0.842 compared to best previous result of 0.828 and 0.846 on person correlation and accuracy respectively." ]
45
1912.11585
THUEE system description for NIST 2019 SRE CTS Challenge
This paper describes the systems submitted by the Department of Electronic Engineering, Institute of Microelectronics of Tsinghua University, and TsingMicro Co. Ltd. (THUEE) to the NIST 2019 speaker recognition evaluation (SRE) CTS challenge. Six subsystems, namely etdnn/ams, ftdnn/as, eftdnn/ams, resnet, multitask, and c-vector, are developed for this evaluation.
{ "paragraphs": [ [ "This paper describes the systems developed by the department of electronic engineering, institute of microelectronics of Tsinghua university and TsingMicro Co. Ltd. (THUEE) for the NIST 2019 speaker recognition evaluation (SRE) CTS challenge BIBREF0. Six subsystems, including etdnn/ams, ftdnn/as, eftdnn/ams, resnet, multitask and c-vector are developed in this evaluation. All the subsystems consists of a deep neural network followed by dimension deduction, score normalization and calibration. For each system, we begin with a summary of the data usage, followed by a description of the system setup along with their hyperparameters. Finally, we report experimental results obtained by each subsystem and fusion system on the SRE18 development and SRE18 evaluation datasets." ], [ "For the sake of clarity, the datasets notations are defined as in table 1 and the training data for the six subsystems are list in table 2, 3, and 4." ], [ "Etdnn/ams system is an extended version of tdnn with the additive margin softmax loss BIBREF1. Etdnn is used in speaker verification in BIBREF2. Compared with the traditional tdnn in BIBREF3, it has wider context and interleaving dense layers between each two tdnn layers. The architecture of our etdnn network is shown in table TABREF6. It is the same as the etdnn architecture in BIBREF2, except that the context of layer 5 of our system is t-3:t+3 instead of t-3, t, t+3. The x-vector is extracted from layer 12 prior to the ReLU non-linearity. For the loss, we use additive margin softmax with $m=0.15$ instead of traditional softmax loss or angular softmax loss. Additive margin softmax is proposed in BIBREF4 and then used in speaker verification in our paper BIBREF1. It is easier to train and generally performs better than angular softmax." ], [ "Factorized TDNN (ftdnn) architecture is listed in table TABREF8. It is the same to BIBREF2 except that we use 1024 nodes instead of 512 nodes in layer 12 and 13. The x-vector is extracted from layer 12 prior to the ReLU non-linearity. So our x-vector is 1024 dimensional. More details about the architecture can be found in BIBREF2." ], [ "Extended ftdnn (eftdnn) is a combination of etdnn and ftdnn. Its architecture is listed in table TABREF10. The x-vector is extracted from layer 22 prior to the ReLU non-linearity." ], [ "ResNet architecture is also based on tdnn x-vector BIBREF3. The five frame level tdnn layers in BIBREF3 are replaced by ResNet34 (512 nodes) + DNN(512 nodes) + DNN(1000 nodes). Further details about ResNet34 can be found in BIBREF5. In our realization, acoustic features are regarded as a single channel picture and feed into the ResNet34. If the dimensions in the residual network don't match, zeros are added. The statistic pooling and segment level network stay the same. For the loss function, we use angular softmax with $m=4$. The x-vector is extracted from first DNN layer in segment level prior to the ReLU non-linearity. It has 512 dimensions." ], [ "Multitask architecture is proposed in BIBREF6. It is a hybrid multi-task learning based on x-vector network and ASR network. It aims to introduce phonetic information by another neural acoustic model in ASR to help speaker recognition task. The architecture is shown in Fig. FIGREF13.", "The frame-level part of the x-vector network is a 10-layer TDNN. The input of each layer is the sliced output of the previous layer. 
The slicing parameter is: {t - 2; t - 1; t; t + 1; t + 2}, { t }, { t - 2; t; t + 2 }, {t}, { t - 3; t; t + 3 }, {t }, {t - 4; t; t + 4 }, { t }, { t } , { t }. It has 512 nodes in layer 1 to 9, and the 10-th layer has 1500 nodes. The segment-level part of x-vector network is a 2-layer fully-connected network with 512 nodes per layer. The output is predicted by softmax and the size is the same as the number of speakers.", "The ASR network has no statistics pooling component. The frame-level part of the x-vector network is a 7-layer TDNN. The input of each layer is the sliced output of the previous layer. The slicing parameter is: {t - 2; t - 1; t; t + 1; t + 2}, {t - 2; t; t + 2}, {t - 3; t; t + 3}, {t}, {t}, {t}, {t}. It has 512 nodes in layer 1 to 7.", "Only the first TDNN layer of the x-vector network is shared with the ASR network. The phonetic classification is done at the frame level, while the speaker labels are classified at the segment level.", "To train the multitask network, we need training data with speaker and ASR transcribed. But only Phonetic dataset fits this condition and the data amount is too small to train a neural network. So, we need to train a GMM-HMM speech recognition system to do phonetic alignment for other datasets. The GMM-HMM is trained using Phonetic dataset with features of 20-dimensional MFCCs with delta and delta-delta, totally 60-dimensional. The total number of senones is 3800. After training, forced alignment is applied to the SRE, Switchboard, and Voxceleb datasets using a fMLLR-SAT system." ], [ "C-vector architecture is also one of our proposed systems in paper BIBREF7. As shown in Fig. FIGREF15, it is an extension of multitask architecture. It combines multitask architecture with an extra ASR Acoustic Model. The output of ASR Acoustic Model is concatenated with x-vector's frame-level output as the input of statistics pooling. Refer to BIBREF7 for more details.", "The multitask part of c-vector has the same architecture as in the above section SECREF12 ASR Acoustic Model of c-vector is a 5-layer TDNN network. The slicing parameter is { t - 2; t - 1; t; t + 1; t + 2 }, { t - 1; t; t + 1 }, { t - 1; t; t + 1 }, { t - 3; t; t + 3}, { t - 6; t - 3; t}. The 5-th layer is the BN layer containing 128 nodes and other layers have 650 nodes.", "A GMM-HMM is also trained as like in section SECREF12 to do phonetic alignment for training datasets." ], [ "23-dimensional MFCC (20-3700Hz) is extracted as feature for etdnn/ams, ftdnn/as, eftdnn/ams, multitask and c-vector subsystems. 23-dimensional Fbank is used as feature for ResNet 16kHz subsystems. A simple energy-based VAD is used based on the C0 component of the MFCC feature BIBREF8.", "For each neural network, its training data are augmented using the public accessible MUSAN and RIRS_NOISES as the noise source. Two-fold data augmentation is applied for etdnn/ams, ftdnn/as, resnet, multitask and cvector subsystems. For eftdnn/ams subsystem, five-fold data augmentation is applied.", "After the embeddings are extracted, they are then transformed to 150 dimension using LDA. Then, embeddings are projected into unit sphere. At last, adapted PLDA with no dimension reduction is applied.", "The execution time is test on Intel Xeon E5-2680 v4. Extracting x-vector cost about 0.087RT. Single trial cost around 0.09RT. The memory cost about 1G for a x-vector extraction and a single trial. 
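A small sketch of the embedding back-end described above (LDA projection to 150 dimensions followed by unit-sphere length normalization) is given below. The adapted-PLDA scoring is omitted; a cosine score is shown only as a stand-in, and the LDA matrix here is random for illustration rather than one estimated on training data.

```python
# Sketch of the back-end: LDA reduction, length normalization, and a stand-in trial score.
import numpy as np

def project(xvec, lda_matrix):
    """Reduce an embedding with a (pre-trained) LDA matrix and length-normalize it."""
    y = lda_matrix @ xvec
    return y / np.linalg.norm(y)

def trial_score(enroll, test, lda_matrix):
    """Score one verification trial on the unit sphere (cosine similarity as a stand-in for PLDA)."""
    return float(project(enroll, lda_matrix) @ project(test, lda_matrix))

rng = np.random.default_rng(0)
lda = 0.05 * rng.standard_normal((150, 512))        # placeholder; normally estimated on training data
enroll_xvec = rng.standard_normal(512)               # e.g. a 512-dim ResNet embedding
test_xvec = rng.standard_normal(512)
print(trial_score(enroll_xvec, test_xvec, lda))
```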
Only the CPU is used at inference time.", "The speed test was performed on an Intel Xeon E5-2680 v4 for the etdnn_ams, multitask, c-vector and ResNet systems, and on an Intel Xeon Platinum 8168 for the ftdnn and eftdnn systems. Extracting an embedding costs about 0.103RT for etdnn_ams, 0.089RT for multitask, 0.092RT for c-vector, 0.132RT for eftdnn, 0.0639RT for ftdnn, and 0.112RT for ResNet. A single trial costs around 1.2ms for etdnn_ams, 0.9ms for multitask, 0.9ms for c-vector, 0.059s for eftdnn, 0.0288s for ftdnn, and 1.0ms for ResNet. Memory usage is about 1G for an embedding extraction and a single trial. Only the CPU is used at inference time." ], [ "Our primary system is a linear fusion of all six subsystems above, performed with the BOSARIS Toolkit on the SRE19 dev and eval sets BIBREF9. Before fusion, each score is calibrated by the PAV method (pav_calibrate_scores) on our development database. The system is evaluated with the primary metric provided by NIST SRE 2019." ] ], "section_name": [ "Introduction", "Data Usage", "Systems ::: Etdnn/ams", "Systems ::: ftdnn/as", "Systems ::: eftdnn/ams", "Systems ::: resnet", "Systems ::: multitask", "Systems ::: c-vector", "feature and back-end", "Fusion" ] }
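The linear score-level fusion described in the Fusion section can be pictured with the toy sketch below. This is not the BOSARIS toolkit: the calibration parameters and fusion weights are placeholders standing in for values that would be estimated on a development set.

```python
# Illustrative linear fusion of calibrated subsystem scores.
import numpy as np

def calibrate(scores, scale, offset):
    """Affine calibration of one subsystem's scores (PAV calibration is used in the paper)."""
    return scale * scores + offset

def fuse(subsystem_scores, weights, bias=0.0):
    """Weighted sum of calibrated subsystem scores for each trial."""
    return sum(w * s for w, s in zip(weights, subsystem_scores)) + bias

rng = np.random.default_rng(0)
n_trials = 4
raw = {name: rng.standard_normal(n_trials)           # dummy scores per subsystem
       for name in ["etdnn/ams", "ftdnn/as", "eftdnn/ams", "resnet", "multitask", "c-vector"]}
calibrated = [calibrate(s, scale=1.0, offset=0.0) for s in raw.values()]
weights = [1.0 / len(calibrated)] * len(calibrated)   # placeholder equal weights
print(fuse(calibrated, weights))
```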
{ "answers": [ { "annotation_id": [ "1b262b70d1ce9da1635c93baa62c8b1e4a97b412", "61a1a60e2a9fe9c3a3ae907398cc0a8764390835", "69fb04c5566525d2cce7349d0ca570c95f0395e0" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "8a244b3889b3d3dffc0706927c960dd6fc5616cc", "4ccd26242432ad9c9ff7ee0147d547adbd4bb676", "bbcd18b5070f6d84c4fc8eca1fdfe3b92907443e" ], "answer": [ { "evidence": [ "This paper describes the systems developed by the department of electronic engineering, institute of microelectronics of Tsinghua university and TsingMicro Co. Ltd. (THUEE) for the NIST 2019 speaker recognition evaluation (SRE) CTS challenge BIBREF0. Six subsystems, including etdnn/ams, ftdnn/as, eftdnn/ams, resnet, multitask and c-vector are developed in this evaluation. All the subsystems consists of a deep neural network followed by dimension deduction, score normalization and calibration. For each system, we begin with a summary of the data usage, followed by a description of the system setup along with their hyperparameters. Finally, we report experimental results obtained by each subsystem and fusion system on the SRE18 development and SRE18 evaluation datasets.", "FLOAT SELECTED: Table 1. Datasets Notations" ], "extractive_spans": [ "SRE18 development and SRE18 evaluation datasets" ], "free_form_answer": "", "highlighted_evidence": [ "Finally, we report experimental results obtained by each subsystem and fusion system on the SRE18 development and SRE18 evaluation datasets.", "FLOAT SELECTED: Table 1. Datasets Notations" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our primary system is the linear fusion of all the above six subsystems by BOSARIS Toolkit on SRE19 dev and eval BIBREF9. Before the fusion, each score is calibrated by PAV method (pav_calibrate_scores) on our development database. It is evaluated by the primary metric provided by NIST SRE 2019." ], "extractive_spans": [ "SRE19" ], "free_form_answer": "", "highlighted_evidence": [ "Our primary system is the linear fusion of all the above six subsystems by BOSARIS Toolkit on SRE19 dev and eval BIBREF9. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 3. Data usage for multitask and c-vector subsystems" ], "extractive_spans": [], "free_form_answer": "SRE04/05/06/08/10/MIXER6\nLDC98S75/LDC99S79/LDC2002S06/LDC2001S13/LDC2004S07\nVoxceleb 1/2\nFisher + Switchboard I\nCallhome+Callfriend", "highlighted_evidence": [ "FLOAT SELECTED: Table 3. 
Data usage for multitask and c-vector subsystems" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "43a21d26d019d7967db6e8fcdaa6ec4daca063fa", "55757eba38858adaeabb498631455b59f323d003", "76579c1ad5b5030641c1948eeceb1819ce2b5f24" ], "answer": [ { "evidence": [ "Our primary system is the linear fusion of all the above six subsystems by BOSARIS Toolkit on SRE19 dev and eval BIBREF9. Before the fusion, each score is calibrated by PAV method (pav_calibrate_scores) on our development database. It is evaluated by the primary metric provided by NIST SRE 2019." ], "extractive_spans": [ "primary system is the linear fusion of all the above six subsystems" ], "free_form_answer": "", "highlighted_evidence": [ "Our primary system is the linear fusion of all the above six subsystems by BOSARIS Toolkit on SRE19 dev and eval BIBREF9." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 8. Subsystem performance on SRE18 DEV and EVAL set." ], "extractive_spans": [], "free_form_answer": "eftdnn ", "highlighted_evidence": [ "FLOAT SELECTED: Table 8. Subsystem performance on SRE18 DEV and EVAL set." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 8. Subsystem performance on SRE18 DEV and EVAL set." ], "extractive_spans": [], "free_form_answer": "eftdnn", "highlighted_evidence": [ "FLOAT SELECTED: Table 8. Subsystem performance on SRE18 DEV and EVAL set." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "no", "no", "no" ], "question": [ "What was the baseline?", "What dataset was used in this challenge?", "Which subsystem outperformed the others?" ], "question_id": [ "11360385dff0a9d7b8f4b106ba2b7fe15ca90d7c", "875fbf4e5f93c3da63e28a233ce1d8405c7dfe63", "56b66d19dbc5e605788166e168f36d25f5beb774" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1. Datasets Notations", "Table 3. Data usage for multitask and c-vector subsystems", "Table 5. Etdnn architecture", "Fig. 1. multitask architecture for the speaker embedding extraction.", "Table 6. ftdnn architecture", "Table 7. eftdnn architecture", "Fig. 2. multitask architecture for the speaker embedding extraction.", "Table 8. Subsystem performance on SRE18 DEV and EVAL set." ], "file": [ "1-Table1-1.png", "1-Table3-1.png", "2-Table5-1.png", "2-Figure1-1.png", "2-Table6-1.png", "3-Table7-1.png", "3-Figure2-1.png", "4-Table8-1.png" ] }
[ "What dataset was used in this challenge?", "Which subsystem outperformed the others?" ]
[ [ "1912.11585-Fusion-0", "1912.11585-Introduction-0", "1912.11585-1-Table3-1.png", "1912.11585-1-Table1-1.png" ], [ "1912.11585-Fusion-0", "1912.11585-4-Table8-1.png" ] ]
[ "SRE04/05/06/08/10/MIXER6\nLDC98S75/LDC99S79/LDC2002S06/LDC2001S13/LDC2004S07\nVoxceleb 1/2\nFisher + Switchboard I\nCallhome+Callfriend", "eftdnn" ]
46
1707.09816
Combining Thesaurus Knowledge and Probabilistic Topic Models
In this paper we present an approach for introducing thesaurus knowledge into probabilistic topic models. The main idea is based on the assumption that the frequencies of semantically related words and phrases that co-occur in the same texts should be enhanced: this increases their contribution to the topics found in those texts. We have conducted experiments with several thesauri and found that it is useful to utilize domain-specific knowledge for improving topic models. If a general thesaurus such as WordNet is used, the thesaurus-based improvement of topic models can be achieved by excluding hyponymy relations from the combined topic models.
{ "paragraphs": [ [ "Currently, probabilistic topic models are important tools for improving automatic text processing including information retrieval, text categorization, summarization, etc. Besides, they can be useful in supporting expert analysis of document collections, news flows, or large volumes of messages in social networks BIBREF0 , BIBREF1 , BIBREF2 . To facilitate this analysis, such approaches as automatic topic labeling and various visualization techniques have been proposed BIBREF1 , BIBREF3 .", "Boyd-Graber et al. BIBREF4 indicate that to be understandable by humans, topics should be specific, coherent, and informative. Relationships between the topic components can be inferred. In BIBREF1 four topic visualization approaches are compared. The authors of the experiment concluded that manual topic labels include a considerable number of phrases; users prefer shorter labels with more general words and tend to incorporate phrases and more generic terminology when using more complex network graph. Blei and Lafferty BIBREF3 visualize topics with ngrams consisting of words mentioned in these topics. These works show that phrases and knowledge about hyponyms/hypernyms are important for topic representation.", "In this paper we describe an approach to integrate large manual lexical resources such as WordNet or EuroVoc into probabilistic topic models, as well as automatically extracted n-grams to improve coherence and informativeness of generated topics. The structure of the paper is as follows. In Section 2 we consider related works. Section 3 describes the proposed approach. Section 4 enumerates automatic quality measures used in experiments. Section 5 presents the results obtained on several text collections according to automatic measures. Section 6 describes the results of manual evaluation of combined topic models for Islam Internet-site thematic analysis." ], [ "Topic modeling approaches are unsupervised statistical algorithms that usually considers each document as a \"bag of words\". There were several attempts to enrich word-based topic models (=unigram topic models) with additional prior knowledge or multiword expressions.", "Andrzejewski et al. BIBREF5 incorporated knowledge by Must-Link and Cannot-Link primitives represented by a Dirichlet Forest prior. These primitives were then used in BIBREF6 , where similar words are encouraged to have similar topic distributions. However, all such methods incorporate knowledge in a hard and topic-independent way, which is a simplification since two words that are similar in one topic are not necessarily of equal importance for another topic.", "Xie et al. BIBREF7 proposed a Markov Random Field regularized LDA model (MRF-LDA), which utilizes the external knowledge to improve the coherence of topic modeling. Within a document, if two words are labeled as similar according to the external knowledge, their latent topic nodes are connected by an undirected edge and a binary potential function is defined to encourage them to share the same topic label. Distributional similarity of words is calculated beforehand on a large text corpus.", "In BIBREF8 , the authors gather so-called lexical relation sets (LR-sets) for word senses described in WordNet. The LR-sets include synonyms, antonyms and adjective-attribute related words. To adapt LR-sets to a specific domain corpus and to remove inappropriate lexical relations, the correlation matrix for word pairs in each LR-set is calculated. 
This matrix at the first step is used for filtrating inappropriate senses, then it is used to modify the initial LDA topic model according to the generalized Polya urn model described in BIBREF9 . The generalized Polya urn model boosts probabilities of related words in word-topic distributions.", "Gao and Wen BIBREF10 presented Semantic Similarity-Enhanced Topic Model that accounts for corpus-specific word co-occurrence and word semantic similarity calculated on WordNet paths between corresponding synsets using the generalized Polya urn model. They apply their topic model for categorizing short texts.", "All above-mentioned approaches on adding knowledge to topic models are limited to single words. Approaches using ngrams in topic models can be subdivided into two groups. The first group of methods tries to create a unified probabilistic model accounting unigrams and phrases. Bigram-based approaches include the Bigram Topic Model BIBREF11 and LDA Collocation Model BIBREF12 . In BIBREF13 the Topical N-Gram Model was proposed to allow the generation of ngrams based on the context. However, all these models are enough complex and hard to compute on real datasets.", "The second group of methods is based on preliminary extraction of ngrams and their further use in topics generation. Initial studies of this approach used only bigrams BIBREF14 , BIBREF15 . Nokel and Loukachevitch BIBREF16 proposed the LDA-SIM algorithm, which integrates top-ranked ngrams and terms of information-retrieval thesauri into topic models (thesaurus relations were not utilized). They create similarity sets of expressions having the same word components and sum up frequencies of similarity set members if they co-occur in the same text.", "In this paper we describe the approach to integrate whole manual thesauri into topic models together with multiword expressions." ], [ "In our approach we develop the idea of BIBREF16 that proposed to construct similarity sets between ngram phrases between each other and single words. Phrases and words are included in the same similarity set if they have the same component word, for example, weapon – nuclear weapon – weapon of mass destruction; discrimination – racial discrimination. It was supposed that if expressions from the same similarity set co-occur in the same document then their contribution into the document's topics is really more than it is presented with their frequencies, therefore their frequencies should be increased. In such an approach, the algorithm can \"see\" similarities between different multiword expressions with the same component word.", "In our approach, at first, we include related single words and phrases from a thesaurus such as WordNet or EuroVoc in these similarity sets. Then, we add preliminarily extracted ngrams into these sets and, this way, we use two different sources of external knowledge. We use the same LDA-SIM algorithm as described in BIBREF16 but study what types of semantic relations can be introduced into such similarity sets and be useful for improving topic models. The pseudocode of LDA-SIM algorithm is presented in Algorithm SECREF3 , where INLINEFORM0 is a similarity set, expressions in similarity sets can comprise single words, thesaurus phrases or generated noun compounds.", "We can compare this approach with the approaches applying the generalized Polya urn model BIBREF8 , BIBREF9 , BIBREF10 . To add prior knowledge, those approaches change topic distributions for related words globally in the collection. 
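An illustrative sketch of the similarity-set idea described above is given below: expressions that share a component word (or are related in a thesaurus) form one set, and when members of a set co-occur in a document their counts reinforce one another. The boosting rule shown here is a simplification of the LDA-SIM update, not the authors' exact code.

```python
# Toy similarity-set frequency boosting within a single document.
from collections import Counter

similarity_sets = [
    {"weapon", "nuclear weapon", "weapon of mass destruction"},
    {"discrimination", "racial discrimination"},
]

def boosted_counts(doc_tokens, similarity_sets):
    """Add the frequencies of co-occurring similarity-set members to each member's count."""
    counts = Counter(doc_tokens)
    boosted = dict(counts)
    for s in similarity_sets:
        present = [e for e in s if counts[e] > 0]
        if len(present) > 1:
            total = sum(counts[e] for e in present)
            for e in present:
                boosted[e] = total                  # each member gets the pooled frequency
    return boosted

doc = ["weapon", "nuclear weapon", "weapon", "treaty", "discrimination"]
print(boosted_counts(doc, similarity_sets))
# {'weapon': 3, 'nuclear weapon': 3, 'treaty': 1, 'discrimination': 1}
```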
We modify topic probabilities for related words and phrases locally, in specific texts, only when related words (phrases) co-occur in these texts.", "[ht!] collection INLINEFORM0 , vocabulary INLINEFORM1 , number of topics INLINEFORM2 , initial INLINEFORM3 and INLINEFORM4 , sets of similar expressions INLINEFORM5 , hyperparameters INLINEFORM6 and INLINEFORM7 , INLINEFORM8 is the frequency of INLINEFORM9 in the document INLINEFORM10 distributions INLINEFORM11 and INLINEFORM12 not meet the stop criterion INLINEFORM13 INLINEFORM14 ", " INLINEFORM0 INLINEFORM1 ", " INLINEFORM0 ", " INLINEFORM0 ", " LDA-SIM algorithm" ], [ "To estimate the quality of topic models, we use two main automatic measures: topic coherence and kernel uniqueness. For human content analysis, measures of topic coherence and kernel uniqueness are both important and complement each other. Topics can be coherent but have a lot of repetitions. On the other hand, generated topics can be very diverse, but incoherent within each topic.", "Topic coherence is an automatic metric of interpretability. It was shown that the coherence measure has a high correlation with the expert estimates of topic interpretability BIBREF9 , BIBREF17 . Mimno BIBREF9 described an experiment comparing expert evaluation of LDA-generated topics and automatic topic coherence measures. It was found that most \"bad\" topics consisted of words without clear relations between each other.", "Newman et al. BIBREF6 asked users to score topics on a 3-point scale, where 3=“useful” (coherent) and 1=“useless” (less coherent). They instructed the users that one indicator of usefulness is the ease by which one could think of a short label to describe a topic. Then several automatic measures, including WordNet-based measures and corpus co-occurrence measures, were compared. It was found that the best automatic measure having the largest correlation with human evaluation is word co-occurrence calculated as point-wise mutual information (PMI) on Wikipedia articles. Later Lau et al. BIBREF17 showed that normalized poinwise mutual information (NPMI) BIBREF18 calculated on Wikipedia articles correlates even more strongly with human scores.", "We calculate automatic topic coherence using two measure variants. The coherence of a topic is the median PMI (NPMI) of word pairs representing the topic, usually it is calculated for INLINEFORM0 most probable elements (in our study ten elements) in the topic. The coherence of the model is the median of the topic coherence. To make this measure more objective, it should be calculated on an external corpus BIBREF17 . In our case, we use Wikipedia dumps. DISPLAYFORM0 ", "Human-constructed topics usually have unique main words. The measure of kernel uniqueness shows to what extent topics are different from each other and is calculated as the number of unique elements among most probable elements of topics (kernels) in relation to the whole number of elements in kernels. DISPLAYFORM0 ", "If uniqueness of the topic kernels is closer to zero then many topics are similar to each other, contain the same words in their kernels. In this paper the kernel of a topic means the ten most probable words in the topic. We also calculated perplexity as the measure of language models. We use it for additional checking the model quality." ], [ "For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table TABREF7 ). 
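The two quality measures just described can be sketched as follows; the toy probabilities stand in for co-occurrence statistics that would normally be estimated from Wikipedia, and the helper names are assumptions for illustration.

```python
# Topic coherence as the median NPMI over pairs of a topic's top words,
# and kernel uniqueness over all topic kernels.
import math
from itertools import combinations

def npmi(w1, w2, p_word, p_pair, eps=1e-12):
    p12 = p_pair.get((w1, w2), eps)
    pmi = math.log(p12 / (p_word[w1] * p_word[w2]))
    return pmi / (-math.log(p12))

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])

def topic_coherence(top_words, p_word, p_pair):
    pairs = [npmi(a, b, p_word, p_pair) for a, b in combinations(top_words, 2)]
    return median(pairs)

def kernel_uniqueness(kernels):
    """kernels: list of top-word lists, one per topic."""
    all_elems = [e for k in kernels for e in k]
    return len(set(all_elems)) / len(all_elems)

# Toy statistics (probabilities would be estimated from a reference corpus).
p_word = {"bank": 0.02, "credit": 0.01, "loan": 0.008}
p_pair = {("bank", "credit"): 0.004, ("bank", "loan"): 0.003, ("credit", "loan"): 0.002}
print(topic_coherence(["bank", "credit", "loan"], p_word, p_pair))
print(kernel_uniqueness([["bank", "credit", "loan"], ["loan", "rate", "deposit"]]))
```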
We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 .", "At the preprocessing step, documents were processed by morphological analyzers. Also, we extracted noun groups as described in BIBREF16 . As baselines, we use the unigram LDA topic model and LDA topic model with added 1000 ngrams with maximal NC-value BIBREF20 extracted from the collection under analysis.", "As it was found before BIBREF14 , BIBREF16 , the addition of ngrams without accounting relations between their components considerably worsens the perplexity because of the vocabulary growth (for perplexity the less is the better) and practically does not change other automatic quality measures (Table 2).", "We add the Wordnet data in the following steps. At the first step, we include WordNet synonyms (including multiword expressions) into the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up in process LDA topic learning as described in Algorithm SECREF3 . We can see that the kernel uniqueness becomes very low, topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add word direct relatives (hyponyms, hypernyms, etc.) to similarity sets. Now the frequencies of semantically related words are added up enhancing the contribution into all topics of the current document.", "The Table 2 shows that these two steps lead to great degradation of the topic model in most measures in comparison to the initial unigram model: uniqueness of kernels abruptly decreases, perplexity at the second step grows by several times (Table 2: LDA-Sim+WNsynrel). It is evident that at this step the model has a poor quality. When we look at the topics, the cause of the problem seems to be clear. We can see the overgeneralization of the obtained topics. The topics are built around very general words such as \"person\", \"organization\", \"year\", etc. These words were initially frequent in the collection and then received additional frequencies from their frequent synonyms and related words.", "Then we suppose that these general words were used in texts to discuss specific events and objects, therefore, we change the constructions of the similarity sets in the following way: we do not add word hyponyms to its similarity set. Thus, hyponyms, which are usually more specific and concrete, should obtain additional frequencies from upper synsets and increase their contributions into the document topics. But the frequencies and contribution of hypernyms into the topic of the document are not changed. And we see the great improvement of the model quality: the kernel uniqueness considerably improves, perplexity decreases to levels comparable with the unigram model, topic coherence characteristics also improve for most collections (Table 2:LDA-Sim+WNsynrel/hyp).", "We further use the WordNet-based similarity sets with n-grams having the same components as described in BIBREF16 . All measures significantly improve for all collections (Table 2:LDA-Sim+WNsr/hyp+Ngrams). At the last step, we try to apply the same approach to ngrams that was previously utilized to hyponym-hypernym relations: frequencies of shorter ngrams and words are summed to frequencies of longer ngrams but not vice versa. In this case we try to increase the contribution of more specific longer ngrams into topics. 
It can be seen (Table 2) that the kernel uniqueness grows significantly, at this step it is 1.3-1.6 times greater than for the baseline models achieving 0.76 on the ACL collection (Table 2:LDA-Sim+WNsr/hyp+Ngrams/l).", "At the second series of the experiments, we applied EuroVoc information retrieval thesaurus to two European Union collections: Europarl and JRC. In content, the EuroVoc thesaurus is much smaller than WordNet, it contains terms from economic and political domains and does not include general abstract words. The results are shown in Table 3. It can be seen that inclusion of EuroVoc synsets improves the topic coherence and increases kernel uniqueness (in contrast to results with WordNet). Adding ngrams further improves the topic coherence and kernel uniqueness.", "At last we experimented with the Russian banking collection and utilized RuThes thesaurus. In this case we obtained improvement already on RuThes synsets and again adding ngrams further improved topic coherence and kernel uniqueness (Table 4).", "It is worth noting that adding ngrams sometimes worsens the TC-NPMI measure, especially on the JRC collection. This is due to the fact that in these evaluation frameworks, the topics' top elements contain a lot of multiword expressions, which rarely occur in Wikipedia, used for the coherence calculation, therefore the utilized automatic coherence measures can have insufficient evidence for correct estimates." ], [ "To estimate the quality of topic models in a real task, we chose Islam informational portal \"Golos Islama\" (Islam Voice) (in Russian). This portal contains both news articles related to Islam and articles discussing Islam basics. We supposed that the thematic analysis of this specialized site can be significantly improved with domain-specific knowledge described in the thesaurus form. We extracted the site contents using Open Web Spider and obtained 26,839 pages.", "To combine knowledge with a topic model, we used RuThes thesaurus together with the additional block of the Islam thesaurus. The Islam thesaurus contains more than 5 thousand Islam-related terms including single words and expressions.", "For each combined model, we ran two experiments with 100 topics and with 200 topics. The generated topics were evaluated by two linguists, who had previously worked on the Islam thesaurus. The evaluation task was formulated as follows: the experts should read the top elements of the generated topics and try to formulate labels of these topics. The labels should be different for each topic in the set generated with a specific model. The experts should also assign scores to the topics' labels:", "Then we can sum up all the scores for each model under consideration and compare the total scores in value. Thus, maximum values of the topic score are 200 for a 100-topic model and 400 for a 200-topic model. In this experiment we do not measure inter-annotator agreement for each topic, but try to get expert's general impression.", "Due to the complicated character of the Islam portal contents for automatic extraction (numerous words and names difficult for Russian morphological analyzers), we did not use automatic extraction of multiword expressions and exploited only phrases described in RuThes or in the Islam Thesaurus. We added thesaurus phrases in two ways: most frequent 1000 phrases (as in BIBREF14 , BIBREF16 ) and phrases with frequency more than 10 (More10phrases): the number of such phrases is 9351.", "The results of the evaluation are shown in Table 5. 
The table contains the overall expert score for each topic model (Score), kernel uniqueness as in the previous section (KernU), and perplexity (Prpl). Also, for each model's kernels, we calculated the average number of known relations between topic elements: thesaurus relations (synonyms and direct relations between concepts) and component-based relations between phrases (Relc).", "It can be seen that if we add phrases without accounting for component similarity (Runs 2, 3), the quality of topics decreases: the more phrases are added, the more the quality degrades. The human scores also confirm this fact. But if the similarity between phrase components is considered, then the quality of topics significantly improves and becomes better than for unigram models (Runs 4, 5). All measures are better. Relational coherence between kernel elements also grows. The number of added phrases is not very important.", "Adding unary synonyms decreases the quality of the models (Run 6) according to human scores. But all other measures behave differently: kernel uniqueness is high, perplexity decreases, relational coherence grows. The problem with this model is that non-topical, general words are grouped together and reinforce one another but do not look related to any topic. Adding all thesaurus relations is not very beneficial (Runs 7, 8). If we consider all relations except hyponyms, the human scores are better for the corresponding runs (Runs 9, 10). Relational coherence in topics' kernels reaches very high values: a quarter of all elements have some relation to each other, but it does not help to improve topics. The explanation is the same: general words can be grouped together.", "Finally, we removed General Lexicon concepts from the RuThes data, which are top-level, non-thematic concepts that can occur in arbitrary domains BIBREF19 , and considered the all-relations and without-hyponyms variants (Runs 11, 12). These last variants achieved the highest human scores because they add thematic knowledge and avoid general knowledge, which can distort topics. Kernel uniqueness is also maximal.", "Table 6 shows similar topics obtained with the unigram, phrase-enriched (Run 5) and thesaurus-enriched (Run 12) topic models. The Run-5 model adds thesaurus phrases with frequency greater than 10 and accounts for the component similarity between phrases. The Run-12 model accounts for both component relations and hypernym thesaurus relations. All topics are of high quality and quite understandable. The experts evaluated them with the same high scores.", "Phrase-enriched and thesaurus-enriched topics convey the content using both single words and phrases. It can be seen that phrase-enriched topics contain more phrases. Sometimes the phrases can create not very convincing relations, such as Russian church - Russian language. It is explainable but does not seem very topical in this case.", "The thesaurus topics seem to convey the contents in the most concentrated way. In the Syrian topic, the general word country is absent; instead of UN (United Nations), it contains the word rebel, which is closer to the Syrian situation. In the Orthodox church topic, the unigram variant contains the extra word year, and the relations of the words Moscow and Kirill to other words in the topic can be inferred only from encyclopedic knowledge." ], [ "In this paper we presented an approach for introducing thesaurus information into topic models. 
The main idea of the approach is based on the assumption that if related words or phrases co-occur in the same text, their frequencies should be enhanced, which increases their mutual contribution to the topics found in that text.", "In the experiments on four English collections, it was shown that the direct implementation of this idea using WordNet synonyms and/or direct relations leads to great degradation of the unigram model. However, correcting the initial assumptions and excluding hyponyms from the frequency enhancement improves the model and makes it much better than the initial model on several measures. Adding ngrams in a similar manner further improves the model.", "Introducing information from the domain-specific thesaurus EuroVoc improved the initial model without this additional assumption, which can be explained by the absence of general abstract words in such information-retrieval thesauri.", "We also considered thematic analysis of an Islam Internet site and evaluated the combined topic models manually. We found that the best, most understandable topics are obtained by adding domain-specific thesaurus knowledge (domain terms, synonyms, and relations)." ] ], "section_name": [ "Introduction", "Related Work", "Approach to Integration Whole Thesauri into Topic Models", "Automatic Measures to Estimate the Quality of Topic Models", "Use of Automatic Measures to Assess Combined Models", "Manual Evaluation of Combined Topic Models", "Conclusion" ] }
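A minimal sketch of the frequency-enhancement step described in the full text above, written in Python under several assumptions: the dictionary-based thesaurus format, the function names, and the toy entries are invented for illustration; hyponyms are excluded from the similarity sets as in the best-performing LDA-Sim variant; and the actual method (Algorithm SECREF3 in the paper) feeds these enhanced per-document counts into LDA topic learning rather than using them on their own.

from collections import Counter

def build_similarity_sets(thesaurus):
    # Map each entry to its similarity set: synonyms plus direct relatives,
    # deliberately excluding hyponyms, so that specific entries gain
    # frequency from more general ones but not the other way around.
    # Assumed (hypothetical) thesaurus format:
    #   {entry: {"synonyms": [...], "hypernyms": [...], "hyponyms": [...]}}
    return {
        entry: set(rels.get("synonyms", [])) | set(rels.get("hypernyms", []))
        for entry, rels in thesaurus.items()
    }

def enhanced_counts(doc_tokens, sim_sets):
    # Per-document counts in which an entry also receives the counts of
    # related entries that co-occur in the same document.
    base = Counter(doc_tokens)
    enhanced = Counter(base)
    for entry in base:
        for related in sim_sets.get(entry, ()):
            if related in base:                   # co-occurrence in this document
                enhanced[entry] += base[related]  # sum up the related frequency
    return enhanced

# Toy usage: "cat" gains the count of its co-occurring hypernym "animal",
# while "animal" keeps its original count (cat -> 3, animal -> 1, tree -> 1).
sim_sets = build_similarity_sets({
    "cat": {"synonyms": [], "hypernyms": ["animal"], "hyponyms": []},
    "animal": {"synonyms": [], "hypernyms": [], "hyponyms": ["cat"]},
})
print(enhanced_counts(["cat", "cat", "animal", "tree"], sim_sets))

The asymmetry is the point of the best-performing variant: a specific word such as cat picks up the count of its co-occurring hypernym animal, but animal does not absorb the counts of its hyponyms, which is what prevented the overgeneralized, general-word-dominated topics reported for the symmetric LDA-Sim+WNsynrel setting.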
{ "answers": [ { "annotation_id": [ "2108aa9d6241a4425397f58ae6b4c24c2711ccb4", "650b5d82be75a1b3208840fc58813ab933acf07c", "fe0a5937c590049015c1c4530286611a90234bd6" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "We add the Wordnet data in the following steps. At the first step, we include WordNet synonyms (including multiword expressions) into the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up in process LDA topic learning as described in Algorithm SECREF3 . We can see that the kernel uniqueness becomes very low, topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add word direct relatives (hyponyms, hypernyms, etc.) to similarity sets. Now the frequencies of semantically related words are added up enhancing the contribution into all topics of the current document." ], "extractive_spans": [], "free_form_answer": "Variation decreases when frequencies of synonyms is enhanced; variation increases when frequencies of synonyms, hyponyms, hypernyms are enhanced", "highlighted_evidence": [ "At the first step, we include WordNet synonyms (including multiword expressions) into the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up in process LDA topic learning as described in Algorithm SECREF3 . We can see that the kernel uniqueness becomes very low, topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add word direct relatives (hyponyms, hypernyms, etc.) to similarity sets. Now the frequencies of semantically related words are added up enhancing the contribution into all topics of the current document." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "492388cc8bec1566c816f3589dc6b5bf59903cb3", "93ac749e066aab943d6e825ef9e9d0265f7f9505", "b2c86bb6663b4d63c2548918e4adff8508fd4551" ], "answer": [ { "evidence": [ "At the second series of the experiments, we applied EuroVoc information retrieval thesaurus to two European Union collections: Europarl and JRC. In content, the EuroVoc thesaurus is much smaller than WordNet, it contains terms from economic and political domains and does not include general abstract words. The results are shown in Table 3. It can be seen that inclusion of EuroVoc synsets improves the topic coherence and increases kernel uniqueness (in contrast to results with WordNet). Adding ngrams further improves the topic coherence and kernel uniqueness." ], "extractive_spans": [ "economic", "political" ], "free_form_answer": "", "highlighted_evidence": [ "At the second series of the experiments, we applied EuroVoc information retrieval thesaurus to two European Union collections: Europarl and JRC. In content, the EuroVoc thesaurus is much smaller than WordNet, it contains terms from economic and political domains and does not include general abstract words." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "To estimate the quality of topic models in a real task, we chose Islam informational portal \"Golos Islama\" (Islam Voice) (in Russian). This portal contains both news articles related to Islam and articles discussing Islam basics. We supposed that the thematic analysis of this specialized site can be significantly improved with domain-specific knowledge described in the thesaurus form. We extracted the site contents using Open Web Spider and obtained 26,839 pages." ], "extractive_spans": [ " news articles related to Islam and articles discussing Islam basics" ], "free_form_answer": "", "highlighted_evidence": [ "To estimate the quality of topic models in a real task, we chose Islam informational portal \"Golos Islama\" (Islam Voice) (in Russian). This portal contains both news articles related to Islam and articles discussing Islam basics." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "At the second series of the experiments, we applied EuroVoc information retrieval thesaurus to two European Union collections: Europarl and JRC. In content, the EuroVoc thesaurus is much smaller than WordNet, it contains terms from economic and political domains and does not include general abstract words. The results are shown in Table 3. It can be seen that inclusion of EuroVoc synsets improves the topic coherence and increases kernel uniqueness (in contrast to results with WordNet). Adding ngrams further improves the topic coherence and kernel uniqueness." ], "extractive_spans": [ "economic", "political" ], "free_form_answer": "", "highlighted_evidence": [ "In content, the EuroVoc thesaurus is much smaller than WordNet, it contains terms from economic and political domains and does not include general abstract words." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "08420993785e83068dbdfb0a25c2171224725b1c", "563c9caee56077a0abaca8c6cf7641ed412b0ac2", "a79209c610ba64924d36c1e5d1285043c72088e4" ], "answer": [ { "evidence": [ "For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table TABREF7 ). We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 ." ], "extractive_spans": [ "WordNet", "European Union EuroVoc", "RuThes" ], "free_form_answer": "", "highlighted_evidence": [ "We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table TABREF7 ). We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 ." 
], "extractive_spans": [ "WordNet", "EuroVoc", " RuThes" ], "free_form_answer": "", "highlighted_evidence": [ " We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table TABREF7 ). We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 ." ], "extractive_spans": [ "WordNet ", "EuroVoc ", "RuThes " ], "free_form_answer": "", "highlighted_evidence": [ "We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do they reduce language variation of text by enhancing frequencies?", "Which domains do they explore?", "Which thesauri did they use?" ], "question_id": [ "2d924e888a92dc0b14cdb5584e73e87254c3d1ee", "3ed8ac1ba4df6609fa7de5077d83e820641edc5e", "e1ab241059ef1700738f885f051d724a7fcf283a" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1. Text collections for experiments", "Table 2. Integration of WordNet into topic models", "Table 3. Integration of EuroVoc into topic models", "Table 4. The results obtained for Russian Banking collection", "Table 5. Results of manual labeling of topic models for the Islam site", "Table 6. Comparison of similar topics in the unigram, phrase-based (Run 5) and the best thesaurus-enriched topic models (Run 12)." ], "file": [ "5-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "9-Table5-1.png", "10-Table6-1.png" ] }
[ "Do they reduce language variation of text by enhancing frequencies?" ]
[ [ "1707.09816-Use of Automatic Measures to Assess Combined Models-3" ] ]
[ "Variation decreases when frequencies of synonyms is enhanced; variation increases when frequencies of synonyms, hyponyms, hypernyms are enhanced" ]
47
1703.04009
Automated Hate Speech Detection and the Problem of Offensive Language
A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
{ "paragraphs": [ [ "What constitutes hate speech and when does it differ from offensive language? No formal definition exists but there is a consensus that it is speech that targets disadvantaged social groups in a manner that is potentially harmful to them BIBREF0 , BIBREF1 . In the United States, hate speech is protected under the free speech provisions of the First Amendment, but it has been extensively debated in the legal sphere and with regards to speech codes on college campuses. In many countries, including the United Kingdom, Canada, and France, there are laws prohibiting hate speech, which tends to be defined as speech that targets minority groups in a way that could promote violence or social disorder. People convicted of using hate speech can often face large fines and even imprisonment. These laws extend to the internet and social media, leading many sites to create their own provisions against hate speech. Both Facebook and Twitter have responded to criticism for not doing enough to prevent hate speech on their sites by instituting policies to prohibit the use of their platforms for attacks on people based on characteristics like race, ethnicity, gender, and sexual orientation, or threats of violence towards others.", "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group. In extreme cases this may also be language that threatens or incites violence, but limiting our definition only to such cases would exclude a large proportion of hate speech. Importantly, our definition does not include all instances of offensive language because people often use terms that are highly offensive to certain groups but in a qualitatively different manner. For example some African Americans often use the term n*gga in everyday language online BIBREF2 , people use terms like h*e and b*tch when quoting rap lyrics, and teenagers use homophobic slurs like f*g as they play video games. Such language is prevalent on social media BIBREF3 , making this boundary condition crucial for any usable hate speech detection system .", "Previous work on hate speech detection has identified this problem but many studies still tend to conflate hate speech and offensive language. In this paper we label tweets into three categories: hate speech, offensive language, or neither. We train a model to differentiate between these categories and then analyze the results in order to better understand how we can distinguish between them. Our results show that fine-grained labels can help in the task of hate speech detection and highlights some of the key challenges to accurate classification. We conclude that future work must better account for context and the heterogeneity in hate speech usage." ], [ "Bag-of-words approaches tend to have high recall but lead to high rates of false positives since the presence of offensive words can lead to the misclassification of tweets as hate speech BIBREF4 , BIBREF5 . Focusing on anti-black racism, BIBREF4 find that 86% of the time the reason a tweet was categorized as racist was because it contained offensive words. Given the relatively high prevalence of offensive language and curse words on social media this makes hate speech detection particularly challenging BIBREF3 . 
The difference between hate speech and other offensive language is often based upon subtle linguistic distinctions, for example tweets containing the word n*gger are more likely to be labeled as hate speech than n*gga BIBREF4 . Many can be ambiguous, for example the word gay can be used both pejoratively and in other contexts unrelated to hate speech BIBREF3 .", "Syntactic features have been leveraged to better identify the targets and intensity of hate speech, for example sentences where a relevant noun and verb occur (e.g. kill and Jews) BIBREF6 , the POS trigram DT jewish NN BIBREF2 , and the syntactic structure I <intensity > <user intent > <hate target >, e.g. I f*cking hate white people BIBREF7 .", "Other supervised approaches to hate speech classification have unfortunately conflated hate speech with offensive language, making it difficult to ascertain the extent to which they are really identifying hate speech BIBREF5 , BIBREF8 . Neural language models show promise in the task but existing work has used training data has a similarly broad definition of hate speech BIBREF9 . Non-linguistic features like the gender or ethnicity of the author can help improve hate speech classification but this information is often unavailable or unreliable on social media BIBREF8 ." ], [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets.", "Only 5% of tweets were coded as hate speech by the majority of coders and only 1.3% were coded unanimously, demonstrating the imprecision of the Hatebase lexicon. This is much lower than a comparable study using Twitter, where 11.6% of tweets were flagged as hate speech BIBREF5 , likely because we use a stricter criteria for hate speech. The majority of the tweets were considered to be offensive language (76% at 2/3, 53% at 3/3) and the remainder were considered to be non-offensive (16.6% at 2/3, 11.8% at 3/3). We then constructed features from these tweets and used them to train a classifier." ], [ "We lowercased each tweet and stemmed it using the Porter stemmer, then create bigram, unigram, and trigram features, each weighted by its TF-IDF. To capture information about the syntactic structure we use NLTK BIBREF10 to construct Penn Part-of-Speech (POS) tag unigrams, bigrams, and trigrams. 
To capture the quality of each tweet we use modified Flesch-Kincaid Grade Level and Flesch Reading Ease scores, where the number of sentences is fixed at one. We also use a sentiment lexicon designed for social media to assign sentiment scores to each tweet BIBREF11 . We also include binary and count indicators for hashtags, mentions, retweets, and URLs, as well as features for the number of characters, words, and syllables in each tweet." ], [ "We first use a logistic regression with L1 regularization to reduce the dimensionality of the data. We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs. We tested each model using 5-fold cross validation, holding out 10% of the sample for evaluation to help prevent over-fitting. After using a grid-search to iterate over the models and parameters we find that the Logistic Regression and Linear SVM tended to perform significantly better than other models. We decided to use a logistic regression with L2 regularization for the final model as it more readily allows us to examine the predicted probabilities of class membership and has performed well in previous papers BIBREF5 , BIBREF8 . We trained the final model using the entire dataset and used it to predict the label for each tweet. We use a one-versus-rest framework where a separate classifier is trained for each class and the class label with the highest predicted probability across all classifiers is assigned to each tweet. All modeling was performing using scikit-learn BIBREF12 ." ], [ "The best performing model has an overall precision 0.91, recall of 0.90, and F1 score of 0.90. Looking at Figure 1, however, we see that almost 40% of hate speech is misclassified: the precision and recall scores for the hate class are 0.44 and 0.61 respectively. Most of the misclassification occurs in the upper triangle of this matrix, suggesting that the model is biased towards classifying tweets as less hateful or offensive than the human coders. Far fewer tweets are classified as more offensive or hateful than their true category; approximately 5% of offensive and 2% of innocuous tweets have been erroneously classified as hate speech. To explore why these tweets have been misclassified we now look more closely at the tweets and their predicted classes.", "Tweets with the highest predicted probabilities of being hate speech tend to contain multiple racial or homophobic slurs, e.g. @JuanYeez shut yo beaner ass up sp*c and hop your f*ggot ass back across the border little n*gga and RT @eBeZa: Stupid f*cking n*gger LeBron. You flipping jungle bunny monkey f*ggot. Other tweets tend to be correctly identified as hate when they contained strongly racist or homophobic terms like n*gger and f*ggot. Interestingly, we also find cases where people use hate speech to respond to other hate speakers, such as this tweet where someone uses a homophobic slur to criticize someone else's racism: @MrMoonfrog @RacistNegro86 f*ck you, stupid ass coward b*tch f*ggot racist piece of sh*t.", "Turning to true hate speech classified as offensive it appears that tweets with the highest predicted probability of being offensive are genuinely less hateful and were perhaps mislabeled, for example When you realize how curiosity is a b*tch #CuriosityKilledMe may have been erroneously coded as hate speech if people thought that curiosity was a person, and Why no boycott of racist \"redskins\"? 
#Redskins #ChangeTheName contains a slur but is actually against racism. It is likely that coders skimmed these tweets too quickly, picking out words or phrases that appeared to be hateful without considering the context. Turning to borderline cases, where the probability of being offensive is marginally higher than that of hate speech, it appears that the majority are hate speech, both directed towards other Twitter users, @MDreyfus @NatFascist88 Sh*t your ass your moms p*ssy u Jew b*stard. Ur times coming. Heil Hitler! and general hateful statements like My advice of the day: If your a tranny...go f*ck your self!. These tweets fit our definition of hate speech but were likely misclassified because they do not contain any of the terms most strongly associated with hate speech. Finally, the hateful tweets incorrectly labeled as neither tend not to contain hate or curse words, for example If some one isn't an Anglo-Saxon Protestant, they have no right to be alive in the US. None at all, they are foreign filth contains a negative term, filth, but no slur against a particular group. We also see that rarer types of hate speech, for example this anti-Chinese statement Every slant in #LA should be deported. Those scum have no right to be here. Chinatown should be bulldozed, are incorrectly classified. The classifier performs well at prevalent forms of hate speech, particularly anti-black racism and homophobia, but is less reliable at detecting types of hate speech that occur infrequently, a problem noted by BIBREF13 ( BIBREF13 ).", "A key flaw in much previous work is that offensive language is mislabeled as hate speech due to an overly broad definition. Our multi-class framework allows us to minimize these errors; only 5% of our true offensive language was labeled as hate. The tweets correctly labeled as offensive tend to contain curse words and often sexist language, e.g. Why you worried bout that other h*e? Cuz that other h*e aint worried bout another h*e and I knew Kendrick Lamar was onto something when he said I call a b*tch a b*tch, a h*e a h*e, a woman a woman. Many of these tweets contain sexist terms like b*tch, p*ssy, and h*e. Human coders appear to consider racist or homophobic terms to be hateful but consider words that are sexist and derogatory towards women to be only offensive, consistent with prior findings BIBREF8 .", "Looking at the tweets misclassified as hate speech we see that many contain multiple slurs, e.g. @SmogBaby: These h*es be lyin to all of us n*ggas and My n*gga mister meaner just hope back in the b*tch. While these tweets contain terms that can be considered racist and sexist, it is apparent that many Twitter users use this type of language in their everyday communications. When they do contain racist language they tend to contain the term n*gga rather than n*gger, in line with the findings of BIBREF4 ( BIBREF4 ). We also found a few recurring phrases such as these h*es ain't loyal that were actually lyrics from rap songs that users were quoting. Classification of such tweets as hate speech leads us to overestimate the prevalence of the phenomenon. 
While our model still misclassifies some offensive language as hate speech, we are able to avoid the vast majority of these errors by differentiating between the two.", "Finally, turning to the neither class, we see that tweets with the highest predicted probability of belonging to this class all appear to be innocuous and were included in the sample because they contained terms included in the Hatebase lexicon such as charlie and bird that are generally not used in a hateful manner. Tweets with overall positive sentiment and higher readability scores are more likely to belong to this class. The tweets in this category that have been misclassified as hate or offensive tend to mention race, sexuality, and other social categories that are targeted by hate speakers. Most of these misclassifications appear to be caused by the presence of potentially offensive language, for example He's a damn good actor. As a gay man it's awesome to see an openly queer actor given the lead role for a major film contains the potentially offensive terms gay and queer but uses them in a positive sense. This problem has been encountered in previous research BIBREF2 and illustrates the importance of taking context into account. We also found a small number of cases where the coders appear to have missed hate speech that was correctly identified by our model, e.g. @mayormcgunn @SenFeinstein White people need those weapons to defend themselves from the subhuman trash your sort unleashes on us. This finding is consistent with previous work that has found amateur coders to often be unreliable at identifying abusive content BIBREF13 , BIBREF14 ." ], [ "If we conflate hate speech and offensive language then we erroneously consider many people to be hate speakers (errors in the lower triangle of Figure 1) and fail to differentiate between commonplace offensive language and serious hate speech (errors in the upper triangle of Figure 1). Given the legal and moral implications of hate speech it is important that we are able to accurately distinguish between the two. Lexical methods are effective ways to identify potentially offensive terms but are inaccurate at identifying hate speech; only a small percentage of tweets flagged by the Hatebase lexicon were considered hate speech by human coders. While automated classification methods can achieve relatively high accuracy at differentiating between these different classes, close analysis of the results shows that the presence or absence of particular offensive or hateful terms can both help and hinder accurate classification.", "Consistent with previous work, we find that certain terms are particularly useful for distinguishing between hate speech and offensive language. While f*g, b*tch, and n*gga are used in both hate speech and offensive language, the terms f*ggot and n*gger are generally associated with hate speech. Many of the tweets considered most hateful contain multiple racial and homophobic slurs. While this allows us to easily identify some of the more egregious instances of hate speech, it means that we are more likely to misclassify hate speech if it doesn't contain any curse words or offensive terms. 
To more accurately classify such cases we should find sources of training data that are hateful without necessarily containing particular keywords or offensive language.", "Our results also illustrate how hate speech can be used in different ways: it can be sent directly to a targeted person or group of people, it can be espoused to nobody in particular, and it can be used in conversation between people. Future work should distinguish between these different uses and look more closely at the social contexts and conversations in which hate speech occurs. We must also study more closely the people who use hate speech, focusing both on their individual characteristics and motivations and on the social structures they are embedded in.", "Hate speech is a difficult phenomenon to define and is not monolithic. Our classifications of hate speech tend to reflect our own subjective biases. People identify racist and homophobic slurs as hateful but tend to see sexist language as merely offensive. While our results show that people perform well at identifying some of the more egregious instances of hate speech, particularly anti-black racism and homophobia, it is important that we are cognizant of the social biases that enter into our algorithms, and future work should aim to identify and correct these biases." ] ], "section_name": [ "Introduction", "Related Work", "Data", "Features", "Model", "Results", "Conclusions" ] }
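For reference, the Features and Model sections above translate fairly directly into a scikit-learn pipeline. The sketch below is a simplified reconstruction, not the authors' released code: it keeps only the TF-IDF word n-gram features (dropping the POS n-grams, readability, sentiment, and tweet-metadata features), and the toy texts, labels, label convention (0 = hate speech, 1 = offensive, 2 = neither), and regularization strengths are placeholders rather than the paper's tuned settings.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Placeholder data; the paper uses the ~25k crowd-labelled tweets.
train_texts = ["have a wonderful day", "what an awful, trashy movie",
               "see you tomorrow at the game", "that was a terrible idea"]
train_labels = [2, 1, 2, 1]

pipeline = Pipeline([
    # TF-IDF-weighted word unigrams, bigrams, and trigrams
    # (the paper additionally lowercases and Porter-stems tokens first)
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 3))),
    # L1-regularized logistic regression used only to reduce dimensionality;
    # C is left loose here so the tiny toy example retains some features
    ("select", SelectFromModel(
        LogisticRegression(penalty="l1", C=100.0, solver="liblinear"))),
    # final L2-regularized logistic regression, one classifier per class
    ("clf", OneVsRestClassifier(
        LogisticRegression(penalty="l2", solver="liblinear"))),
])

pipeline.fit(train_texts, train_labels)
print(pipeline.predict(["enjoy the sunshine"]))

In the paper, hyperparameters such as the regularization strength are chosen by grid search with 5-fold cross-validation and a 10% held-out evaluation split; the values above are only there to make the toy example run end to end.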
{ "answers": [ { "annotation_id": [ "25c1d626c5af8efbe83fdb95d72f4681508baa12", "5a211d102cf9f618b294330b395cc59d937dfaa7", "63eeeae80b446b0d1a3ae393b86fc6c53934995a" ], "answer": [ { "evidence": [ "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group. In extreme cases this may also be language that threatens or incites violence, but limiting our definition only to such cases would exclude a large proportion of hate speech. Importantly, our definition does not include all instances of offensive language because people often use terms that are highly offensive to certain groups but in a qualitatively different manner. For example some African Americans often use the term n*gga in everyday language online BIBREF2 , people use terms like h*e and b*tch when quoting rap lyrics, and teenagers use homophobic slurs like f*g as they play video games. Such language is prevalent on social media BIBREF3 , making this boundary condition crucial for any usable hate speech detection system ." ], "extractive_spans": [ "language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group" ], "free_form_answer": "", "highlighted_evidence": [ "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group. In extreme cases this may also be language that threatens or incites violence, but limiting our definition only to such cases would exclude a large proportion of hate speech. Importantly, our definition does not include all instances of offensive language because people often use terms that are highly offensive to certain groups but in a qualitatively different manner. For example some African Americans often use the term n*gga in everyday language online BIBREF2 , people use terms like h*e and b*tch when quoting rap lyrics, and teenagers use homophobic slurs like f*g as they play video games. Such language is prevalent on social media BIBREF3 , making this boundary condition crucial for any usable hate speech detection system ." ], "extractive_spans": [ "language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group" ], "free_form_answer": "", "highlighted_evidence": [ "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group. In extreme cases this may also be language that threatens or incites violence, but limiting our definition only to such cases would exclude a large proportion of hate speech. 
Importantly, our definition does not include all instances of offensive language because people often use terms that are highly offensive to certain groups but in a qualitatively different manner. For example some African Americans often use the term n*gga in everyday language online BIBREF2 , people use terms like h*e and b*tch when quoting rap lyrics, and teenagers use homophobic slurs like f*g as they play video games. Such language is prevalent on social media BIBREF3 , making this boundary condition crucial for any usable hate speech detection system ." ], "extractive_spans": [ "language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group" ], "free_form_answer": "", "highlighted_evidence": [ "Drawing upon these definitions, we define hate speech as language that is used to expresses hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "1b77af51931e7c2469fa619f0da6fea18bf74349", "9facaf03f2bec9dc6610f3ede46622300c0d6ae9", "eba1d78ef8e5c69d0bc0ac731c24c8d965f589ca" ], "answer": [ { "evidence": [ "We first use a logistic regression with L1 regularization to reduce the dimensionality of the data. We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs. We tested each model using 5-fold cross validation, holding out 10% of the sample for evaluation to help prevent over-fitting. After using a grid-search to iterate over the models and parameters we find that the Logistic Regression and Linear SVM tended to perform significantly better than other models. We decided to use a logistic regression with L2 regularization for the final model as it more readily allows us to examine the predicted probabilities of class membership and has performed well in previous papers BIBREF5 , BIBREF8 . We trained the final model using the entire dataset and used it to predict the label for each tweet. We use a one-versus-rest framework where a separate classifier is trained for each class and the class label with the highest predicted probability across all classifiers is assigned to each tweet. All modeling was performing using scikit-learn BIBREF12 ." ], "extractive_spans": [ "logistic regression", "naïve Bayes", "decision trees", "random forests", "linear SVMs" ], "free_form_answer": "", "highlighted_evidence": [ "We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We first use a logistic regression with L1 regularization to reduce the dimensionality of the data. We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs. We tested each model using 5-fold cross validation, holding out 10% of the sample for evaluation to help prevent over-fitting. After using a grid-search to iterate over the models and parameters we find that the Logistic Regression and Linear SVM tended to perform significantly better than other models. 
We decided to use a logistic regression with L2 regularization for the final model as it more readily allows us to examine the predicted probabilities of class membership and has performed well in previous papers BIBREF5 , BIBREF8 . We trained the final model using the entire dataset and used it to predict the label for each tweet. We use a one-versus-rest framework where a separate classifier is trained for each class and the class label with the highest predicted probability across all classifiers is assigned to each tweet. All modeling was performing using scikit-learn BIBREF12 ." ], "extractive_spans": [ "logistic regression", "naïve Bayes", "decision trees", "random forests", "linear SVM" ], "free_form_answer": "", "highlighted_evidence": [ "We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We first use a logistic regression with L1 regularization to reduce the dimensionality of the data. We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs. We tested each model using 5-fold cross validation, holding out 10% of the sample for evaluation to help prevent over-fitting. After using a grid-search to iterate over the models and parameters we find that the Logistic Regression and Linear SVM tended to perform significantly better than other models. We decided to use a logistic regression with L2 regularization for the final model as it more readily allows us to examine the predicted probabilities of class membership and has performed well in previous papers BIBREF5 , BIBREF8 . We trained the final model using the entire dataset and used it to predict the label for each tweet. We use a one-versus-rest framework where a separate classifier is trained for each class and the class label with the highest predicted probability across all classifiers is assigned to each tweet. All modeling was performing using scikit-learn BIBREF12 ." ], "extractive_spans": [ "logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs" ], "free_form_answer": "", "highlighted_evidence": [ "We first use a logistic regression with L1 regularization to reduce the dimensionality of the data. We then test a variety of models that have been used in prior work: logistic regression, naïve Bayes, decision trees, random forests, and linear SVMs." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "0842507c716412c8f564d7496d9d5ad21c97f595", "89ca923bdfb8fc9aebc6102fc32c7e3619666fdc", "76edeea05a39d53adeaa25741434b9218f4baff9" ], "answer": [ { "evidence": [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. 
Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets." ], "extractive_spans": [ "33,458" ], "free_form_answer": "", "highlighted_evidence": [ "Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets." ], "extractive_spans": [], "free_form_answer": "33,458 Twitter users are orginally used, but than random sample of tweets is extracted resulting in smaller number or users in final dataset.", "highlighted_evidence": [ "Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users.", "From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. 
Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets." ], "extractive_spans": [], "free_form_answer": "33458", "highlighted_evidence": [ "Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "813ed8be4e94e0b479404532165f58320ccbac97", "d1f4f5d35b1df7acea177a871659f9e933457ffd", "d6b8ea56c90ce60f78b681564c9632d29c1c218a" ], "answer": [ { "evidence": [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets." ], "extractive_spans": [], "free_form_answer": "85400000", "highlighted_evidence": [ "We extracted the time-line for each user, resulting in a set of 85.4 million tweets. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. 
Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets." ], "extractive_spans": [ "24,802 " ], "free_form_answer": "", "highlighted_evidence": [ "This results in a sample of 24,802 labeled tweets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We begin with a hate speech lexicon containing words and phrases identified by internet users as hate speech, compiled by Hatebase.org. Using the Twitter API we searched for tweets containing terms from the lexicon, resulting in a sample of tweets from 33,458 Twitter users. We extracted the time-line for each user, resulting in a set of 85.4 million tweets. From this corpus we then took a random sample of 25k tweets containing terms from the lexicon and had them manually coded by CrowdFlower (CF) workers. Workers were asked to label each tweet as one of three categories: hate speech, offensive but not hate speech, or neither offensive nor hate speech. They were provided with our definition along with a paragraph explaining it in further detail. Users were asked to think not just about the words appearing in a given tweet but about the context in which they were used. They were instructed that the presence of a particular word, however offensive, did not necessarily indicate a tweet is hate speech. Each tweet was coded by three or more people. The intercoder-agreement score provided by CF is 92%. We use the majority decision for each tweet to assign a label. Some tweets were not assigned labels as there was no majority class. This results in a sample of 24,802 labeled tweets." ], "extractive_spans": [ "24,802 labeled tweets" ], "free_form_answer": "", "highlighted_evidence": [ "This results in a sample of 24,802 labeled tweets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "yes", "yes", "yes", "yes" ], "question": [ "What is their definition of hate speech?", "What type of model do they train?", "How many users does their dataset have?", "How long is their dataset?" ], "question_id": [ "a4b77a20e067789691e0ab246bc5b11913d77ae1", "ba39317e918b4386765f88e8c8ae99f9a098c935", "22c125c461f565f5437dac74bf19c2ef317bad86", "4a91432abe3f54fcbdd00bb85dc0df95b16edf42" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "Offensive language detection", "Offensive language detection", "Offensive language detection", "Offensive language detection" ], "topic_background": [ "research", "research", "research", "research" ] }
{ "caption": [ "Figure 1: True versus predicted categories" ], "file": [ "3-Figure1-1.png" ] }
[ "How many users does their dataset have?", "How long is their dataset?" ]
[ [ "1703.04009-Data-0" ], [ "1703.04009-Data-0" ] ]
[ "33458", "85400000" ]
48
1911.03090
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
Pretrained transformer-based language models have achieved state of the art across countless tasks in natural language processing. These models are highly expressive, comprising at least a hundred million parameters and a dozen layers. Recent evidence suggests that only a few of the final layers need to be fine-tuned for high quality on downstream tasks. Naturally, a subsequent research question is, "how many of the last layers do we need to fine-tune?" In this paper, we precisely answer this question. We examine two recent pretrained language models, BERT and RoBERTa, across standard tasks in textual entailment, semantic similarity, sentiment analysis, and linguistic acceptability. We vary the number of final layers that are fine-tuned, then study the resulting change in task-specific effectiveness. We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality. Surprisingly, we also find that fine-tuning all layers does not always help.
{ "paragraphs": [ [ "Transformer-based pretrained language models are a battle-tested solution to a plethora of natural language processing tasks. In this paradigm, a transformer-based language model is first trained on copious amounts of text, then fine-tuned on task-specific data. BERT BIBREF0, XLNet BIBREF1, and RoBERTa BIBREF2 are some of the most well-known ones, representing the current state of the art in natural language inference, question answering, and sentiment classification, to list a few. These models are extremely expressive, consisting of at least a hundred million parameters, a hundred attention heads, and a dozen layers.", "An emerging line of work questions the need for such a parameter-loaded model, especially on a single downstream task. BIBREF3, for example, note that only a few attention heads need to be retained in each layer for acceptable effectiveness. BIBREF4 find that, on many tasks, just the last few layers change the most after the fine-tuning process. We take these observations as evidence that only the last few layers necessarily need to be fine-tuned.", "The central objective of our paper is, then, to determine how many of the last layers actually need fine-tuning. Why is this an important subject of study? Pragmatically, a reasonable cutoff point saves computational memory across fine-tuning multiple tasks, which bolsters the effectiveness of existing parameter-saving methods BIBREF5. Pedagogically, understanding the relationship between the number of fine-tuned layers and the resulting model quality may guide future works in modeling.", "Our research contribution is a comprehensive evaluation, across multiple pretrained transformers and datasets, of the number of final layers needed for fine-tuning. We show that, on most tasks, we need to fine-tune only one fourth of the final layers to achieve within 10% parity with the full model. Surprisingly, on SST-2, a sentiment classification dataset, we find that not fine-tuning all of the layers leads to improved quality." ], [ "In the pretrained language modeling paradigm, a language model (LM) is trained on vast amounts of text, then fine-tuned on a specific downstream task. BIBREF6 are one of the first to successfully apply this idea, outperforming state of the art in question answering, textual entailment, and sentiment classification. Their model, dubbed ELMo, comprises a two-layer BiLSTM pretrained on the Billion Word Corpus BIBREF7.", "Furthering this approach with more data and improved modeling, BIBREF0 pretrain deep 12- and 24-layer bidirectional transformers BIBREF8 on the entirety of Wikipedia and BooksCorpus BIBREF9. Their approach, called BERT, achieves state of the art across all tasks in the General Language Understanding Evaluation (GLUE) benchmark BIBREF10, as well as the Stanford Question Answering Dataset (BIBREF11).", "As a result of this development, a flurry of recent papers has followed this more-data-plus-better-models principle. Two prominent examples include XLNet BIBREF1 and RoBERTa BIBREF2, both of which contest the present state of the art. XLNet proposes to pretrain two-stream attention-augmented transformers on an autoregressive LM objective, instead of the original cloze and next sentence prediction (NSP) tasks from BERT. RoBERTa primarily argues for pretraining longer, using more data, and removing the NSP task for BERT." 
], [ "The prevailing evidence in the neural network literature suggests that earlier layers extract universal features, while later ones perform task-specific modeling. BIBREF12 visualize the per-layer activations in image classification networks, finding that the first few layers function as corner and edge detectors, and the final layers as class-specific feature extractors. BIBREF13 demonstrate that the low- and high-level notions of content and style are separable in convolutional neural networks, with lower layers capturing content and higher layers style.", "Pretrained transformers. In the NLP literature, similar observations have been made for pretrained language models. BIBREF14 analyze BERT's attention and observe that the bottom layers attend broadly, while the top layers capture linguistic syntax. BIBREF4 find that the last few layers of BERT change the most after task-specific fine-tuning. Similar to our work, BIBREF5 fine-tune the top layers of BERT, as part of their baseline comparison for their model compression approach. However, none of the studies comprehensively examine the number of necessary final layers across multiple pretrained transformers and datasets." ], [ "We conduct our experiments on NVIDIA Tesla V100 GPUs with CUDA v10.1. We run the models from the Transformers library (v2.1.1; BIBREF15) using PyTorch v1.2.0." ], [ "We choose BERT BIBREF0 and RoBERTa BIBREF2 as the subjects of our study, since they represent state of the art and the same architecture. XLNet BIBREF1 is another alternative; however, they use a slightly different attention structure, and our preliminary experiments encountered difficulties in reproducibility with the Transformers library. Each model has base and large variants that contain 12 and 24 layers, respectively. We denote them by appending the variant name as a subscript to the model name.", "Within each variant, the two models display slight variability in parameter count—110 and 125 million in the base variant, and 335 and 355 in the large one. These differences are mostly attributed to RoBERTa using many more embedding parameters—exactly 63% more for both variants. For in-depth, layerwise statistics, see Table TABREF4.", "For our datasets, we use the GLUE benchmark, which comprises the tasks in natural language inference, sentiment classification, linguistic acceptability, and semantic similarity. Specifically, for natural language inference (NLI), it provides the Multigenre NLI (MNLI; BIBREF16), Question NLI (QNLI; BIBREF10), Recognizing Textual Entailment (RTE; BIBREF17), and Winograd NLI BIBREF18 datasets. For semantic textual similarity and paraphrasing, it contains the Microsoft Research Paraphrase Corpus (MRPC; BIBREF19), the Semantic Textual Similarity Benchmark (STS-B; BIBREF20), and Quora Question Pairs (QQP; BIBREF21). Finally, its single-sentence tasks consist of the binary-polarity Stanford Sentiment Treebank (SST-2; BIBREF22) and the Corpus of Linguistic Acceptability (CoLA; BIBREF23)." ], [ "Our fine-tuning procedure closely resembles those of BERT and RoBERTa. We choose the Adam optimizer BIBREF24 with a batch size of 16 and fine-tune BERT for 3 epochs and RoBERTa for 10, following the original papers. For hyperparameter tuning, the best learning rate is different for each task, and all of the original authors choose one between $1 \\times 10^{-5}$ and $5 \\times 10^{-5}$; thus, we perform line search over the interval with a step size of $1 \\times 10^{-5}$. 
We report the best results in Table TABREF5.", "On each model, we freeze the embeddings and the weights of the first $N$ layers, then fine-tune the rest using the best hyperparameters of the full model. Specifically, if $L$ is the number of layers, we explore $N = \\frac{L}{2}, \\frac{L}{2} + 1, \\dots , L$. Due to computational limitations, we set half as the cutoff point. Additionally, we restrict our comprehensive all-datasets exploration to the base variant of BERT, since the large model variants and RoBERTa are much more computationally intensive. On the smaller CoLA, SST-2, MRPC, and STS-B datasets, we comprehensively evaluate both models. These choices do not substantially affect our analysis." ], [ "We report three relevant operating points in Tables TABREF6–TABREF9: two extreme operating points and an intermediate one. The former is self-explanatory, indicating fine-tuning all or none of the nonoutput layers. The latter denotes the number of necessary layers for reaching at least 90% of the full model quality, excluding CoLA, which is an outlier.", "From the reported results in Tables TABREF6–TABREF9, fine-tuning the last output layer and task-specific layers is insufficient for all tasks—see the rows corresponding to 0, 12, and 24 frozen layers. However, we find that the first half of the model is unnecessary; the base models, for example, need fine-tuning of only 3–5 layers out of the 12 to reach 90% of the original quality—see Table TABREF7, middle subrow of each row group. Similarly, fine-tuning only a fourth of the layers is sufficient for the large models (see Table TABREF9); only 6 layers out of 24 for BERT and 7 for RoBERTa." ], [ "In Figure FIGREF10, we examine how the relative quality changes with the number of frozen layers. To compute a relative score, we subtract each frozen model's results from its corresponding full model. The relative score aligns the two baselines at zero, allowing the fair comparison of the transformers. The graphs report the average of five trials to reduce the effects of outliers.", "When every component except the output layer and the task-specific layer is frozen, the fine-tuned model achieves only 64% of the original quality, on average. As more layers are fine-tuned, the model effectiveness often improves drastically—see CoLA and STS-B, the first and fourth vertical pairs of subfigures from the left. This demonstrates that gains decompose nonadditively with respect to the number of frozen initial layers. Fine-tuning subsequent layers shows diminishing returns, with every model rapidly approaching the baseline quality at fine-tuning half of the network; hence, we believe that half is a reasonable cutoff point for characterizing the models.", "Finally, for the large variants of BERT and RoBERTa on SST-2 (second subfigure from both the top and the left), we observe a surprisingly consistent increase in quality when freezing 12–16 layers. This finding suggests that these models may be overparameterized for SST-2." ], [ "In this paper, we present a comprehensive evaluation of the number of final layers that need to be fine-tuned for pretrained transformer-based language models. We find that only a fourth of the layers necessarily need to be fine-tuned to obtain 90% of the original quality. One line of future work is to conduct a similar, more fine-grained analysis on the contributions of the attention heads." 
], [ "This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and enabled by computational resources provided by Compute Ontario and Compute Canada." ] ], "section_name": [ "Introduction", "Background and Related Work ::: Pretrained Language Models", "Background and Related Work ::: Layerwise Interpretability", "Experimental Setup", "Experimental Setup ::: Models and Datasets", "Experimental Setup ::: Fine-Tuning Procedure", "Analysis ::: Operating Points", "Analysis ::: Per-Layer Study", "Conclusions and Future Work", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "0965c641c7eeb9f60ca3910da5095f6c3db84dd3", "59b0ed8a4f50b62339d63ac766e6dfb915d12cfb", "d3b65d449f7112686f1cc2f8c00db60edecb75df" ], "answer": [ { "evidence": [ "Finally, for the large variants of BERT and RoBERTa on SST-2 (second subfigure from both the top and the left), we observe a surprisingly consistent increase in quality when freezing 12–16 layers. This finding suggests that these models may be overparameterized for SST-2." ], "extractive_spans": [ "SST-2" ], "free_form_answer": "", "highlighted_evidence": [ "Finally, for the large variants of BERT and RoBERTa on SST-2 (second subfigure from both the top and the left), we observe a surprisingly consistent increase in quality when freezing 12–16 layers. " ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Our research contribution is a comprehensive evaluation, across multiple pretrained transformers and datasets, of the number of final layers needed for fine-tuning. We show that, on most tasks, we need to fine-tune only one fourth of the final layers to achieve within 10% parity with the full model. Surprisingly, on SST-2, a sentiment classification dataset, we find that not fine-tuning all of the layers leads to improved quality." ], "extractive_spans": [ "SST-2" ], "free_form_answer": "", "highlighted_evidence": [ "Surprisingly, on SST-2, a sentiment classification dataset, we find that not fine-tuning all of the layers leads to improved quality." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "8150f6fa5491117dbbdb05e01fb2895eb4cc6654", "bd6c1da455b10c077979e36651ce696e85eca0c8", "cde2406dfbb5b9e1adcbea9e6881685079afe7ca" ], "answer": [ { "evidence": [ "On each model, we freeze the embeddings and the weights of the first $N$ layers, then fine-tune the rest using the best hyperparameters of the full model. Specifically, if $L$ is the number of layers, we explore $N = \\frac{L}{2}, \\frac{L}{2} + 1, \\dots , L$. Due to computational limitations, we set half as the cutoff point. Additionally, we restrict our comprehensive all-datasets exploration to the base variant of BERT, since the large model variants and RoBERTa are much more computationally intensive. On the smaller CoLA, SST-2, MRPC, and STS-B datasets, we comprehensively evaluate both models. These choices do not substantially affect our analysis.", "For our datasets, we use the GLUE benchmark, which comprises the tasks in natural language inference, sentiment classification, linguistic acceptability, and semantic similarity. Specifically, for natural language inference (NLI), it provides the Multigenre NLI (MNLI; BIBREF16), Question NLI (QNLI; BIBREF10), Recognizing Textual Entailment (RTE; BIBREF17), and Winograd NLI BIBREF18 datasets. For semantic textual similarity and paraphrasing, it contains the Microsoft Research Paraphrase Corpus (MRPC; BIBREF19), the Semantic Textual Similarity Benchmark (STS-B; BIBREF20), and Quora Question Pairs (QQP; BIBREF21). Finally, its single-sentence tasks consist of the binary-polarity Stanford Sentiment Treebank (SST-2; BIBREF22) and the Corpus of Linguistic Acceptability (CoLA; BIBREF23)." 
], "extractive_spans": [], "free_form_answer": "For GLUE bencmark no, for dataset MRPC, SST-B, SST-2 and COLA yes.", "highlighted_evidence": [ "Due to computational limitations, we set half as the cutoff point. Additionally, we restrict our comprehensive all-datasets exploration to the base variant of BERT, since the large model variants and RoBERTa are much more computationally intensive.", "For our datasets, we use the GLUE benchmark, which comprises the tasks in natural language inference, sentiment classification, linguistic acceptability, and semantic similarity. Specifically, for natural language inference (NLI), it provides the Multigenre NLI (MNLI; BIBREF16), Question NLI (QNLI; BIBREF10), Recognizing Textual Entailment (RTE; BIBREF17), and Winograd NLI BIBREF18 datasets. For semantic textual similarity and paraphrasing, it contains the Microsoft Research Paraphrase Corpus (MRPC; BIBREF19), the Semantic Textual Similarity Benchmark (STS-B; BIBREF20), and Quora Question Pairs (QQP; BIBREF21). Finally, its single-sentence tasks consist of the binary-polarity Stanford Sentiment Treebank (SST-2; BIBREF22) and the Corpus of Linguistic Acceptability (CoLA; BIBREF23)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We choose BERT BIBREF0 and RoBERTa BIBREF2 as the subjects of our study, since they represent state of the art and the same architecture. XLNet BIBREF1 is another alternative; however, they use a slightly different attention structure, and our preliminary experiments encountered difficulties in reproducibility with the Transformers library. Each model has base and large variants that contain 12 and 24 layers, respectively. We denote them by appending the variant name as a subscript to the model name." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We choose BERT BIBREF0 and RoBERTa BIBREF2 as the subjects of our study, since they represent state of the art and the same architecture.", "Each model has base and large variants that contain 12 and 24 layers, respectively." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "FLOAT SELECTED: Table 2: Reproduced results of BERT and RoBERTa on the development sets." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Reproduced results of BERT and RoBERTa on the development sets." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "In what tasks does fine-tuning all layers hurt performance?", "Do they test against the large version of RoBERTa?" ], "question_id": [ "7c398615141ca416a32c9f72dbb785d3a6986a0f", "441be93e2830cc0fc65afad6959db92754c9f5a8" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "Roberta", "Roberta" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Table 1: Parameter statistics for the base and large variants of BERT and RoBERTa. Note that “per-layer” indicates the number of parameters in one intermediate layer, which is more relevant to our study.", "Table 2: Reproduced results of BERT and RoBERTa on the development sets.", "Table 3: Development set results of BERT, with none, some, and all of the nonoutput layer weights fine-tuned. Results are averaged across five runs.", "Table 5: Development set results of all large models, with none, some, and all of the nonoutput layer weights fine-tuned. Results are averaged across five runs.", "Table 4: Development set results of all base models, with none, some, and all of the nonoutput layer weights fine-tuned. Results are averaged across five runs.", "Figure 1: Relative change in quality compared to the full models, with respect to the number of frozen initial layers, represented by the x-axes." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Table3-1.png", "3-Table5-1.png", "3-Table4-1.png", "4-Figure1-1.png" ] }
[ "Do they test against the large version of RoBERTa?" ]
[ [ "1911.03090-Experimental Setup ::: Fine-Tuning Procedure-1", "1911.03090-Experimental Setup ::: Models and Datasets-0", "1911.03090-Experimental Setup ::: Models and Datasets-2", "1911.03090-2-Table2-1.png" ] ]
[ "For GLUE bencmark no, for dataset MRPC, SST-B, SST-2 and COLA yes." ]
49
1909.04242
Mitigating Annotation Artifacts in Natural Language Inference Datasets to Improve Cross-dataset Generalization Ability
Natural language inference (NLI) aims at predicting the relationship between a given pair of premise and hypothesis. However, several works have found that a bias pattern called annotation artifacts is widespread in NLI datasets, making it possible to identify the label by looking only at the hypothesis. This irregularity causes evaluation results to be over-estimated and hurts models' generalization ability. In this paper, we consider a more trustworthy setting, i.e., cross-dataset evaluation. We explore the impacts of annotation artifacts in cross-dataset testing. Furthermore, we propose a training framework to mitigate the impacts of the bias pattern. Experimental results demonstrate that our methods can alleviate the negative effect of the artifacts and improve the generalization ability of models.
{ "paragraphs": [ [ "Natural language inference (NLI) is a widely-studied problem in natural language processing. It aims at comparing a pair of sentences (i.e. a premise and a hypothesis), and inferring the relationship between them (i.e., entailment, neutral and contradiction). Large-scaled datasets like SNLI BIBREF0 and MultiNLI BIBREF1 have been created by crowd-sourcing and fertilized NLI research substantially.", "However, several works BIBREF2, BIBREF3, BIBREF4 have pointed out that crowd-sourcing workers have brought a bias pattern named annotation artifacts in these NLI datasets. Such artifacts in hypotheses can reveal the labels and make it possible to predict the labels solely by looking at the hypotheses. For example, models trained on SNLI with only the hypotheses can achieve an accuracy of 67.0%, despite the always predicting the majority-class baseline is only 34.3% BIBREF2.", "Classifiers trained on NLI datasets are supposed to make predictions by understanding the semantic relationships between given sentence pairs. However, it is shown that models are unintentionally utilizing the annotation artifacts BIBREF4, BIBREF2. If the evaluation is conducted under a similar distribution as the training data, e.g., with the given testing set, models will enjoy additional advantages, making the evaluation results over-estimated. On the other hand, if the bias pattern cannot be generalized to the real-world, it may introduce noise to models, thus hurting the generalization ability.", "In this paper, we use cross-dataset testing to better assess models' generalization ability. We investigate the impacts of annotation artifacts in cross-dataset testing. Furthermore, we propose an easy-adopting debiasing training framework, which doesn't require any additional data or annotations, and apply it to the high-performing Densely Interactive Inference Network BIBREF5. Experiments show that our method can effectively mitigate the bias pattern and improve the cross-dataset generalization ability of models. To the best of our knowledge, our work is the first attempt to alleviate the annotation artifacts without any extra resources." ], [ "Frequently-used NLI datasets such as SNLI and MultiNLI are created by crowd-sourcing BIBREF0, BIBREF1, during which they present workers a premise and ask them to produce three hypotheses corresponding to labels. As BIBREF2 pointed out, workers may adopt some specific annotation strategies and heuristics when authoring hypotheses to save efforts, which produces certain patterns called annotation artifacts in the data. Models' trained on such datasets are heavily affected by the bias pattern BIBREF2.", "BIBREF4 further investigate models' robustness to the bias pattern using swapping operations. BIBREF6 demonstrate that the annotation artifacts widely exist among NLI datasets. They show that hypothesis-only-model, which refers to models trained and predict only with hypotheses, outperforms always predicting the majority-class in six of ten NLI datasets.", "The emergence of the pattern can be due to selection bias BIBREF7, BIBREF8, BIBREF9 in the datasets preparing procedure. Several works BIBREF10, BIBREF11 investigate the bias problem in relation inference datasest. BIBREF12 investigate the selection bias embodied in the comparing relationships in six natural language sentence matching datasets and propose a debiasing training and evaluation framework." 
], [ "Essentially speaking, the problem of the bias pattern is that the artifacts in hypotheses are distributed differently among labels, so balancing them across labels may be a good solution to alleviate the impacts BIBREF2.", "Based on the idea proposed by BIBREF12, we demonstrate that we can make artifacts in biased datasets balanced across different classes by assigning specific weights for every sample. We refer the distribution of the acquired weighted dataset as artifact-balanced distribution. We consider a supervised NLI task, which is to predict the relationship label $y$ given a sentence pair $x$, and we denote the hypothesis in $x$ as $h$. Without loss of generality, we assume that the prior probability of different labels is equal, and then we have the following theorem.", "Theorem 1 For any classifier $f=f(x, h)$, and for any loss function $\\Delta (f(x, h), y)$, if we use $w = \\frac{1}{P(y|h)}$ as weight for every sample during training, it's equivalent to training with the artifact-balanced distribution.", "Detailed assumptions and the proof of the theorem is presented in Appendix SECREF6. With the theorem, we can simply use cross predictions to estimate $P(y|h)$ in origin datasets and use them as sample weights during training. The step-by-step procedure for artifact-balanced learning is presented in Algorithm 1.", "However, it is difficult to precisely estimate the probability $P(y|h)$. A minor error might lead to a significant difference to the weight, especially when the probability is close to zero. Thus, in practice, we use $w = \\frac{1}{(1-\\epsilon )P(y|h) + \\epsilon }$ as sample weights during training in order to improve the robustness. We can find that as $\\epsilon $ increases, the weights tend to be uniform, indicating that the debiasing effect decreases as the smooth term grows. Moreover, in order to keep the prior probability $P(Y)$ unchanged, we normalize the sum of weights of the three labels to the same." ], [ "In this section, we present the experimental results for cross-dataset testing of artifacts and artifact-balanced learning. We show that cross-dataset testing is less affected by annotation artifacts, while there are still some influences more or less in different datasets. We also demonstrate that our proposed framework can mitigate the bias and improve the generalization ability of models." ], [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing.", "SNLI and MultiNLI are prepared by Human Elicited, in which workers are given a context and asked to produce hypotheses corresponding to labels. SICK and JOCI are created by Human Judged, referring that hypotheses and premises are automatically paired while labels are generated by humans BIBREF6. In order to maximumly mitigate the impacts of annotation artifacts during evaluations, we train and validate models respectively on SNLI and MultiNLI and test on both SICK and JOCI. We also report models' performances on SNLI and MultiNLI.", "As to SNLI, we use the same partition as BIBREF0. For MultiNLI, we separately use two origin validation sets (Matched and Mismatched) as the testing sets for convenience, and refer them as MMatch and MMismatch. We randomly select 10000 samples out of the origin training set for validation and use the rest for training. 
As to JOCI, we use the whole “B” subsets for testing, whose premises are from SNLI-train while hypotheses are generated based on world knowledge BIBREF13, and convert the score to NLI labels following BIBREF6. As to SICK, we use the whole dataset for testing." ], [ "To determine how biased the models are, we partition the testing set of SNLI and MMatch into two subsets: examples that the hypothesis-only model can be correctly classified as Easy and the rest as Hard as seen in BIBREF2. More detailed information is presented in Appendix SECREF14." ], [ "We refer models trained only with hypotheses as hypothesis-only-model (Hyp), and models that utilize both premises and hypotheses as normal-model (Norm). We implement a simple LSTM model for Hyp and use DIIN BIBREF5 as Norm. We report AUC for Hyp and ACC for Norm. More details can be seen in Appendix SECREF15", "We estimate $P(y|h)$ for SNLI and MultiNLI respectively using BERT BIBREF15 with 10-fold predictions. To investigate the impacts of smooth terms, we choose a series of smooth values and present the results. Considering models may jiggle during the training phase due to the varied scale of weights, we sample examples with probabilities proportional to the weights for every mini-batch instead of adding weights to the loss directly.", "The evaluation results are reported in Table TABREF3." ], [ "Anotation Artifacts can be generalized across Human Elicited datasets. From the AUC of Hyp baseline trained with SNLI, we can see that the bias pattern of SNLI has a strong predictive ability in itself and the other two testing sets of Human Elicited. The behavior of those trained with MultiNLI is similar.", "Anotation Artifacts of SNLI and MultiNLI can be generalized to SICK. Unexpectedly, it is shown that Hyp baseline can get $0.6250$ (AUC) trained with SNLI and $0.6079$ (AUC) with MultiNLI when tested on SICK, indicating that the bias pattern of SNLI and MultiNLI are predictive on SICK. The results imply that the bias pattern can even be generalized across datasets prepared by different methods.", "Annotation Artifacts of SNLI are nearly neutral in JOCI, while MultiNLI is misleading. We find that AUC of Hyp baseline trained with SNLI is very close to $0.5$ on JOCI, indicating that JOCI is nearly neutral to artifacts in SNLI. However, when it comes to training with MultiNLI, the AUC of Hyp baseline is lower than $0.5$, indicating that the artifacts are misleading in JOCI." ], [ "Focusing on the results when smooth equals $0.01$ for SNLI and smooth equals $0.02$ for MultiNLI, we observe that the AUC of Hyp for all testing sets are approximately $0.5$, indicating Hyp's predictions are approximately equivalent to randomly guessing. Also, the gap between Hard and Easy for Norm significantly decreases comparing with the baseline. With the smooth, we can conclude that our method effectively alleviates the bias pattern.", "With other smooth terms, our method still has more or less debiasing abilities. In those testing sets which are not neutral to the bias pattern, the AUC of Hyp always come closer to $0.5$ comparing with the baseline with whatever smooth values. Performances of Norm on Hard and Easy also come closer comparing with the baseline. Norm trained with SNLI even exceed baseline in Hard with most smooth terms.", "From the results of Hyp, we can find a trend that the larger the smooth value is, the lower the level of debiasing is, while with a very small or even no smooth value, the AUC may be lower than $0.5$. 
As mentioned before, we owe this to the imperfect estimation of $P(y|h)$, and we can conclude that a proper smooth value is a prerequisite for the best debiasing effect." ], [ "Debiasing may improve models' generalization ability from two aspects: (1) Mitigate the misleading effect of annotation artifacts. (2) Improve models' semantic learning ability.", "When the annotation artifacts of the training set cannot be generalized to the testing set, which should be more common in the real-world, predicting by artifacts may hurt models' performance. Centering on the results of JOCI, in which the bias pattern of MultiNLI is misleading, we find that Norm trained with MultiNLI outperforms baseline after debiasing with all smooth values tested.", "Furthermore, debiasing can reduce models' dependence on the bias pattern during training, thus force models to better learn semantic information to make predictions. Norm trained with SNLI exceed baseline in JOCI with smooth terms $0.01$ and $0.1$. With larger smooth terms, Norm trained with both SNLI and MultiNLI exceeds baseline in SICK. Given the fact that JOCI is almost neutral to artifacts in SNLI, and the bias pattern of both SNLI and MultiNLI are even predictive in SICK, we owe these promotions to that our method improves models' semantic learning ability.", "As to other testing sets like SNLI, MMatch and MMismatch, we notice that the performance of Norm always decreases compared with the baseline. As mentioned before, both SNLI and MultiNLI are prepared by Huamn Elicited, and their artifacts can be generalized across each other. We owe the drop to that the detrimental effect of mitigating the predictable bias pattern exceeds the beneficial effect of the improvement of semantic learning ability." ], [ "In this paper, we take a close look at the annotation artifacts in NLI datasets. We find that the bias pattern could be predictive or misleading in cross-dataset testing. Furthermore, we propose a debiasing framework and experiments demonstrate that it can effectively mitigate the impacts of the bias pattern and improve the cross-dataset generalization ability of models. However, it remains an open problem that how we should treat the annotation artifacts. We cannot assert whether the bias pattern should not exist at all or it is actually some kind of nature. We hope that our findings will encourage more explorations on reliable evaluation protocols for NLI models." ], [ "We make a few assumptions about an artifact-balanced distribution and how the biased datasets are generated from it, and demonstrate that we can train models fitting the artifact-balanced distribution using only the biased datasets.", "We consider the domain of the artifact-balanced distribution ${D}$ as $\\mathcal {X} \\times \\mathcal {A} \\times \\mathcal {Y} \\times \\mathcal {S}$, in which $\\mathcal {X}$ is the input variable space, $\\mathcal {Y}$ is the label space, $\\mathcal {A}$ is the feature space of annotation artifacts in hypotheses, $\\mathcal {S}$ is the selection intention space. We assume that the biased distribution $\\widehat{{D}}$ of origin datasets can be generated from the artifact-balanced distribution by selecting samples with $S = Y$, i.e., the selection intention matches with the label. We use $P(\\cdot )$ to represent the probability on $\\widehat{{D}}$ and use $Q(\\cdot )$ for ${D}$.", "We also make some assumptions about the artifact-balanced distribution. 
The first one is that the label is independent with the artifact in the hypothesis, defined as follows,", "The second one is that the selection intention is independent with $X$ and $Y$ when the annotation artifact is given,", "And we can prove the equivalence of training with weight $\\frac{1}{P(Y|A)}$ and fitting the artifact-balanced distribution. We first present an equation as follows,", "Without loss of generality, we can assume $Q(Y=i)=\\frac{1}{3}~(i=0,1,2)$ and get that,", "With the above derivation, we can prove the equivalence like following,", "As $Q(S=Y)$ is just a constant, training with the loss is equivalent to fitting the artifact-balanced distribution. Given hypotheses variable H, the probability $P(Y|A)$ can be replaced by $P(Y|H)$ since the predictive ability of hypotheses totally comes from the annotation artifacts, and we can have $w=\\frac{1}{P(Y|H)}$ as weights during training." ], [ "For SNLI, we use Hard released by BIBREF2. For MMatch, we manually partition the set using fastText BIBREF18. And we summarize the size of the datasets used in Hard-Easy Testing below.", "" ], [ "For DIIN, we use settings same as BIBREF5 but do not use syntactical features. Priors of labels are normalized to be the same. For hypothesis-only-model, we implement a naïve model with one LSTM layer and a three-layer MLP behind, implemented with Keras and Tensorflow backend BIBREF16. We use the 300-dimensional GloVe embeddings trained on the Common Crawl 840B tokens dataset BIBREF19 and keep them fixed during training. Batch Normalization BIBREF17 are applied after every hidden layer in MLP and we use Dropout BIBREF20 with rate 0.5 after the last hidden layer. We use RMSPropBIBREF21 as optimizer and set the learning rate as 1e-3. We set the gradient clipping to 1.0 and the batch size to 256." ] ], "section_name": [ "Introduction", "Related Work", "Making Artifacts Unpredictable", "Experimental Results", "Experimental Results ::: Evaluation Scheme ::: Cross-dataset Testing", "Experimental Results ::: Evaluation Scheme ::: Hard-Easy Testing", "Experimental Results ::: Experiment Setup", "Experimental Results ::: Can Artifacts Generalize Across Datasets?", "Experimental Results ::: Debiasing Results ::: Effectiveness of Debiasing", "Experimental Results ::: Debiasing Results ::: Benefits of Debiasing", "Conclusion", "Detailed Assumptions and Proof of Theorem @!START@UID1@!END@", "Experiment Setting ::: Hard-Easy Datasets Setting", "Experiment Setting ::: Experiment Setup" ] }
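A minimal sketch of the artifact-balanced weighting described above, assuming $P(y|h)$ has already been estimated (e.g., with cross-fitted hypothesis-only predictions); the function and variable names are ours, not the authors'.

```python
import numpy as np

def artifact_balanced_weights(p_y_given_h, labels, eps=0.01):
    """Smoothed inverse-probability weights w = 1 / ((1 - eps) * P(y|h) + eps),
    normalized so the weights of each label sum to the same total, which
    keeps the label prior P(Y) unchanged."""
    w = 1.0 / ((1.0 - eps) * np.asarray(p_y_given_h, dtype=float) + eps)
    labels = np.asarray(labels)
    for y in np.unique(labels):
        mask = labels == y
        w[mask] /= w[mask].sum()
    return w

def sample_minibatch(rng, weights, batch_size=64):
    """Sample a mini-batch with probability proportional to the weights,
    as the paper does instead of multiplying the loss directly."""
    p = weights / weights.sum()
    return rng.choice(weights.size, size=batch_size, replace=False, p=p)

# usage sketch (p_hat and y_train are assumed to exist)
# w = artifact_balanced_weights(p_hat, y_train, eps=0.01)
# idx = sample_minibatch(np.random.default_rng(0), w, batch_size=64)
```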
{ "answers": [ { "annotation_id": [ "28c2064275f6a27fe9e82e39aedbe9c63dc546d0", "521010b56e873c14a051afe42cebe59d2660d710", "696155cb59a48410852dd0c59c4bc7da1b5d24ca" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Evaluation Results of Hyp and Norm. Baseline refers to the model trained and validated without using weights. Hard, Easy refers to the Hard-Easy Testing generated from the testing set corresponding to the Trainset column. Results of Hyp are the average numbers of five runs with different random initialization. We report AUC for Hyp and ACC for Norm. “*” indicates where normal-model are better than the baseline." ], "extractive_spans": [], "free_form_answer": "Average improvement in accuracy is 2.26 points", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Evaluation Results of Hyp and Norm. Baseline refers to the model trained and validated without using weights. Hard, Easy refers to the Hard-Easy Testing generated from the testing set corresponding to the Trainset column. Results of Hyp are the average numbers of five runs with different random initialization. We report AUC for Hyp and ACC for Norm. “*” indicates where normal-model are better than the baseline." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "60c8f59921a40ce4f01c7c7f8215a1fbd4bec33c", "dc4d8e91ec240c38c3d865e24d35f117fefe62bf", "f67b0ebf3a97265282e0206fd8cb781cfc8fd7bf" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "\nWe make a few assumptions about an artifact-balanced distribution and how the biased datasets are generated from it, and demonstrate that we can train models fitting the artifact-balanced distribution using only the biased datasets.\n\nWe consider the domain of the artifact-balanced distribution ${D}$ as $\\mathcal {X} \\times \\mathcal {A} \\times \\mathcal {Y} \\times \\mathcal {S}$, in which $\\mathcal {X}$ is the input variable space, $\\mathcal {Y}$ is the label space, $\\mathcal {A}$ is the feature space of annotation artifacts in hypotheses, $\\mathcal {S}$ is the selection intention space. We assume that the biased distribution $\\widehat{{D}}$ of origin datasets can be generated from the artifact-balanced distribution by selecting samples with $S = Y$, i.e., the selection intention matches with the label. We use $P(\\cdot )$ to represent the probability on $\\widehat{{D}}$ and use $Q(\\cdot )$ for ${D}$.\n\nWe also make some assumptions about the artifact-balanced distribution. The first one is that the label is independent with the artifact in the hypothesis, defined as follows,\n\nThe second one is that the selection intention is independent with $X$ and $Y$ when the annotation artifact is given,\n\nAnd we can prove the equivalence of training with weight $\\frac{1}{P(Y|A)}$ and fitting the artifact-balanced distribution. 
We first present an equation as follows,\n\nWithout loss of generality, we can assume $Q(Y=i)=\\frac{1}{3}~(i=0,1,2)$ and get that,\n\nWith the above derivation, we can prove the equivalence like following,\n\nAs $Q(S=Y)$ is just a constant, training with the loss is equivalent to fitting the artifact-balanced distribution. Given hypotheses variable H, the probability $P(Y|A)$ can be replaced by $P(Y|H)$ since the predictive ability of hypotheses totally comes from the annotation artifacts, and we can have $w=\\frac{1}{P(Y|H)}$ as weights during training.\n\nExperiment Setting ::: Hard-Easy Datasets Setting\nFor SNLI, we use Hard released by BIBREF2. For MMatch, we manually partition the set using fastText BIBREF18. And we summarize the size of the datasets used in Hard-Easy Testing below.\n\nExperiment Setting ::: Experiment Setup\nFor DIIN, we use settings same as BIBREF5 but do not use syntactical features. Priors of labels are normalized to be the same. For hypothesis-only-model, we implement a naïve model with one LSTM layer and a three-layer MLP behind, implemented with Keras and Tensorflow backend BIBREF16. We use the 300-dimensional GloVe embeddings trained on the Common Crawl 840B tokens dataset BIBREF19 and keep them fixed during training. Batch Normalization BIBREF17 are applied after every hidden layer in MLP and we use Dropout BIBREF20 with rate 0.5 after the last hidden layer. We use RMSPropBIBREF21 as optimizer and set the learning rate as 1e-3. We set the gradient clipping to 1.0 and the batch size to 256.\n\n" ], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "09dca28028fc45f8f52b7cbbdbf916b60df2e493", "160ccb685496a2f5fec06a286528dc338e912859", "80eefea9620961cc4c81835e180aa57995369ba9" ], "answer": [ { "evidence": [ "When the annotation artifacts of the training set cannot be generalized to the testing set, which should be more common in the real-world, predicting by artifacts may hurt models' performance. Centering on the results of JOCI, in which the bias pattern of MultiNLI is misleading, we find that Norm trained with MultiNLI outperforms baseline after debiasing with all smooth values tested.", "Furthermore, debiasing can reduce models' dependence on the bias pattern during training, thus force models to better learn semantic information to make predictions. Norm trained with SNLI exceed baseline in JOCI with smooth terms $0.01$ and $0.1$. With larger smooth terms, Norm trained with both SNLI and MultiNLI exceeds baseline in SICK. Given the fact that JOCI is almost neutral to artifacts in SNLI, and the bias pattern of both SNLI and MultiNLI are even predictive in SICK, we owe these promotions to that our method improves models' semantic learning ability." 
], "extractive_spans": [ "Centering on the results of JOCI, in which the bias pattern of MultiNLI is misleading" ], "free_form_answer": "", "highlighted_evidence": [ "Centering on the results of JOCI, in which the bias pattern of MultiNLI is misleading, we find that Norm trained with MultiNLI outperforms baseline after debiasing with all smooth values tested.\n\nFurthermore, debiasing can reduce models' dependence on the bias pattern during training, thus force models to better learn semantic information to make predictions." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Based on the idea proposed by BIBREF12, we demonstrate that we can make artifacts in biased datasets balanced across different classes by assigning specific weights for every sample. We refer the distribution of the acquired weighted dataset as artifact-balanced distribution. We consider a supervised NLI task, which is to predict the relationship label $y$ given a sentence pair $x$, and we denote the hypothesis in $x$ as $h$. Without loss of generality, we assume that the prior probability of different labels is equal, and then we have the following theorem." ], "extractive_spans": [], "free_form_answer": "Artifacts in biased datasets are balanced by assigning specific weights for every sample", "highlighted_evidence": [ "Based on the idea proposed by BIBREF12, we demonstrate that we can make artifacts in biased datasets balanced across different classes by assigning specific weights for every sample. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Essentially speaking, the problem of the bias pattern is that the artifacts in hypotheses are distributed differently among labels, so balancing them across labels may be a good solution to alleviate the impacts BIBREF2.", "Based on the idea proposed by BIBREF12, we demonstrate that we can make artifacts in biased datasets balanced across different classes by assigning specific weights for every sample. We refer the distribution of the acquired weighted dataset as artifact-balanced distribution. We consider a supervised NLI task, which is to predict the relationship label $y$ given a sentence pair $x$, and we denote the hypothesis in $x$ as $h$. Without loss of generality, we assume that the prior probability of different labels is equal, and then we have the following theorem.", "Focusing on the results when smooth equals $0.01$ for SNLI and smooth equals $0.02$ for MultiNLI, we observe that the AUC of Hyp for all testing sets are approximately $0.5$, indicating Hyp's predictions are approximately equivalent to randomly guessing. Also, the gap between Hard and Easy for Norm significantly decreases comparing with the baseline. With the smooth, we can conclude that our method effectively alleviates the bias pattern." ], "extractive_spans": [], "free_form_answer": "by balancing or, smoothing the artifacts across different classes by assigning specific weights for every sample", "highlighted_evidence": [ "Essentially speaking, the problem of the bias pattern is that the artifacts in hypotheses are distributed differently among labels, so balancing them across labels may be a good solution to alleviate the impacts BIBREF2.", "Based on the idea proposed by BIBREF12, we demonstrate that we can make artifacts in biased datasets balanced across different classes by assigning specific weights for every sample.", "With the smooth, we can conclude that our method effectively alleviates the bias pattern." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "6b8248c77ef112c7059a191c6e3a8c10ebd15329", "8f93e7dac49bbfcba496b5153ebd1bd22cd6079c", "f482f523a89c91c3431e60e9bb45729d2abb3877" ], "answer": [ { "evidence": [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing." ], "extractive_spans": [ "SNLI", "MultiNLI", "JOCI", "SICK" ], "free_form_answer": "", "highlighted_evidence": [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing." ], "extractive_spans": [ "SNLI", "MultiNLI", "JOCI", "SICK" ], "free_form_answer": "", "highlighted_evidence": [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing." ], "extractive_spans": [ "SNLI BIBREF0", "MultiNLI BIBREF1", "JOCI BIBREF13", "SICK BIBREF14" ], "free_form_answer": "", "highlighted_evidence": [ "We utilize SNLI BIBREF0, MultiNLI BIBREF1, JOCI BIBREF13 and SICK BIBREF14 for cross-dataset testing." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What is the performance improvement of their method over state-of-the-art models on the used datasets? ", "Could the proposed training framework be applied to other NLP problems?", "How does the proposed training framework mitigate the bias pattern?", "Which datasets do they use in the cross-dataset evaluation?" ], "question_id": [ "7f11f128fd39b8060f5810fa84102f000d94ea33", "2a55076a66795793d79a3edfae1041098404fbc3", "ecaa10a2d9927fa6ab6a954488f12aa6b42ddc1a", "8b49423b7d1fa834128aa5038aa16c6ef3fdfa32" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "Inference", "Inference", "Inference", "Inference" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Evaluation Results of Hyp and Norm. Baseline refers to the model trained and validated without using weights. Hard, Easy refers to the Hard-Easy Testing generated from the testing set corresponding to the Trainset column. Results of Hyp are the average numbers of five runs with different random initialization. We report AUC for Hyp and ACC for Norm. “*” indicates where normal-model are better than the baseline." ], "file": [ "3-Table1-1.png" ] }
[ "What is the performance improvement of their method over state-of-the-art models on the used datasets? ", "How does the proposed training framework mitigate the bias pattern?" ]
[ [ "1909.04242-3-Table1-1.png" ], [ "1909.04242-Making Artifacts Unpredictable-0", "1909.04242-Experimental Results ::: Debiasing Results ::: Effectiveness of Debiasing-0", "1909.04242-Making Artifacts Unpredictable-1", "1909.04242-Experimental Results ::: Debiasing Results ::: Benefits of Debiasing-1", "1909.04242-Experimental Results ::: Debiasing Results ::: Benefits of Debiasing-2" ] ]
[ "Average improvement in accuracy is 2.26 points", "by balancing or, smoothing the artifacts across different classes by assigning specific weights for every sample" ]
50
2003.12139
Integrating Crowdsourcing and Active Learning for Classification of Work-Life Events from Tweets
Social media, especially Twitter, is being increasingly used for research with predictive analytics. In social media studies, natural language processing (NLP) techniques are used in conjunction with expert-based, manual, and qualitative analyses. However, social media data are unstructured and must undergo complex manipulation for research use. Manual annotation is the most resource- and time-consuming step, requiring multiple expert raters to reach consensus on every item, but it is essential for creating gold-standard datasets for training NLP-based machine learning classifiers. To reduce the burden of manual annotation while maintaining its reliability, we devised a crowdsourcing pipeline combined with active learning strategies. We demonstrated its effectiveness through a case study that identifies job loss events from individual tweets. We used the Amazon Mechanical Turk platform to recruit annotators from the Internet and designed a number of quality control measures to assure annotation accuracy. We evaluated 4 different active learning strategies (i.e., least confident, entropy, vote entropy, and Kullback-Leibler divergence). The active learning strategies aim to reduce the number of tweets needed to reach a desired performance of automated classification. Results show that crowdsourcing is useful for creating high-quality annotations and that active learning helps in reducing the number of required tweets, although there was no substantial difference among the strategies tested.
{ "paragraphs": [ [ "Micro-blogging social media platforms have become very popular in recent years. One of the most popular platforms is Twitter, which allows users to broadcast short texts (i.e., 140 characters initially, and 280 characters in a recent platform update) in real time with almost no restrictions on content. Twitter is a source of people’s attitudes, opinions, and thoughts toward the things that happen in their daily life. Twitter data are publicly accessible through Twitter application programming interface (API); and there are several tools to download and process these data. Twitter is being increasingly used as a valuable instrument for surveillance research and predictive analytics in many fields including epidemiology, psychology, and social sciences. For example, Bian et al. explored the relation between promotional information and laypeople’s discussion on Twitter by using topic modeling and sentiment analysis BIBREF0. Zhao et al. assessed the mental health signals among sexual and gender minorities using Twitter data BIBREF1. Twitter data can be used to study and predict population-level targets, such as disease incidence BIBREF2, political trends BIBREF3, earthquake detection BIBREF4, and crime perdition BIBREF5, and individual-level outcomes or life events, such as job loss BIBREF6, depression BIBREF7, and adverse events BIBREF8. Since tweets are unstructured textual data, natural language processing (NLP) and machine learning, especially deep learning nowadays, are often used for preprocessing and analytics. However, for many studiesBIBREF9, BIBREF10, BIBREF11, especially those that analyze individual-level targets, manual annotations of several thousands of tweets, often by experts, is needed to create gold-standard training datasets, to be fed to the NLP and machine learning tools for subsequent, reliable automated processing of millions of tweets. Manual annotation is obviously labor intense and time consuming.", "Crowdsourcing can scale up manual labor by distributing tasks to a large set of workers working in parallel instead of a single people working serially BIBREF12. Commercial platforms such as Amazon’s Mechanical Turk (MTurk, https://www.", "mturk.com/), make it easy to recruit a large crowd of people working remotely to perform time consuming manual tasks such as entity resolution BIBREF13, BIBREF14, image or sentiment annotation BIBREF15, BIBREF16. The annotation tasks published on MTurk can be done on a piecework basis and, given the very large pool of workers usually available (even by selecting a subset of those who have, say, a college degree), the tasks can be done almost immediately. However, any crowdsourcing service that solely relies on human workers will eventually be expensive when large datasets are needed, that is often the case when creating training datasets for NLP and deep learning tasks. Therefore, reducing the training dataset size (without losing performance and quality) would also improve efficiency while contain costs.", "Query optimization techniques (e.g., active learning) can reduce the number of tweets that need to be labeled, while yielding comparable performance for the downstream machine learning tasks BIBREF17, BIBREF18, BIBREF19. Active learning algorithms have been widely applied in various areas including NLP BIBREF20 and image processing BIBREF21. 
In a pool-based active learning scenario, data samples for training a machine learning algorithm (e.g., a classifier for identifying job loss events) are drawn from a pool of unlabeled data according to some form of informativeness measure (a.k.a. active learning strategies BIBREF22), and then the most informative instances are selected to be annotated. For a classification task, in essence, an active learning strategy should be able to pick the “best” samples to be labelled, i.e., those that will improve the classification performance the most.", "In this study, we integrated active learning into a crowdsourcing pipeline for the classification of life events based on individual tweets. We analyzed the quality of crowdsourcing annotations and then experimented with different machine/deep learning classifiers combined with different active learning strategies to answer the following two research questions (RQs):", "RQ1. How does (1) the amount of time that a human worker spends on and (2) the number of workers assigned to each annotation task impact the quality of annotation results?", "RQ2. Which active learning strategy is most efficient and cost-effective to build event classification models using Twitter data?" ], [ "We first collected tweets based on a list of job loss-related keywords. We then randomly selected a set of sample tweets and had these tweets annotated (i.e., whether the tweet is a job loss event) using the Amazon MTurk platform. With these annotated tweets, we then evaluated 4 different active learning strategies (i.e., least confident, entropy, vote entropy, and Kullback-Leibler (KL) divergence) through simulations." ], [ "Our data were collected from two data sources based on a list of job loss-related keywords. The keywords were developed using a snowball sampling process, where we started with an initial list of 8 keywords that indicate a job-loss event (e.g., “got fired” and “lost my job”). Using these keywords, we then queried (1) Twitter's own search engine (i.e., https://twitter.com/search-home?lang=en), and (2) a database of public random tweets that we have collected using the Twitter streaming application programming interface (API) from January 1, 2013 to December 30, 2017, to identify job loss-related tweets. We then manually reviewed a sample of randomly selected tweets to discover new job loss-related keywords. We repeated this search-then-review process iteratively until no new keywords were found. Through this process, we found 33 keywords from the historical random tweet database and 57 keywords through Twitter web search. We then (1) collected tweets based on the overall set of 68 unique keywords from the historical random tweet database, and (2) crawled new Twitter data using the Twitter search API from December 10, 2018 to December 26, 2018 (17 days)." ], [ "We preprocessed the collected data to eliminate tweets that were (1) duplicated or (2) not written in English. For building classifiers, we preprocessed the tweets following the preprocessing steps used by GloVe BIBREF23 with minor modifications as follows: (1) all hashtags (e.g., “#gotfired”) were replaced with “$<$hashtag$>$ PHRASE” (e.g., “$<$hashtag$>$ gotfired”); (2) user mentions (e.g., “$@$Rob_Bradley”) were replaced with “$<$user$>$”; (3) web links (e.g., “https://t.co/fMmFWAHEuM”) were replaced with “$<$url$>$”; and (4) all emojis were replaced with “$<$emoji$>$.”"
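The preprocessing substitutions described above can be approximated with a few regular expressions; this is only an illustrative sketch (the emoji pattern in particular is simplified), not the authors' actual preprocessing script.

```python
import re

def preprocess_tweet(text: str) -> str:
    """GloVe-style normalization sketched from the four steps described above."""
    text = re.sub(r"#(\w+)", r"<hashtag> \1", text)    # 1) hashtags
    text = re.sub(r"@\w+", "<user>", text)             # 2) user mentions
    text = re.sub(r"https?://\S+", "<url>", text)      # 3) web links
    text = re.sub(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", "<emoji>", text)  # 4) emojis (rough range)
    return text

print(preprocess_tweet("just got fired #gotfired @Rob_Bradley https://t.co/xyz 😞"))
# -> "just got fired <hashtag> gotfired <user> <url> <emoji>"
```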
], [ "Machine learning and deep learning have been widely used in tweet classification tasks. We evaluated 8 different classifiers: 4 traditional machine learning models (i.e., logistic regression [LR], Naïve Bayes [NB], random forest [RF], and support vector machine [SVM]) and 4 deep learning models (i.e., convolutional neural network [CNN], recurrent neural network [RNN], long short-term memory [LSTM] RNN, and gated recurrent unit [GRU] RNN). 3,000 tweets out of the 7,220 Amazon MTurk-annotated tweets were used for classifier training (n = 2,000) and testing (n = 1,000). The rest of the MTurk-annotated dataset was used for the subsequent active learning experiments. Each classifier was trained 10 times and 95% confidence intervals (CIs) for the mean values were reported. We explored two language models as the features for the classifiers (i.e., n-gram and word-embedding). All the machine learning classifiers were developed with n-gram features, while we used both n-gram and word-embedding features on the CNN classifier to test which feature set is more suitable for deep learning classifiers. The CNN classifier with word-embedding features had better performance, which is consistent with other studies BIBREF24, BIBREF25. We then selected one machine learning and one deep learning classifier based on the prediction performance (i.e., F-score). Logistic regression was used as the baseline classifier." ], [ "In pool-based sampling for active learning, instances are drawn from a pool of samples according to some sort of informativeness measure, and then the most informative instances are selected to be annotated. This is the most common scenario in active learning studies BIBREF26. The informativeness measures of the pool instances are called active learning strategies (or query strategies). We evaluated 4 active learning strategies (i.e., least confident, entropy, vote entropy, and KL divergence). Fig 1.C shows the workflow of our pool-based active learning experiments: for a given active learning strategy and classifiers trained with an initial set of training data, (1) the classifiers make predictions on the remaining to-be-labelled dataset; (2) a set of samples is selected using the specific active learning strategy and annotated by human reviewers; (3) the classifiers are retrained with the newly annotated set of tweets. We repeated this process iteratively until the pool of data was exhausted. For the least confident and entropy active learning strategies, we used the best-performing machine learning classifier and the best-performing deep learning classifier plus the baseline classifier (LR). Note that vote entropy and KL divergence are query-by-committee strategies, which were tested on three deep learning classifiers (i.e., CNN, RNN, and LSTM) and three machine learning classifiers (i.e., LR, RF, and SVM) as two separate committees, respectively."
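The four informativeness measures can be written compactly as scoring functions over predicted class probabilities; the sketch below assumes probability outputs of shape (n_samples, n_classes) (or one such matrix per committee member) and illustrates only the scoring, not the full annotate-and-retrain loop.

```python
import numpy as np

def least_confident(probs):
    # 1 minus the probability of the most likely class; higher = more informative
    return 1.0 - probs.max(axis=1)

def entropy(probs, eps=1e-12):
    return -(probs * np.log(probs + eps)).sum(axis=1)

def vote_entropy(votes, n_classes):
    # votes: (n_members, n_samples) hard labels from a committee of classifiers
    n_members, n_samples = votes.shape
    counts = np.zeros((n_samples, n_classes))
    for member_votes in votes:
        counts[np.arange(n_samples), member_votes] += 1
    return entropy(counts / n_members)

def kl_divergence(member_probs, eps=1e-12):
    # member_probs: (n_members, n_samples, n_classes) soft committee predictions
    consensus = member_probs.mean(axis=0)
    kl = (member_probs * np.log((member_probs + eps) / (consensus + eps))).sum(axis=2)
    return kl.mean(axis=0)  # mean disagreement of the members with the consensus

def query_top_k(scores, k=300):
    # e.g., pick the 300 most informative pool tweets at each iteration
    return np.argsort(scores)[::-1][:k]
```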
For the Twitter search API, users can retrieve most of the public tweets related to the provided keywords within 10 to 14 days before the time of data collection; while the Twitter streaming API returns a random sample (i.e., roughly 1% to 20% varying across the years) of all public tweets at the time and covers a wide range of topics. After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], [ "We randomly selected 7,220 tweets from our Twitter data based on keyword distributions and had those tweets annotated using workers recruited through Amazon MTurk. Each tweet was also annotated by an expert annotator (i.e., one of the authors). We treated the consensus answer of the crowdsourcing workers (i.e., at least 5 annotators for each tweet assignment) and the expert annotator as the gold-standard. Using control tweets is a common strategy to identify workers who cheat (e.g., randomly select an answer without reading the instructions and/or tweets) on annotation tasks. We introduced two control tweets in each annotation assignment, where each annotation assignment contains a total of 12 tweets (including the 2 control tweets). Only responses with the two control tweets answered correctly were considered valid responses, and the worker would receive the 10-cent incentive.", "The amount of time that a worker spends on a task is another factor associated with annotation quality. We measured the time that one spent on clicking through the annotation task without thinking about the content and repeated the experiment five times. The mean amount of time spent on the task is 57.01 (95% CI [47.19, 66.43]) seconds. Thus, responses with less than 47 seconds were considered invalid regardless of how the control tweets were answered.", "We then did two experiments to explore the relation between the amount of time that workers spend on annotation tasks and annotation quality. Fig 2. A. shows annotation quality by selecting different lower cut-off times (i.e., only considering assignments where workers spent more time than the cut-off time as valid responses), which tests whether the annotation is of low quality when workers spent more time on the task. The performance of the crowdsourcing workers was measured by the agreement (i.e., Cohen’s kappa) between labels from each crowdsourcing worker and the gold-standard labels. Fig 2. B. shows annotation quality by selecting different upper cut-off times (i.e., keeping only assignments whose time consumption was less than the cut-off time), which tests whether the annotation is of low quality when workers spent less time on the task. As shown in Fig. 2. A and B, it does not affect the annotation quality when a worker spent more time on the task; while the annotation quality is significantly lower if the worker spent less than 90 seconds on the task.", "We also tested the annotation reliability (i.e., Fleiss’ Kappa score) between using 3 workers vs. using 5 workers. The Fleiss’ kappa score of 3 workers is 0.53 (95% CI [0.46, 0.61]). The Fleiss’ kappa score of 5 workers is 0.56 (95% CI [0.51, 0.61]). Thus, using 3 workers vs. 5 workers does not make any difference in the annotation reliability, while it is obviously cheaper to use only 3 workers." ], [ "We randomly selected 3,000 tweets from the 7,220 MTurk-annotated tweets to build the initial classifiers. Two thousand of the 3,000 tweets were used to train the classifiers and the remaining 1,000 tweets were used as an independent test dataset to benchmark their performance. 
We explored 4 machine learning classifiers (i.e., Logistic Regression [LR], Naïve Bayes [NB], Random Forest [RF], and Support Vector Machine [SVM]) and 4 deep learning classifiers (i.e., Convolutional Neural Network [CNN], Recurrent Neural Network [RNN], Long Short-Term Memory [LSTM], and Gated Recurrent Unit [GRU]). Each classifier was trained 10 times. The performance was measured in terms of precision, recall, and F-score. 95% confidence intervals (CIs) of the mean F-score across the ten runs were also reported. Table 2 shows the performance of the classifiers. We chose logistic regression as the baseline model. RF and CNN were chosen for subsequent active learning experiments, since they outperformed the other machine learning and deep learning classifiers.", "We implemented a pool-based active learning pipeline to test which classifier and active learning strategy is most efficient for building an event classification model for Twitter data. We queried the top 300 most “informative” tweets from the rest of the pool (i.e., excluding the tweets used for training the classifiers) at each iteration. Table 3 shows the active learning and classifier combinations that we evaluated. The performance of the classifiers was measured by F-score. Fig 3 shows the results of the different active learning strategies combined with LR (i.e., the baseline), RF (i.e., the best-performing machine learning model), and CNN (i.e., the best-performing deep learning model). For both machine learning models (i.e., LR and RF), using the entropy strategy reaches the optimal performance the quickest (i.e., with the fewest tweets). Meanwhile, the least confident algorithm does not have any clear advantage compared with random selection. For the deep learning model (i.e., CNN), none of the active learning strategies tested are useful to improve the CNN classifier’s performance. Fig 4 shows the results of query-by-committee algorithms (i.e., vote entropy and KL divergence) combined with machine learning and deep learning ensemble classifiers. Query-by-committee algorithms are slightly better than random selection when applied to the machine learning ensemble classifier. However, query-by-committee algorithms are not useful for the deep learning ensemble classifier." ], [ "The goal of our study was to test the feasibility of building classifiers by using crowdsourcing and active learning strategies. We had 7,220 sample job loss-related tweets annotated using Amazon MTurk, tested 8 classification models, and evaluated 4 active learning strategies to answer our two RQs.", "The key benefit of crowdsourcing is to have a large number of workers available to carry out tasks on a piecework basis. This means that it is likely to get the crowd to start work on tasks almost immediately and be able to have a large number of tasks completed quickly. However, even well-trained workers are only human and can make mistakes. Our first RQ was to find an optimal and economical way to get reliable annotations from crowdsourcing. Beyond using control tweets, we tested different cut-off times to assess how the amount of time workers spent on the task would affect annotation quality. We found that the annotation quality is low if the tasks were finished within 90 seconds. 
We also found that the annotation quality is not affected by the number of workers (i.e., between the 3-worker group and the 5-worker group), which was also demonstrated by Mozafari et al. BIBREF28.", "In the second RQ, we aimed to find which active learning strategy is most efficient and cost-effective to build event classification models using Twitter data. We started with selecting representative machine learning and deep learning classifiers. Among the 4 machine learning classifiers (i.e., LR, NB, RF, and SVM), the LR and RF classifiers have the best performance on the task of identifying job loss events from tweets. Among the 4 deep learning methods (i.e., CNN, RNN, LSTM, and GRU), CNN has the best performance.", "In active learning, the learning algorithm is set to proactively select a subset of available examples to be manually labeled next from a pool of yet unlabeled instances. The fundamental idea behind the concept is that a machine learning algorithm could potentially achieve a better accuracy quicker while using fewer training data if it were allowed to choose the most informative data it wants to learn from. In our experiment, we found that the entropy algorithm is the best way to build machine learning models fast and efficiently. Vote entropy and KL divergence, the query-by-committee active learning methods, are helpful for the training of machine learning ensemble classifiers. However, all the active learning strategies we tested do not work well with the deep learning model (i.e., CNN) or the deep learning-based ensemble classifier.", "We also recognize the limitations of our study. First, we only tested 5 classifiers (i.e., LR, RF, CNN, a machine learning ensemble classifier, and a deep learning ensemble classifier) and 4 active learning strategies (i.e., least confident, entropy, vote entropy, KL divergence). Other state-of-the-art methods for building tweet classifiers (e.g., BERT BIBREF29) and other active learning strategies (e.g., variance reduction BIBREF30) are worth exploring. Second, other crowdsourcing quality control methods such as using prequalification questions to identify high-quality workers also warrant further investigation. Third, the crowdsourcing and active learning pipeline can potentially be applied to other data and tasks. However, more experiments are needed to test the feasibility. Fourth, the current study only focused on which active learning strategy is most efficient and cost-effective to build event classification models using crowdsourcing labels. Other research questions such as how the correctness of the crowdsourced labels would impact classifier performance warrant future investigation.", "In sum, our study demonstrated that crowdsourcing with active learning is a possible way to build up machine learning classifiers efficiently. However, active learning strategies do not benefit deep learning classifiers in our study." ], [ "This study was supported by NSF Award #1734134." ] ], "section_name": [ "Introduction", "Methods", "Methods ::: Data Collection", "Methods ::: Data Preprocessing", "Methods ::: Classifier Selection", "Methods ::: Pool-based Active Learning", "Results ::: Data Collection", "Results ::: RQ1. How does (1) the amount of time that a human worker spends on and (2) the number of workers assigned to each annotation task impact the quality of annotation results?", "Results ::: RQ2. Which active learning strategy is most efficient and cost-effective to build event classification models using Twitter data?", "Discussion", "Acknowledgement" ] }
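To make the GloVe-style tweet normalization described in the Data Preprocessing section concrete (hashtags to "<hashtag> PHRASE", user mentions to "<user>", web links to "<url>", emojis to "<emoji>"), here is a minimal sketch. The regular expressions, the order of the substitutions, and the Unicode emoji range are our own simplifications for illustration, not the authors' implementation; a production pipeline would likely use a proper tokenizer or a dedicated emoji library.

```python
import re

# Rough emoji coverage via Unicode ranges; an approximation, not exhaustive.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def preprocess_tweet(text):
    """Apply GloVe-style token replacements to a raw tweet."""
    text = re.sub(r"https?://\S+", "<url>", text)        # (3) web links
    text = re.sub(r"@\w+", "<user>", text)                # (2) user mentions
    text = re.sub(r"#(\w+)", r"<hashtag> \1", text)       # (1) hashtags -> marker + phrase
    text = EMOJI_RE.sub("<emoji>", text)                  # (4) emojis
    return text.lower()

print(preprocess_tweet("Just #gotfired by @Rob_Bradley https://t.co/fMmFWAHEuM"))
# -> "just <hashtag> gotfired by <user> <url>"
```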
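The four query strategies evaluated in the study (least confident, entropy, vote entropy, and KL divergence) can all be written as simple functions of predicted class probabilities. The sketch below is a generic, framework-agnostic illustration of one pool-based query step; it assumes classifiers that expose class-probability estimates (e.g., a scikit-learn-style predict_proba) and is not the authors' code. The batch size of 300 mirrors the querying setup reported in the results.

```python
import numpy as np

def least_confident(probs):
    """Uncertainty = 1 - probability of the most likely class, per instance."""
    return 1.0 - probs.max(axis=1)

def entropy(probs, eps=1e-12):
    """Shannon entropy of the predicted class distribution, per instance."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def vote_entropy(member_probs):
    """Query-by-committee: entropy of the committee members' hard votes."""
    votes = np.stack([p.argmax(axis=1) for p in member_probs], axis=1)
    n_classes = member_probs[0].shape[1]
    vote_dist = np.stack([(votes == c).mean(axis=1) for c in range(n_classes)], axis=1)
    return entropy(vote_dist)

def kl_to_consensus(member_probs, eps=1e-12):
    """Query-by-committee: mean KL divergence of each member from the consensus."""
    consensus = np.mean(member_probs, axis=0)
    kls = [(p * np.log((p + eps) / (consensus + eps))).sum(axis=1) for p in member_probs]
    return np.mean(kls, axis=0)

def query_batch(scores, batch_size=300):
    """Indices of the most informative pool instances under a given strategy score."""
    return np.argsort(-scores)[:batch_size]
```

In the retraining loop described above, the selected indices would be sent for annotation, moved from the pool into the training set, and the classifiers refit before the next query round.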
{ "answers": [ { "annotation_id": [ "09e2bd8b3c57ea06390e1061a974bcbfb6b2f509", "32f890c306596661ffcae5de0a67a58a2a8750cd", "b9b732b6dd77d5bf657331fae0afb1e84d7c6209" ], "answer": [ { "evidence": [ "In active learning, the learning algorithm is set to proactively select a subset of available examples to be manually labeled next from a pool of yet unlabeled instances. The fundamental idea behind the concept is that a machine learning algorithm could potentially achieve a better accuracy quicker and using fewer training data if it were allowed to choose the most informative data it wants to learn from. In our experiment, we found that the entropy algorithm is the best way to build machine learning models fast and efficiently. Vote entropy and KL divergence, the query-by-committee active learning methods are helpful for the training of machine learning ensemble classifiers. However, all the active learning strategies we tested do not work well with deep learning model (i.e., CNN) or deep learning-based ensemble classifier." ], "extractive_spans": [ "Vote entropy and KL divergence", " all the active learning strategies we tested do not work well with deep learning model" ], "free_form_answer": "", "highlighted_evidence": [ "Vote entropy and KL divergence, the query-by-committee active learning methods are helpful for the training of machine learning ensemble classifiers. However, all the active learning strategies we tested do not work well with deep learning model (i.e., CNN) or deep learning-based ensemble classifier." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In active learning, the learning algorithm is set to proactively select a subset of available examples to be manually labeled next from a pool of yet unlabeled instances. The fundamental idea behind the concept is that a machine learning algorithm could potentially achieve a better accuracy quicker and using fewer training data if it were allowed to choose the most informative data it wants to learn from. In our experiment, we found that the entropy algorithm is the best way to build machine learning models fast and efficiently. Vote entropy and KL divergence, the query-by-committee active learning methods are helpful for the training of machine learning ensemble classifiers. However, all the active learning strategies we tested do not work well with deep learning model (i.e., CNN) or deep learning-based ensemble classifier." ], "extractive_spans": [], "free_form_answer": "Entropy algorithm is the best way to build machine learning models. Vote entropy and KL divergence are helpful for the training of machine learning ensemble classifiers.", "highlighted_evidence": [ " In our experiment, we found that the entropy algorithm is the best way to build machine learning models fast and efficiently. Vote entropy and KL divergence, the query-by-committee active learning methods are helpful for the training of machine learning ensemble classifiers. ", "Vote entropy and KL divergence, the query-by-committee active learning methods are helpful for the training of machine learning ensemble classifiers" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We implemented a pool-based active learning pipeline to test which classifier and active learning strategy is most efficient to build up an event classification classifier of Twitter data. We queried the top 300 most “informative” tweets from the rest of the pool (i.e., excluding the tweets used for training the classifiers) at each iteration. 
Table 3 shows the active learning and classifier combinations that we evaluated. The performance of the classifiers was measured by F-score. Fig 3 shows the results of the different active learning strategies combined with LR (i.e., the baseline), RF (i.e., the best performed machine learning model), and CNN (i.e., the best performed deep learning model). For both machine learning models (i.e., LR and RF), using the entropy strategy can reach the optimal performance the quickest (i.e., the least amount of tweets). While, the least confident algorithm does not have any clear advantages compared with random selection. For deep learning model (i.e., CNN), none of the active learning strategies tested are useful to improve the CNN classifier’s performance. Fig 4 shows the results of query-by-committee algorithms (i.e., vote entropy and KL divergence) combined with machine learning and deep learning ensemble classifiers. Query-by-committee algorithms are slightly better than random selection when it applied to machine learning ensemble classifier. However, query-by-committee algorithms are not useful for the deep learning ensemble classifier." ], "extractive_spans": [ "entropy" ], "free_form_answer": "", "highlighted_evidence": [ "For both machine learning models (i.e., LR and RF), using the entropy strategy can reach the optimal performance the quickest (i.e., the least amount of tweets)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "24624499c37d589a53a3bee8244f0dbddc8c97e7", "ac69143002e1e7f723716d2ef207fe67c816e207", "f801ba29937f463fb05398e019f4972b988be729" ], "answer": [ { "evidence": [ "Our data came from two different sources as shown in Table 1. First, we collected 2,803,164 tweets using the Twitter search API BIBREF27 from December 10, 2018 to December 26, 2018 base on a list of job loss-related keywords (n = 68). After filtering out duplicates and non-English tweets, 1,952,079 tweets were left. Second, we used the same list of keywords to identify relevant tweets from a database of historical random public tweets we collected from January 1, 2013 to December 30, 2017. We found 1,733,905 relevant tweets from this database. Due to the different mechanisms behind the two Twitter APIs (i.e., streaming API vs. search API), the volumes of the tweets from the two data sources were significantly different. For the Twitter search API, users can retrieve most of the public tweets related to the provided keywords within 10 to 14 days before the time of data collection; while the Twitter streaming API returns a random sample (i.e., roughly 1% to 20% varying across the years) of all public tweets at the time and covers a wide range of topics. After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], "extractive_spans": [ "3,685,984 unique tweets" ], "free_form_answer": "", "highlighted_evidence": [ "After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our data came from two different sources as shown in Table 1. First, we collected 2,803,164 tweets using the Twitter search API BIBREF27 from December 10, 2018 to December 26, 2018 base on a list of job loss-related keywords (n = 68). After filtering out duplicates and non-English tweets, 1,952,079 tweets were left. 
Second, we used the same list of keywords to identify relevant tweets from a database of historical random public tweets we collected from January 1, 2013 to December 30, 2017. We found 1,733,905 relevant tweets from this database. Due to the different mechanisms behind the two Twitter APIs (i.e., streaming API vs. search API), the volumes of the tweets from the two data sources were significantly different. For the Twitter search API, users can retrieve most of the public tweets related to the provided keywords within 10 to 14 days before the time of data collection; while the Twitter streaming API returns a random sample (i.e., roughly 1% to 20% varying across the years) of all public tweets at the time and covers a wide range of topics. After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], "extractive_spans": [ "3,685,984 unique tweets" ], "free_form_answer": "", "highlighted_evidence": [ " After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our data came from two different sources as shown in Table 1. First, we collected 2,803,164 tweets using the Twitter search API BIBREF27 from December 10, 2018 to December 26, 2018 base on a list of job loss-related keywords (n = 68). After filtering out duplicates and non-English tweets, 1,952,079 tweets were left. Second, we used the same list of keywords to identify relevant tweets from a database of historical random public tweets we collected from January 1, 2013 to December 30, 2017. We found 1,733,905 relevant tweets from this database. Due to the different mechanisms behind the two Twitter APIs (i.e., streaming API vs. search API), the volumes of the tweets from the two data sources were significantly different. For the Twitter search API, users can retrieve most of the public tweets related to the provided keywords within 10 to 14 days before the time of data collection; while the Twitter streaming API returns a random sample (i.e., roughly 1% to 20% varying across the years) of all public tweets at the time and covers a wide range of topics. After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], "extractive_spans": [ "3,685,984 unique tweets" ], "free_form_answer": "", "highlighted_evidence": [ "After integrating the tweets from the two data sources, there were 3,685,984 unique tweets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ], "nlp_background": [ "two", "two" ], "paper_read": [ "no", "no" ], "question": [ "Which was the most helpful strategy?", "How large is their tweets dataset?" ], "question_id": [ "0aca0a208a1e28857fab44e397dc7880e010dbca", "471683ba6251b631f38a24d42b6dba6f52dee429" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Fig. 1: The workflow of our Twitter analysis pipeline.", "Fig. 2: Annotation quality by selecting different cut-off time.", "Table 2: The performance of machine learning and deep learning classifiers.", "Fig. 3: The performance of active learning strategies combined with linear regression, random forest, and convolutional neural network.", "Fig. 4: The performance of active learning strategies combined with machine learning and deep learning ensemble classifiers." ], "file": [ "3-Figure1-1.png", "7-Figure2-1.png", "7-Table2-1.png", "8-Figure3-1.png", "9-Figure4-1.png" ] }
[ "Which was the most helpful strategy?" ]
[ [ "2003.12139-Results ::: RQ2. Which active learning strategy is most efficient and cost-effective to build event classification models using Twitter data?-1", "2003.12139-Discussion-3" ] ]
[ "Entropy algorithm is the best way to build machine learning models. Vote entropy and KL divergence are helpful for the training of machine learning ensemble classifiers." ]
51
1809.03391
Toward a Standardized and More Accurate Indonesian Part-of-Speech Tagging
Previous works in Indonesian part-of-speech (POS) tagging are hard to compare as they are not evaluated on a common dataset. Furthermore, in spite of the success of neural network models for English POS tagging, they are rarely explored for Indonesian. In this paper, we explored various techniques for Indonesian POS tagging, including rule-based, CRF, and neural network-based models. We evaluated our models on the IDN Tagged Corpus. A new state-of-the-art of 97.47 F1 score is achieved with a recurrent neural network. To provide a standard for future work, we publicly release the dataset split that we used.
{ "paragraphs": [ [ "Part-of-speech (POS) tagging is a process to tag tokens in a string with their corresponding part-of-speech (e.g., noun, verb, etc). POS tagging is considered as one of the most basic tasks in NLP, as it is usually the first component in an NLP pipeline. This is because POS tags are shown to be useful features in various NLP tasks, such as named entity recognition BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 and constituency parsing BIBREF4 . Therefore, for any language, building a successful NLP system usually requires a well-performing POS tagger.", "There are quite a number of research on Indonesian POS tagging BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . However, almost all of them are not evaluated on a common dataset. Even when they are, their train-test split are not the same. This lack of a common benchmark dataset makes a fair comparison among these works difficult. Moreover, despite the success of neural network models for English POS tagging BIBREF9 , BIBREF10 , the use of neural networks is generally unexplored for Indonesian. As a result, published results may not reflect the actual state-of-the-art performance of Indonesian POS tagger.", "In this work, we explored different neural network architectures for Indonesian POS tagging. We evaluated our experiments on the IDN Tagged Corpus BIBREF11 . Our best model achieves 97.47 INLINEFORM0 score, a new state-of-the-art result for Indonesian POS tagging on the dataset. We release the dataset split that we used to serve as a benchmark for future work." ], [ "Pisceldo et al. BIBREF5 built an Indonesian POS tagger by employing a conditional random field (CRF) BIBREF12 and a maximum entropy model. They used contextual unigram and bigram features and achieved accuracy scores of 80-90% on PANL10N dataset tagged manually using their proposed tagset. The dataset consists of 15K sentences. Another work used a hidden Markov model enhanced with an affix tree to better handle out-of-vocabulary (OOV) words BIBREF6 . They evaluated their models on the same PANL10N dataset and achieved more than 90% overall accuracy and roughly 70% accuracy for the OOV cases. We note that while the datasets are the same, the split could be different. Thus, making a fair comparison between them is difficult.", "Dinakaramani et al. BIBREF11 proposed IDN Tagged Corpus, a new manually annotated POS tagging corpus for Indonesian. The corpus consists of 10K sentences and 250K tokens, and its tagset is different than that of the PANL10N dataset. The corpus is available online. A rule-based tagger is developed in BIBREF7 using the aformentioned dataset, and is able to achieve an accuracy of 80%.", "One of the neural network-based POS taggers for Indonesian is proposed in BIBREF8 . They used a feedforward neural network with an architecture similar to that proposed in BIBREF13 . They evaluated their methods on the new POS tagging corpus BIBREF11 and separated the evaluation of multi- and single-word expressions. They experimented with several word embedding algorithms trained on Indonesian Wikipedia data and reported macro-averaged INLINEFORM0 score of 91 and 73 for the single- and multi-word expression cases respectively. We remark that the choice of macro-averaged INLINEFORM1 score is more suitable than accuracy for POS tagging because of the class imbalance in the dataset. There are too many words with NN as the true POS tag, so accuracy is not the best metric in such case." ], [ "We used the IDN Tagged Corpus proposed in BIBREF11 . 
The corpus contains 10K sentences and 250K tokens that are tagged manually. Due to the small size, we used 5-fold cross-validation to split the corpus into training, development, and test sets. We did not split multi-word expressions but treated them as if they are a single token. All 5 folds of the dataset are available publicly to serve as a benchmark for future work." ], [ "We used two simple baselines: majority tag (Major) and memorization (Memo). Major simply predicts the majority POS tag found in the training set for all words. Memo remembers the word-tag assignments from the training set and uses them to predict the tags on the test set. If there is an unknown word, it simply outputs the majority tag found in the training set." ], [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons. Firstly, the tagger tags named entities and multi-word expressions based on a dictionary. Then, it uses MorphInd BIBREF15 to tag the rest of the words. Finally, they employ 15 hand-crafted rules to resolve ambiguous tags in the post-processing step. We want to note that we did not use their provided tokenizer since the IDN Tagged Corpus dataset is already tokenized. Their implementation is available online.", "We used CRF BIBREF12 as another comparison since it is the most common non-neural model for sequence labeling tasks. We employed contextual words as well as affixes as features. For some context window size INLINEFORM0 , the complete list of features is:", "the current word, as well as INLINEFORM0 preceding and succeeding words;", "two and three leading characters of the current word and INLINEFORM0 preceding and succeeding words;", "two and three trailing characters of the current word and INLINEFORM0 preceding and succeeding words.", "The last two features are meant to capture prefixes and suffixes in Indonesian which usually consist of two or three characters. One advantage of this feature extraction approach is that it does not require language-specific tools such as stemmer or morphological segmenter. This advantage is particularly useful for Indonesian which does not have well-established tools for such purposes. We padded the input sentence with padding tokens to ensure that every token has enough preceding and succeeding words for context window size INLINEFORM0 . For the implementation, we used pycrfsuite.", "Our neural network-based POS tagger can be divided into 3 steps: embedding, encoding, and prediction. First, the tagger embeds the words and optionally additional features of such words (e.g., affixes). From this embedding process, we get vector representations of the words and the features. Next, the tagger learns contextual information in the encoding step via either a feedforward network with context window or a bidirectional LSTM BIBREF16 . Finally, in prediction step, the tagger predicts the POS tags from the output of the encoding step using either a softmax or a CRF layer.", "Embedding. In the embedding step, the tagger obtains vector representations of each word and additional features. We experimented with several additional features: prefixes, suffixes, and characters. Prefix features are the first 2 and 3 characters of the word. Likewise, suffix features are the last 2 and 3 characters of the word. For the character features, we followed BIBREF9 by embedding each character and composing the resulting vectors with a max-pooled CNN. The final embedding of a word is then the concatenation of all these vectors. Fig. 
FIGREF17 shows an illustration of the process.", "Encoding. In the encoding step, the tagger learns contextual information by using either a feedforward network with context window or a bidirectional LSTM (biLSTM). The feedforward network accepts as input the concatenation of the embedding of the current word and INLINEFORM0 preceding and succeeding words for some context window size INLINEFORM1 . Formally, given a sequence of word embedding INLINEFORM2 , the input of the feedforward network at timestep INLINEFORM3 is DISPLAYFORM0 ", "where INLINEFORM0 denotes a concatenation. The feedforward network then computes DISPLAYFORM0 ", " where INLINEFORM0 is the output vector, INLINEFORM1 is a dropout mask vector, and INLINEFORM2 are parameters. The output vector INLINEFORM3 has length equal to the number of possible tags. Its INLINEFORM4 -th component defines the (unnormalized) log probability of the INLINEFORM5 -th word having tag INLINEFORM6 .", "On the other hand, the biLSTM accepts as input the sequence of word embeddings, and for each timestep, the output from the forward and backward LSTM are concatenated to form the final output. Formally, the output at each timestep INLINEFORM0 can be expressed as DISPLAYFORM0 ", "where DISPLAYFORM0 ", " The vector INLINEFORM0 is then passed through INLINEFORM1 as before to obtain INLINEFORM2 .", "Prediction. In the prediction step, the tagger predicts the POS tag of the INLINEFORM0 -th word based on the output vector INLINEFORM1 . We tested two approaches: a softmax layer with greedy decoding and a CRF layer with Viterbi decoding. With a softmax layer, the tagger simply normalizes INLINEFORM2 and predicts using greedy decoding, i.e. picking the tag with the highest probability. In contrast, with a CRF layer, the tagger treats INLINEFORM3 as emission probability scores, models the tag-to-tag transition probability scores, and uses Viterbi algorithm to select the most probable tag sequence as the prediction. We refer readers to BIBREF17 to read more about how the CRF layer and Viterbi decoding work. We want to note that when we only embed words, encode using feedforward network, and predict using greedy decoding, the tagger is effectively the same as that in BIBREF8 . Also, when only the word and character features are used, with a biLSTM and CRF layer, the tagger is effectively the same as that in BIBREF9 . Our implementation code is available online." ], [ "For all models, we preprocessed the dataset by lowercasing all words, except when the characters were embedded. For the CRF model, we used L2 regularization whose coefficient was tuned to the development set. As we mentioned previously, we tuned the context window size INLINEFORM0 to the development set as well.", "For the neural tagger, we set the size of the word, affix, and character embedding to 100, 20, and 30 respectively. We applied dropout regularization to the embedding layers. The max-pooled CNN has 30 filters for each filter width. We set the feedforward network and the biLSTM to have 100 hidden units. We put a dropout layer before the biLSTM input layer. We tuned the learning rate, dropout rate, context window size, and CNN filter width to the development set. As we said earlier, we experimented with different configurations in the embedding, encoding, and prediction step. 
We evaluated each configuration on the development set as well.", "At training time, we used a batch size of 8, decayed the learning rate by half if the INLINEFORM0 score on the development set did not improve after 2 epochs, and stopped the training early if the score still did not improve after decaying the learning rate 5 times. To address the exploding gradient problem, we normalized the gradient norm at 1, following the suggestion in BIBREF18 . To handle the out-of-vocabulary problem, we converted singleton words and affixes occurring fewer than 5 times in the training data into a special token for unknown words/affixes." ], [ "Since the dataset is highly imbalanced (the majority of words are nouns), using accuracy score as the evaluation metric is not appropriate as it gives a high score to a model that always predicts nouns regardless of input. Therefore, we decided to use INLINEFORM0 score which considers both precision and recall of the predictions.", "Since there are multiple tags, there are two flavors to compute an overall INLINEFORM0 score: micro and macro average. For the POS tagging task where the tags do not span multiple words, micro-average INLINEFORM1 score is exactly the same as accuracy score. Thus, macro-average INLINEFORM2 score is our only option. However, there is still an issue. Macro-average INLINEFORM3 score computes the overall INLINEFORM4 score by averaging the INLINEFORM5 score of each tag. This approach means that when the model wrongly predicts a rarely occurring tag (e.g., foreign word), it is penalized as heavily as when it wrongly predicts a frequent tag. To address this problem, we used weighted macro-average INLINEFORM6 score which takes into account the tag proportion imbalance. It computes the weighted average of the scores where each weight is equal to the corresponding tag's proportion in the dataset. This functionality is available in the scikit-learn library." ], [ "Firstly, we report on our tuning experiments for the neural tagger. Table TABREF27 shows the evaluation results of the many configurations of our neural tagger on the development set. We group the results by the encoding and prediction step configuration. For each group, we show the highest INLINEFORM0 score among many embedding configurations. As we can see, biLSTM with CRF layer achieves 97.60 INLINEFORM1 score, the best score on the development set. This result agrees with many previous works in neural sequence labeling that a bidirectional LSTM with CRF layer performs best BIBREF10 , BIBREF17 , BIBREF9 . Therefore, we will use this tagger to represent the neural model hereinafter.", "To understand the performance of the neural model for each tag, we plot the confusion matrix from the development set of the first fold in Fig. FIGREF30 . The figure shows that the model can predict most tags almost perfectly, except for the X and WH tags. The X tag is described as \"a word or part of a sentence which its category is unknown or uncertain\". The X tag is rather rare, as it only appears 397 times out of over 250K tokens. Some words annotated as X are typos and slang words. Some foreign terms and abbreviations are also annotated with X. The model might get confused as such words are usually tagged with a noun tag (NN or NNP). We also see that the model seems to confuse question words (WH) such as apa (what) or siapa (who) with SC since these words may be used in subordinate clauses as well. Looking at the data closely, we found that the tagging of such words is inconsistent. 
This inconsistency contributes to the inability of the model to distinguish the two tags well.", "Next, we present the result of evaluating the baselines and other comparisons on the test set in Table TABREF28 . The INLINEFORM0 scores are averaged over the 5 cross-validation folds. We see that Major baseline performs very poorly compared to the Memo baseline, which surprisingly achieves over 90 INLINEFORM1 points. This result suggests that Memo is a more suitable baseline for this dataset in contrast with Major. The result also provides evidence to the usefulness of our evaluation metric which heavily penalizes a simple majority vote model. Furthermore, we notice that the rule-based tagger by Rashel et al. BIBREF7 performs worse than Memo, indicating that Memo is not just suitable but also quite a strong baseline. Moving on, we observe how CRF has 6 points advantage over Memo, signaling that incorporating contextual features and modeling tag-to-tag transitions are useful. Lastly, the biLSTM with CRF tagger performs the best with 97.47 INLINEFORM2 score.", "To understand how each feature in the embedding step affects the neural tagger, we performed feature ablation on the development set and put the result in Table TABREF29 . We see that with only words as features (first row), the neural tagger only achieves 96.06 INLINEFORM0 score. Employing character features boosts the score up to 97.42, a gain of 1.36 points. Adding prefix and suffix features improves the performance further by 0.08 and 0.10 points respectively. From this result, we see that it is the character features that positively affect the neural tagger the most." ], [ "We experimented with several baselines and comparisons for Indonesian POS tagging task. Our comparisons include a rule-based tagger, a well-established probabilistic model for sequence labeling (CRF), and a neural model. We tested many configurations for our neural model: the features (words, affixes, characters), the architecture (feedforward, biLSTM), and the output layer (softmax, CRF). We evaluated all our models on the IDN Tagged Corpus BIBREF11 , a manually annotated and publicly available Indonesian POS tagging dataset. Our best model achieves 97.47 INLINEFORM0 score, a new state-of-the-art result on the dataset. We make our cross-validation split available publicly to serve as a benchmark for future work." ] ], "section_name": [ "Introduction", "Related Work", "Dataset", "Baselines", "Comparisons", "Experiments Setup", "Evaluation", "Results and Discussion", "Conclusion" ] }
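The CRF feature set described above (the current word plus preceding and succeeding context words, together with their two- and three-character prefixes and suffixes, and padding at sentence boundaries) can be illustrated with a small sketch in the dictionary format accepted by pycrfsuite. The window size, padding token, and feature names below are illustrative choices, not the exact configuration tuned on the development set in the paper.

```python
def token_features(tokens, i, window=2, pad="<pad>"):
    """Word and affix features for the token at position i, with a context window."""
    padded = [pad] * window + [t.lower() for t in tokens] + [pad] * window
    feats = {}
    for offset in range(-window, window + 1):
        w = padded[i + window + offset]
        feats["word[%+d]" % offset] = w
        feats["prefix2[%+d]" % offset] = w[:2]   # leading 2 chars (rough Indonesian prefix)
        feats["prefix3[%+d]" % offset] = w[:3]   # leading 3 chars
        feats["suffix2[%+d]" % offset] = w[-2:]  # trailing 2 chars (rough Indonesian suffix)
        feats["suffix3[%+d]" % offset] = w[-3:]  # trailing 3 chars
    return feats

# Toy Indonesian sentence; each token becomes one feature dictionary.
sentence = "Saya sedang belajar menulis".split()
X = [token_features(sentence, i) for i in range(len(sentence))]
```

No stemmer or morphological segmenter is needed here, which is the advantage the paper points out for this kind of affix-based feature extraction.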
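The weighted macro-average F1 used as the evaluation metric is, as the paper notes, available in scikit-learn. A small sketch with toy tag sequences (the tag values are only examples):

```python
from sklearn.metrics import f1_score

# Toy gold and predicted tags, flattened over all tokens of a test set.
y_true = ["NN", "NN", "VB", "NNP", "SC", "WH", "NN"]
y_pred = ["NN", "NN", "VB", "NN",  "SC", "SC", "NN"]

# average="weighted" computes a per-tag F1 and averages it weighted by each tag's
# frequency, so rare tags (e.g., X or WH) do not dominate the overall score.
print("weighted F1: %.4f" % f1_score(y_true, y_pred, average="weighted"))
```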
{ "answers": [ { "annotation_id": [ "0b4f3a7dff683c7ba95f1a67a95a58a4243daaf7", "6d37d2a5dcf496866e96b7f9466726003ffa22db", "942bd96c7b9fb4b859c1571ab88479352c24a792" ], "answer": [ { "evidence": [ "Dinakaramani et al. BIBREF11 proposed IDN Tagged Corpus, a new manually annotated POS tagging corpus for Indonesian. The corpus consists of 10K sentences and 250K tokens, and its tagset is different than that of the PANL10N dataset. The corpus is available online. A rule-based tagger is developed in BIBREF7 using the aformentioned dataset, and is able to achieve an accuracy of 80%." ], "extractive_spans": [ "10K" ], "free_form_answer": "", "highlighted_evidence": [ "Dinakaramani et al. BIBREF11 proposed IDN Tagged Corpus, a new manually annotated POS tagging corpus for Indonesian. The corpus consists of 10K sentences and 250K tokens, and its tagset is different than that of the PANL10N dataset. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Dinakaramani et al. BIBREF11 proposed IDN Tagged Corpus, a new manually annotated POS tagging corpus for Indonesian. The corpus consists of 10K sentences and 250K tokens, and its tagset is different than that of the PANL10N dataset. The corpus is available online. A rule-based tagger is developed in BIBREF7 using the aformentioned dataset, and is able to achieve an accuracy of 80%." ], "extractive_spans": [ "10K sentences", "250K tokens" ], "free_form_answer": "", "highlighted_evidence": [ "The corpus consists of 10K sentences and 250K tokens, and its tagset is different than that of the PANL10N dataset. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We used the IDN Tagged Corpus proposed in BIBREF11 . The corpus contains 10K sentences and 250K tokens that are tagged manually. Due to the small size, we used 5-fold cross-validation to split the corpus into training, development, and test sets. We did not split multi-word expressions but treated them as if they are a single token. All 5 folds of the dataset are available publicly to serve as a benchmark for future work." ], "extractive_spans": [ "10K sentences and 250K tokens" ], "free_form_answer": "", "highlighted_evidence": [ "We used the IDN Tagged Corpus proposed in BIBREF11 . The corpus contains 10K sentences and 250K tokens that are tagged manually." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2be704fafa28fdc5cbbc32cf931f044d902b9fa9", "d57b92857a4b7ed325bbaed1ae380a71e1aafba4", "eb5afe221f5331f8d56eb0fb33df9436f4d746be" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table I DEV F1 SCORE OF EACH NEURAL TAGGER ARCHITECTURE" ], "extractive_spans": [], "free_form_answer": "Feedforward, biLSTM", "highlighted_evidence": [ "FLOAT SELECTED: Table I DEV F1 SCORE OF EACH NEURAL TAGGER ARCHITECTURE" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our neural network-based POS tagger can be divided into 3 steps: embedding, encoding, and prediction. First, the tagger embeds the words and optionally additional features of such words (e.g., affixes). From this embedding process, we get vector representations of the words and the features. Next, the tagger learns contextual information in the encoding step via either a feedforward network with context window or a bidirectional LSTM BIBREF16 . 
Finally, in prediction step, the tagger predicts the POS tags from the output of the encoding step using either a softmax or a CRF layer.", "Encoding. In the encoding step, the tagger learns contextual information by using either a feedforward network with context window or a bidirectional LSTM (biLSTM). The feedforward network accepts as input the concatenation of the embedding of the current word and INLINEFORM0 preceding and succeeding words for some context window size INLINEFORM1 . Formally, given a sequence of word embedding INLINEFORM2 , the input of the feedforward network at timestep INLINEFORM3 is DISPLAYFORM0" ], "extractive_spans": [ "feedforward", "bidirectional LSTM (biLSTM)" ], "free_form_answer": "", "highlighted_evidence": [ "Our neural network-based POS tagger can be divided into 3 steps: embedding, encoding, and prediction. First, the tagger embeds the words and optionally additional features of such words (e.g., affixes). From this embedding process, we get vector representations of the words and the features. Next, the tagger learns contextual information in the encoding step via either a feedforward network with context window or a bidirectional LSTM BIBREF16 . Finally, in prediction step, the tagger predicts the POS tags from the output of the encoding step using either a softmax or a CRF layer.", "Encoding. In the encoding step, the tagger learns contextual information by using either a feedforward network with context window or a bidirectional LSTM (biLSTM)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our neural network-based POS tagger can be divided into 3 steps: embedding, encoding, and prediction. First, the tagger embeds the words and optionally additional features of such words (e.g., affixes). From this embedding process, we get vector representations of the words and the features. Next, the tagger learns contextual information in the encoding step via either a feedforward network with context window or a bidirectional LSTM BIBREF16 . Finally, in prediction step, the tagger predicts the POS tags from the output of the encoding step using either a softmax or a CRF layer." ], "extractive_spans": [ "feedforward network ", "bidirectional LSTM" ], "free_form_answer": "", "highlighted_evidence": [ "Our neural network-based POS tagger can be divided into 3 steps: embedding, encoding, and prediction. First, the tagger embeds the words and optionally additional features of such words (e.g., affixes). From this embedding process, we get vector representations of the words and the features. Next, the tagger learns contextual information in the encoding step via either a feedforward network with context window or a bidirectional LSTM BIBREF16 . " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "1228a6a12ceaad6ae4087b8e808edbec8c101d00", "bb7ed862e07e7fd67ff0aeaecc24a10ac3ca533d", "dbdd630c7f289a2efc40d28c93a0fe6b83ef39f5" ], "answer": [ { "evidence": [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons. Firstly, the tagger tags named entities and multi-word expressions based on a dictionary. Then, it uses MorphInd BIBREF15 to tag the rest of the words. Finally, they employ 15 hand-crafted rules to resolve ambiguous tags in the post-processing step. 
We want to note that we did not use their provided tokenizer since the IDN Tagged Corpus dataset is already tokenized. Their implementation is available online." ], "extractive_spans": [ "Rashel et al. BIBREF14" ], "free_form_answer": "", "highlighted_evidence": [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons. Firstly, the tagger tags named entities and multi-word expressions based on a dictionary. Then, it uses MorphInd BIBREF15 to tag the rest of the words. Finally, they employ 15 hand-crafted rules to resolve ambiguous tags in the post-processing step. We want to note that we did not use their provided tokenizer since the IDN Tagged Corpus dataset is already tokenized. Their implementation is available online." ], "extractive_spans": [ "rule-based tagger designed by Rashel et al. BIBREF14" ], "free_form_answer": "", "highlighted_evidence": [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons. Firstly, the tagger tags named entities and multi-word expressions based on a dictionary. Then, it uses MorphInd BIBREF15 to tag the rest of the words. Finally, they employ 15 hand-crafted rules to resolve ambiguous tags in the post-processing step. We want to note that we did not use their provided tokenizer since the IDN Tagged Corpus dataset is already tokenized. Their implementation is available online." ], "extractive_spans": [ "rule-based tagger designed by Rashel et al. BIBREF14" ], "free_form_answer": "", "highlighted_evidence": [ "We adopted a rule-based tagger designed by Rashel et al. BIBREF14 as one of our comparisons. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "1a84f40e6a418da52b0d5c852ceeee97fa2bc5f3", "2e6120a1be977890a5317bb2f151582eb10390d3", "fe7ad4f5f5373931917bc3d7887c7f9b08f3f9cd" ], "answer": [ { "evidence": [ "In this work, we explored different neural network architectures for Indonesian POS tagging. We evaluated our experiments on the IDN Tagged Corpus BIBREF11 . Our best model achieves 97.47 INLINEFORM0 score, a new state-of-the-art result for Indonesian POS tagging on the dataset. We release the dataset split that we used to serve as a benchmark for future work." ], "extractive_spans": [ "IDN Tagged Corpus " ], "free_form_answer": "", "highlighted_evidence": [ "We evaluated our experiments on the IDN Tagged Corpus BIBREF11 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this work, we explored different neural network architectures for Indonesian POS tagging. We evaluated our experiments on the IDN Tagged Corpus BIBREF11 . Our best model achieves 97.47 INLINEFORM0 score, a new state-of-the-art result for Indonesian POS tagging on the dataset. We release the dataset split that we used to serve as a benchmark for future work." ], "extractive_spans": [ "IDN Tagged Corpus" ], "free_form_answer": "", "highlighted_evidence": [ "We evaluated our experiments on the IDN Tagged Corpus BIBREF11 ." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "We used the IDN Tagged Corpus proposed in BIBREF11 . The corpus contains 10K sentences and 250K tokens that are tagged manually. Due to the small size, we used 5-fold cross-validation to split the corpus into training, development, and test sets. We did not split multi-word expressions but treated them as if they are a single token. All 5 folds of the dataset are available publicly to serve as a benchmark for future work." ], "extractive_spans": [ " IDN Tagged Corpus" ], "free_form_answer": "", "highlighted_evidence": [ "We used the IDN Tagged Corpus proposed in BIBREF11 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "question": [ "what is the size of the idn tagged corpus?", "what neural network models were explored?", "what rule based models were evaluated?", "what datasets have been used for this task?" ], "question_id": [ "5dfd58f91e7740899c23ebfe04b7176edce9ead2", "c09bceea67273c10a0621da1a83b409f53342fd9", "732bd97ae34541f215c436e2a1b98db1649cba27", "183b385fb59ff1e3f658d4555a08b67c005a8734" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Figure 1. Illustration of the embedding step. The word and its affixes are embedded to obtain their vector representations. Character embeddings of the word are composed with a max-pooled CNN. The final word embedding is the concatenation of all the result vectors.", "Table I DEV F1 SCORE OF EACH NEURAL TAGGER ARCHITECTURE", "Table II TEST F1 SCORE OF EACH METHOD, AVERAGED OVER 5 FOLDS", "Figure 2. Confusion matrix of the best biLSTM with CRF tagger from the development set of the first fold. The tagger seems to have difficulties dealing with words annotated as X and confuse WH as SC." ], "file": [ "2-Figure1-1.png", "3-TableI-1.png", "4-TableII-1.png", "4-Figure2-1.png" ] }
[ "what neural network models were explored?" ]
[ [ "1809.03391-Comparisons-6", "1809.03391-3-TableI-1.png" ] ]
[ "Feedforward, biLSTM" ]
52
1906.04287
Chinese Embedding via Stroke and Glyph Information: A Dual-channel View
Recent studies have consistently given positive hints that morphology is helpful in enriching word embeddings. In this paper, we argue that Chinese word embeddings can be substantially enriched by the morphological information hidden in characters, which is reflected not only sequentially in the stroke order, but also spatially in the character glyphs. Then, we propose a novel Dual-channel Word Embedding (DWE) model to realize the joint learning of sequential and spatial information of characters. Through the evaluation on both word similarity and word analogy tasks, our model shows its rationality and superiority in modelling the morphology of Chinese.
{ "paragraphs": [ [ "Word embeddings are fixed-length vector representations for words BIBREF0 , BIBREF1 . In recent years, the morphology of words is drawing more and more attention BIBREF2 , especially for Chinese whose writing system is based on logograms.", "UTF8gbsn With the gradual exploration of the semantic features of Chinese, scholars have found that not only words and characters are important semantic carriers, but also stroke feature of Chinese characters is crucial for inferring semantics BIBREF3 . Actually, a Chinese word usually consists of several characters, and each character can be further decomposed into a stroke sequence which is certain and changeless, and this kind of stroke sequence is very similar to the construction of English words. In Chinese, a particular sequence of strokes can reflect the inherent semantics. As shown in the upper half of Figure FIGREF3 , the Chinese character “驾\" (drive) can be decomposed into a sequence of eight strokes, where the last three strokes together correspond to a root character “马\" (horse) similar to the root “clar\" of English word “declare\" and “clarify\".", "Moreover, Chinese is a language originated from Oracle Bone Inscriptions (a kind of hieroglyphics). Its character glyphs have a spatial structure similar to graphs which can convey abundant semantics BIBREF4 . Additionally, the critical reason why Chinese characters are so rich in morphological information is that they are composed of basic strokes in a 2-D spatial order. However, different spatial configurations of strokes may lead to different semantics. As shown in the lower half of Figure 1, three Chinese characters “入\" (enter), “八\" (eight) and “人\" (man) share exactly a common stroke sequence, but they have completely different semantics because of their different spatial configurations.", "In addition, some biological investigations have confirmed that there are actually two processing channels for Chinese language. Specifically, Chinese readers not only activate the left brain which is a dominant hemisphere in processing alphabetic languages BIBREF5 , BIBREF6 , BIBREF7 , but also activate the areas of the right brain that are responsible for image processing and spatial information at the same time BIBREF8 . Therefore, we argue that the morphological information of characters in Chinese consists of two parts, i.e., the sequential information hidden in root-like strokes order, and the spatial information hidden in graph-like character glyphs. Along this line, we propose a novel Dual-channel Word Embedding (DWE) model for Chinese to realize the joint learning of sequential and spatial information in characters. Finally, we evaluate DWE on two representative tasks, where the experimental results exactly validate the superiority of DWE in capturing the morphological information of Chinese." ], [ "Traditional methods on getting word embeddings are mainly based on the distributional hypothesis BIBREF9 : words with similar contexts tend to have similar semantics. To explore more interpretable models, some scholars have gradually noticed the importance of the morphology of words in conveying semantics BIBREF10 , BIBREF11 , and some studies have proved that the morphology of words can indeed enrich the semantics of word embeddings BIBREF12 , BIBREF13 , BIBREF2 . More recently, Wieting et al. wieting2016charagram proposed to represent words using character n-gram count vectors. Further, Bojanowski et al. 
bojanowski2017enriching improved the classic skip-gram model BIBREF0 by taking subwords into account in the acquisition of word embeddings, which is instructive for us to regard certain stroke sequences as the counterparts of roots in English." ], [ "The complexity of Chinese itself has given birth to a lot of research on Chinese embedding, including the utilization of character features BIBREF14 and radicals BIBREF15 , BIBREF16 , BIBREF17 . Considering the 2-D graphic structure of Chinese characters, Su and Lee su2017learning creatively proposed to enhance word representations by character glyphs. Lately, Cao et al. cao2018cw2vec proposed that a Chinese word can be decomposed into a sequence of strokes which correspond to subwords in English, and Wu et al. wu2019glyce designed a Tianzige-CNN to model the spatial structure of Chinese characters from the perspective of image processing. However, their methods are either somewhat loose in their stroke criteria or unable to capture the interactions between strokes and character glyphs." ], [ "As we mentioned earlier, it is reasonable and imperative to learn Chinese word embeddings from two channels, i.e., a sequential stroke n-gram channel and a spatial glyph channel. Inspired by the previous works BIBREF14 , BIBREF18 , BIBREF4 , BIBREF19 , we propose to combine the representation of Chinese words with the representation of characters to obtain finer-grained semantics, so that unknown words can be identified and their relationship with other known Chinese characters can be found by distinguishing the common stroke sequences or character glyphs they share.", "Our DWE model is shown in Figure FIGREF9 . For an arbitrary Chinese word INLINEFORM0 , e.g., “驾车\", it will be firstly decomposed into several characters, e.g., “驾\" and “车\", and each of the characters will be further processed in a dual-channel character embedding sub-module to refine its morphological information. In the sequential channel, each character can be decomposed into a stroke sequence according to the criteria of the Chinese writing system as shown in Figure FIGREF3 . After retrieving the stroke sequence, we add special boundary symbols INLINEFORM1 and INLINEFORM2 at the beginning and end of it and adopt an efficient approach by utilizing the stroke n-gram method BIBREF3 to extract stroke order information for each character. More precisely, we firstly scan each character throughout the training corpus and obtain a stroke n-gram dictionary INLINEFORM3 . Then, we use INLINEFORM4 to denote the collection of stroke n-grams of each character INLINEFORM5 in INLINEFORM6 . Meanwhile, in the spatial channel, to capture the semantics hidden in glyphs, we render the glyph INLINEFORM7 for each character INLINEFORM8 and apply a well-known CNN structure, LeNet BIBREF20 , to process each character glyph, which is also helpful to distinguish between those characters that are identical in strokes.", "After that, we combine the representation of words with the representation of characters and define the word embedding for INLINEFORM0 as follows: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are compositional operations. INLINEFORM6 is the word ID embedding and INLINEFORM7 is the number of characters in INLINEFORM8 .", "According to the previous work BIBREF0 , we compute the similarity between the current word INLINEFORM0 and one of its context words INLINEFORM1 by defining a score function as INLINEFORM2 , where INLINEFORM3 and INLINEFORM4 are the embedding vectors of INLINEFORM5 and INLINEFORM6 respectively. 
Following the previous works BIBREF0 , BIBREF21 , the objective function is defined as follows: DISPLAYFORM0 ", "where INLINEFORM0 is the number of negative samples and INLINEFORM1 is the expectation term. For each INLINEFORM2 in training corpus INLINEFORM3 , a set of negative samples INLINEFORM4 will be selected according to the distribution INLINEFORM5 , which is usually set as the word unigram distribution. And INLINEFORM6 is the sigmoid function." ], [ "We download parts of Chinese Wikipedia articles from Large-Scale Chinese Datasets for NLP. For word segmentation and filtering the stopwords, we apply the jieba toolkit based on the stopwords table. Finally, we get 11,529,432 segmented words. In accordance with their work BIBREF14 , all items whose Unicode falls into the range between 0x4E00 and 0x9FA5 are Chinese characters. We crawl the stroke information of all 20,402 characters from an online dictionary and render each character glyph to a 28 INLINEFORM0 28 1-bit grayscale bitmap by using Pillow." ], [ "We choose adagrad BIBREF23 as our optimizing algorithm, and we set the batch size as 4,096 and learning rate as 0.05. In practice, the slide window size INLINEFORM0 of stroke INLINEFORM1 -grams is set as INLINEFORM2 . The dimension of all word embeddings of different models is consistently set as 300. We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy. A word similarity test consists of multiple word pairs and similarity scores annotated by humans. Good word representations should make the calculated similarity have a high rank correlation with human annotated scores, which is usually measured by the Spearman's correlation INLINEFORM3 BIBREF24 .", "The form of an analogy problem is like “king\":“queen\" = “man\":“?\", and “woman\" is the most proper answer to “?\". That is, in this task, given three words INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , the goal is to infer the fourth word INLINEFORM3 which satisfies “ INLINEFORM4 is to INLINEFORM5 that is similar to INLINEFORM6 is to INLINEFORM7 \". We use INLINEFORM8 BIBREF0 and INLINEFORM9 function BIBREF25 to calculate the most appropriate word INLINEFORM10 . By using the same data used in BIBREF14 and BIBREF3 , we adopt two manually-annotated datasets for Chinese word similarity task, i.e., wordsim-240 and wordsim-296 BIBREF26 and a three-group dataset for Chinese word analogy task." ], [ "We use gensim to implement both CBOW and Skipgram and apply the source codes pulished by the authors to implement CWE, JWE, GWE and GloVe. Since Cao et al. cao2018cw2vec did not publish their code, we follow their paper and reproduce cw2vec in mxnet which we also use to implement sisg BIBREF21 and our DWE. To encourage further research, we will publish our model and datasets." ], [ "UTF8gbsn The experimental results are shown in Table TABREF11 . We can observe that our DWE model achieves the best results both on dataset wordsim-240 and wordsim-296 in the similarity task as expected because of the particularity of Chinese morphology, but it only improves the accuracy for the family group in the analogy task.", "Actually, it is not by chance that we get these results, because DWE has the advantage of distinguishing between morphologically related words, which can be verified by the results of the similarity task. Meanwhile, in the word analogy task, those words expressing family relations in Chinese are mostly compositional in their character glyphs. 
For example, in an analogy pair “兄弟\" (brother) : “姐妹\" (sister) = “儿子\" (son) : “女儿\" (daughter), we can easily find that “兄弟\" and “儿子\" share an exactly common part of glyph “儿\" (male relative of a junior generation) while “姐妹\" and “女儿\" share an exactly common part of glyph “女\" (female), and this kind of morphological pattern can be accurately captured by our model. However, most of the names of countries, capitals and cities are transliterated words, and the relationship between the morphology and semantics of words is minimal, which is consistent with the findings reported in BIBREF4 . For instance, in an analogy pair “西班牙\" (Spain) : “马德里\" (Madrid) = “法国\" (France) : “巴黎\" (Paris), we cannot infer any relevance among these four words literally because they are all translated by pronunciation.", "In summary, since different words that are morphologically similar tend to have similar semantics in Chinese, simultaneously modeling the sequential and spatial information of characters from both stroke n-grams and glyph features can indeed improve the modeling of Chinese word representations substantially." ], [ "In this article, we first analyzed the similarities and differences in terms of morphology between alphabetical languages and Chinese. Then, we delved deeper into the particularity of Chinese morphology and proposed our DWE model by taking into account the sequential information of strokes order and the spatial information of glyphs. Through the evaluation on two representative tasks, our model shows its superiority in capturing the morphological information of Chinese." ] ], "section_name": [ "Introduction", "Morphological Word Representations", "Embedding for Chinese Language", "DWE Model", "Dataset Preparation", "Experimental Setup", "Baseline Methods", "Experimental Results", "Conclusions" ] }
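The record above describes stroke n-gram extraction with boundary symbols, a compositional word representation, and a skip-gram-style negative-sampling objective. The following is a minimal, self-contained sketch of those pieces. The toy stroke codes, the embedding dimension, and the plain averaging used as the compositional operation are illustrative assumptions, not the authors' exact implementation (which, per the record, was written in MXNet).

```python
import numpy as np

def stroke_ngrams(strokes, n_min=3, n_max=6):
    """Collect contiguous stroke n-grams, adding boundary symbols '<' and '>'."""
    seq = ["<"] + list(strokes) + [">"]
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return grams

rng = np.random.default_rng(0)
dim = 50                      # assumed embedding dimension for the toy example
table = {}                    # lookup table for n-gram / word-ID vectors

def vec(key):
    """Return (and lazily create) the embedding vector for a key."""
    if key not in table:
        table[key] = rng.normal(scale=0.1, size=dim)
    return table[key]

def word_vector(word_id, char_stroke_seqs):
    """Compose the word-ID embedding with averaged character stroke-n-gram vectors."""
    parts = [vec(("word", word_id))]
    for strokes in char_stroke_seqs:
        grams = stroke_ngrams(strokes)
        parts.append(np.mean([vec(g) for g in grams], axis=0))
    return np.mean(parts, axis=0)   # averaging is an assumed compositional operation

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(w_vec, context_vec, negative_vecs):
    """-log sigma(w.c) - sum_j log sigma(-w.c_j), as in skip-gram with negative samples."""
    loss = -np.log(sigmoid(w_vec @ context_vec))
    for nv in negative_vecs:
        loss -= np.log(sigmoid(-(w_vec @ nv)))
    return loss

# Toy usage with made-up stroke codes (not a real character decomposition).
w = word_vector("drive_car", [["h", "v", "t", "h", "p"], ["h", "h", "v", "d"]])
c = vec(("word", "road"))
negs = [vec(("word", "banana")), vec(("word", "piano"))]
print(negative_sampling_loss(w, c, negs))
```

In the full model the word, context, and negative-sample vectors would be updated by gradient descent on this loss; the sketch only shows the forward computation.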
{ "answers": [ { "annotation_id": [ "0b58eb8ae6eced6e137d217f9d64bb1c40980b69", "a286f11986c622ad1a927b922fab8b7501c4f55e", "ec4f368a2fccd7fdc0ae8ea91bdebe36ef970b0c" ], "answer": [ { "evidence": [ "We download parts of Chinese Wikipedia articles from Large-Scale Chinese Datasets for NLP. For word segmentation and filtering the stopwords, we apply the jieba toolkit based on the stopwords table. Finally, we get 11,529,432 segmented words. In accordance with their work BIBREF14 , all items whose Unicode falls into the range between 0x4E00 and 0x9FA5 are Chinese characters. We crawl the stroke information of all 20,402 characters from an online dictionary and render each character glyph to a 28 INLINEFORM0 28 1-bit grayscale bitmap by using Pillow." ], "extractive_spans": [], "free_form_answer": "11,529,432 segmented words and 20,402 characters", "highlighted_evidence": [ "Finally, we get 11,529,432 segmented words. ", "We crawl the stroke information of all 20,402 characters from an online dictionary and render each character glyph to a 28 INLINEFORM0 28 1-bit grayscale bitmap by using Pillow." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We download parts of Chinese Wikipedia articles from Large-Scale Chinese Datasets for NLP. For word segmentation and filtering the stopwords, we apply the jieba toolkit based on the stopwords table. Finally, we get 11,529,432 segmented words. In accordance with their work BIBREF14 , all items whose Unicode falls into the range between 0x4E00 and 0x9FA5 are Chinese characters. We crawl the stroke information of all 20,402 characters from an online dictionary and render each character glyph to a 28 INLINEFORM0 28 1-bit grayscale bitmap by using Pillow." ], "extractive_spans": [ "11,529,432 segmented words" ], "free_form_answer": "", "highlighted_evidence": [ "Finally, we get 11,529,432 segmented words." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We download parts of Chinese Wikipedia articles from Large-Scale Chinese Datasets for NLP. For word segmentation and filtering the stopwords, we apply the jieba toolkit based on the stopwords table. Finally, we get 11,529,432 segmented words. In accordance with their work BIBREF14 , all items whose Unicode falls into the range between 0x4E00 and 0x9FA5 are Chinese characters. We crawl the stroke information of all 20,402 characters from an online dictionary and render each character glyph to a 28 INLINEFORM0 28 1-bit grayscale bitmap by using Pillow." ], "extractive_spans": [ "11,529,432 segmented words" ], "free_form_answer": "", "highlighted_evidence": [ "We download parts of Chinese Wikipedia articles from Large-Scale Chinese Datasets for NLP. For word segmentation and filtering the stopwords, we apply the jieba toolkit based on the stopwords table. Finally, we get 11,529,432 segmented words.", "11529432 segmented " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "5eb634317fae78746bc6a0fa7eb8b7a5035e79f5", "9e35fcbc85729863ecf041f9d6bd44fbfb34449c", "a6ad7be63f10118774e430967b009924e10a508e" ], "answer": [ { "evidence": [ "We choose adagrad BIBREF23 as our optimizing algorithm, and we set the batch size as 4,096 and learning rate as 0.05. In practice, the slide window size INLINEFORM0 of stroke INLINEFORM1 -grams is set as INLINEFORM2 . 
The dimension of all word embeddings of different models is consistently set as 300. We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy. A word similarity test consists of multiple word pairs and similarity scores annotated by humans. Good word representations should make the calculated similarity have a high rank correlation with human annotated scores, which is usually measured by the Spearman's correlation INLINEFORM3 BIBREF24 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "We choose adagrad BIBREF23 as our optimizing algorithm, and we set the batch size as 4,096 and learning rate as 0.05. In practice, the slide window size INLINEFORM0 of stroke INLINEFORM1 -grams is set as INLINEFORM2 . The dimension of all word embeddings of different models is consistently set as 300. We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy. A word similarity test consists of multiple word pairs and similarity scores annotated by humans. Good word representations should make the calculated similarity have a high rank correlation with human annotated scores, which is usually measured by the Spearman's correlation INLINEFORM3 BIBREF24 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ " We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy. " ], "unanswerable": false, "yes_no": false }, { "evidence": [ "We choose adagrad BIBREF23 as our optimizing algorithm, and we set the batch size as 4,096 and learning rate as 0.05. In practice, the slide window size INLINEFORM0 of stroke INLINEFORM1 -grams is set as INLINEFORM2 . The dimension of all word embeddings of different models is consistently set as 300. We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy. A word similarity test consists of multiple word pairs and similarity scores annotated by humans. Good word representations should make the calculated similarity have a high rank correlation with human annotated scores, which is usually measured by the Spearman's correlation INLINEFORM3 BIBREF24 ." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We use two test tasks to evaluate the performance of different models: one is word similarity, and the other is word analogy." 
], "unanswerable": false, "yes_no": false } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "6a87f453fa5e2d57c0880a09bb882a5c51cd1503", "76f415fdce6fdd69a3f9eba2f82b6339680e11da", "92fb81f3fd2a35f938ab7c77eb7ab43b7318e400" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "How much data do they use to train the embeddings?", "Do they evaluate their embeddings in any downstream task appart from word similarity and word analogy?", "What dialects of Chinese are explored?" ], "question_id": [ "5f7f4a1d4380c118a58ed506c057d3b7aa234c1e", "a79a23573d74ec62cbed5d5457a51419a66f6296", "d427e9d181434078c78b7ee33a26b269f160f6d2" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: The upper part is an example for illustrating the inclusion relationship hidden in strokes order and character glyphs. The lower part reflects that a common stroke sequence may form different Chinese characters if their spatial configurations are different.", "Figure 2: An illustration of our Dual-channel Word Embedding (DWE) model.", "Table 1: Performance on word similarity and word analogy task. The dimension of embeddings is set as 300. The evaluation metric is ρ for word similarity and accuracy percentage for word analogy." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png" ] }
[ "How much data do they use to train the embeddings?" ]
[ [ "1906.04287-Dataset Preparation-0" ] ]
[ "11,529,432 segmented words and 20,402 characters" ]
53
1912.10162
Design and implementation of an open source Greek POS Tagger and Entity Recognizer using spaCy
This paper proposes a machine learning approach to part-of-speech tagging and named entity recognition for Greek, focusing on the extraction of morphological features and the classification of tokens into a small set of named entity classes. The architecture of the model that was used is introduced. The Greek version of the spaCy platform was added to the source code, a feature that did not exist before our contribution, and was used for building the models. Additionally, a part-of-speech tagger was trained that can detect the morphology of tokens and performs above the state-of-the-art results when classifying only the part of speech. For named entity recognition using spaCy, a model that extends the standard ENAMEX types (organization, location, person) was built. The experiments that were conducted indicate the need for flexibility in handling out-of-vocabulary words, and an effort to resolve this issue is described. Finally, the evaluation results are discussed.
{ "paragraphs": [ [ "In the research field of Natural Language Processing (NLP) there are several tasks that contribute to understanding natural text. These tasks can manipulate natural language, such as tokenization process, and consequently can be used in other implementations, in order to extract syntactic or semantic information. One such task for syntactic components is Part of Speech Tagging (POS Tagging). Part of Speech Tagging in corpus linguistics is a process where a word is assigned with a label of the grammatical term, given the context it appears in. In many languages, POS Tagging models achieve an accuracy of 96 to 97 percent BIBREF0.", "Part of Speech Tagging for highly inflective languages, such as Greek is quite a difficult task. In the Greek Language, words can have different morphological forms, depending on the part of speech (verbs have up to ten different forms). For that purpose, there is a need for a tagset that can support morphological features for improvement of Greek POS Tagging BIBREF1.", "Another main task for extracting semantic information is Named Entity Recognition (NER). Named Entity Recognition is a process where a word or a set of words reference to a world object. Most Natural Language Processing models classify named entities that describe people, locations, organizations, following the ENAMEX type or can be more complex by detecting numerical types, like percentages (NUMEX) or dates (TIMEX) BIBREF2.", "The greek Part of Speech Tagging and Named Entity Recognition models presented in this paper were developed using the spaCy library BIBREF3. SpaCy is an open source, Natural Language Processing library that supports a variety of tasks, including POS Tagging, Named Entity Recognition, Dependency Parsing, etc. SpaCy uses sophisticated neural network-based models for the implementation of Natural Language Processing components that achieve state-of-the-art results in many of these tasks.", "In the following chapters the process for implementing Part of Speech Tagging and Named Entity Recognition for the Greek Language is explained. A dataset with extended POS Tags was found and matched to a set of morphological rules, according to a treebank. The dataset was then processed, fed to the spaCy model and used for training. Similarly, for Named Entity Recognition, datasets from different sources were compared to a custom set of rules for named entities. Finally, different experiments were conducted for evaluating the accuracy of the models." ], [ "SpaCy uses a deep learning formula for implementing NLP models, summarised as “embed, encode, attend, predict”. In spaCy's approach text is inserted in the model in the form of unique numerical values (ID) for every input that can represent a token of a corpus or a class of the NLP task (part of speech tag, named entity class). At the embedding stage, features such as the prefix, the suffix, the shape and the lowercase form of a word are used for the extraction of hashed values that reflect word similarities.", "At this stage a vocabulary with hashed values and their vectors exist in the model. For the exploitation of adjacent vectors in the state of encoding, values pass through the Convolutional Neural Network (CNN) and get merged with their context. The result of the encoding process is a matrix of vectors that represents information. 
Before the prediction of an ID, the matrix has to be passed through the Attention Layer of the CNN, using a query vector to summarize the input.", "At prediction, a Softmax function is used for the prediction of a super tag with part of speech and morphology information. Similarly for named entities, the available class is predicted. After the training process of the model, the CNN is able to be used for NLP tasks.", "In the latest release of spaCy the deep learning models are reported to be “10 times smaller, 20% more accurate and cheaper to run than the previous generation” BIBREF3. The models are implemented using Thinc, spaCy’s machine learning library." ], [ "The Institute for Language and Speech Processing was the first to implement a Part of Speech Tagger with morphological features and has evaluated the experiments in terms of the error rate of the predicted classes BIBREF4. These models can be accessed from web services offered by the Institute . However, the creation of a compound Greek POS tagger using spaCy, a fast and accurate NLP python framework is new.", "For the creation of a Part of Speech Tagger in the Greek Language a number of steps was followed. The tags from the “Makedonia” dataset, which is described below, were extracted and matched to a set of morphological rules. The tokens in the dataset were adjusted to annotation rules that the model will use. Different parameters in the configuration of spaCy's model were tested while training and their results are presented in SECREF6." ], [ "The dataset comes from texts of the Greek newspaper “Makedonia”. The articles in the newspaper are categorized in different subjects, such as sports, health, economy and political news. Data retrieval was done from the website of the clarin project BIBREF5 and consist of a set of xml files with information at paragraph, sentence and word level. It must be underlined that this annotation was performed by the Institute for Language and Speech Processing and data is licenced under the CC - BY - NC - SA licence.", "Information about the dataset includes the tokens of a set of articles and their position in a sentence, the lemma and the part of speech of every token. The various values of POS tags were retrieved and incorporated into a tag map. The labels and morphology they describe are explained below." ], [ "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], [ "The articles from the newspaper were fed in spaCy library into the proper format for training. Different parameters were tested, in order to get the optimal result. The dataset was shuffled, using the same seed for all the experiments and was split into a train set (70%), a test set (20%) and a validation set (10%). 
Information was passed through the training algorithm in batches with an increasing batch size from 4 to 32 and a step of 1.001. Additionally, a dropout rate was configured in every batch, initialized to 0.6 which dropped during the training process to 0.4. Most of the experiments were trained using 30 epochs.", "The main area of study for the experiments focuses on three important components. At first, we investigate the difference in results between part of speech taggers that classify morphological features and taggers that detect only the part of speech. Moreover, we explore the significance of pretrained vectors used from a model and their effect on the extraction of better results. Most importantly, the usage of subwords of tokens from a tagger as embeddings is issued. For the experiments, precision, recall and f1 score are used as evaluation metrics." ], [ "In the first experiment the model was trained using pretrained vectors extracted from two different sources, Common Crawl and Wikipedia and can be found at the official FastText web page BIBREF8. Both sources were trained on the same algorithm called FastText BIBREF9, an extension of Word2Vec that treats tokens as an average sum of sub-words and finds similarities of words based on their n-grams. The configuration of the FastText model for Wikipedia vectors is according to BIBREF10, whilst the model for CC vectors is a position-weight CBOW 5 length n-grams with a window size of 5 tokens and 10 negative words. The file with the Common Crawl vectors consists of 2.000.000 tokens with 300 dimension, whereas the file with the Wikipedia vectors consists of 300.000 tokens with 300 dimension.The results can be viewed in the following table, with the first part describing the Common Crawl results and the second one the Wikipedia results.", "At the results, POS and morph classes refer to the tag labels explained in SECREF4, whilst only POS classes relate to annotated labels that describe only the part of speech. It is evident that even though the CC vectors are noisy, coming from a web source, they lead to better results than Wikipedia, possibly because they have a larger variety of tokens.", "In the next experiment, the dataset was used for the composition of embeddings for the part of speech tagger. The dataset was trained on a FastText model with the same parameters that extracted the Common Crawl vectors. As a result, 140.000 vectors with 300 dimension were exported. It must be mentioned that the tagset with the morphological features was used.", "The values of the metrics in this case were almost as good and comparable to the CC ones. However, the model trained with a larger vocabulary had higher results. Also, the model with the dataset vectors did not have the flexibility to classify unknown words.", "As a next step, the test set of the dataset was altered by replacing words with syntactical mistakes to test the tolerance of the model in OOV words. Suffixes of verbs were altered and vowels were replaced with others, affecting 20% of the tokens of the dataset. Using again the more complex tagset for training, the results can be found in Table 3.", "What can be concluded is that the model did not have a flexibility in OOV words. Of course, this can also be an advantage, meaning that the model recognized the mismatch of a wrong word with its class.", "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results. 
In order to minimize this problem, the unknown words were first passed through a FastText model to get a vector from their subwords. The resulting vectors were imported in the vocabulary with the CC vectors before training. The model was also trained using as a vocabulary the unknown words and the tokens from the Common Crawl vectors, both buffered in the same FastText model. Results are listed in Table 4.", "It was noticed that the model performed better when using the vectors from different FastText models. It was expected that the second experiment would have performed better, as the tokens were inserted into the same FastText model and the vectors exported from both sources should match." ], [ "In BIBREF11 the development of an entity recognizer with named entities that follow a proper set of rules is described with evaluation metrics that reach 86% for precision and 81% for recall. Our implementation follows these rules as well. Also, a pretrained model is offered from a library called polyglot for recognition BIBREF12, which has evaluated NER in Greek with statistical machine translation.", "For the creation of a Named Entity Recognizer in the Greek Language a number of steps was followed. The entities from the “Makedonia” dataset were extracted and annotated, forming a set of keywords that matched a specific set of rules the entities had to follow. These keywords were used to reform the dataset and also to find entities from a larger dataset, like Wikipedia. The spaCy model was trained using both datasets and their results are compared to a test set. Additionally, the spaCy model was trained using as a feature the POS tags of the tokens. All results are presented in SECREF13." ], [ "In the “Makedonia” dataset information about named entities is organized with the index of the character the named entity starts, the index of the character the named entity ends and the class of the named entity. The dataset was parsed and the named entities were added into the keyword list, with every record representing the token (or the set of tokens) and its class. Noise was removed from the list and the records were sorted by the length of the entity. The keyword list had an average of 72.000 records." ], [ "In order to gain more information about the context of the Greek entities, a percentage of Greek Wikipedia was used. After applying sentence and token segmentation on Wikipedia text and using a pretrained model from polyglot, the keyword list increased. The keyword list had at this point about 350,000 records and consisted of 4 classes: location (LOC), organization (ORG), person (PERSON) and facility (FAC). A percentage of Greek Wikipedia was parsed and used for training in spaCy. The results from the training are presented in SECREF13." ], [ "Both datasets were fed into the library in proper format for training. In training process, the entity recognizer had the same configuration with the POS tagger, using the same percentages for train, validation and test sets. It must be noted that all the models used the Common Crawl pretrained vectors for a vocabulary. The results are compared using the macro F1 score.", "At first the datasets from both sources (Makedonia, Wikipedia) were used for training with 10 iterations and testing from the model. The results can be viewed in the following table:", "It seemed that the average F1 score was higher for the Makedonia corpus, as it was the basis of the configuration for the keyword list. 
In order to have an objective evaluation, the results of each corpus per entity class were observed.", "Both sources had good results in non entity tokens, which affected the F1 score. Moreover, the model did not perform well for facilities, as polyglot's Greek recognizer does not support that class and FAC entities cover a small amount of the list.", "In the second experiment, the datasets were compared to a common test set that followed the desired set of rules.", "Again, the Makedonia corpus performed better, because of the proper annotation on the keyword list.", "In an experiment worth mentioning the correlation of the part of speech with the performance of the recognizer was explored. In this experiment, both pipelines (part of speech, entity recognition) were used for training with 30 iterations and the model was trained twice: with and without the usage of the part of speech information for recognition.", "It is evident that the recognizer did not gain knowledge from the part of speech tags of the tokens." ], [ "Natural Language Processing meets numerous problems in its applications, especially in uncommon languages such as Greek. This paper proposes a machine learning approach to part-of-speech tagging and named entity recognition for Greek, a highly inflected language using spaCy, a very robust and popular framework. Although significant work has been done, there are several more things that can be accomplished. The need of more datasets for the Greek language is evident, but the results are quite satisfying, comparable to other languages." ] ], "section_name": [ "Introduction", "SpaCy's deep learning model for POS tagging and Named Entity Recognition", "Creating a Greek POS Tagger using spaCy", "Creating a Greek POS Tagger using spaCy ::: Dataset evaluation and selection", "Creating a Greek POS Tagger using spaCy ::: Creation of the Tag Map with reference to Universal Dependencies", "Creating a Greek POS Tagger using spaCy ::: POS Tagger training", "Creating a Greek POS Tagger using spaCy ::: Evaluation and comparison of results", "Creating a state of the art Named Entity Recognizer using spaCy", "Creating a state of the art Named Entity Recognizer using spaCy ::: Dataset evaluation and selection", "Creating a state of the art Named Entity Recognizer using spaCy ::: Usage of Wikipedia dataset for training", "Creating a state of the art Named Entity Recognizer using spaCy ::: Evaluation and comparison of results", "Conclusions" ] }
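For the POS tagger training procedure described in this record (compounding batch size from 4 to 32 with a step of 1.001, dropout decaying from 0.6 to 0.4, 30 epochs), a minimal sketch assuming a spaCy v2-style API is shown below; the API has since changed in spaCy v3. The toy sentences, tag labels, and the linear dropout decay are placeholders, not the authors' dataset or exact schedule.

```python
import random
import spacy
from spacy.util import minibatch, compounding

# Placeholder annotated data; the real training used the "Makedonia" corpus
# with the extended morphological tag map.
TRAIN_DATA = [
    ("a toy sentence", {"tags": ["DET", "ADJ", "NOUN"]}),
    ("another toy sentence", {"tags": ["DET", "ADJ", "NOUN"]}),
]

nlp = spacy.blank("el")                  # Greek language class in spaCy
tagger = nlp.create_pipe("tagger")
for _, annotations in TRAIN_DATA:
    for tag in annotations["tags"]:
        tagger.add_label(tag)
nlp.add_pipe(tagger)

optimizer = nlp.begin_training()
n_epochs = 30
for epoch in range(n_epochs):
    random.shuffle(TRAIN_DATA)
    # Assumed linear decay of the dropout rate from 0.6 to 0.4.
    drop = 0.6 - 0.2 * epoch / max(1, n_epochs - 1)
    losses = {}
    # Batch size compounds from 4 to 32 with a step of 1.001, as in the record.
    for batch in minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001)):
        texts, golds = zip(*batch)
        nlp.update(texts, golds, sgd=optimizer, drop=drop, losses=losses)
    print(epoch, losses)

print([(t.text, t.tag_) for t in nlp("a toy sentence")])
```

In practice the annotated tokens and the morphological tag map would replace the toy examples, and pretrained Common Crawl vectors would be loaded into the vocabulary before training, as the record describes.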
{ "answers": [ { "annotation_id": [ "42cd2ca5121830f67b272b59ddd74a5e7e27617b", "587e9747cf63f3ff6388885294b69b26186af164", "dd73ede58ac296f7031d3767739d2cc855788e1a" ], "answer": [ { "evidence": [ "The values of the metrics in this case were almost as good and comparable to the CC ones. However, the model trained with a larger vocabulary had higher results. Also, the model with the dataset vectors did not have the flexibility to classify unknown words.", "As a next step, the test set of the dataset was altered by replacing words with syntactical mistakes to test the tolerance of the model in OOV words. Suffixes of verbs were altered and vowels were replaced with others, affecting 20% of the tokens of the dataset. Using again the more complex tagset for training, the results can be found in Table 3.", "What can be concluded is that the model did not have a flexibility in OOV words. Of course, this can also be an advantage, meaning that the model recognized the mismatch of a wrong word with its class.", "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results. In order to minimize this problem, the unknown words were first passed through a FastText model to get a vector from their subwords. The resulting vectors were imported in the vocabulary with the CC vectors before training. The model was also trained using as a vocabulary the unknown words and the tokens from the Common Crawl vectors, both buffered in the same FastText model. Results are listed in Table 4.", "It was noticed that the model performed better when using the vectors from different FastText models. It was expected that the second experiment would have performed better, as the tokens were inserted into the same FastText model and the vectors exported from both sources should match." ], "extractive_spans": [ "model did not have a flexibility in OOV words", "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector", "It was noticed that the model performed better when using the vectors from different FastText models" ], "free_form_answer": "", "highlighted_evidence": [ "Also, the model with the dataset vectors did not have the flexibility to classify unknown words.\n\nAs a next step, the test set of the dataset was altered by replacing words with syntactical mistakes to test the tolerance of the model in OOV words. Suffixes of verbs were altered and vowels were replaced with others, affecting 20% of the tokens of the dataset. Using again the more complex tagset for training, the results can be found in Table 3.\n\nWhat can be concluded is that the model did not have a flexibility in OOV words. Of course, this can also be an advantage, meaning that the model recognized the mismatch of a wrong word with its class.\n\nOne disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results. In order to minimize this problem, the unknown words were first passed through a FastText model to get a vector from their subwords. The resulting vectors were imported in the vocabulary with the CC vectors before training. The model was also trained using as a vocabulary the unknown words and the tokens from the Common Crawl vectors, both buffered in the same FastText model. Results are listed in Table 4.\n\nIt was noticed that the model performed better when using the vectors from different FastText models." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results. In order to minimize this problem, the unknown words were first passed through a FastText model to get a vector from their subwords. The resulting vectors were imported in the vocabulary with the CC vectors before training. The model was also trained using as a vocabulary the unknown words and the tokens from the Common Crawl vectors, both buffered in the same FastText model. Results are listed in Table 4." ], "extractive_spans": [ "for unknown words the model assigned a zero vector" ], "free_form_answer": "", "highlighted_evidence": [ "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The values of the metrics in this case were almost as good and comparable to the CC ones. However, the model trained with a larger vocabulary had higher results. Also, the model with the dataset vectors did not have the flexibility to classify unknown words.", "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results. In order to minimize this problem, the unknown words were first passed through a FastText model to get a vector from their subwords. The resulting vectors were imported in the vocabulary with the CC vectors before training. The model was also trained using as a vocabulary the unknown words and the tokens from the Common Crawl vectors, both buffered in the same FastText model. Results are listed in Table 4." ], "extractive_spans": [ "Also, the model with the dataset vectors did not have the flexibility to classify unknown words.", "the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results" ], "free_form_answer": "", "highlighted_evidence": [ "The values of the metrics in this case were almost as good and comparable to the CC ones. However, the model trained with a larger vocabulary had higher results. Also, the model with the dataset vectors did not have the flexibility to classify unknown words.", "One disadvantage that the previous model had is that for unknown words the model assigned a zero vector, affecting the testing results. In order to minimize this problem, the unknown words were first passed through a FastText model to get a vector from their subwords. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "9d4f78a96810ccd5c61ff9feda0baa133377a988", "5291cada7aff5443ccea1fb95a8f84b7dbabb8a5", "71dd5d86160d5ee8d6dd8a735343d215ec23f6e2" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Results based on CommonCrawl pretrained vectors and based on Wikipedia pretrained vectors", "At the results, POS and morph classes refer to the tag labels explained in SECREF4, whilst only POS classes relate to annotated labels that describe only the part of speech. 
It is evident that even though the CC vectors are noisy, coming from a web source, they lead to better results than Wikipedia, possibly because they have a larger variety of tokens.", "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results based on CommonCrawl pretrained vectors and based on Wikipedia pretrained vectors", "At the results, POS and morph classes refer to the tag labels explained in SECREF4, whilst only POS classes relate to annotated labels that describe only the part of speech. It is evident that even though the CC vectors are noisy, coming from a web source, they lead to better results than Wikipedia, possibly because they have a larger variety of tokens.", "The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "At the results, POS and morph classes refer to the tag labels explained in SECREF4, whilst only POS classes relate to annotated labels that describe only the part of speech. It is evident that even though the CC vectors are noisy, coming from a web source, they lead to better results than Wikipedia, possibly because they have a larger variety of tokens." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "At the results, POS and morph classes refer to the tag labels explained in SECREF4, whilst only POS classes relate to annotated labels that describe only the part of speech. It is evident that even though the CC vectors are noisy, coming from a web source, they lead to better results than Wikipedia, possibly because they have a larger variety of tokens." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The main area of study for the experiments focuses on three important components. At first, we investigate the difference in results between part of speech taggers that classify morphological features and taggers that detect only the part of speech. Moreover, we explore the significance of pretrained vectors used from a model and their effect on the extraction of better results. 
Most importantly, the usage of subwords of tokens from a tagger as embeddings is issued. For the experiments, precision, recall and f1 score are used as evaluation metrics.", "FLOAT SELECTED: Table 1: Results based on CommonCrawl pretrained vectors and based on Wikipedia pretrained vectors", "FLOAT SELECTED: Table 6: Results of the different train sets per class" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The main area of study for the experiments focuses on three important components. At first, we investigate the difference in results between part of speech taggers that classify morphological features and taggers that detect only the part of speech. Moreover, we explore the significance of pretrained vectors used from a model and their effect on the extraction of better results. Most importantly, the usage of subwords of tokens from a tagger as embeddings is issued. For the experiments, precision, recall and f1 score are used as evaluation metrics.", "FLOAT SELECTED: Table 1: Results based on CommonCrawl pretrained vectors and based on Wikipedia pretrained vectors", "FLOAT SELECTED: Table 6: Results of the different train sets per class" ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "907d60ac9b066c79ffce07a04fe4a567688f7588", "ac3416a407a2ff361895138b8b1e1313f0563f75", "bc396d79f7f6c7c67d0d49b9f3ffd23d3bc81531" ], "answer": [ { "evidence": [ "In order to gain more information about the context of the Greek entities, a percentage of Greek Wikipedia was used. After applying sentence and token segmentation on Wikipedia text and using a pretrained model from polyglot, the keyword list increased. The keyword list had at this point about 350,000 records and consisted of 4 classes: location (LOC), organization (ORG), person (PERSON) and facility (FAC). A percentage of Greek Wikipedia was parsed and used for training in spaCy. The results from the training are presented in SECREF13." ], "extractive_spans": [], "free_form_answer": "Extended with facility (FAC) type.", "highlighted_evidence": [ "In order to gain more information about the context of the Greek entities, a percentage of Greek Wikipedia was used. After applying sentence and token segmentation on Wikipedia text and using a pretrained model from polyglot, the keyword list increased. The keyword list had at this point about 350,000 records and consisted of 4 classes: location (LOC), organization (ORG), person (PERSON) and facility (FAC).", "facility (FAC)" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Another main task for extracting semantic information is Named Entity Recognition (NER). Named Entity Recognition is a process where a word or a set of words reference to a world object. Most Natural Language Processing models classify named entities that describe people, locations, organizations, following the ENAMEX type or can be more complex by detecting numerical types, like percentages (NUMEX) or dates (TIMEX) BIBREF2.", "In order to gain more information about the context of the Greek entities, a percentage of Greek Wikipedia was used. After applying sentence and token segmentation on Wikipedia text and using a pretrained model from polyglot, the keyword list increased. 
The keyword list had at this point about 350,000 records and consisted of 4 classes: location (LOC), organization (ORG), person (PERSON) and facility (FAC). A percentage of Greek Wikipedia was parsed and used for training in spaCy. The results from the training are presented in SECREF13." ], "extractive_spans": [ "The keyword list had at this point about 350,000 records and consisted of 4 classes: location (LOC), organization (ORG), person (PERSON) and facility (FAC)" ], "free_form_answer": "", "highlighted_evidence": [ "Most Natural Language Processing models classify named entities that describe people, locations, organizations, following the ENAMEX type or can be more complex by detecting numerical types, like percentages (NUMEX) or dates (TIMEX) BIBREF2.", "In order to gain more information about the context of the Greek entities, a percentage of Greek Wikipedia was used. After applying sentence and token segmentation on Wikipedia text and using a pretrained model from polyglot, the keyword list increased. The keyword list had at this point about 350,000 records and consisted of 4 classes: location (LOC), organization (ORG), person (PERSON) and facility (FAC). A percentage of Greek Wikipedia was parsed and used for training in spaCy. The results from the training are presented in SECREF13." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Another main task for extracting semantic information is Named Entity Recognition (NER). Named Entity Recognition is a process where a word or a set of words reference to a world object. Most Natural Language Processing models classify named entities that describe people, locations, organizations, following the ENAMEX type or can be more complex by detecting numerical types, like percentages (NUMEX) or dates (TIMEX) BIBREF2.", "The greek Part of Speech Tagging and Named Entity Recognition models presented in this paper were developed using the spaCy library BIBREF3. SpaCy is an open source, Natural Language Processing library that supports a variety of tasks, including POS Tagging, Named Entity Recognition, Dependency Parsing, etc. SpaCy uses sophisticated neural network-based models for the implementation of Natural Language Processing components that achieve state-of-the-art results in many of these tasks." ], "extractive_spans": [ "SpaCy is an open source, Natural Language Processing library that supports a variety of tasks, including POS Tagging, Named Entity Recognition, Dependency Parsing, etc. SpaCy uses sophisticated neural network-based models" ], "free_form_answer": "", "highlighted_evidence": [ "Most Natural Language Processing models classify named entities that describe people, locations, organizations, following the ENAMEX type or can be more complex by detecting numerical types, like percentages (NUMEX) or dates (TIMEX) BIBREF2.\n\nThe greek Part of Speech Tagging and Named Entity Recognition models presented in this paper were developed using the spaCy library BIBREF3. SpaCy is an open source, Natural Language Processing library that supports a variety of tasks, including POS Tagging, Named Entity Recognition, Dependency Parsing, etc. SpaCy uses sophisticated neural network-based models for the implementation of Natural Language Processing components that achieve state-of-the-art results in many of these tasks." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "39c60f62fef1248dffe1d89ebc6d40a615330e46", "9a539116c3f7d0180a3f7b57801e17819dcce0a1", "be1417c4a6e3e2ba6decfe9baccfa5f08a1d037c" ], "answer": [ { "evidence": [ "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], "extractive_spans": [ "like the gender, the number, and the case" ], "free_form_answer": "", "highlighted_evidence": [ "Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Information about the dataset includes the tokens of a set of articles and their position in a sentence, the lemma and the part of speech of every token. The various values of POS tags were retrieved and incorporated into a tag map. The labels and morphology they describe are explained below.", "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], "extractive_spans": [ "Information about the dataset includes the tokens of a set of articles and their position in a sentence, the lemma and the part of speech of every token", "The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case" ], "free_form_answer": "", "highlighted_evidence": [ "Information about the dataset includes the tokens of a set of articles and their position in a sentence, the lemma and the part of speech of every token. 
The various values of POS tags were retrieved and incorporated into a tag map.", "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6.", "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], "extractive_spans": [ "The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers" ], "free_form_answer": "", "highlighted_evidence": [ "Different labels were found at the dataset and were matched to a label map, where for each label the part of the speech and their morphology are analyzed. In more detail, the first two characters refer to the part of speech and accordingly extend to more information about it. The label map supports 16 standard part of speech tags: Adjective, Adposition, Adverb, Coordinating Conjuction, Determiner, Interjection, Noun, Numeral, Particle, Pronoun, Proper Noun, Punctuation, Subordinating Conjuction, Symbol, Verb and Other. Each tag describes morphological features of the word, depending on the part of the speech to which it refers like the gender, the number, and the case BIBREF6. 
It must be mentioned that the extraction of morphological rules and the matching with the tags was done using the Greek version of the Universal Dependencies BIBREF7." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "What are the issues identified for out-of-vocabulary words?", "Is the morphology detection task evaluated?", "How does the model proposed extend ENAMEX?", "Which morphological features are extracted?" ], "question_id": [ "0a5fd0e5f4ab12be57be20416a5ea7c3db5fb662", "5d03a82a70f7b1ab9829891403ec31607828cbd5", "6cad6f074b0486210ffa4982c8d1632f5aa91d91", "d38b3e0896b105d171e69ce34c689e4a7e934522" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "morphology", "morphology", "morphology", "morphology" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Table 1: Results based on CommonCrawl pretrained vectors and based on Wikipedia pretrained vectors", "Table 4: Common Crawl pretrained + vectors annotated from out of vocabulary words and all vectors annotated from FastText (Common Crawl pretrained and from out of vocabulary words)", "Table 2: Usage of pretrained vectors from dataset", "Table 6: Results of the different train sets per class", "Table 7: Comparison of results with common test set", "Table 5: Comparison of Macro Average F1 score with different train sets" ], "file": [ "3-Table1-1.png", "3-Table4-1.png", "3-Table2-1.png", "4-Table6-1.png", "4-Table7-1.png", "4-Table5-1.png" ] }
[ "How does the model proposed extend ENAMEX?" ]
[ [ "1912.10162-Creating a state of the art Named Entity Recognizer using spaCy ::: Usage of Wikipedia dataset for training-0", "1912.10162-Introduction-2", "1912.10162-Introduction-3" ] ]
[ "Extended with facility (FAC) type." ]
54
1909.13184
Towards Automatic Bot Detection in Twitter for Health-related Tasks
With the increasing use of social media data for health-related research, the credibility of the information from this source has been questioned as the posts may originate from automated accounts or "bots". While automatic bot detection approaches have been proposed, there are none that have been evaluated on users posting health-related information. In this paper, we extend an existing bot detection system and customize it for health-related research. Using a dataset of Twitter users, we first show that the system, which was designed for political bot detection, underperforms when applied to health-related Twitter users. We then incorporate additional features and a statistical machine learning classifier to significantly improve bot detection performance. Our approach obtains F_1 scores of 0.7 for the "bot" class, representing improvements of 0.339. Our approach is customizable and generalizable for bot detection in other health-related social media cohorts.
{ "paragraphs": [ [ "In recent years, social media has evolved into an important source of information for various types of health-related research. Social networks encapsulate large volumes of data associated with diverse health topics, generated by active user bases in continuous growth. Twitter, for example, has 330 million monthly active users worldwide that generate almost 500 million micro-blogs (tweets) per day. For some years, the use of the platform to share personal health information has been growing, particularly amongst people living with one or more chronic conditions and those living with disability. Twenty percent of social network site users living with chronic conditions gather and share health information on the sites, compared with 12% of social network site users who report no chronic conditions. Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. Twitter is particularly popular in research due to the availability of the public streaming API, which releases a sample of publicly posted data in real time. While early health-related research from social media focused almost exclusively on population-level studies, some very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women BIBREF4.", "When conducting user-level studies from social media, one challenge is to ascertain the credibility of the information posted. Particularly, it is important to verify, when deriving statistical estimates from user cohorts, that the user accounts represent humans and not bots (accounts that can be controlled to automatically produce content and interact with other profiles)BIBREF5, BIBREF6. Bots may spread false information by automatically retweeting posts without a human verifying the facts or to influence public opinions on particular topics on purpose BIBREF5, BIBREF7, BIBREF8. For example, a recent study BIBREF9 showed that the highest proportion of anti-vaccine content is generated by accounts with unknown or intermediate bot scores, meaning that the existing methods were not able to fully determine if they were indeed bots. Automatic bot detection techniques mostly rely on extracting features from users' profiles and their social networks BIBREF10, BIBREF11. Some studies have used Honeypot profiles on Twitter to identify and analyze bots BIBREF12, while other studies have analyzed social proximity BIBREF13 or both social and content proximities BIBREF10, tweet timing intervals BIBREF14, or user-level content-based and graph-based features BIBREF15. However, in response to efforts towards keeping Twitter bot-free, bots have evolved and changed to overcome the detection techniques BIBREF16.", "The objectives of this study are to (i) evaluate an existing bot detection system on user-level datasets selected for their health-related content, and (ii) extend the bot detection system for effective application within the health realm. Bot detection approaches have been published in the past few years, but most of the code and data necessary for reproducing the published results were not made available BIBREF17, BIBREF18, BIBREF19. 
The only system for which we found both operational code and data available, Botometer BIBREF20 (formerly BotOrNot), was chosen as the benchmark system for this study. To the best of our knowledge, this paper presents the first study on health-related bot detection. We have made the classification code and training set of annotated users available at (we will provide a URL with the camera-ready version of the paper)." ], [ "To identify bots in health-related social media data, we retrieved a sample of $10,417$ users from a database containing more than 400 million publicly available tweets posted by more than $100,000$ users who have announced their pregnancy on Twitter BIBREF4. This sample is based on related work for detecting users who have mentioned various pregnancy outcomes in their tweets. Two professional annotators manually categorized the $10,417$ users as \"bot,\" \"non-bot,\" or \"unavailable,\" based on their publicly available Twitter sites. Users were annotated broadly as \"bot\" if, in contrast to users annotated as \"non-bot,\" they do not appear to be posting personal information. Users were annotated as \"unavailable\" if their Twitter sites could not be viewed at the time of annotation, due to modifying their privacy settings or being removed or suspended from Twitter. Based on 1000 overlapping annotations, their inter-annotator agreement (IAA) was $\\kappa $ = $0.93$ (Cohen’s kappa BIBREF21), considered \"almost perfect agreement\" BIBREF22. Their IAA does not include disagreements resulting from the change of a user's status to or from \"unavailable\" in the time between the first and second annotations. Upon resolving the disagreements, 413 $(4\\%)$ users were annotated as \"bot,\" 7849 $(75.35\\%)$ as \"non-bot,\" and $20.69$ $(19.9\\%)$ as \"unavailable\"." ], [ "We used the 8262 \"bot\" and \"non-bot\" users in experiments to train and evaluate three classification systems. We split the users into $80\\%$ (training) and $20\\%$ (test) sets, stratified based on the distribution of \"bot\" and \"non-bot\" users. The training set includes $61,160,686$ tweets posted by 6610 users, and the held-out test set includes $15,703,735$ tweets posted by 1652 users. First, we evaluated Botometer on our held-out test set. Botometer is a publicly available bot detection system designed for political bot detection. It outputs a score between 0 and 1 for a user, representing the likelihood that a user is a bot. Second, we used the Botometer score for each user as a feature in training a gradient boosting classifier which is a decision tree-based ensemble machine learning algorithm with gradient boosting BIBREF23 and can be used to address class imbalance. To adapt the Botometer scores to our binary classification task, we set the threshold to $0.47$, based on performing 5-fold cross validation over the training set. To further address the class imbalance, we used the Synthetic Minority Over-sampling Technique (SMOTE)BIBREF24 to create artificial instances of \"bot\" users in the training set. We also performed 5-fold cross validation over the training set to optimize parameters for the classifier; we used exponential as the loss function, set the number of estimators to 200, and set the learning rate to $0.1$. Third, we used the classifier with an extended set of features that are not used by Botometer. 
Based on our manual annotation, we consider the following features to be potentially informative for distinguishing \"bot\" and \"non-bot\" users in health-related data:", "Tweet Diversity. Considering that \"bot\" users may re-post the same tweets, we used the ratio of a user's unique tweets to the total number of tweets posted by the user, where 0 indicates that the user has posted only the same tweet multiple times, and 1 indicates that each tweet is unique and has been posted only once. As Figure 1 illustrates, a subset of \"bot\" users (in the training set) have posted more of the same tweets than \"non-bot\" users.", "URL score. During manual annotation, we found that \"bot\" users' tweets frequently contain URLs (e.g., advertisements for health-related products, such as medications), so we use the ratio of the number of a user's tweets containing a URL to the total number of tweets posted by the user.", "Mean Daily Posts. Considering that \"bot\" users may post tweets more frequently than \"non-bot\" users, we measured the average and standard deviation of the number of tweets posted daily by a user. As Figure 1 illustrates, a subset of \"bot\" users post, on average, more tweets daily than \"non-bot\" users.", "Topics. Considering that \"bot\" users may post tweets about a limited number of targeted topics, we used topic modeling to measure the heterogeneity of topics in a user's tweets. We used Latent Dirichlet Allocation (LDA)BIBREF25 to extract the top five topics from all of the users' 1000 most recent tweets (or all the tweets if a user has posted less than 1000 tweets), and used the mean of the weights of each topic across all of a user's tweets.", "Mean Post Length. Considering that the length of tweets may be different between \"bot\" and \"non-bot\" users, we used the mean word length and standard deviation of a user's tweets.", "Profile Picture. In addition to tweet-related features, we used features based on information in users' profiles. Considering that a \"non-bot\" user's profile picture may be more likely to contain a face, we used a publicly available system to detect the number of faces in a profile picture. As Figure 2 illustrates, a face was not detected in the profile picture of the majority of \"non-bot\" users (in the training set), whereas at least one face was detected in the profile picture of the majority of \"bot\" users.", "User Name. Finally, we used a publicly available lexicon to detect the presence or absence of a person's name in a user name. As Figure 2 illustrates, the name of a person is present (1) in approximately half of \"non-bot\" user names, whereas the name of a person is absent (0) in the majority of \"bot\" user names." ], [ "Table 1 presents the precision, recall, and F$_1$-scores for the three bot detection systems evaluated on the held-out test set. The F$_1$-score for the \"bot\" class indicates that Botometer ($0.361$), designed for political bot detection, does not generalize well for detecting \"bot\" users in health-related data. Although the classifier with only the Botometer score as a feature ($0.286$) performs even worse than the default Botometer system, our extended feature set significantly improves performance ($0.700$). For imbalanced data, a higher F$_1$-score for the majority class is typical; in this case, it reflects that we have modeled the detection of \"bot\" users based on their natural distribution in health-related data." 
], [ "Our results demonstrate that (i) a publicly available bot detection system, designed for political bot detection, underperforms when applied to health-related data, and (ii) extending the system with simple features derived from health-related data significantly improves performance. An F$_1$-score of $0.700$ for the \"bot\" class represents a promising benchmark for automatic classification of highly imbalanced Twitter data and, in this case, for detecting users who are not reporting information about their own pregnancy on Twitter. Detecting such users is particularly important in the process of automatically selecting cohortsBIBREF26 from a population of social media users for user-level observational studiesBIBREF27.", "A brief error analysis of the 25 false negative users (in the held-out test set of 1652 users) from the classifier with the extended feature set reveals that, while only one of the users is an account that automatically re-posts other users' tweets, the majority of the errors can be attributed to our broad definition of \"bot\" users, which includes health-related companies, organizations, forums, clubs, and support groups that are not posting personal information. These users are particularly challenging to automatically identify as \"bot\" users because, with humans posting on behalf of an online maternity store, or to a pregnancy forum, for example, their tweets resemble those posted by \"non-bot\" users. In future work, we will focus on deriving features for modeling the nuances that distinguish such \"bot\" users." ], [ "As the use of social networks, such as Twitter, in health research is increasing, there is a growing need to validate the credibility of the data prior to making conclusions. The presence of bots in social media presents a crucial problem, particularly because bots may be customized to perpetuate specific biased or false information, or to execute advertising or marketing goals. We demonstrate that, while existing systems have been successful in detecting bots in other domains, they do not perform as well for detecting health-related bots. Using a machine learning algorithm on top of an existing bot detection system, and a set of simple derived features, we were able to significantly improve bot detection performance in health-related data. Introducing more features would likely contribute to further improving performance, which we will explore in future work." ], [ "This study was funded in part by the National Library of Medicine (NLM) (grant number: R01LM011176) and the National Institute on Drug Abuse (NIDA) (grant number: R01DA046619) of the National Institutes of Health (NIH). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health." ] ], "section_name": [ "Introduction", "Methods ::: Corpus", "Methods ::: Classification", "Results", "Discussion", "Conclusion", "Acknowledgments" ] }
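The classification setup described in the paper text above (simple user-level features plus the Botometer score, SMOTE to rebalance the rare "bot" class, and a gradient boosting classifier with exponential loss, 200 estimators and a 0.1 learning rate) could be approximated as follows. This is an illustrative reconstruction rather than the authors' released code: the toy cohort, the column names, and the way the Botometer score is obtained are assumptions; only the reported hyper-parameters come from the text.

```python
# Sketch of the feature extraction and gradient boosting classifier described
# above; not the authors' implementation. Inputs are synthetic placeholders.
import random

import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier


def user_features(tweets, days_active, botometer_score):
    """Derive the simple per-user features discussed in the Methods section."""
    n = len(tweets)
    return {
        "tweet_diversity": len(set(tweets)) / n,             # unique / total tweets
        "url_score": sum("http" in t for t in tweets) / n,   # fraction with a URL
        "mean_daily_posts": n / max(days_active, 1),          # posting frequency
        "mean_post_length": sum(len(t.split()) for t in tweets) / n,
        "botometer_score": botometer_score,                   # external bot likelihood
    }


# Toy stand-in for the annotated cohort: "bot" users repeat link-heavy posts,
# "non-bot" users post varied personal updates (imbalanced, as in the real data).
random.seed(0)
rows, labels = [], []
for i in range(40):
    is_bot = i < 8
    tweets = (["Buy vitamins now http://example.com"] * 30 if is_bot else
              [f"feeling {random.choice(['tired', 'great', 'fine'])} today {j}"
               for j in range(10)])
    rows.append(user_features(tweets, days_active=30,
                              botometer_score=0.8 if is_bot else 0.2))
    labels.append(int(is_bot))

X, y = pd.DataFrame(rows), pd.Series(labels)

# Oversample the minority "bot" class, then fit the boosted trees with the
# hyper-parameters reported above (exponential loss, 200 trees, lr = 0.1).
X_res, y_res = SMOTE(k_neighbors=3, random_state=0).fit_resample(X, y)
clf = GradientBoostingClassifier(loss="exponential", n_estimators=200,
                                 learning_rate=0.1).fit(X_res, y_res)
print(clf.predict_proba(X)[:3, 1])  # estimated probability of "bot" per user
```

In practice the real topic-heterogeneity, profile-picture and user-name features would be computed from each user's full timeline and profile before being appended to this feature table.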
{ "answers": [ { "annotation_id": [ "0bb652250238b8a2bc1611bbb65d3066b9af10cf", "28f414828dbd453c48239b09751c59ab82373c6b", "e5def8aa3b90557a540ea619defeb5c52810c940" ], "answer": [ { "evidence": [ "Table 1 presents the precision, recall, and F$_1$-scores for the three bot detection systems evaluated on the held-out test set. The F$_1$-score for the \"bot\" class indicates that Botometer ($0.361$), designed for political bot detection, does not generalize well for detecting \"bot\" users in health-related data. Although the classifier with only the Botometer score as a feature ($0.286$) performs even worse than the default Botometer system, our extended feature set significantly improves performance ($0.700$). For imbalanced data, a higher F$_1$-score for the majority class is typical; in this case, it reflects that we have modeled the detection of \"bot\" users based on their natural distribution in health-related data." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Table 1 presents the precision, recall, and F$_1$-scores for the three bot detection systems evaluated on the held-out test set. The F$_1$-score for the \"bot\" class indicates that Botometer ($0.361$), designed for political bot detection, does not generalize well for detecting \"bot\" users in health-related data. " ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "4d74ee06035906362b40c3d9f5ad49e3087d3c2b", "b89742393ae718f34cd5c772a093a36f6b39a0ae", "e595c947c85305e0e009c01130e8709ebc876a7f" ], "answer": [ { "evidence": [ "To identify bots in health-related social media data, we retrieved a sample of $10,417$ users from a database containing more than 400 million publicly available tweets posted by more than $100,000$ users who have announced their pregnancy on Twitter BIBREF4. This sample is based on related work for detecting users who have mentioned various pregnancy outcomes in their tweets. Two professional annotators manually categorized the $10,417$ users as \"bot,\" \"non-bot,\" or \"unavailable,\" based on their publicly available Twitter sites. Users were annotated broadly as \"bot\" if, in contrast to users annotated as \"non-bot,\" they do not appear to be posting personal information. Users were annotated as \"unavailable\" if their Twitter sites could not be viewed at the time of annotation, due to modifying their privacy settings or being removed or suspended from Twitter. Based on 1000 overlapping annotations, their inter-annotator agreement (IAA) was $\\kappa $ = $0.93$ (Cohen’s kappa BIBREF21), considered \"almost perfect agreement\" BIBREF22. Their IAA does not include disagreements resulting from the change of a user's status to or from \"unavailable\" in the time between the first and second annotations. Upon resolving the disagreements, 413 $(4\\%)$ users were annotated as \"bot,\" 7849 $(75.35\\%)$ as \"non-bot,\" and $20.69$ $(19.9\\%)$ as \"unavailable\"." 
], "extractive_spans": [ "413 $(4\\%)$ users were annotated as \"bot,\" 7849 $(75.35\\%)$ as \"non-bot,\" and $20.69$ $(19.9\\%)$ as \"unavailable\"" ], "free_form_answer": "", "highlighted_evidence": [ "Upon resolving the disagreements, 413 $(4\\%)$ users were annotated as \"bot,\" 7849 $(75.35\\%)$ as \"non-bot,\" and $20.69$ $(19.9\\%)$ as \"unavailable\"." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We used the 8262 \"bot\" and \"non-bot\" users in experiments to train and evaluate three classification systems. We split the users into $80\\%$ (training) and $20\\%$ (test) sets, stratified based on the distribution of \"bot\" and \"non-bot\" users. The training set includes $61,160,686$ tweets posted by 6610 users, and the held-out test set includes $15,703,735$ tweets posted by 1652 users. First, we evaluated Botometer on our held-out test set. Botometer is a publicly available bot detection system designed for political dot detection. It outputs a score between 0 and 1 for a user, representing the likelihood that a user is a bot. Second, we used the Botometer score for each user as a feature in training a gradient boosting classifier which is a decision tree-based ensemble machine learning algorithm with gradient boosting BIBREF23 and can be used to address class imbalance. To adapt the Botometer scores to our binary classification task, we set the threshold to $0.47$, based on performing 5-fold cross validation over the training set. To further address the class imbalance, we used the Synthetic Minority Over-sampling Technique (SMOTE)BIBREF24 to create artificial instances of \"bot\" users in the training set. We also performed 5-fold cross validation over the training set to optimize parameters for the classifier; we used exponential as the loss function, set the number of estimators to 200, and set the learning rate to $0.1$. Third, we used the classifier with an extended set of features that are not used by Botometer. Based on our manual annotation, we consider the following features to be potentially informative for distinguishing \"bot\" and \"non-bot\" users in health-related data:", "Tweet Diversity. Considering that \"bot\" users may re-post the same tweets, we used the ratio of a user's unique tweets to the total number of tweets posted by the user, where 0 indicates that the user has posted only the same tweet multiple times, and 1 indicates that each tweet is unique and has been posted only once. As Figure 1 illustrates, a subset of \"bot\" users (in the training set) have posted more of the same tweets than \"non-bot\" users.", "URL score. During manual annotation, we found that \"bot\" users' tweets frequently contain URLs (e.g., advertisements for health-related products, such as medications), so we use the ratio of the number of a user's tweets containing a URL to the total number of tweets posted by the user.", "Mean Daily Posts. Considering that \"bot\" users may post tweets more frequently than \"non-bot\" users, we measured the average and standard deviation of the number of tweets posted daily by a user. As Figure 1 illustrates, a subset of \"bot\" users post, on average, more tweets daily than \"non-bot\" users.", "Topics. Considering that \"bot\" users may post tweets about a limited number of targeted topics, we used topic modeling to the measure the heterogeneity of topics in a user's tweets. 
We used Latent Dirichlet Allocation (LDA)BIBREF25 to extract the top five topics from all of the users' 1000 most recent tweets (or all the tweets if a user has posted less than 1000 tweets), and used the mean of the weights of each topic across all of a user's tweets.", "Mean Post Length. Considering that the length of tweets may be different between \"bot\" and \"non-bot\" users, we used the mean word length and standard deviation of a user's tweets.", "Profile Picture. In addition to tweet-related features, we used features based on information in users' profiles. Considering that a \"non-bot\" user's profile picture may be more likely to contain a face, we used a publicly available system to detect the number of faces in a profile picture. As Figure 2, illustrates a face was not detected in the profile picture of the majority of \"non-bot\" users (in the training set), whereas at least one face was detected in the profile picture of the majority of \"bot\" users.", "User Name. Finally, we used a publicly available lexicon to detect the presence or absence of a person's name in a user name. As Figure 2 illustrates, the name of a person is present (1) in approximately half of \"non-bot\" user names, whereas the name of a person is absent (0) in the majority of \"bot\" user names." ], "extractive_spans": [ "Tweet Diversity", "URL score", "Mean Daily Posts", "Topics", "Mean Post Length", "Profile Picture" ], "free_form_answer": "", "highlighted_evidence": [ "Third, we used the classifier with an extended set of features that are not used by Botometer. Based on our manual annotation, we consider the following features to be potentially informative for distinguishing \"bot\" and \"non-bot\" users in health-related data:\n\nTweet Diversity. Considering that \"bot\" users may re-post the same tweets, we used the ratio of a user's unique tweets to the total number of tweets posted by the user, where 0 indicates that the user has posted only the same tweet multiple times, and 1 indicates that each tweet is unique and has been posted only once. As Figure 1 illustrates, a subset of \"bot\" users (in the training set) have posted more of the same tweets than \"non-bot\" users.\n\nURL score. During manual annotation, we found that \"bot\" users' tweets frequently contain URLs (e.g., advertisements for health-related products, such as medications), so we use the ratio of the number of a user's tweets containing a URL to the total number of tweets posted by the user.\n\nMean Daily Posts. Considering that \"bot\" users may post tweets more frequently than \"non-bot\" users, we measured the average and standard deviation of the number of tweets posted daily by a user. As Figure 1 illustrates, a subset of \"bot\" users post, on average, more tweets daily than \"non-bot\" users.\n\nTopics. Considering that \"bot\" users may post tweets about a limited number of targeted topics, we used topic modeling to the measure the heterogeneity of topics in a user's tweets. We used Latent Dirichlet Allocation (LDA)BIBREF25 to extract the top five topics from all of the users' 1000 most recent tweets (or all the tweets if a user has posted less than 1000 tweets), and used the mean of the weights of each topic across all of a user's tweets.\n\nMean Post Length. Considering that the length of tweets may be different between \"bot\" and \"non-bot\" users, we used the mean word length and standard deviation of a user's tweets.\n\nProfile Picture. 
In addition to tweet-related features, we used features based on information in users' profiles. Considering that a \"non-bot\" user's profile picture may be more likely to contain a face, we used a publicly available system to detect the number of faces in a profile picture. As Figure 2, illustrates a face was not detected in the profile picture of the majority of \"non-bot\" users (in the training set), whereas at least one face was detected in the profile picture of the majority of \"bot\" users.\n\nUser Name. Finally, we used a publicly available lexicon to detect the presence or absence of a person's name in a user name. As Figure 2 illustrates, the name of a person is present (1) in approximately half of \"non-bot\" user names, whereas the name of a person is absent (0) in the majority of \"bot\" user names." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To identify bots in health-related social media data, we retrieved a sample of $10,417$ users from a database containing more than 400 million publicly available tweets posted by more than $100,000$ users who have announced their pregnancy on Twitter BIBREF4. This sample is based on related work for detecting users who have mentioned various pregnancy outcomes in their tweets. Two professional annotators manually categorized the $10,417$ users as \"bot,\" \"non-bot,\" or \"unavailable,\" based on their publicly available Twitter sites. Users were annotated broadly as \"bot\" if, in contrast to users annotated as \"non-bot,\" they do not appear to be posting personal information. Users were annotated as \"unavailable\" if their Twitter sites could not be viewed at the time of annotation, due to modifying their privacy settings or being removed or suspended from Twitter. Based on 1000 overlapping annotations, their inter-annotator agreement (IAA) was $\\kappa $ = $0.93$ (Cohen’s kappa BIBREF21), considered \"almost perfect agreement\" BIBREF22. Their IAA does not include disagreements resulting from the change of a user's status to or from \"unavailable\" in the time between the first and second annotations. Upon resolving the disagreements, 413 $(4\\%)$ users were annotated as \"bot,\" 7849 $(75.35\\%)$ as \"non-bot,\" and $20.69$ $(19.9\\%)$ as \"unavailable\"." ], "extractive_spans": [ "a sample of $10,417$ users from a database containing more than 400 million publicly available tweets posted by more than $100,000$ users who have announced their pregnancy on Twitter", "Two professional annotators manually categorized the $10,417$ users as \"bot,\" \"non-bot,\" or \"unavailable,\" based on their publicly available Twitter sites", "Users were annotated broadly as \"bot\" if, in contrast to users annotated as \"non-bot,\" they do not appear to be posting personal information", " Users were annotated as \"unavailable\" if their Twitter sites could not be viewed at the time of annotation" ], "free_form_answer": "", "highlighted_evidence": [ "To identify bots in health-related social media data, we retrieved a sample of $10,417$ users from a database containing more than 400 million publicly available tweets posted by more than $100,000$ users who have announced their pregnancy on Twitter BIBREF4. This sample is based on related work for detecting users who have mentioned various pregnancy outcomes in their tweets. Two professional annotators manually categorized the $10,417$ users as \"bot,\" \"non-bot,\" or \"unavailable,\" based on their publicly available Twitter sites. 
Users were annotated broadly as \"bot\" if, in contrast to users annotated as \"non-bot,\" they do not appear to be posting personal information. Users were annotated as \"unavailable\" if their Twitter sites could not be viewed at the time of annotation, due to modifying their privacy settings or being removed or suspended from Twitter." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "1152cb5acd11aefcd7e46d8b09ec128aa293db96", "4841892ef5ff1d6eb84d8be5737a64c3e1664436", "d2125da5370f486d0bb64f7866d947006b270830" ], "answer": [ { "evidence": [ "We used the 8262 \"bot\" and \"non-bot\" users in experiments to train and evaluate three classification systems. We split the users into $80\\%$ (training) and $20\\%$ (test) sets, stratified based on the distribution of \"bot\" and \"non-bot\" users. The training set includes $61,160,686$ tweets posted by 6610 users, and the held-out test set includes $15,703,735$ tweets posted by 1652 users. First, we evaluated Botometer on our held-out test set. Botometer is a publicly available bot detection system designed for political dot detection. It outputs a score between 0 and 1 for a user, representing the likelihood that a user is a bot. Second, we used the Botometer score for each user as a feature in training a gradient boosting classifier which is a decision tree-based ensemble machine learning algorithm with gradient boosting BIBREF23 and can be used to address class imbalance. To adapt the Botometer scores to our binary classification task, we set the threshold to $0.47$, based on performing 5-fold cross validation over the training set. To further address the class imbalance, we used the Synthetic Minority Over-sampling Technique (SMOTE)BIBREF24 to create artificial instances of \"bot\" users in the training set. We also performed 5-fold cross validation over the training set to optimize parameters for the classifier; we used exponential as the loss function, set the number of estimators to 200, and set the learning rate to $0.1$. Third, we used the classifier with an extended set of features that are not used by Botometer. Based on our manual annotation, we consider the following features to be potentially informative for distinguishing \"bot\" and \"non-bot\" users in health-related data:" ], "extractive_spans": [], "free_form_answer": "An existing bot detection score for each user can be used as a feature in training", "highlighted_evidence": [ "First, we evaluated Botometer on our held-out test set. Botometer is a publicly available bot detection system designed for political dot detection. It outputs a score between 0 and 1 for a user, representing the likelihood that a user is a bot. Second, we used the Botometer score for each user as a feature in training a gradient boosting classifier which is a decision tree-based ensemble machine learning algorithm with gradient boosting BIBREF23 and can be used to address class imbalance. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "As the use of social networks, such as Twitter, in health research is increasing, there is a growing need to validate the credibility of the data prior to making conclusions. The presence of bots in social media presents a crucial problem, particularly because bots may be customized to perpetuate specific biased or false information, or to execute advertising or marketing goals. 
We demonstrate that, while existing systems have been successful in detecting bots in other domains, they do not perform as well for detecting health-related bots. Using a machine learning algorithm on top of an existing bot detection system, and a set of simple derived features, we were able to significantly improve bot detection performance in health-related data. Introducing more features would likely contribute to further improving performance, which we will explore in future work." ], "extractive_spans": [ "Using a machine learning algorithm on top of an existing bot detection system, and a set of simple derived features, we were able to significantly improve bot detection performance in health-related data. " ], "free_form_answer": "", "highlighted_evidence": [ "As the use of social networks, such as Twitter, in health research is increasing, there is a growing need to validate the credibility of the data prior to making conclusions. The presence of bots in social media presents a crucial problem, particularly because bots may be customized to perpetuate specific biased or false information, or to execute advertising or marketing goals. We demonstrate that, while existing systems have been successful in detecting bots in other domains, they do not perform as well for detecting health-related bots. Using a machine learning algorithm on top of an existing bot detection system, and a set of simple derived features, we were able to significantly improve bot detection performance in health-related data. Introducing more features would likely contribute to further improving performance, which we will explore in future work." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "As the use of social networks, such as Twitter, in health research is increasing, there is a growing need to validate the credibility of the data prior to making conclusions. The presence of bots in social media presents a crucial problem, particularly because bots may be customized to perpetuate specific biased or false information, or to execute advertising or marketing goals. We demonstrate that, while existing systems have been successful in detecting bots in other domains, they do not perform as well for detecting health-related bots. Using a machine learning algorithm on top of an existing bot detection system, and a set of simple derived features, we were able to significantly improve bot detection performance in health-related data. Introducing more features would likely contribute to further improving performance, which we will explore in future work." ], "extractive_spans": [ "simple derived features, we were able to significantly improve bot detection performance in health-related data" ], "free_form_answer": "", "highlighted_evidence": [ "Using a machine learning algorithm on top of an existing bot detection system, and a set of simple derived features, we were able to significantly improve bot detection performance in health-related data. Introducing more features would likely contribute to further improving performance, which we will explore in future work." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "6cf8a1a253cf7feb20c87da4b279ef9363d8a9a3", "8f62a43336ed07495794b31fad52922fa4999d9c", "9c4b7dcb27e7404fc35205d7ecd8b00eadf26295" ], "answer": [ { "evidence": [ "In recent years, social media has evolved into an important source of information for various types of health-related research. Social networks encapsulate large volumes of data associated with diverse health topics, generated by active user bases in continuous growth. Twitter, for example, has 330 million monthly active users worldwide that generate almost 500 million micro-blogs (tweets) per day. For some years, the use of the platform to share personal health information has been growing, particularly amongst people living with one or more chronic conditions and those living with disability. Twenty percent of social network site users living with chronic conditions gather and share health information on the sites, compared with 12% of social network site users who report no chronic conditions. Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. Twitter is particularly popular in research due to the availability of the public streaming API, which releases a sample of publicly posted data in real time. While early health-related research from social media focused almost exclusively on population-level studies, some very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women BIBREF4." ], "extractive_spans": [ "Social networks encapsulate large volumes of data associated with diverse health topics, generated by active user bases in continuous growth.", " Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. " ], "free_form_answer": "", "highlighted_evidence": [ "In recent years, social media has evolved into an important source of information for various types of health-related research. Social networks encapsulate large volumes of data associated with diverse health topics, generated by active user bases in continuous growth. Twitter, for example, has 330 million monthly active users worldwide that generate almost 500 million micro-blogs (tweets) per day. For some years, the use of the platform to share personal health information has been growing, particularly amongst people living with one or more chronic conditions and those living with disability. Twenty percent of social network site users living with chronic conditions gather and share health information on the sites, compared with 12% of social network site users who report no chronic conditions. Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. 
Twitter is particularly popular in research due to the availability of the public streaming API, which releases a sample of publicly posted data in real time. While early health-related research from social media focused almost exclusively on population-level studies, some very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women BIBREF4." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In recent years, social media has evolved into an important source of information for various types of health-related research. Social networks encapsulate large volumes of data associated with diverse health topics, generated by active user bases in continuous growth. Twitter, for example, has 330 million monthly active users worldwide that generate almost 500 million micro-blogs (tweets) per day. For some years, the use of the platform to share personal health information has been growing, particularly amongst people living with one or more chronic conditions and those living with disability. Twenty percent of social network site users living with chronic conditions gather and share health information on the sites, compared with 12% of social network site users who report no chronic conditions. Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. Twitter is particularly popular in research due to the availability of the public streaming API, which releases a sample of publicly posted data in real time. While early health-related research from social media focused almost exclusively on population-level studies, some very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women BIBREF4." ], "extractive_spans": [ " drug reaction detection", "syndromic surveillance", "subject recruitment for cancer trials", "characterizing drug abuse" ], "free_form_answer": "", "highlighted_evidence": [ " Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In recent years, social media has evolved into an important source of information for various types of health-related research. Social networks encapsulate large volumes of data associated with diverse health topics, generated by active user bases in continuous growth. Twitter, for example, has 330 million monthly active users worldwide that generate almost 500 million micro-blogs (tweets) per day. For some years, the use of the platform to share personal health information has been growing, particularly amongst people living with one or more chronic conditions and those living with disability. Twenty percent of social network site users living with chronic conditions gather and share health information on the sites, compared with 12% of social network site users who report no chronic conditions. 
Social media data is thus being widely used for health-related research, for tasks such as adverse drug reaction detection BIBREF0, syndromic surveillance BIBREF1, subject recruitment for cancer trials BIBREF2, and characterizing drug abuse BIBREF3, to name a few. Twitter is particularly popular in research due to the availability of the public streaming API, which releases a sample of publicly posted data in real time. While early health-related research from social media focused almost exclusively on population-level studies, some very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women BIBREF4." ], "extractive_spans": [ "almost exclusively on population-level studies", "very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women" ], "free_form_answer": "", "highlighted_evidence": [ "While early health-related research from social media focused almost exclusively on population-level studies, some very recent research tasks have focused on performing longitudinal data analysis at the user level, such as mining health-related information from cohorts of pregnant women BIBREF4." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Do the authors report results on only English datasets?", "What are the characteristics of the dataset of Twitter users?", "How can an existing bot detection system by customized for health-related research?", "What type of health-related research takes place in social media?" ], "question_id": [ "4379a3ece3fdb93b71db43f62833f5f724c49842", "0abc2499195185c94837e0340d00cd3b83ee795e", "138ad61b43c85d5db166ea9bd3d3b19bb2e2bbfb", "7e906dc00e92088a25df3719104d1750e5a27485" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "twitter", "twitter", "twitter", "twitter" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: The distribution of features for ”bot” and ”non-bot” users in the training set.", "Figure 2: The distribution of faces detected in profile pictures and names detected in user names for ”bot” and ”nonbot” users in the training set.", "Table 1: Precision, recall, and F1-score for three bot detection systems evaluated on a held-out test set of 1652 users. Precision, recall, and F1-scores are reported for the ”non-bot” class (NB), the ”bot” class (B), and an average of the two classes (avg.)." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "4-Table1-1.png" ] }
[ "How can an existing bot detection system by customized for health-related research?" ]
[ [ "1909.13184-Conclusion-0", "1909.13184-Methods ::: Classification-0" ] ]
[ "An existing bot detection score for each user can be used as a feature in training" ]
55
1906.11085
Enhancing PIO Element Detection in Medical Text Using Contextualized Embedding
In this paper, we investigate a new approach to Population, Intervention and Outcome (PIO) element detection, a common task in Evidence Based Medicine (EBM). The purpose of this study is two-fold: to build a training dataset for PIO element detection with minimum redundancy and ambiguity and to investigate possible options in utilizing state of the art embedding methods for the task of PIO element detection. For the former purpose, we build a new and improved dataset by investigating the shortcomings of previously released datasets. For the latter purpose, we leverage the state of the art text embedding, Bidirectional Encoder Representations from Transformers (BERT), and build a multi-label classifier. We show that choosing a domain specific pre-trained embedding further optimizes the performance of the classifier. Furthermore, we show that the model could be enhanced by using ensemble methods and boosting techniques provided that features are adequately chosen.
{ "paragraphs": [ [ "Evidence-based medicine (EBM) is of primary importance in the medical field. Its goal is to present statistical analyses of issues of clinical focus based on retrieving and analyzing numerous papers in the medical literature BIBREF0 . The PubMed database is one of the most commonly used databases in EBM BIBREF1 .", "Biomedical papers, describing randomized controlled trials in medical intervention, are published at a high rate every year. The volume of these publications makes it very challenging for physicians to find the best medical intervention for a given patient group and condition BIBREF2 . Computational methods and natural language processing (NLP) could be adopted in order to expedite the process of biomedical evidence synthesis. Specifically, NLP tasks applied to well structured documents and queries can help physicians extract appropriate information to identify the best available evidence in the context of medical treatment.", "Clinical questions are formed using the PIO framework, where clinical issues are broken down into four components: Population/Problem (P), Intervention (I), Comparator (C), and Outcome (O). We will refer to these categories as PIO elements, by using the common practice of merging the C and I categories. In BIBREF3 a literature screening performed in 10 systematic reviews was studied. It was found that using the PIO framework can significantly improve literature screening efficacy. Therefore, efficient extraction of PIO elements is a key feature of many EBM applications and could be thought of as a multi-label sentence classification problem.", "Previous works on PIO element extraction focused on classical NLP methods, such as Naive Bayes (NB), Support Vector Machines (SVM) and Conditional Random Fields (CRF) BIBREF4 , BIBREF5 . These models are shallow and limited in terms of modeling capacity. Furthermore, most of these classifiers are trained to extract PIO elements one by one which is sub-optimal since this approach does not allow the use of shared structure among the individual classifiers.", "Deep neural network models have increased in popularity in the field of NLP. They have pushed the state of the art of text representation and information retrieval. More specifically, these techniques enhanced NLP algorithms through the use of contextualized text embeddings at word, sentence, and paragraph levels BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .", "More recently, jin2018pico proposed a bidirectional long short term memory (LSTM) model to simultaneously extract PIO components from PubMed abstracts. To our knowledge, that study was the first in which a deep learning framework was used to extract PIO elements from PubMed abstracts.", "In the present paper, we build a dataset of PIO elements by improving the methodology found in BIBREF12 . Furthermore, we built a multi-label PIO classifier, along with a boosting framework, based on the state of the art text embedding, BERT. This embedding model has been proven to offer a better contextualization compared to a bidirectional LSTM model BIBREF9 ." ], [ "In this study, we introduce PICONET, a multi-label dataset consisting of sequences with labels Population/Problem (P), Intervention (I), and Outcome (O). This dataset was created by collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories. 
The present approach is an improvement over a similar approach used in BIBREF12 .", "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative.", "Since our goal was to collect sequences that are uniquely representative of a description of Population, Intervention, and Outcome, we avoided a keyword-based approach such as in BIBREF12 . For example, using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label. Thus, we were able to extract portions of abstracts pertaining to P, I, and O categories while minimizing ambiguity and redundancy. Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset. We avoided this approach and kept the full abstract sections. The full abstracts were kept in conjunction with our belief that keeping the full section retains more feature-rich sequences for each sequence, and that individual sentences from long abstract sections can be poor candidates for the corresponding label.", "For sections with labels such as population and intervention, we created a multi-label. We also included negative examples by taking sentences from sections with headings such as aim. Furthermore, we cleaned the remaining data with various approaches including, but not limited to, language identification, removal of missing values, cleaning unicode characters, and filtering for sequences between 5 and 200 words, inclusive." ], [ "BERT (Bidirectional Encoder Representations from Transformers) is a deep bidirectional attention text embedding model. The idea behind this model is to pre-train a bidirectional representation by jointly conditioning on both left and right contexts in all layers using a transformer BIBREF13 , BIBREF9 . Like any other language model, BERT can be pre-trained on different contexts. A contextualized representation is generally optimized for downstream NLP tasks.", "Since its release, BERT has been pre-trained on a multitude of corpora. In the following, we describe different BERT embedding versions used for our classification problem. The first version is based on the original BERT release BIBREF9 . This model is pre-trained on the BooksCorpus (800M words) BIBREF14 and English Wikipedia (2,500M words). 
For Wikipedia, text passages were extracted while lists were ignored. The second version is BioBERT BIBREF15 , which was trained on biomedical corpora: PubMed (4.5B words) and PMC (13.5B words)." ], [ "The classification model is built on top of the BERT representation by adding a dense layer corresponding to the multi-label classifier with three output neurons corresponding to PIO labels. In order to ensure that independent probabilities are assigned to the labels, as a loss function we have chosen the binary cross entropy with logits (BCEWithLogits) defined by DISPLAYFORM0", "where t and y are the target and output vectors, respectively; n is the number of independent targets (n=3). The outputs are computed by applying the logistic function to the weighted sums of the last hidden layer activations, s, DISPLAYFORM0 DISPLAYFORM1", "For the original BERT model, we have chosen the smallest uncased model, Bert-Base. The model has 12 attention layers and all texts are converted to lowercase by the tokenizer BIBREF9 . The architecture of the model is illustrated in Figure FIGREF7 .", "Using this framework, we trained the model using the two pretrained embedding models described in the previous section. It is worth mentioning that the embedding is contextualized during the training phase. For both models, the pretrained embedding layer is frozen during the first epoch (the embedding vectors are not updated). After the first epoch, the embedding layer is unfrozen and the vectors are fine-tuned for the classification task during training. The advantage of this approach is that few parameters need to be learned from scratch BIBREF16 , BIBREF11 , BIBREF9 ." ], [ "In order to quantify the performance of the classification model, we computed the precision and recall scores. On average, it was found that the model leads to better results when trained using the BioBERT embedding. In addition, the performance of the PIO classifier was measured by averaging the three Area Under Receiver Operating Characteristic Curve (ROC_AUC) scores for P, I, and O. The ROC_AUC score of 0.9951 was obtained by the model using the general BERT embedding. This score was improved to 0.9971 when using the BioBERT model pre-trained on medical context. The results are illustrated in Figure FIGREF9 ." ], [ "We further applied ensemble methods to enhance the model. This approach consists of combining predictions from base classifiers with features of the input data to increase the accuracy of the model BIBREF17 .", "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees. In this study, we use a library implemented by Microsoft BIBREF18 . In our model, we learn a linear combination of the prediction given by the base classifiers and the input text features to predict the labels. As features, we consider the average term frequency-inverse document frequency (TF-IDF) score for each instance and the frequency of occurrence of quantitative information elements (QIEF) (e.g. percentage, population, dose of medicine). 
Finally, the output of the binary cross entropy with logits layer (predicted probabilities for the three classes) and the feature information are fed to the LGBM.", "We train the base classifier using the original training dataset, using INLINEFORM0 of the whole data as training dataset, and use a five-fold cross-validation framework to train the LGBM on the remaining INLINEFORM1 of the data to avoid any information leakage. We train the LGBM on four folds and test on the excluded one and repeat the process for all five folds.", "The results of the LGBM classifier for the different boosting frameworks and the scores from the base classifiers are illustrated in Table TABREF14 . The highest average ROC_AUC score of 0.9998 is obtained in the case of combining the two base learners along with the TF-IDF and QIEF features." ], [ "In this paper, we presented an improved methodology to extract PIO elements, with reduced ambiguity, from abstracts of medical papers. The proposed technique was used to build a dataset of PIO elements that we call PICONET. We further proposed a model of PIO elements classification using state of the art BERT embedding. It has been shown that using the contextualized BioBERT embedding improved the accuracy of the classifier. This result reinforces the idea of the importance of embedding contextualization in subsequent classification tasks in this specific context.", "In order to enhance the accuracy of the model, we investigated an ensemble method based on the LGBM algorithm. We trained the LGBM model, with the above models as base learners, to optimize the classification by learning a linear combination of the predicted probabilities, for the three classes, with the TF-IDF and QIEF scores. The results indicate that these text features were adequate for boosting the contextualized classification model. We compared the performance of the classifier when using the features with one of the base learners and the case where we combine the base learners along with the features. We obtained the best performance in the latter case.", "The present work resulted in the creation of a PIO elements dataset, PICONET, and a classification tool. These constitute an important component of our system of automatic mining of medical abstracts. We intend to extend the dataset to full medical articles. The model will be modified to take into account the higher complexity of full text data and more efficient features for model boosting will be investigated." ] ], "section_name": [ "Introduction", "Datasets", "Background", "The Model", "Performance Comparison", "Model Boosting ", "Discussion and Conclusion" ] }
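The multi-label model described in "The Model" section above (a BERT encoder with a three-neuron dense output layer, trained with a binary cross entropy with logits loss so that P, I and O receive independent probabilities, and with the pretrained embedding frozen for the first epoch) might be set up roughly as follows. This is a hedged sketch, not the authors' implementation: the checkpoint name, the use of the [CLS] vector as the sequence summary, and the example input are assumptions; a BioBERT checkpoint could be swapped in by changing the model name. The DISPLAYFORM0 placeholder above corresponds to the standard binary cross entropy with logits applied per label.

```python
# Minimal sketch of the PIO multi-label head and loss described above; the
# checkpoint name, pooling choice and example batch are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class PIOClassifier(nn.Module):
    def __init__(self, checkpoint="bert-base-uncased", n_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)  # or a BioBERT checkpoint
        self.head = nn.Linear(self.encoder.config.hidden_size, n_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0])  # one logit per P/I/O label


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PIOClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # sigmoid + binary cross entropy, per label

# Freeze the pretrained encoder for the first epoch, as described above;
# setting requires_grad back to True afterwards unfreezes it for fine-tuning.
for p in model.encoder.parameters():
    p.requires_grad = False

batch = tokenizer(["Patients with type 2 diabetes were randomised to metformin."],
                  return_tensors="pt", padding=True, truncation=True)
target = torch.tensor([[1.0, 1.0, 0.0]])  # sequence mentions P and I but not O
logits = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logits, target)
probs = torch.sigmoid(logits)  # independent probabilities for P, I and O
```

A second-level LGBM model, as described in the boosting paragraphs, would then take these three predicted probabilities together with the per-sequence TF-IDF and QIEF features as its inputs, trained on a split held out from the base classifier's training data to avoid leakage.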
{ "answers": [ { "annotation_id": [ "1923f34f957352c90e9aa26eefe1b1299ab58121", "863dc2194c86e3a786a38819fc3cb96df5fe0565", "cfd5caff8b3e463423fa306493380bbe850600e5" ], "answer": [ { "evidence": [ "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees. In this study, we use a library implemented by Microsoft BIBREF18 . In our model, we learn a linear combination of the prediction given by the base classifiers and the input text features to predict the labels. As features, we consider the average term frequency-inverse document frequency (TF-IDF) score for each instance and the frequency of occurrence of quantitative information elements (QIEF) (e.g. percentage, population, dose of medicine). Finally, the output of the binary cross entropy with logits layer (predicted probabilities for the three classes) and the feature information are fed to the LGBM." ], "extractive_spans": [ "Light Gradient Boosting Machine (LGBM)" ], "free_form_answer": "", "highlighted_evidence": [ "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees. In this study, we use a library implemented by Microsoft BIBREF18 . In our model, we learn a linear combination of the prediction given by the base classifiers and the input text features to predict the labels. As features, we consider the average term frequency-inverse document frequency (TF-IDF) score for each instance and the frequency of occurrence of quantitative information elements (QIEF) (e.g. percentage, population, dose of medicine). Finally, the output of the binary cross entropy with logits layer (predicted probabilities for the three classes) and the feature information are fed to the LGBM." ], "extractive_spans": [ "Light Gradient Boosting Machine" ], "free_form_answer": "", "highlighted_evidence": [ "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees. In this study, we use a library implemented by Microsoft BIBREF18 . In our model, we learn a linear combination of the prediction given by the base classifiers and the input text features to predict the labels. As features, we consider the average term frequency-inverse document frequency (TF-IDF) score for each instance and the frequency of occurrence of quantitative information elements (QIEF) (e.g. percentage, population, dose of medicine). Finally, the output of the binary cross entropy with logits layer (predicted probabilities for the three classes) and the feature information are fed to the LGBM." 
], "extractive_spans": [ "Light Gradient Boosting Machine" ], "free_form_answer": "", "highlighted_evidence": [ "We investigate an important family of ensemble methods known as boosting, and more specifically a Light Gradient Boosting Machine (LGBM) algorithm, which consists of an implementation of fast gradient boosting on decision trees. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "6e660fe316bc435e37530616f97fd44470bd74fd", "b4a7bf87d0696bc1b86b5eb48ee693be5d917cb4", "e44308c1fb911478b34790669969fc2d25f563c9" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "Since its release, BERT has been pre-trained on a multitude of corpora. In the following, we describe different BERT embedding versions used for our classification problem. The first version is based on the original BERT release BIBREF9 . This model is pre-trained on the BooksCorpus (800M words) BIBREF14 and English Wikipedia (2,500M words). For Wikipedia, text passages were extracted while lists were ignored. The second version is BioBERT BIBREF15 , which was trained on biomedical corpora: PubMed (4.5B words) and PMC (13.5B words)." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In the following, we describe different BERT embedding versions used for our classification problem." ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "0ccccec4008ea4768cd7d2e46703faa0bc2ce202", "1a4c002c7e1c8fa73b984af5682b7d80034ba360", "227eb89761d2a94da3a20e2ab81a2cb43441ad00" ], "answer": [ { "evidence": [ "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." ], "extractive_spans": [ "363,078 structured abstracts" ], "free_form_answer": "", "highlighted_evidence": [ "We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English)." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." ], "extractive_spans": [ "363,078" ], "free_form_answer": "", "highlighted_evidence": [ "We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English)." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "62f3c61d258f7fc56183a112d1bf0ac4d70a0894", "88362601c16f82a6e72c7041d51927a2d3d4b82b", "bf5cb5af8f4511580e3c068e74ef2117eb89012b" ], "answer": [ { "evidence": [ "In this study, we introduce PICONET, a multi-label dataset consisting of sequences with labels Population/Problem (P), Intervention (I), and Outcome (O). This dataset was created by collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories. The present approach is an improvement over a similar approach used in BIBREF12 ." ], "extractive_spans": [], "free_form_answer": "The new dataset was collected from structured abstracts from PubMed and filtering abstract headings representative of the desired categories.", "highlighted_evidence": [ "This dataset was created by collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this study, we introduce PICONET, a multi-label dataset consisting of sequences with labels Population/Problem (P), Intervention (I), and Outcome (O). This dataset was created by collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories. The present approach is an improvement over a similar approach used in BIBREF12 ." ], "extractive_spans": [ "collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories" ], "free_form_answer": "", "highlighted_evidence": [ "This dataset was created by collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories. 
The present approach is an improvement over a similar approach used in BIBREF12 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." ], "extractive_spans": [], "free_form_answer": "By searching for structured abstracts on PubMed using specific filters.", "highlighted_evidence": [ "We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "810e4e9caffc2ebbc7ba11b4c07323234ef63814", "bb37c026729630900ab97d9916cddd7352a95131", "f4d9c984e5d1a5684e6af96dad256d7890bf16d0" ], "answer": [ { "evidence": [ "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." ], "extractive_spans": [], "free_form_answer": "The P, I, and O labels were automatically assigned after clustering lemmatized labels from the structured abstract sections.", "highlighted_evidence": [ "Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. 
Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." ], "extractive_spans": [ "automatic labeling", "lemmatization of the abstract section labels in order to cluster similar categories", "manually looked at a small number of samples for each label to determine if text was representative" ], "free_form_answer": "", "highlighted_evidence": [ "Our aim was to perform automatic labeling while removing as much ambiguity as possible. We performed a search on April 11, 2019 on PubMed for 363,078 structured abstracts with the following filters: Article Types (Clinical Trial), Species (Humans), and Languages (English). Structured abstract sections from PubMed have labels such as introduction, goals, study design, findings, or discussion; however, the majority of these labels are not useful for P, I, and O extraction since most are general (e.g. methods) and do not isolate a specific P, I, O sequence. Therefore, in order to narrow down abstract sections that correspond to the P label, for example, we needed to find a subset of labels such as, but not limited to population, patients, and subjects. We performed a lemmatization of the abstract section labels in order to cluster similar categories such as subject and subjects. Using this approach, we carefully chose candidate labels for each P, I, and O, and manually looked at a small number of samples for each label to determine if text was representative." 
], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The authors themselves " ], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "293c6073fc0a4c38b71452cc5a020cca8350cda6", "4fca5a32b0f35b0c8e4c49c12aa27f0c9904bfb5", "a551c2e0a9d9fd2c5b2ebeddbd57535b367466b3" ], "answer": [ { "evidence": [ "Since our goal was to collect sequences that are uniquely representative of a description of Population, Intervention, and Outcome, we avoided a keyword-based approach such as in BIBREF12 . For example, using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label. Thus, we were able to extract portions of abstracts pertaining to P, I, and O categories while minimizing ambiguity and redundancy. Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset. We avoided this approach and kept the full abstract sections. The full abstracts were kept in conjunction with our belief that keeping the full section retains more feature-rich sequences for each sequence, and that individual sentences from long abstract sections can be poor candidates for the corresponding label." ], "extractive_spans": [ "using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label.", "Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset." ], "free_form_answer": "", "highlighted_evidence": [ "Since our goal was to collect sequences that are uniquely representative of a description of Population, Intervention, and Outcome, we avoided a keyword-based approach such as in BIBREF12 . For example, using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label.", "Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset. We avoided this approach and kept the full abstract sections. The full abstracts were kept in conjunction with our belief that keeping the full section retains more feature-rich sequences for each sequence, and that individual sentences from long abstract sections can be poor candidates for the corresponding label." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this study, we introduce PICONET, a multi-label dataset consisting of sequences with labels Population/Problem (P), Intervention (I), and Outcome (O). This dataset was created by collecting structured abstracts from PubMed and carefully choosing abstract headings representative of the desired categories. 
The present approach is an improvement over a similar approach used in BIBREF12 .", "Since our goal was to collect sequences that are uniquely representative of a description of Population, Intervention, and Outcome, we avoided a keyword-based approach such as in BIBREF12 . For example, using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label. Thus, we were able to extract portions of abstracts pertaining to P, I, and O categories while minimizing ambiguity and redundancy. Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset. We avoided this approach and kept the full abstract sections. The full abstracts were kept in conjunction with our belief that keeping the full section retains more feature-rich sequences for each sequence, and that individual sentences from long abstract sections can be poor candidates for the corresponding label." ], "extractive_spans": [], "free_form_answer": "In the previous dataset a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset.", "highlighted_evidence": [ "The present approach is an improvement over a similar approach used in BIBREF12 .", "Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset. We avoided this approach and kept the full abstract sections. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Since our goal was to collect sequences that are uniquely representative of a description of Population, Intervention, and Outcome, we avoided a keyword-based approach such as in BIBREF12 . For example, using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label. Thus, we were able to extract portions of abstracts pertaining to P, I, and O categories while minimizing ambiguity and redundancy. Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset. We avoided this approach and kept the full abstract sections. The full abstracts were kept in conjunction with our belief that keeping the full section retains more feature-rich sequences for each sequence, and that individual sentences from long abstract sections can be poor candidates for the corresponding label." ], "extractive_spans": [], "free_form_answer": "Information about the intervention and study design is mistakenly marked by a P label; a P-labeled section that contained more than one sentence would be split into multiple P-labeled sentences.", "highlighted_evidence": [ "For example, using a keyword-based approach would yield a sequence labeled population and methods with the label P, but such abstract sections were not purely about the population and contained information about the interventions and study design making them poor candidates for a P label. 
", "Moreover, in the dataset from BIBREF12 , a section labeled as P that contained more than one sentence would be split into multiple P sentences to be included in the dataset." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "" ], "question": [ "what boosting techniques were used?", "did they experiment with other text embeddings?", "what is the size of this improved dataset?", "how was the new dataset collected?", "who annotated the new dataset?", "what shortcomings of previous datasets are mentioned?" ], "question_id": [ "04796aaa59eeb2176339c0651838670fd916074d", "ebb33f3871b8c2ffd2c451bc06480263b8e870e0", "afd1c482c311e25fc42b9dd59cdc32ac542f5752", "ae1c4f9e33d0cd64d9a313c318ad635620303cdd", "7066f33c373115b1ead905fe70a1e966f77ebeee", "018b81f810a39b3f437a85573d24531efccd835f" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "" ] }
{ "caption": [ "Table 1: Number of occurrences of each category P, I and O in abstracts.", "Figure 1: Structure of the classifier.", "Table 2: Performance of the classifiers in terms of ROC AUC and F1 scores.", "Figure 2: ROC AUC scores and confusion matrices.", "Figure 3: An illustration of the LGBM framework: : combining the two base models and the TF-IDF and QIEF features." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "4-Table2-1.png", "4-Figure2-1.png", "4-Figure3-1.png" ] }
[ "how was the new dataset collected?", "who annotated the new dataset?", "what shortcomings of previous datasets are mentioned?" ]
[ [ "1906.11085-Datasets-0", "1906.11085-Datasets-1" ], [ "1906.11085-Datasets-1" ], [ "1906.11085-Datasets-0", "1906.11085-Datasets-2" ] ]
[ "By searching for structured abstracts on PubMed using specific filters.", "The P, I, and O labels were automatically assigned after clustering lemmatized labels from the structured abstract sections.", "Information about the intervention and study design is mistakenly marked by a P label; a P-labeled section that contained more than one sentence would be split into multiple P-labeled sentences." ]
57
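
Records like the one ending above pair each question with its gold answer and the ids of the supporting paragraphs. The sketch below shows one way to walk those parallel fields; it assumes each record is available as a Python dict shaped like the dump, which is an assumption about the loading step rather than a documented interface.

def iter_gold_answers(row):
    """Yield (question, gold answer, supporting paragraph ids) from one record."""
    for question, answer, evidence_ids in zip(row["question"], row["answer_gt"], row["retrieval_gt"]):
        yield question, answer, evidence_ids

# Abbreviated example built from the record above.
row = {
    "question": ["how was the new dataset collected?"],
    "answer_gt": ["By searching for structured abstracts on PubMed using specific filters."],
    "retrieval_gt": [["1906.11085-Datasets-0", "1906.11085-Datasets-1"]],
}
for q, a, ev in iter_gold_answers(row):
    print(q, "->", a, "| evidence:", ev)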
1908.09892
Does BERT agree? Evaluating knowledge of structure dependence through agreement relations
Learning representations that accurately model semantics is an important goal of natural language processing research. Many semantic phenomena depend on syntactic structure. Recent work examines the extent to which state-of-the-art models for pre-training representations, such as BERT, capture such structure-dependent phenomena, but is largely restricted to one phenomenon in English: number agreement between subjects and verbs. We evaluate BERT's sensitivity to four types of structure-dependent agreement relations in a new semi-automatically curated dataset across 26 languages. We show that both the single-language and multilingual BERT models capture syntax-sensitive agreement patterns well in general, but we also highlight the specific linguistic contexts in which their performance degrades.
{ "paragraphs": [ [ "Learning general-purpose sentence representations which accurately model sentential semantic content is a current goal of natural language processing research BIBREF0, BIBREF1, BIBREF2, BIBREF3. A prominent and successful approach is to pre-train neural networks to encode sentences into fixed length vectors BIBREF4, BIBREF5, with common architecture choices based on recurrent neural networks BIBREF6, BIBREF7, convolutional neural networks, or transformers BIBREF8. Many core linguistic phenomena that one would like to model in general-purpose sentence representations depend on syntactic structure BIBREF9, BIBREF10. Despite the fact that none of the aforementioned architectures have explicit syntactic structural representations, there is some evidence that these models can approximate such structure-dependent phenomena under certain conditions BIBREF11, BIBREF12, BIBREF13, BIBREF14, in addition to their widespread success in practical tasks.", "The recently introduced BERT model BIBREF15, which is based on transformers, achieves state-of-the-art results on eleven natural language processing tasks. In this work, we assess BERT's ability to learn structure-dependent linguistic phenomena of agreement relations. To test whether BERT is sensitive to agreement relations, we use the cloze test BIBREF16, in which we mask out one of two words in an agreement relation and ask BERT to predict the masked word, one of the two tasks on which BERT is initially trained.", "BIBREF17 adapted the experimental setup of BIBREF13, BIBREF11 and BIBREF18 to use the cloze test to assess BERT's sensitivity to number agreement in English subject-verb agreement relations. The results showed that the single-language BERT model performed surprisingly well at this task (above 80% accuracy in all experiments), even when there were multiple “distractors” in the sentence (other nouns that differed from the subject in number). This suggests that BERT is actually learning to approximate structure-dependent computation, and not simply relying on flawed heuristics.", "However, English subject-verb agreement is a rather restricted phenomenon, with the majority of verbs having only two inflected forms and only one morphosyntactic feature (number) involved. To what extent does Goldberg's BIBREF17 result hold for subject-verb agreement in other languages, including more morphologically rich ones, as well as for other types of agreement relations? Building on Goldberg's BIBREF17 work, we expand the experiment to 26 languages and four types of agreement relations, which include more challenging examples.", "In Section 2, we define what is meant by agreement relations and outline the particular agreement relations under study. Section 3 introduces our newly curated cross-linguistic dataset of agreement relations, while section 4 discusses our experimental setup. We report the results of our experiments in section 5. All data and code are available at https://github.com/geoffbacon/does-bert-agree." ], [ "Agreement phenomena are an important and cross-linguistically common property of natural languages, and as such have been extensively studied in syntax and morphology BIBREF19. Languages often express grammatical features, such as number and gender, through inflectional morphology. An agreement relation is a morphophonologically overt co-variance in feature values between two words in a syntactic relationship BIBREF20. 
In other words, agreement refers to when the morphosyntactic features of one word are reflected in its syntactic dependents. In this way, agreement relations are overt markers of covert syntactic structure. Thus, evaluating a model's ability to capture agreement relations is also an evaluation of its ability to capture syntactic structure.", "Following BIBREF21, we call the syntactically dependent word the “target” of the agreement relation, and the word with which it agrees we call the “controller”. An example of an agreement relation in English is given in (UNKREF2), in which the inflected form of the verb be (are) reflects the plural number of its syntactic head keys. In all examples in this section, the controller and target are given in bold. In this example, keys is the controller and are is the target of the agreement relation.", "The keys to the door are on the table.", "The agreement relation in (UNKREF2) is between a subject and its verb, but there are other types of agreement relations. In addition to subject-verb agreement, three other types of agreement relations are cross-linguistically common: agreement of noun with i) determiner, ii) attributive adjective and iii) predicate adjective BIBREF22. The latter two types are distinguished by whether the adjective modifies the noun within a noun phrase or whether it is predicated of the subject of a clause. The first two types are sometimes categorized as nominal concord rather than agreement, but for our purposes this is merely a difference in terminology.", "The morphosyntactic feature in the agreement relation in (UNKREF2) is number, a feature that is cross-linguistically common in agreement systems. In addition to number, the most commonly involved in agreement relations are gender, case and person BIBREF22.", "With its comparatively limited inflectional morphology, English only exhibits subject-verb and determiner agreement (in demonstratives, “this” vs. “these”) and even then only agrees for number. Languages with richer inflectional morphology tend to display more agreement types and involve more features. French, for example, employs all four types of agreement relations. Examples are given in (UNKREF3)-(UNKREF6). The subject and verb in (UNKREF3) agree for number, while the noun and determiner in (UNKREF4), the noun and attributive adjective in (UNKREF5) and the subject and predicated adjective in (UNKREF6) agree for both number and gender.", "`The keys to the door are on the table.'", "`I can see the keys.'", "`I no longer want the completely broken keys.'", "`The keys to the door are broken.'", "Previous work using agreement relations to assess knowledge of syntactic structure in modern neural networks has focussed on subject-verb agreement in number BIBREF17, BIBREF11, BIBREF13. In our work, we study all four types of agreement relations and all four features discussed above. Moreover, previous work using any method to assess BERT's knowledge of syntactic structure has focussed exclusively on the single-language English model BIBREF23, BIBREF17, BIBREF24, BIBREF25, BIBREF26, BIBREF27. We expand this line of work to 26 languages. Not all languages in our sample exhibit all four types of agreement nor use all four features examined, but they all exhibit at least one of the agreement types involving at least one of the features." ], [ "Our study requires two types of data. First, we need sentences containing agreement relations. We mask out one of the words in the agreement relation and ask BERT to predict the masked word. 
We are interested in BERT's ability to predict words that respect the agreement relation, that is, words which share the morphosyntactic features of the word with which it agrees. To measure this, we need to know the feature values for each word in BERT's vocabulary. This is our second type of data. Throughout this paper, we refer to the first type of data as the cloze data, and the second as the feature data.", "In the design of our datasets, we followed two principles. First, we chose data sources that are available across multiple languages, because we are interested in cross-linguistic generality. The languages in this study are those with sufficiently large data sources that also appear in the multilingual BERT model. Second, we use naturally-occurring data (cf. BIBREF18)." ], [ "We sourced our cloze data from version 2.4 of the Universal Dependencies treebanks BIBREF28. The UD treebanks use a consistent schema across all languages to annotate naturally occurring sentences at the word level with rich grammatical information. We used the part-of-speech and dependency information to identify potential agreement relations. Specifically, we identified all instances of subject-verb, noun-determiner, noun-attributive adjective and subject-predicate adjective word pairs. We then used the morphosyntactic annotations for number, gender, case and person to filter out word pairs that disagree due to errors in the underlying data source (e.g. one is annotated as plural while the other is singular) or that are not annotated for any of the four features.", "This method is language-agnostic, but due to errors in the underlying UD corpora, yielded some false positives (e.g. predicate adjective agreement in English). To correct for this, we consulted reference grammars of each language to note which of the four types of agreement exist in the language. We removed all examples that are of the wrong type for the language (8% of harvested examples). Across the 26 languages, we curated almost one million cloze examples. Their breakdown across agreement type and language is shown in Tables 1 and 2.", "In all four types of agreement studied, the controller of the agreement is a noun or pronoun, while the target can be a determiner, adjective or verb. Because of this part-of-speech restriction, we chose to mask out the controller in every cloze example so that BERT is evaluated against the same vocabulary across all four types. This also means that we only need to collect feature data on nouns and pronouns." ], [ "Our feature data comes from both the UD and the UniMorph projects BIBREF29. The UniMorph project also uses a consistent schema across all languages to annotate word types with morphological features. Although this schema is not the same as that used in UD, there is a deterministic mapping between the two BIBREF30.", "In this work, a word can take on a particular bundle of feature values (e.g. singular, feminine and third person) if it appears with those features in either UD or UniMorph. The UniMorph data directly specifies what bundles of feature values a word can take on. For the Universal Dependencies data, we say a word can take on a particular bundle if we ever see it with that bundle of feature values in a Universal Dependencies corpus for that language. Both sources individually allow for a word to have multiple feature bundles (e.g. sheep in English can be singular or plural). In these cases, we keep all possible feature bundles. 
Finally, we filter out words that do not appear in BERT's vocabulary." ], [ "Our experiment is designed to measure BERT's ability to model syntactic structure. Our experimental set up is an adaptation of that of BIBREF17. As in previous work, we mask one word involved in an agreement relation and ask BERT to predict it. BIBREF17, following BIBREF13, considered a correct prediction to be one in which the masked word receives a higher probability than other inflected forms of the lemma. For example, when dogs is masked, a correct response gives more probability to dogs than dog. This evaluation leaves open the possibility that selectional restrictions or frequency are responsible for the results rather than sensitivity to syntactic structure BIBREF11. To remove this possibility, we take into account all words of the same part-of-speech as the masked word. Concretely, we consider a correct prediction to be one in which the average probability of all possible correct words is higher than that of all incorrect words. By “correct words”, we mean words with the exact same feature values and the same part of speech as the masked word. By “incorrect words”, we mean words of the same part of speech as the masked word but that differ from the masked word with respect to at least one feature value. We ignore cloze examples in which there are fewer than 10 possible correct and 10 incorrect answers in our feature data. The average example in our cloze data is evaluated using 1,468 words, compared with 2 in BIBREF17.", "Following BIBREF17, we use the pre-trained BERT models from the original authors, but through the PyTorch implementation. BIBREF17 showed that in his experiments the base BERT model performed better than the larger model, so we restrict our attention to the base model. For English, we use the model trained only on English data, whereas for all other languages we use the multilingual model." ], [ "Overall, BERT performs well on our experimental task, suggesting that it is able to model syntactic structure. BERT was correct in 94.3% of all cloze examples. This high performance is found across all four types of agreement relations. Figure FIGREF13 shows that BERT performed above 90% accuracy in each type. Performance is best on determiner and attributive agreement relations, while worst on subject-verb and predicate adjective.", "In figure FIGREF14, we see BERT's performance for each language. BERT performs well for the majority of languages, although some fare much worse than others. It is important to note that it is an unfair comparison because even though the datasets were curated using the same methodology, each language's dataset is different. It is possible, for example, that the examples we have for Basque are simply harder than they are for Portuguese.", "Finally, we ask how BERT's performance is affected by distance between the controller and the target, as well as the number of distractors. Figure FIGREF15 shows BERT's performance, aggregated over all languages and types, as a function of the distance involved in the agreement, while figure FIGREF16 shows the same for number of distractors. There is a slight but consistent decrease in performance as the distance and the number of distractors increase. The decline in performance begins later in figure FIGREF16 but drops more rapidly once it does." 
], [ "Given the success of large pre-trained language representation models on downstream tasks, it is not surprising that that the field wants to understand the extent of their linguistic knowledge. In our work, we looked exclusively at the predictions BERT makes at the word level. BIBREF24 and BIBREF26 examined the internal representations of BERT to find that syntactic concepts are learned at lower levels than semantic concepts. BIBREF23 are also interested in syntactic knowledge and propose a method to evaluate whether entire syntax trees are embedded in a linear transformation of a model's word representation space, finding that BERT does capture such information. As a complementary approach, BIBREF27 studied the attention mechanism of BERT, finding clear correlates with interpretable linguistic structures such as direct objects, and suggest that BERT's success is due in part to its syntactic awareness. However, by subjecting it to existing psycholinguistic tasks, BIBREF32 found that BERT fails in its ability to understand negation. In concurrent work, BIBREF33 show that BERT does not consistently outperform LSTM-based models on English subject-verb agreement tasks." ], [ "Core linguistic phenomena depend on syntactic structure. Yet current state-of-the-art models in language representations, such as BERT, do not have explicit syntactic structural representations. Previous work by BIBREF17 showed that BERT captures English subject-verb number agreement well despite this lack of explicit structural representation. We replicated this result using a different evaluation methodology that addresses shortcomings in the original methodology and expanded the study to 26 languages. Our study further broadened existing work by considering the most cross-linguistically common agreement types as well as the most common morphosyntactic features. The main result of this expansion into more languages, types and features is that BERT, without explicit syntactic structure, is still able to capture syntax-sensitive agreement patterns well. However, our analysis highlights an important qualification of this result. We showed that BERT's ability to model syntax-sensitive agreement relations decreases slightly as the dependency becomes longer range, and as the number of distractors increases. We release our new curated cross-linguistic datasets and code in the hope that it is useful to future research that may probe why this pattern appears.", "The experimental setup we used has some known limitations. First, in certain languages some of the cloze examples we studied contain redundant information. Even when one word from an agreement relation is masked out, other cues remain in the sentence (e.g. when masking out the noun for a French attributive adjective agreement relation, number information is still available from the determiner). To counter this in future work, we plan to run our experiment twice, masking out the controller and then the target. Second, we used a different evaluation scheme than previous work BIBREF17 by averaging BERT's predictions over many word types and plan to compare both schemes in future work." ] ], "section_name": [ "Introduction", "Structure-dependent agreement relations", "Data", "Data ::: Cloze data", "Data ::: Feature data", "Experiment", "Results", "Related work", "Conclusions & future work" ] }
{ "answers": [ { "annotation_id": [ "0d793a7b8ede528eb2779881524d1931954b85a1", "3c0c4c73f7b4dfca23f46ca92873f0aed856ca94", "9f08691ea8cf3467522202a5d8c20c425eae2300" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Overall, BERT performs well on our experimental task, suggesting that it is able to model syntactic structure. BERT was correct in 94.3% of all cloze examples. This high performance is found across all four types of agreement relations. Figure FIGREF13 shows that BERT performed above 90% accuracy in each type. Performance is best on determiner and attributive agreement relations, while worst on subject-verb and predicate adjective.", "In figure FIGREF14, we see BERT's performance for each language. BERT performs well for the majority of languages, although some fare much worse than others. It is important to note that it is an unfair comparison because even though the datasets were curated using the same methodology, each language's dataset is different. It is possible, for example, that the examples we have for Basque are simply harder than they are for Portuguese.", "FLOAT SELECTED: Figure 1: Accuracy per agreement type aggregated across all languages. In all four types, BERT performed above 90% accuracy. Accuracy is slightly lower for predicate adjectives and subject-verb agreement relations, which typically have longer distance dependencies. Error bars are bootstrapped 95% confidence intervals.", "FLOAT SELECTED: Figure 2: Accuracy per language aggregated across all four agreement types. In all 26 languages, BERT performs above 60% accuracy. In most languages BERT performs above 90% accuracy, although performance is significantly lower for a handful of languages. Error bars are bootstrapped 95% confidence intervals." ], "extractive_spans": [], "free_form_answer": "For some language yes, but not for another.", "highlighted_evidence": [ "Figure FIGREF13 shows that BERT performed above 90% accuracy in each type. Performance is best on determiner and attributive agreement relations, while worst on subject-verb and predicate adjective.\n\nIn figure FIGREF14, we see BERT's performance for each language. BERT performs well for the majority of languages, although some fare much worse than others.", "FLOAT SELECTED: Figure 1: Accuracy per agreement type aggregated across all languages. In all four types, BERT performed above 90% accuracy. Accuracy is slightly lower for predicate adjectives and subject-verb agreement relations, which typically have longer distance dependencies. Error bars are bootstrapped 95% confidence intervals.", "FLOAT SELECTED: Figure 2: Accuracy per language aggregated across all four agreement types. In all 26 languages, BERT performs above 60% accuracy. In most languages BERT performs above 90% accuracy, although performance is significantly lower for a handful of languages. Error bars are bootstrapped 95% confidence intervals." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "BIBREF17 adapted the experimental setup of BIBREF13, BIBREF11 and BIBREF18 to use the cloze test to assess BERT's sensitivity to number agreement in English subject-verb agreement relations. The results showed that the single-language BERT model performed surprisingly well at this task (above 80% accuracy in all experiments), even when there were multiple “distractors” in the sentence (other nouns that differed from the subject in number). 
This suggests that BERT is actually learning to approximate structure-dependent computation, and not simply relying on flawed heuristics.", "Overall, BERT performs well on our experimental task, suggesting that it is able to model syntactic structure. BERT was correct in 94.3% of all cloze examples. This high performance is found across all four types of agreement relations. Figure FIGREF13 shows that BERT performed above 90% accuracy in each type. Performance is best on determiner and attributive agreement relations, while worst on subject-verb and predicate adjective." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The results showed that the single-language BERT model performed surprisingly well at this task (above 80% accuracy in all experiments), even when there were multiple “distractors” in the sentence (other nouns that differed from the subject in number). ", "Overall, BERT performs well on our experimental task, suggesting that it is able to model syntactic structure. BERT was correct in 94.3% of all cloze examples." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "78b3096097e3ac98c1ab1b83d8bc583d699f7b6a", "be4d9736d02be63c5aaba8b9765bc82adf2c952f", "c881baf071de6075ed860daa084c667f6ed05b20" ], "answer": [ { "evidence": [ "We sourced our cloze data from version 2.4 of the Universal Dependencies treebanks BIBREF28. The UD treebanks use a consistent schema across all languages to annotate naturally occurring sentences at the word level with rich grammatical information. We used the part-of-speech and dependency information to identify potential agreement relations. Specifically, we identified all instances of subject-verb, noun-determiner, noun-attributive adjective and subject-predicate adjective word pairs. We then used the morphosyntactic annotations for number, gender, case and person to filter out word pairs that disagree due to errors in the underlying data source (e.g. one is annotated as plural while the other is singular) or that are not annotated for any of the four features." ], "extractive_spans": [ "subject-verb", "noun-determiner", "noun-attributive adjective", "subject-predicate adjective" ], "free_form_answer": "", "highlighted_evidence": [ "Specifically, we identified all instances of subject-verb, noun-determiner, noun-attributive adjective and subject-predicate adjective word pairs. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "With its comparatively limited inflectional morphology, English only exhibits subject-verb and determiner agreement (in demonstratives, “this” vs. “these”) and even then only agrees for number. Languages with richer inflectional morphology tend to display more agreement types and involve more features. French, for example, employs all four types of agreement relations. Examples are given in (UNKREF3)-(UNKREF6). 
The subject and verb in (UNKREF3) agree for number, while the noun and determiner in (UNKREF4), the noun and attributive adjective in (UNKREF5) and the subject and predicated adjective in (UNKREF6) agree for both number and gender.", "`The keys to the door are on the table.'", "`I can see the keys.'", "`I no longer want the completely broken keys.'", "`The keys to the door are broken.'" ], "extractive_spans": [ "The subject and verb in (UNKREF3) agree for number, while the noun and determiner in (UNKREF4), the noun and attributive adjective in (UNKREF5) and the subject and predicated adjective in (UNKREF6) agree for both number and gender." ], "free_form_answer": "", "highlighted_evidence": [ " The subject and verb in (UNKREF3) agree for number, while the noun and determiner in (UNKREF4), the noun and attributive adjective in (UNKREF5) and the subject and predicated adjective in (UNKREF6) agree for both number and gender.", "The subject and verb in (UNKREF3) agree for number, while the noun and determiner in (UNKREF4), the noun and attributive adjective in (UNKREF5) and the subject and predicated adjective in (UNKREF6) agree for both number and gender.\n\n`The keys to the door are on the table.'\n\n`I can see the keys.'\n\n`I no longer want the completely broken keys.'\n\n`The keys to the door are broken.'" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The agreement relation in (UNKREF2) is between a subject and its verb, but there are other types of agreement relations. In addition to subject-verb agreement, three other types of agreement relations are cross-linguistically common: agreement of noun with i) determiner, ii) attributive adjective and iii) predicate adjective BIBREF22. The latter two types are distinguished by whether the adjective modifies the noun within a noun phrase or whether it is predicated of the subject of a clause. The first two types are sometimes categorized as nominal concord rather than agreement, but for our purposes this is merely a difference in terminology.", "Previous work using agreement relations to assess knowledge of syntactic structure in modern neural networks has focussed on subject-verb agreement in number BIBREF17, BIBREF11, BIBREF13. In our work, we study all four types of agreement relations and all four features discussed above. Moreover, previous work using any method to assess BERT's knowledge of syntactic structure has focussed exclusively on the single-language English model BIBREF23, BIBREF17, BIBREF24, BIBREF25, BIBREF26, BIBREF27. We expand this line of work to 26 languages. Not all languages in our sample exhibit all four types of agreement nor use all four features examined, but they all exhibit at least one of the agreement types involving at least one of the features." ], "extractive_spans": [], "free_form_answer": "Subject-verb agreement, noun-determiner agreement, noun -attributive adjective agreement and noun-predicate adjective agreement.", "highlighted_evidence": [ " In addition to subject-verb agreement, three other types of agreement relations are cross-linguistically common: agreement of noun with i) determiner, ii) attributive adjective and iii) predicate adjective BIBREF22. ", "In our work, we study all four types of agreement relations and all four features discussed above. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "Do single-language BERT outperforms multilingual BERT?", "What types of agreement relations do they explore?" ], "question_id": [ "e2c8d7f3ef5913582503e50244ca7158d0a62c42", "654fe0109502f2ed2dc8dad359dbbce4393e03dc" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "BERT", "BERT" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Table 1: Number of cloze examples per agreement type in our new cross-linguistic dataset on agreement relations. Previous work has largely focused on subjectverb agreement in English.", "Table 2: Language statistics of our new cross-linguistic dataset on agreement relations. Most previous work has focused on English. “# cloze” is the number of cloze examples curated for each language, and “# feature bundles” is the number of unique sets of morphosyntactic features harvested for word types in BERT’s vocabulary.", "Figure 1: Accuracy per agreement type aggregated across all languages. In all four types, BERT performed above 90% accuracy. Accuracy is slightly lower for predicate adjectives and subject-verb agreement relations, which typically have longer distance dependencies. Error bars are bootstrapped 95% confidence intervals.", "Figure 3: Accuracy as a function of distance between controller and target of agreement, aggregated across all languages and agreement types. BERT is relatively robust to longer-distance dependencies but does show a small decrease as the dependency length increases. Error bars are bootstrapped 95% confidence intervals.", "Figure 2: Accuracy per language aggregated across all four agreement types. In all 26 languages, BERT performs above 60% accuracy. In most languages BERT performs above 90% accuracy, although performance is significantly lower for a handful of languages. Error bars are bootstrapped 95% confidence intervals.", "Figure 4: Accuracy as a function of number of distractors (other nouns in the sentence with different feature values), aggregated across all languages and agreement types. As with distance, BERT is quite robust to distractors although there is a more noticeable decrease in accuracy as more distractors are present. Error bars are bootstrapped 95% confidence intervals." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "5-Figure1-1.png", "5-Figure3-1.png", "5-Figure2-1.png", "6-Figure4-1.png" ] }
[ "Do single-language BERT outperforms multilingual BERT?", "What types of agreement relations do they explore?" ]
[ [ "1908.09892-Introduction-2", "1908.09892-5-Figure2-1.png", "1908.09892-Results-0", "1908.09892-5-Figure1-1.png", "1908.09892-Results-1" ], [ "1908.09892-Structure-dependent agreement relations-9", "1908.09892-Structure-dependent agreement relations-7", "1908.09892-Structure-dependent agreement relations-6", "1908.09892-Data ::: Cloze data-0", "1908.09892-Structure-dependent agreement relations-5", "1908.09892-Structure-dependent agreement relations-8", "1908.09892-Structure-dependent agreement relations-3", "1908.09892-Structure-dependent agreement relations-10" ] ]
[ "For some language yes, but not for another.", "Subject-verb agreement, noun-determiner agreement, noun -attributive adjective agreement and noun-predicate adjective agreement." ]
58
1806.03369
#SarcasmDetection is soooo general! Towards a Domain-Independent Approach for Detecting Sarcasm
Automatic sarcasm detection methods have traditionally been designed for maximum performance on a specific domain. This poses challenges for those wishing to transfer those approaches to other existing or novel domains, which may be typified by very different language characteristics. We develop a general set of features and evaluate it under different training scenarios utilizing in-domain and/or out-of-domain training data. The best-performing scenario, training on both while employing a domain adaptation step, achieves an F1 of 0.780, which is well above baseline F1-measures of 0.515 and 0.345. We also show that the approach outperforms the best results from prior work on the same target domain.
{ "paragraphs": [ [ "Sarcasm, a creative device used to communicate an intended meaning that is actually the opposite of its literal meaning, is notoriously difficult to convey and interpret through text, in part because doing so relies heavily upon shared contextual understandings that can be marked more easily by altered prosody (e.g., emphasis upon certain words) or non-verbal signals (e.g., rolling one's eyes). It is a complex process even for humans, and in fact an inability to detect sarcasm has been linked with a number of neurocognitive disorders, including dementia BIBREF0 . It is similarly a challenging open task in natural language processing, and has direct implications to a number of other critical application areas, such as sentiment analysis.", "Most research on automatic sarcasm detection to date has focused on the Twitter domain, which boasts an ample source of publicly-available data, some of which is already self-labeled by users for the presence of sarcasm (e.g., with #sarcasm). However, Twitter is highly informal, space-restricted, and subject to frequent topic fluctuations from one post to the next due to the ebb and flow of current events—in short, it is not broadly representative of most text domains. Thus, sarcasm detectors trained using features designed for maximum Twitter performance are not necessarily transferable to other domains. Despite this, it is desirable to develop approaches that can harness the more generalizable information present in the abundance of Twitter data.", "In this work, we develop a set of domain-independent features for sarcasm detection and show that the features generally perform well across text domains. Further, we validate that domain adaptation can be applied to sarcasm detection to leverage patterns in out-of-domain training data, even when results from training only on that source domain data are extremely bad (far below baseline results), to improve over training on only the target data or over training on the simply combined dataset. Finally, we make a new dataset of sarcastic and non-sarcastic tweets available online as a resource to other researchers." ], [ "The majority of work on automatic sarcasm detection has been done using Twitter, and to a smaller extent Amazon product reviews. Research outside of those domains has been scarce, but interesting. Notably, Burfoot and Baldwin Burfoot:2009:ASD:1667583.1667633 automatically detected satirical news articles using unigrams, lexical features, and semantic validity features, and Justo et al. Justo2014124 used n-gram, linguistic, and semantic features to detect the presence of sarcasm in the Internet Argument Corpus BIBREF1 . The remainder of this section describes prior work with Twitter and Amazon." ], [ "Twitter is a micro-blogging service that allows users to post short “tweets” to share content or describe their feelings or opinions in 140 characters or less. For researchers, it boasts a low cost of annotation and plentiful supply of data (users often self-label their tweets using the “#” symbol—many explicitly label their sarcastic tweets using the hashtag “#sarcasm”). A variety of approaches have been taken toward automatically detecting sarcasm on Twitter, including explicitly using the information present in a tweet's hashtag(s); Maynard and Greenwood maynard2014cares learned which hashtags characteristically corresponded with sarcastic tweets, and used the presence of those indicators to predict other sarcastic tweets, with high success. 
BIBREF2 liebrecht2013perfect detected sarcasm in Dutch tweets using unigram, bigram, and trigram features.", " BIBREF3 Rajadesingan:2015:SDT:2684822.2685316 detected sarcastic tweets based on features adapted from behavioral models of sarcasm usage, drawing extensively from individual users' Twitter histories and relying heavily on situational context and user characteristics. The system also employed lexical features and grammatical correctness as a means of modelling different aspects of the user's behavior.", "Other researchers have had success identifying sarcasm by a tweet's use of positive sentiment to describe a negative situation BIBREF4 , employing contextual BIBREF5 or pragmatic BIBREF6 features, and observing the writing style and emotional scenario of a tweet BIBREF7 . An underlying theme among these methods is that the features are generally designed specifically for use with tweets. A major challenge in developing a more general approach for sarcasm detection lies in developing features that are present across many domains, yet still specific enough to reliably capture the differences between sarcastic and non-sarcastic text.", "Finally, some researchers have recently explored approaches that rely on word embeddings and/or carefully tailored neural networks, rather than on task-specific feature design BIBREF8 , BIBREF9 , BIBREF10 . Since neural networks offer little transparency, it is uncertain whether the features learned in these approaches would be easily transferable across text domains for this task (prior research on other tasks suggests that the features computed by deep neural networks grow increasingly specific to the training dataset—and in turn, to the training domain—with each layer BIBREF11 ). Although an interesting question, the focus herein is on uncovering the specific types of features capable of leveraging general patterns for sarcasm detection, and this can be more easily examined using shallower learning algorithms." ], [ "Research on automatic sarcasm detection in other domains has been limited, but recently a publicly-available corpus of sarcastic and non-sarcastic Amazon product reviews was released by Filatova FILATOVA12.661 to facilitate research. BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 test many feature combinations on this dataset, including those based on metadata (e.g., Amazon star rating), sentiment, grammar, the presence of interjections (e.g., “wow”) or laughter (e.g., through onomatopoeia or acronyms such as “lol”), the presence of emoticons, and bag-of-words features. Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717. This highlights the need for high-performing, general features for sarcasm detection; metadata features are highly domain-specific, and even bag-of-words trends may be unique to certain domains (“trump” was one of the most common unigrams in our own Twitter training set, but only occurred once across all Amazon product reviews).", "Prior to the release of Filatova's dataset, BIBREF13 davidov-tsur-rappoport:2010:CONLL developed a semi-supervised approach to classify tweets or Amazon reviews as sarcastic or non-sarcastic by clustering samples based on grammatical features and the full or partial presence of automatically-extracted text patterns. They evaluated their work on a sample of the classified instances annotated by anonymous users on Amazon Mechanical Turk. 
They tested several different seed sets with their approach, one of which contained a mixture of positive Amazon reviews, positive #sarcasm-tagged tweets, and a manually-selected sample of negative tweets. Although they did not report test results on Amazon reviews using this seed set, they did report test results on #sarcasm-tagged tweets, achieving an F-measure of 0.545. Their work is the closest to ours, because it attempts to harness training samples from both the Twitter and Amazon review domains." ], [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 .", "The downloaded tweets were filtered to remove retweets, “@replies,” and tweets containing links. Retweets were removed to avoid having duplicate copies of identical tweets in the dataset, @replies were removed in case the hashtag referred to content in the tweet to which it replied rather than content in the tweet itself, and tweets with links were likewise removed in case the hashtag referred to content in the link rather than in the tweet itself. Requiring that the specified hashtag trailed the rest of the tweet (it could only be followed by other hashtags) was done based on the observation that when sarcastic or emotional hashtags occur in the main tweet body, the tweet generally discusses sarcasm or the specified emotion, rather than actually expressing sarcasm or the specified emotion. Finally, requiring that only one of the specified hashtags trailed the tweet eliminated cases of ambiguity between sarcastic and non-sarcastic tweets. All trailing “#sarcasm” or emotion hashtags were removed from the data before training and testing, and both datasets were randomly divided into training (80%) and testing (20%) sets. Further details are shown in Table TABREF6 ." ], [ "Three feature sets were developed (one general, and two targeted toward Twitter and Amazon, respectively). Resources used to develop the features are described in Table TABREF9 . Five classifiers (Naïve Bayes, J48, Bagging, DecisionTable, and SVM), all from the Weka library, were tested using five-fold cross-validation on the training sets, and the highest-scoring (Naïve Bayes) was selected for use on the test set.", "The Twitter- (T) and Amazon-specific (A) features are shown in Table TABREF11 . Domain-specific features were still computed for instances from the other domain unless it was impossible to compute those features in that domain (i.e., Amazon Star Rating for Twitter instances), in which case they were left empty. Twitter-specific features are based on the work of BIBREF15 maynard2014cares and BIBREF4 RiloffSarcasm. 
Maynard and Greenwood detect sarcastic tweets by checking for the presence of learned hashtags that correspond with sarcastic tweets, as well as sarcasm-indicator phrases and emoticons. We construct binary features based on their work, and on Riloff et al.'s work RiloffSarcasm, which determined whether or not a tweet was sarcastic by checking for positive sentiment phrases contrasting with negative situations (both of which were learned from other sarcastic tweets). We also add a feature indicating the presence of laughter terms. Amazon-based features are primarily borrowed from BIBREF12 's buschmeier-cimiano-klinger:2014:W14-26 earlier work on the Amazon dataset. [4]Individual binary features for each of the sarcasm hashtags (5 features) and laughter tokens (9 features) were also included.", "We model some of our general features after those from BIBREF4 RiloffSarcasm, under the premise that the underlying principle that sarcasm often associates positive expressions with negative situations holds true across domains. Since positive sentiment phrases and negative situations learned from tweets are unlikely to generalize to different domains, we instead use three sentiment lexicons to build features that capture positive and negative sentiment rather than checking for specific learned phrases. Likewise, rather than bootstrapping specific negative situations from Twitter, we calculate the pointwise mutual information (PMI) between the most positive or negative word in the instance and the n-grams that immediately proceed it to create a more general version of the feature. Other general features developed for this work rely on syntactic characteristics, or are bag-of-words-style features corresponding to the tokens most strongly correlated or most common in sarcastic and non-sarcastic instances from Twitter and Amazon training data. All general features are outlined in Table TABREF14 ." ], [ "The features used for each train/test scenario are shown in the first column of Table TABREF18 . Twitter Features refers to all features listed in Table TABREF11 preceded by the parenthetical (T), and Amazon Features to all features preceded by (A). General: Other Polarity includes the positive and negative percentages, average polarities, overall polarities, and largest polarity gap features from Table TABREF14 . General: Subjectivity includes the % strongly subjective positive words, % weakly subjective positive words, and their negative counterparts. We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic.", "Results are reported for models trained only on Twitter, only on Amazon, on both training sets, and on both training sets when Daumé's daumeiii:2007:ACLMain EasyAdapt technique is applied, employing Twitter as the algorithm's source domain and Amazon as its target domain. EasyAdapt works by modifying the feature space so that it contains three mappings of the original features: a general (source + target) version, a source-only version, and a target-only version. More specifically, assuming an input feature set $X = \mathbb{R}^{F}$ for some $F > 0$, where $F$ is the number of features in the set, EasyAdapt transforms $X$ to the augmented set $\breve{X} = \mathbb{R}^{3F}$. The mappings $\Phi^{s}, \Phi^{t} : X \rightarrow \breve{X}$ for the source and target domain data, respectively, are defined as $\Phi^{s}(x) = \langle x, x, \mathbf{0} \rangle$ and $\Phi^{t}(x) = \langle x, \mathbf{0}, x \rangle$, where $\mathbf{0} = \langle 0, 0, \dots, 0 \rangle$ is the zero vector.
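For illustration only, a minimal NumPy sketch of this augmentation, assuming dense feature vectors; the function name, toy dimensions and domain labels below are our own and not part of the original system:

```python
# Illustrative sketch of EasyAdapt feature augmentation (not the authors' code).
import numpy as np

def easyadapt(x, domain):
    """Map a feature vector x to the <general, source-only, target-only> space."""
    zeros = np.zeros_like(x)
    if domain == "source":          # e.g., Twitter training instances
        return np.concatenate([x, x, zeros])   # Phi^s(x) = <x, x, 0>
    if domain == "target":          # e.g., Amazon review instances
        return np.concatenate([x, zeros, x])   # Phi^t(x) = <x, 0, x>
    raise ValueError("domain must be 'source' or 'target'")

# Toy usage with 4-dimensional feature vectors; real vectors would hold the
# polarity, subjectivity, PMI and bag-of-words features described above.
x_twitter = np.array([1.0, 0.0, 0.5, 0.2])
x_amazon = np.array([0.0, 1.0, 0.3, 0.9])
print(easyadapt(x_twitter, "source"))   # 12-dimensional augmented vector
print(easyadapt(x_amazon, "target"))    # 12-dimensional augmented vector
```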
Refer to Daumé daumeiii:2007:ACLMain for an in-depth discussion of this technique.", "Each model was tested on the Amazon test data (the model trained only on Twitter was also tested on the Twitter test set). Amazon reviews were selected as the target domain since the Twitter dataset was much larger than the Amazon dataset; this scenario is more consistent with the typically stated goal of domain adaptation (a large labeled out-of-domain source dataset and a small amount of labeled data in the target domain), and most clearly highlights the need for a domain-general approach. [6]Part-of-speech is considered in MPQA; Amazon and Twitter data was tagged using Stanford CoreNLP BIBREF20 and the Twitter POS-tagger BIBREF21 , respectively.", "Finally, we include the best results reported by BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 on the same Amazon dataset. For a more direct comparison between our work and theirs, we also report the results from using all of our features under the same classification conditions as theirs (10-fold cross-validation using scikit-learn's Logistic Regression, tuning with an F1 objective). We refer to the latter case as Our Results, Same Classifier as Prior Best." ], [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data. In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features." ], [ "When testing on Amazon reviews, the worst-performing case was that in which the classifier was trained only on Twitter data (it did not manage to outperform either baseline). This underscores the inherent variations in the data across the two domains; despite the fact that many of the features were deliberately designed to be generalizable and robust to domain-specific idiosyncrasies, the different trends across domains still confused the classifier.", "However, combining all of that same Twitter data with a much smaller amount of Amazon data (3998 Twitter training instances relative to 1003 Amazon training instances) and applying EasyAdapt to the combined dataset performed quite well (F1 = 0.780). The classifier was able to take advantage of a wealth of additional Twitter samples that had led to terrible performance on their own (F1 = 0.276). Thus, the high performance demonstrated when the EasyAdapt algorithm is applied to the training data from the two domains is particularly impressive. It shows that more data is indeed better data—provided that the proper features are selected and the classifier is properly guided in handling it.", "Overall, the system cut the error rate from .256 to .220, representing a 14% relative reduction in error over prior best results on the Amazon dataset.
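As an aside, the “Same Classifier as Prior Best” comparison mentioned above (10-fold cross-validation with scikit-learn's logistic regression, tuned with an F1 objective) could be sketched roughly as follows; the feature matrix X and labels y are random placeholders we introduce purely for illustration, not the authors' code or data:

```python
# Hedged sketch of the cross-validated comparison; X and y are stand-ins for
# the extracted feature matrix and binary sarcastic/non-sarcastic labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((1000, 50))            # placeholder feature matrix
y = rng.integers(0, 2, size=1000)     # placeholder labels

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},   # assumed hyperparameter grid
    scoring="f1",                               # tune with an F1 objective
    cv=10,                                      # 10-fold cross-validation
)
search.fit(X, y)
print("best C:", search.best_params_["C"], "mean CV F1:", round(search.best_score_, 3))
```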
Our results testing on Twitter are not directly comparable to others, since prior work's datasets could not be released; however, our results (F1 = 0.583) are in line with those reported previously ( BIBREF4 RiloffSarcasm: F1 = 0.51; BIBREF13 davidov-tsur-rappoport:2010:CONLL: F1 = 0.545). Additionally, our Twitter data did not contain many indicators shown to be discriminative in the past (leading our general features to be better predictors of sarcasm even when training/testing entirely within the domain), and our focus in developing features was on general performance rather than performance on Twitter specifically.", "Both datasets were somewhat noisy. Many full-length reviews that were marked as “sarcastic” were only partially so, and included other sentences that were not sarcastic at all. This may have been particularly problematic when strong polarity was present in those sentences. An example of this is shown in Figure FIGREF20 , where the highlighted portion of the review indicates the sarcastic segment submitted by the annotator, and awesome, the most polar word in the entire review (circled), is outside that highlighted sentence.", "Since tweets are self-labeled, users' own varying definitions of sarcasm lead to some extreme idiosyncrasies in the kinds of tweets labeled as sarcastic. Sarcastic tweets were also often dependent upon outside context. Some examples include (#sarcasm tags were removed in the actual dataset): “My daughter's 5th grade play went over as professional, flawless, and well rehearsed as a Trump speech. #sarcasm,” “#MilanAlessandria Mario Balotelli scored the fifth goal in the 5-0 win. He should play for the #Azzurri at #EURO2016. #sarcasm,” and “Good morning #sarcasm.”", "Finally, some past research has found that it is more difficult to discriminate between sarcastic and non-sarcastic texts when the non-sarcastic texts contain sentiment BIBREF6 , BIBREF8 . Since our non-sarcastic tweets are emotionally-charged, our classifier may have exhibited lower performance than it would have with only neutral non-sarcastic tweets. Since distinguishing between literal and sarcastic sentiment is useful for real-world applications of sarcasm detection, we consider the presence of sentiment in our dataset to be a worthwhile challenge.", "Regarding the general features developed for this work, the polarity- and subjectivity-based features performed well, while performance using only PMI features was lower. PMI scores in particular may have been negatively impacted by common Twitter characteristics, such as the trend to join keywords together in hashtags, and the use of acronyms that are unconventional in other domains. These issues could be addressed to some extent in the future via word segmentation tools, spell-checkers, and acronym expansion." ], [ "This work develops a set of domain-independent features and demonstrates their usefulness for general sarcasm detection. Moreover, it shows that by applying a domain adaptation step to the extracted features, even a surplus of “bad” training data can be used to improve the performance of the classifier on target domain data, reducing error by 14% relative to prior work. The Twitter corpus described in this paper is publicly available for research purposes,[2] and represents a substantial contribution to multiple NLP sub-communities. This shared corpus of tweets annotated for sarcasm will hasten the advancement of further research.
In the future, we plan to extend our approach to detect sarcasm in a completely novel domain, literature, eventually integrating the work into an application to support reading comprehension." ], [ "This material is based upon work supported by the NSF Graduate Research Fellowship Program under Grant 1144248, and the NSF under Grant 1262860. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation." ] ], "section_name": [ "Introduction", "Related Work", "Sarcasm Detection on Twitter", "Sarcasm Detection on Amazon Reviews", "Data Collection", "Features", "Evaluation", "Results", "Discussion", "Conclusions", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "66e6b514683194db91d92678bfe10f10f8500cde", "9d9d727df9f984ebc313ef3fa44f1a1c392651ea", "c67571b582b35be1140f01da7d9ff0cac2ac8538" ], "answer": [ { "evidence": [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data. In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features.", "FLOAT SELECTED: Table 5: Test Results — Full Analysis" ], "extractive_spans": [], "free_form_answer": "By 0,008 F1, 0, 02 Recall and 0,02 Precision.", "highlighted_evidence": [ "In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset).", "FLOAT SELECTED: Table 5: Test Results — Full Analysis" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Research on automatic sarcasm detection in other domains has been limited, but recently a publicly-available corpus of sarcastic and non-sarcastic Amazon product reviews was released by Filatova FILATOVA12.661 to facilitate research. BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 test many feature combinations on this dataset, including those based on metadata (e.g., Amazon star rating), sentiment, grammar, the presence of interjections (e.g., “wow”) or laughter (e.g., through onomatopoeia or acronyms such as “lol”), the presence of emoticons, and bag-of-words features. Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717. This highlights the need for high-performing, general features for sarcasm detection; metadata features are highly domain-specific, and even bag-of-words trends may be unique to certain domains (“trump” was one of the most common unigrams in our own Twitter training set, but only occurred once across all Amazon product reviews).", "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data. In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). 
Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features." ], "extractive_spans": [], "free_form_answer": "New best result is F1 score of 0.752 compared to 0.744 of the best previous work.", "highlighted_evidence": [ "BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 test many feature combinations on this dataset, including those based on metadata (e.g., Amazon star rating), sentiment, grammar, the presence of interjections (e.g., “wow”) or laughter (e.g., through onomatopoeia or acronyms such as “lol”), the presence of emoticons, and bag-of-words features. Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717.", "In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data. In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features.", "FLOAT SELECTED: Table 5: Test Results — Full Analysis" ], "extractive_spans": [], "free_form_answer": "by 0.008 in terms of F1 score, and 0.02 in terms of recall and precision ", "highlighted_evidence": [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data.", "FLOAT SELECTED: Table 5: Test Results — Full Analysis", "In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "3b32020d96f2e95ab8ffd53e2410486a7fa57c0f", "8b2a40a32693821770f18b1bd70a7c2e5f1e4b9c", "b63aa3f83878971880b786826d0c7a966b21ceb7" ], "answer": [ { "evidence": [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data. In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features.", "FLOAT SELECTED: Table 5: Test Results — Full Analysis", "Research on automatic sarcasm detection in other domains has been limited, but recently a publicly-available corpus of sarcastic and non-sarcastic Amazon product reviews was released by Filatova FILATOVA12.661 to facilitate research. BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 test many feature combinations on this dataset, including those based on metadata (e.g., Amazon star rating), sentiment, grammar, the presence of interjections (e.g., “wow”) or laughter (e.g., through onomatopoeia or acronyms such as “lol”), the presence of emoticons, and bag-of-words features. Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717. This highlights the need for high-performing, general features for sarcasm detection; metadata features are highly domain-specific, and even bag-of-words trends may be unique to certain domains (“trump” was one of the most common unigrams in our own Twitter training set, but only occurred once across all Amazon product reviews)." ], "extractive_spans": [ " F1 (0.744)" ], "free_form_answer": "", "highlighted_evidence": [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. ", "FLOAT SELECTED: Table 5: Test Results — Full Analysis", "Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717. This highlights the need for high-performing, general features for sarcasm detection; metadata features are highly domain-specific, and even bag-of-words trends may be unique to certain domains (“trump” was one of the most common unigrams in our own Twitter training set, but only occurred once across all Amazon product reviews)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The results, including each of the training scenarios noted earlier, are presented in Table TABREF18 . Precision, recall, and F1 on the positive (sarcastic) class were recorded. 
The highest F1 achieved (0.780) among all cases was from training on the EasyAdapted Twitter and Amazon data. In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset). Training on both without EasyAdapt led to an F1 of 0.595 (or 0.715 when training only on Amazon-specific features), and finally, training only on Twitter data led to an F1 of 0.276. Training and testing on Twitter produced an F1 of 0.583 when training on all features." ], "extractive_spans": [ " BIBREF12 buschmeier-cimiano-klinger:2014:W14-26" ], "free_form_answer": "", "highlighted_evidence": [ "In comparison, training only on the Amazon reviews produced an F1 of 0.713 (training and testing only on Amazon reviews with our features but with the same classifier and cross-validation settings as BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 led to an F1 of 0.752, outperforming prior best results on that dataset)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Research on automatic sarcasm detection in other domains has been limited, but recently a publicly-available corpus of sarcastic and non-sarcastic Amazon product reviews was released by Filatova FILATOVA12.661 to facilitate research. BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 test many feature combinations on this dataset, including those based on metadata (e.g., Amazon star rating), sentiment, grammar, the presence of interjections (e.g., “wow”) or laughter (e.g., through onomatopoeia or acronyms such as “lol”), the presence of emoticons, and bag-of-words features. Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717. This highlights the need for high-performing, general features for sarcasm detection; metadata features are highly domain-specific, and even bag-of-words trends may be unique to certain domains (“trump” was one of the most common unigrams in our own Twitter training set, but only occurred once across all Amazon product reviews)." ], "extractive_spans": [ "logistic regression classifier" ], "free_form_answer": "", "highlighted_evidence": [ "BIBREF12 buschmeier-cimiano-klinger:2014:W14-26 test many feature combinations on this dataset, including those based on metadata (e.g., Amazon star rating), sentiment, grammar, the presence of interjections (e.g., “wow”) or laughter (e.g., through onomatopoeia or acronyms such as “lol”), the presence of emoticons, and bag-of-words features. Their highest F1 (0.744) is achieved using all of these with a logistic regression classifier; however, using only the star rating, they still achieve an F1 of 0.717." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "0f0570bb2d261afc23702e407b54e4d1086650c0", "83a6afac7c644e7cba8e44059e2665f0386478e1", "8c851b694df2e7d610b5bf6e6508917d8992e68e" ], "answer": [ { "evidence": [ "The features used for each train/test scenario are shown in the first column of Table TABREF18 . Twitter Features refers to all features listed in Table TABREF11 preceded by the parenthetical (T), and Amazon Features to all features preceded by (A). 
General: Other Polarity includes the positive and negative percentages, average polarities, overall polarities, and largest polarity gap features from Table TABREF14 . General: Subjectivity includes the % strongly subjective positive words, % weakly subjective positive words, and their negative counterparts. We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic." ], "extractive_spans": [ "the All Sarcasm case", "the Random case" ], "free_form_answer": "", "highlighted_evidence": [ "We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The features used for each train/test scenario are shown in the first column of Table TABREF18 . Twitter Features refers to all features listed in Table TABREF11 preceded by the parenthetical (T), and Amazon Features to all features preceded by (A). General: Other Polarity includes the positive and negative percentages, average polarities, overall polarities, and largest polarity gap features from Table TABREF14 . General: Subjectivity includes the % strongly subjective positive words, % weakly subjective positive words, and their negative counterparts. We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic." ], "extractive_spans": [ "All Sarcasm case assumes that every instance is sarcastic", " Random case randomly assigns each instance as sarcastic or non-sarcastic" ], "free_form_answer": "", "highlighted_evidence": [ "We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The features used for each train/test scenario are shown in the first column of Table TABREF18 . Twitter Features refers to all features listed in Table TABREF11 preceded by the parenthetical (T), and Amazon Features to all features preceded by (A). General: Other Polarity includes the positive and negative percentages, average polarities, overall polarities, and largest polarity gap features from Table TABREF14 . General: Subjectivity includes the % strongly subjective positive words, % weakly subjective positive words, and their negative counterparts. We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic." ], "extractive_spans": [ "All Sarcasm", "Random case" ], "free_form_answer": "", "highlighted_evidence": [ "We also include two baselines: the All Sarcasm case assumes that every instance is sarcastic, and the Random case randomly assigns each instance as sarcastic or non-sarcastic." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "4aec768e82dd4fe1a568e2bd913557d590fe6a9c", "6fb793778d2a9a24ec9c21197d9611f389f920b3", "aefbf9c686c03d8b13fb1d7fd94724127081b60f" ], "answer": [ { "evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. 
The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 ." ], "extractive_spans": [ "Twitter, and Amazon product reviews" ], "free_form_answer": "", "highlighted_evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 ." ], "extractive_spans": [ "Data was taken from two domains: Twitter, and Amazon product reviews. " ], "free_form_answer": "", "highlighted_evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. 
To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 ." ], "extractive_spans": [ "Twitter", "Amazon " ], "free_form_answer": "", "highlighted_evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "1fb34e025da7f691133da44a5b158cdacc8e251a", "249aaf9ebb48b3dfebd52dc8c9f22cec5c15e761", "9849faa44bd819cbe303b34a512c9489fcc7f4a6" ], "answer": [ { "evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 ." ], "extractive_spans": [ "Twitter dataset", " Amazon product reviews" ], "free_form_answer": "", "highlighted_evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. 
The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 .", "The downloaded tweets were filtered to remove retweets, “@replies,” and tweets containing links. Retweets were removed to avoid having duplicate copies of identical tweets in the dataset, @replies were removed in case the hashtag referred to content in the tweet to which it replied rather than content in the tweet itself, and tweets with links were likewise removed in case the hashtag referred to content in the link rather than in the tweet itself. Requiring that the specified hashtag trailed the rest of the tweet (it could only be followed by other hashtags) was done based on the observation that when sarcastic or emotional hashtags occur in the main tweet body, the tweet generally discusses sarcasm or the specified emotion, rather than actually expressing sarcasm or the specified emotion. Finally, requiring that only one of the specified hashtags trailed the tweet eliminated cases of ambiguity between sarcastic and non-sarcastic tweets. All trailing “#sarcasm” or emotion hashtags were removed from the data before training and testing, and both datasets were randomly divided into training (80%) and testing (20%) sets. Further details are shown in Table TABREF6 ." ], "extractive_spans": [], "free_form_answer": "Twitter product reviews containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust”, and Amazon product reviews from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661.", "highlighted_evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. ", " All trailing “#sarcasm” or emotion hashtags were removed from the data before training and testing, and both datasets were randomly divided into training (80%) and testing (20%) sets. Further details are shown in Table TABREF6 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016. 
Tweets containing the latter six hashtags, corresponding to Ekman's six basic emotions BIBREF14 , were labeled as non-sarcastic. Those hashtags were chosen because their associated tweets were expected to still express opinions, similarly to sarcastic tweets, but in a non-sarcastic way. Tweets containing #sarcasm were labeled as sarcastic; annotating tweets with the #sarcasm hashtag as such is consistent with the vast majority of prior work in the Twitter domain BIBREF6 , BIBREF2 , BIBREF15 , BIBREF3 , BIBREF5 , BIBREF8 , BIBREF10 .", "The downloaded tweets were filtered to remove retweets, “@replies,” and tweets containing links. Retweets were removed to avoid having duplicate copies of identical tweets in the dataset, @replies were removed in case the hashtag referred to content in the tweet to which it replied rather than content in the tweet itself, and tweets with links were likewise removed in case the hashtag referred to content in the link rather than in the tweet itself. Requiring that the specified hashtag trailed the rest of the tweet (it could only be followed by other hashtags) was done based on the observation that when sarcastic or emotional hashtags occur in the main tweet body, the tweet generally discusses sarcasm or the specified emotion, rather than actually expressing sarcasm or the specified emotion. Finally, requiring that only one of the specified hashtags trailed the tweet eliminated cases of ambiguity between sarcastic and non-sarcastic tweets. All trailing “#sarcasm” or emotion hashtags were removed from the data before training and testing, and both datasets were randomly divided into training (80%) and testing (20%) sets. Further details are shown in Table TABREF6 ." ], "extractive_spans": [ "Twitter, and Amazon product reviews" ], "free_form_answer": "", "highlighted_evidence": [ "Data was taken from two domains: Twitter, and Amazon product reviews. The Amazon reviews were from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661. To build our Twitter dataset, tweets containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust” were downloaded regularly during February and March 2016.", "All trailing “#sarcasm” or emotion hashtags were removed from the data before training and testing, and both datasets were randomly divided into training (80%) and testing (20%) sets." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "question": [ "by how much did their approach outperform previous work?", "what was the previous best results model?", "what are the baseline models?", "what domains are explored?", "what training data was used?" ], "question_id": [ "74396ead9f88a9efc7626240ce128582ab69ef2b", "8a7a9d205014c42cb0e24a0f3f38de2176fe74c0", "eaed0b721cc3137b964f5265c7ecf76f565053e9", "ba7fea78b0b888a714cb7d89944b69c5038a1ef1", "38af3f25c36c3725a31304ab96e2c200c55792b4" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Table 1: Twitter and Amazon Dataset Distributions", "Table 3: Domain-Specific Features", "Table 2: Lexical Resources", "Table 4: General Feature Set", "Table 5: Test Results — Full Analysis", "Figure 1: Example from Amazon Product Reviews" ], "file": [ "2-Table1-1.png", "3-Table3-1.png", "3-Table2-1.png", "4-Table4-1.png", "5-Table5-1.png", "5-Figure1-1.png" ] }
[ "by how much did their approach outperform previous work?", "what training data was used?" ]
[ [ "1806.03369-5-Table5-1.png", "1806.03369-Results-0", "1806.03369-Sarcasm Detection on Amazon Reviews-0" ], [ "1806.03369-Data Collection-1", "1806.03369-Data Collection-0" ] ]
[ "by 0.008 in terms of F1 score, and 0.02 in terms of recall and precision ", "Twitter product reviews containing exactly one of the trailing hashtags “#sarcasm,” “#happiness,” “#sadness,” “#anger,” “#surprise,” “#fear,” and “#disgust”, and Amazon product reviews from the publicly available sarcasm corpus developed by Filatova FILATOVA12.661." ]
60
2003.07459
Offensive Language Identification in Greek
As offensive language has become a rising issue for online communities and social media platforms, researchers have been investigating ways of coping with abusive content and developing systems to detect its different types: cyberbullying, hate speech, aggression, etc. With a few notable exceptions, most research on this topic so far has dealt with English. This is mostly due to the availability of language resources for English. To address this shortcoming, this paper presents the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD is a manually annotated dataset containing 4,779 posts from Twitter annotated as offensive and not offensive. Along with a detailed description of the dataset, we evaluate several computational models trained and tested on this data.
{ "paragraphs": [ [ "In the age of social media, offensive content online has become prevalent in recent years. There are many types of offensive content online such as racist and sexist posts and insults and threats targeted at individuals or groups. As such content increasingly occurs online, it has become a growing issue for online communities. This has come to the attention of social media platforms and authorities underlining the urgency to moderate and deal with such content. Several studies in NLP have approached offensive language identification applying machine learning and deep learning systems on annotated data to identify such content. Researchers in the field have worked with different definitions of offensive language with hate speech being the most studied among these types BIBREF0. BIBREF1 investigate the similarity between these sub-tasks. With a few noteworthy exceptions, most research so far has dealt with English, due to the availability of language resources. This gap in the literature recently started to be addressed with studies on Spanish BIBREF2, Hindi BIBREF3, and German BIBREF4, to name a few.", "In this paper we contribute in this direction presenting the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD uses a working definition of offensive language inspired by the OLID dataset for English BIBREF5 used in the recent OffensEval (SemEval-2019 Task 6) BIBREF6. In its version, 1.0 OGTD contains nearly 4,800 posts collected from Twitter and manually annotated by a team of volunteers, resulting in a high-quality annotated dataset. We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], [ "The bulk of work on detecting abusive posts online addressed particular types of such language like textual attacks and hate speech BIBREF7, aggression BIBREF3, and others. OGTD considers a more general definition of offensiveness inspired by the first layer of the hierarchical annotation model described in BIBREF5. BIBREF5 model distinguishes targeted from general profanity, and considers the target of offensive posts as indicators of potential hate speech posts (insults targeted at groups) and cyberbulling posts (insults targeted at individuals).", "", "Offensive Language: Previous work presented a dataset with sentences labelled as flame (i.e. attacking or containing abusive words) or okay BIBREF8 with a Naïve Bayes hybrid classifier and a user offensiveness estimation using an offensive lexicon and sentence syntactic structures BIBREF9. A dataset of 3.3M comments from the Yahoo Finance and News website, labelled as abusive or clean, was utilized in several experiments using n-grams, linguistic and syntactic features, combined with different types of word and comment embeddings as distributional semantics features BIBREF10. The usefulness of character n-grams for abusive language detection was explored on the same dataset with three different methods BIBREF11. The most recent project expanded on existing ideas for defining offensive language and presented the OLID (Offensive Language Identification Dataset), a corpus of Twitter posts hierarchically annotated on three levels, whether they contain offensive language or not, whether the offense is targeted and finally, the target of the offense BIBREF5. 
A CNN (convolutional neural network) deep learning approach outperformed every model trained, with pre-trained FastText embeddings and updateable embeddings learned by the model as features. In OffensEval (SemEval-2019 Task 6), participants had the opportunity to use the OLID to train their own systems, with the top teams outperforming the original models trained on the dataset.", "", "Hate Speech: One study used a dataset of tweets posted after the murder of Drummer Lee Rigby in the UK, manually annotated as offensive or antagonistic in terms of race, ethnicity or religion, for hate speech identification with multiple classifiers BIBREF12. Another approach trained a logistic regression classifier with paragraph2vec representations of comments from Yahoo Finance BIBREF13. The latest approaches in detecting hate speech include a dataset of Twitter posts, labelled as hateful, offensive or clean, used to train a logistic regression classifier with part-of-speech and word n-grams and a sentiment lexicon BIBREF0, and a linear SVM trained on character 4-grams, with an extra RBF SVM meta-classifier that boosts accuracy in hateful language detection BIBREF14. Both attempts tried to distinguish offensive language from hate speech, with the hate class being the hardest to classify.", "" ], [ "Research on other languages includes datasets such as: a Dutch corpus of posts from the social networking site Ask.fm for the detection of cyberbullying BIBREF15, a German Twitter corpus exploring the issue of hate speech targeted at refugees BIBREF16, another Dutch corpus using data from two anti-Islamic groups on Facebook BIBREF17, a hate speech corpus in Italian BIBREF18, an abusive language corpus in Arabic BIBREF19, a corpus of offensive comments from Facebook and Reddit in Danish BIBREF20, another Twitter corpus in German BIBREF4 for GermEval2018, a second Italian corpus from Facebook and Twitter BIBREF21, an aggressive post corpus from Mexican Twitter in Spanish BIBREF2 and finally an aggressive comments corpus from Facebook in Hindi BIBREF3. SemEval 2019 presented a novel task: multilingual detection of hate speech specifically against immigrants and women with a dataset from Twitter, in English and Spanish BIBREF22." ], [ "The posts in OGTD v1.0 were collected between May and June 2019. We used the Twitter API, initially collecting tweets from popular and trending hashtags in Greece, including television programs such as series, reality and entertainment shows. Due to the municipal, regional and European Parliament elections taking place at the time, many hashtags included tweets discussing the elections. The intuition behind this approach is that Twitter as a microblogging service often gathers complaints and profane comments on widely viewed television and politics, and as such, this period was a good opportunity for data collection.", "Following the methodology described in BIBREF5 and others, including a recent comparable Danish dataset BIBREF20, we collected tweets using sensitive or obscene terms as keywords. Queries for tweets containing common curse words and expressions usually found in offensive messages in Greek as keywords (such as the well-known word for “asshole”, “μαλάκας” (malakas), or “go to hell”, “στο διάολο” (sto diaolo), etc.) returned a large number of tweets. Aiming to compile a dataset including offensive tweets of diverse types (sexist, racist, etc.)
targeted at various social groups, the Twitter API was queried with expletives such as “πουτάνα” (poutana, “whore”), “καριόλα” (kariola, “bitch”), “πούστης” (poustis, “faggot”), etc. and their plural forms, to explore the semantic and pragmatic differences of the expletives mentioned above in their different contextual environments. The challenge is to distinguish between ironic and insulting uses of these swear words, a common phenomenon in Greek.", "The final query for data collection was for tweets containing “είσαι” (eisai, “you are”) as a keyword, inspired by BIBREF5. This particular keyword is considered a stop word as it is quite common and frequent across languages, but it was suspected to prove helpful for building the dataset for this particular project, as offensive language often follows the structure: auxiliary verb (be) + noun/adjective. The immediacy of social media and specifically Twitter provides the opportunity for targeted insults to be investigated, following data mining of tweets including “you are” as a keyword. In fact, many tweets present in the dataset showed users verbally insulting other users or famous people and TV personas, confirming that “είσαι” was a facilitating keyword for the task in question." ], [ "We collected a set of 49,154 tweets. URLs, emojis and emoticons were removed, while usernames and user mentions were replaced with @USER, following the same methodology described in OLID BIBREF5. Duplicate punctuation such as question and exclamation marks was normalized. After removing duplicate tweets, the dataset comprised 46,218 tweets, of which 5,000 were randomly sampled for annotation. We used LightTag to annotate the dataset due to its simple and straightforward user interface and the limitless annotations provided by the software creators.", "Based on explicit annotation guidelines written in Greek and our proposed definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 shows the pairwise inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, of which over 29% are offensive. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is shown in Table TABREF5." ], [ "Before experimenting with OGTD, a unique aspect of Greek, the accentuation of characters for correct pronunciation, needed to be normalized. When posting a tweet, many users omit accents due to their haste, resulting in a mixed dataset containing fully accented tweets, partially-accented tweets, and non-accented tweets.
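To make this preprocessing concrete, the following is a minimal Python sketch, not the authors' code, of how such cleaning might be implemented: URLs and emojis are dropped, user mentions are rewritten as @USER, duplicated question and exclamation marks are collapsed, and the lower-casing and accent normalization described in the next paragraph are applied. The regular expressions and the unicodedata-based accent stripping are assumptions about the implementation.

```python
# Minimal preprocessing sketch (not the authors' code). It removes URLs and
# emojis, rewrites user mentions as @USER, collapses duplicated ?/! marks,
# and applies lower-casing and accent stripping; the regular expressions and
# the unicodedata-based normalization are assumptions.
import re
import unicodedata

URL_RE = re.compile(r"https?://\S+|www\.\S+")
MENTION_RE = re.compile(r"@\w+")
PUNCT_RE = re.compile(r"([!?])\1+")            # "!!!" -> "!", "??" -> "?"

def strip_accents(text: str) -> str:
    # Decompose characters and drop the combining accent marks ("Mn").
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

def is_emoji(ch: str) -> bool:
    return unicodedata.category(ch) == "So"    # crude: "Symbol, other"

def preprocess(tweet: str) -> str:
    tweet = URL_RE.sub("", tweet)
    tweet = MENTION_RE.sub("@USER", tweet)
    tweet = "".join(ch for ch in tweet if not is_emoji(ch))
    tweet = PUNCT_RE.sub(r"\1", tweet)
    return strip_accents(tweet.lower()).strip()

print(preprocess("Είσαι απίστευτος!!! @kapoios https://t.co/xyz"))
# -> "εισαι απιστευτος! @user"
```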
To achieve data uniformity and to avoid ambiguity, every word is lower-cased and then normalized to its non-accented equivalent.", "Several experiments were conducted with the OGTD, each one utilizing a different combination from a pool of features (e.g. TF/IDF unigrams, bigrams, POS and dependency relation tags) to train machine learning models. These features were selected based on the methodology used in previous work and taking the dataset size into consideration. TF-IDF weighted features are often used for text classification and are useful for determining how important a word is to a post in a corpus. The threshold for corpus-specific words was set to 80%, ignoring terms appearing in more than 80% of the documents, while the minimum document frequency was set to 6, and both unigrams and bigrams were tested. Given the consistent use of linguistic features for training machine learning models and the results of previous work on offensive language detection, part-of-speech (POS) and dependency relation tags were considered as additional features. Using the spaCy pipeline for Greek, POS tags and dependency relations were extracted for every token in a tweet and then transformed into count matrices. A sentiment lexicon was considered, but one suitable for this project is not yet available for Greek.", "For the first six deep learning models we used Greek word embeddings trained on a large Greek web corpus BIBREF23. Each Greek word can be represented with a 300-dimensional vector using the trained model. These vectors are then fed into the deep learning models described in Section SECREF16. For the last deep learning architecture we wanted to use a BERT BIBREF24 model trained on Greek. However, there was no BERT model available for the Greek language. The model that came closest to our requirement was the multilingual BERT model trained on 108 languages BIBREF24, including Greek. Since training BERT is a very computationally expensive task, we used the available multilingual BERT cased model for the seventh deep learning architecture." ], [ "Every classical model was considered on the condition that it could take matrices as input for fitting, and was trained with the default settings because of the size of the dataset. Five models were trained: two SVMs, one with a linear kernel and the other with a radial basis function (RBF) kernel, both with the penalty parameter C of the error term set to 1. The gamma value of the RBF SVM, which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated one sample at a time and the model is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifiers were two models based on Bayes' theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features." ], [ "Seven different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24.
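As a concrete illustration of one of the architectures listed above, here is a minimal Keras sketch, not the authors' released code, of a bidirectional LSTM and GRU encoder with a simple soft-attention pooling over frozen 300-dimensional Greek embeddings; the layer sizes, dropout rate and the exact attention formulation are assumptions rather than the reported configuration.

```python
# Minimal sketch (not the authors' code): bidirectional LSTM -> GRU encoder
# with a simple soft-attention pooling. `emb_matrix` is assumed to be a
# (VOCAB x 300) matrix built from the Greek embeddings mentioned above;
# layer sizes and dropout are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAXLEN, VOCAB, EMB_DIM = 64, 30000, 300

def build_model(emb_matrix):
    tokens = layers.Input(shape=(MAXLEN,), dtype="int32")
    x = layers.Embedding(VOCAB, EMB_DIM, weights=[emb_matrix],
                         trainable=False)(tokens)
    x = layers.SpatialDropout1D(0.3)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    # Soft attention: score every timestep, normalise the scores over the
    # sequence, and take the weighted sum of the hidden states.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))(
        [x, weights])
    out = layers.Dense(1, activation="sigmoid")(context)  # offensive vs. not
    model = Model(tokens, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: model = build_model(emb_matrix); model.fit(padded_ids, labels, ...)
```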
These models were used in HASOC 2019 and achieved a third-place finish in the English task and an eighth-place finish in the German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning models has been made available on GitHub." ], [ "The performance of individual classifiers for offensive language identification with TF/IDF unigram features is shown in Table TABREF8 below. We can see that both linear classifiers (SVM and SGDC) outperform the other classifiers in terms of macro-F1, which does not take label imbalance into account. The Linear SVM and SGDC perform almost identically, with the Linear SVM performing slightly better in recall for the Not Offensive class and SGDC in recall for the Offensive class. Bernoulli Naïve Bayes performs better than all other classifiers in recall for the Offensive class but yields the lowest precision score of all classifiers. While the RBF SVM and Multinomial Naïve Bayes yield better recall for the Not Offensive class, their recall scores for the Offensive class are very low. For a binary text classification task like offensive language detection, a high recall score for both classes, especially for the Offensive class, is important for a model to be considered successful. Thus, the Linear SVM can be considered the marginally best model trained with OGTD, as its weighted average precision and recall scores are higher.", "Models trained with TF/IDF bigram features performed worse, with the scores of all evaluation metrics dropping, with the exception of Multinomial Naïve Bayes which improved in F1-score for the Not Offensive class. The full results are reported in Table TABREF9 below. Three other approaches were adopted for training the models, implementing POS and dependency relation tags via a transformation pipeline together with TF/IDF unigram features; these performed better than the addition of bigrams.", "Experiments with linguistic features were conducted to inspect their efficiency for this task. For these experiments, the RBF SVM was not used due to data handling problems by the model in the scikit-learn library. In the first experiment, TF/IDF unigram features were combined with POS and dependency relation tags. The results of implementing all three features are shown in Table TABREF10 below. While the Linear SVM model improved the recall score over the previous model trained with bigrams, the other models show a significant drop in their performance.", "In the next experiment, POS tags were used in conjunction with TF/IDF unigram features. Surprisingly, the addition of POS tags to the Linear SVM yields the same F1-score as the first model trained on TF/IDF unigram features, with lower precision scores for both classes, while the recall score for the Offensive class improved marginally. The Naïve Bayes models show a marginal decrease in their performance. On the other hand, the performance of SGDC significantly decreases with POS tags only and, interestingly enough, its recall score for the Offensive class is the worst among classifiers. The complete results are presented in Table TABREF11 below.", "The final experiment with linguistic features combined dependency relation tags with TF/IDF unigrams (a sketch of such a combination is shown below).
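A hedged sketch of how dependency-relation tags might be combined with TF/IDF unigrams in scikit-learn is given below; this is not the authors' pipeline. Each tweet is mapped to the space-joined sequence of its dependency labels via spaCy's Greek model, vectorized with counts, and unioned with TF/IDF unigram features (max_df=0.8, min_df=6, as described in the Methods) before a linear SVM with C=1. The spaCy model name "el_core_news_sm" and the transformer class are assumptions.

```python
# Illustrative sketch, not the authors' pipeline: TF/IDF word unigrams
# unioned with dependency-relation counts from spaCy's Greek model, fed to
# a linear SVM. The model name "el_core_news_sm" and the transformer class
# are assumptions; the TF/IDF thresholds and C mirror the values reported.
import spacy
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

nlp = spacy.load("el_core_news_sm", disable=["ner"])

class DepTagString(BaseEstimator, TransformerMixin):
    """Map each tweet to the space-joined sequence of its dependency labels."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return [" ".join(tok.dep_ for tok in nlp(text)) for text in X]

features = FeatureUnion([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 1), max_df=0.8, min_df=6)),
    ("dep", Pipeline([("tags", DepTagString()), ("counts", CountVectorizer())])),
])

clf = Pipeline([("features", features), ("svm", LinearSVC(C=1.0))])
# clf.fit(train_texts, train_labels); predictions = clf.predict(test_texts)
```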
This experiment yielded the same F1-score of 80% as the other Linear SVM classifiers, performing almost identically to the previous model trained with POS tags, and only bested it in precision for the Offensive class. While the recall score for Offensive instances improves on the first model trained only on TF/IDF unigrams by 0.01%, the recall score for Not Offensive instances drops by the same amount. The recall score for the Not Offensive class was already high, so this increase in recall for the Offensive class could slightly facilitate the offensive language detection task. Without improving upon the first SGDC presented, the SGDC rose in performance overall and, as for the Naïve Bayes representatives, both the Multinomial and Bernoulli approaches performed better than in the second experiment. The complete results are shown in Table TABREF12 below.", "The performance of the deep learning models is presented in Table TABREF18. As we can see, LSTM and GRU with Attention outperformed all the other models in terms of macro-F1. Notably, it outperformed all other classical and deep learning models in precision, recall and F1 for the Offensive class as well as the Not Offensive class. However, fine-tuning the BERT-Base Multilingual Cased model did not achieve good results. For this task, monolingual Greek word embeddings perform significantly better than the multilingual BERT embeddings. LSTM and GRU with Attention can be considered the best model trained on OGTD." ], [ "The data annotated in OGTD proved to facilitate offensive language detection for Greek with significant success, taking into consideration its size and label distribution, with the best model (LSTM and GRU with Attention) achieving a macro-F1 of 0.89. Among the classical machine learning approaches, the linear SVM model achieved the best results, 0.80, whereas the Stochastic Gradient Descent (SGD) learning classifier yielded the best recall score for the Offensive class, at 0.61. In terms of features used, TF/IDF matrices of word unigrams proved to work well with multiple classical ML classifiers. Overall, it is clear that deep learning models with word embedding features provide better results than the classical machine learning models.", "Of the linguistic features, POS tags improved the performance of the Linear SVM marginally in terms of recall for the Offensive class, while the other classifiers deteriorated in their performance. It is not yet clear whether this is due to the accuracy of the Greek model available for spaCy in producing such tags or to the tags themselves as features; this is a subject that can be explored with further improvements of spaCy or other NLP tools developed for Greek. The dataset itself contains many instances with neologisms, creative uses of language and even rare slang words; therefore, training the existing model with such instances could improve both spaCy's accuracy for POS and dependency relation tags and the Linear SVM's performance in text classification for Greek." ], [ "This paper presented the Offensive Greek Tweet Dataset (OGTD), a manually annotated dataset for offensive language identification and the first Greek dataset of its kind. OGTD v1.0 contains a total of 4,779 tweets, encompassing posts related to an array of topics popular among Greek people (e.g. political elections, TV shows, etc.). Tweets were manually annotated by a team of volunteers through an annotation platform. We used the same guidelines as those used in the annotation of the English OLID dataset BIBREF5.
Finally, we ran several machine learning and deep learning classifiers, and the best results were achieved by an LSTM and GRU with Attention model." ], [ "We have recently released OGTD v2.0 as training data for OffensEval 2020 (SemEval-2020 Task 12) BIBREF28. The reasoning behind the expansion of the dataset was to have a larger Greek dataset for the competition. New posts were collected in November 2019 following the same approach we used to compile v1.0, described in this paper. This second batch included tweets with trending hashtags, shows and topics from Greece at the time. Additionally, keywords that proved to retrieve interesting tweets in the first version were once again used in the search, along with new keywords such as pejorative terms. When the collection was finished, 5,508 tweets were randomly sampled and then annotated by a team of volunteers. The annotation guidelines were the same ones we used for v1.0. OGTD v2.0 combines the existing and the newly annotated tweets in a larger dataset of 10,287 instances.", "Finally, both OGTD v1.0 and v2.0 provide the opportunity for researchers to test cross-lingual learning methods, as they can be used in conjunction with the English OLID and other datasets annotated using the same guidelines, such as the ones by sigurbergsson2019offensive for Danish and by coltekikin2020 for Turkish, while simultaneously facilitating the development of language resources for NLP in Greek." ], [ "We would like to acknowledge Maria, Raphael and Anastasia, the team of volunteer annotators who provided their free time and effort to help us produce v1.0 of the dataset of Greek tweets for offensive language detection, as well as Fotini, who helped review tweets with ambivalent labels. Additionally, we would like to express our sincere gratitude to the LightTag team and especially to Tal Perry for granting us free use of their annotation platform." ] ], "section_name": [ "Introduction", "Related Work", "Related Work ::: Non-English Datasets", "The OGTD Dataset", "The OGTD Dataset ::: Pre-processing and annotation", "Methods", "Methods ::: Models ::: Classical Machine Learning Models", "Methods ::: Models ::: Deep Learning Models", "Methods ::: Results", "Methods ::: Discussion", "Conclusion", "Conclusion ::: Ongoing - OGTD v2.0 and OffensEval 2020", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "3527e6b2267ee94040a39f5edbf53bbdb174d1e1", "9b9488ffb24eac459133eb317b1643c6b1736f30", "d4f36b7562ee3e12d283ee70900d6f55a415dcc9" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 7: Results for offensive language detection for Deep Learning models with Greek word embeddings. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold)." ], "extractive_spans": [], "free_form_answer": "F1 Macro of 0.89", "highlighted_evidence": [ "FLOAT SELECTED: Table 7: Results for offensive language detection for Deep Learning models with Greek word embeddings. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper we contribute in this direction presenting the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD uses a working definition of offensive language inspired by the OLID dataset for English BIBREF5 used in the recent OffensEval (SemEval-2019 Task 6) BIBREF6. In its version, 1.0 OGTD contains nearly 4,800 posts collected from Twitter and manually annotated by a team of volunteers, resulting in a high-quality annotated dataset. We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "extractive_spans": [ "LSTMs and GRU with attention which achieved 0.89 F1 score" ], "free_form_answer": "", "highlighted_evidence": [ "We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper we contribute in this direction presenting the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD uses a working definition of offensive language inspired by the OLID dataset for English BIBREF5 used in the recent OffensEval (SemEval-2019 Task 6) BIBREF6. In its version, 1.0 OGTD contains nearly 4,800 posts collected from Twitter and manually annotated by a team of volunteers, resulting in a high-quality annotated dataset. We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "extractive_spans": [ "0.89 F1 score" ], "free_form_answer": "", "highlighted_evidence": [ "We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "3b069bd0f5441c216c5bc8bed9e6d5ba5dc9b12f", "b180982930b71615fe6124b2dee02cc13ff1d674", "ed10124e8e4ba9f9c9048d7ea5aba3b55e941b3e" ], "answer": [ { "evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. 
Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." ], "extractive_spans": [], "free_form_answer": "linear SVM, RBF SVM, linear classifier with SGDC, multinomial naive bayes, bernoulli naive bayes, pooled GRU, stacked LSTM with attention, LSTM and GRU with attention, 2d convolution with pooling, GRU with Capsule, LSTM with Capsule and attention, and BERT", "highlighted_evidence": [ "Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." ], "extractive_spans": [ "Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF)", "The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features", "Pooled GRU", " Stacked LSTM with Attention", "LSTM and GRU with Attention", "2D Convolution with Pooling", "GRU with Capsule", "LSTM with Capsule and Attention", "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term", "The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning.", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." ], "extractive_spans": [ "Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF)", "linear classifier with Stochastic Gradient Descent (SGDC) learning", "Multinomial Naïve Bayes", "Bernoulli Naïve Bayes", "Pooled GRU ", "Stacked LSTM with Attention ", "LSTM and GRU with Attention", "2D Convolution with Pooling ", "GRU with Capsule", " LSTM with Capsule and Attention", "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. ", "The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. ", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. ", "The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "0f2828411996920c7eff25cede613d43394c9c37", "3d931a0af386cd6f59c1a792d1cba07ddc3aac90", "e281e93bd7296afe35276749f2b7fcb20091b269" ], "answer": [ { "evidence": [ "The performance of the deep learning models is presented in table TABREF18. 
As we can see LSTM and GRU with Attention outperformed all the other models in-terms of macro-f1. Notably it outperformed all other classifical models and deep learning models in precision, recall and f1 for Offensive class as well as the Not Offensive class. However, fine tuning BERT-Base Multilingual Cased model did not achieve good results. For this task monolingual Greek word embeddings perform significantly better than the multilingual bert embeddings. LSTM and GRU with Attention can be considered as the best model trained for OGTD." ], "extractive_spans": [ "LSTM and GRU with Attention can be considered as the best model trained for OGTD" ], "free_form_answer": "", "highlighted_evidence": [ "As we can see LSTM and GRU with Attention outperformed all the other models in-terms of macro-f1. Notably it outperformed all other classifical models and deep learning models in precision, recall and f1 for Offensive class as well as the Not Offensive class. However, fine tuning BERT-Base Multilingual Cased model did not achieve good results. For this task monolingual Greek word embeddings perform significantly better than the multilingual bert embeddings. LSTM and GRU with Attention can be considered as the best model trained for OGTD." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper we contribute in this direction presenting the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD uses a working definition of offensive language inspired by the OLID dataset for English BIBREF5 used in the recent OffensEval (SemEval-2019 Task 6) BIBREF6. In its version, 1.0 OGTD contains nearly 4,800 posts collected from Twitter and manually annotated by a team of volunteers, resulting in a high-quality annotated dataset. We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "extractive_spans": [ "LSTMs and GRU with attention" ], "free_form_answer": "", "highlighted_evidence": [ "We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this paper we contribute in this direction presenting the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD uses a working definition of offensive language inspired by the OLID dataset for English BIBREF5 used in the recent OffensEval (SemEval-2019 Task 6) BIBREF6. In its version, 1.0 OGTD contains nearly 4,800 posts collected from Twitter and manually annotated by a team of volunteers, resulting in a high-quality annotated dataset. We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." ], "extractive_spans": [ " a system using LSTMs and GRU with attention" ], "free_form_answer": "", "highlighted_evidence": [ "We trained a number of systems on this dataset and our best results have been obtained from a system using LSTMs and GRU with attention which achieved 0.89 F1 score." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "5d25afc0557b4abb45b05a9e3ba78ddf278a4674", "aec89772e84af26c7df795c14a630b2e5eca2c7e", "dedd1110a920de81e1a80f03a170b14050ed8030" ], "answer": [ { "evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "We collected a set of 49,154 tweets. URLs, Emojis and Emoticons were removed, while usernames and user mentions were filtered as @USER following the same methodology described in OLID BIBREF5. Duplicate punctuation such as question and exclamation marks was normalized. After removing duplicate tweets, the dataset was comprised of 46,218 tweets of which 5,000 were randomly sampled for annotation. We used LightTag to annotate the dataset due to its simple and straightforward user interface and limitless annotations, provided by the software creators.", "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. 
The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We used LightTag to annotate the dataset due to its simple and straightforward user interface and limitless annotations, provided by the software creators.\n\nBased on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "annotation_id": [ "3780663cab87d43c7a7b80c1c4bac7efc4ff1ae4", "7fa83c95d79fd550ebd62c6023d55c0faf200192", "e6eac65cda4fe6e3cb508a170af8cc34d52e9d6e" ], "answer": [ { "evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. 
For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [], "free_form_answer": "Three, plus 2 in case of disagreement below 66%.", "highlighted_evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset.", "For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [ "three" ], "free_form_answer": "", "highlighted_evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. 
In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [ "three volunteers " ], "free_form_answer": "", "highlighted_evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "annotation_id": [ "33d014696000add449bcd01195110a4d75fef604", "5bd49b98aae469285faea9e2ec2becabf1c4d480", "d8ff6c89ede57c0c39382189ff1c24af48feb3e1" ], "answer": [ { "evidence": [ "We collected a set of 49,154 tweets. URLs, Emojis and Emoticons were removed, while usernames and user mentions were filtered as @USER following the same methodology described in OLID BIBREF5. Duplicate punctuation such as question and exclamation marks was normalized. After removing duplicate tweets, the dataset was comprised of 46,218 tweets of which 5,000 were randomly sampled for annotation. We used LightTag to annotate the dataset due to its simple and straightforward user interface and limitless annotations, provided by the software creators.", "FLOAT SELECTED: Table 1: Distribution of labels in the OGTD v1.0." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "After removing duplicate tweets, the dataset was comprised of 46,218 tweets of which 5,000 were randomly sampled for annotation. ", "FLOAT SELECTED: Table 1: Distribution of labels in the OGTD v1.0." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "FLOAT SELECTED: Table 1: Distribution of labels in the OGTD v1.0." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Distribution of labels in the OGTD v1.0." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Based on explicit annotation guidelines written in Greek and our proposal of the definition of offensive language, a team of three volunteers were asked to classify each tweet found in the dataset with one of the following tags: Offensive, Not Offensive and Spam, which was introduced to filter out spam from the dataset. Inter-annotator agreement was subsequently calculated and labels with 100% agreement were deemed acceptable annotations. In cases of disagreement, labels with majority agreement above 66% were selected as the actual annotations of the tweets in question. 
For labels with complete disagreement between annotators, one of the authors of this paper reviewed the tweets with two extra human judges, to get the desired majority agreement above 66%. Figure FIGREF6 is a confusion matrix that shows the inter-annotator agreement or reliability, statistically measured by Cohen's kappa coefficient. The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content. The final distribution of labels in the new Offensive Greek Tweet Dataset (OGTD), along with the breakdown of the data into training and testing, is showing in Table TABREF5." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The benchmark annotated dataset produced contained 4,779 tweets, containing over 29% offensive content." ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "487fde86e7bd05774a5a263878491d79325cd5af", "846ba12ce4d20447002168677c7deb820554c99c", "bf179d4c5ab6f1d67291ead6968f406f812bff11" ], "answer": [ { "evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." 
], "extractive_spans": [ " Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF)", "linear classifier with Stochastic Gradient Descent (SGDC) learning", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features", "Pooled GRU", "Stacked LSTM with Attention", "LSTM and GRU with Attention", "2D Convolution with Pooling", "GRU with Capsule", "LSTM with Capsule and Attention", "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term.", " The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." 
], "extractive_spans": [ "Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF)", "linear classifier with Stochastic Gradient Descent (SGDC) learning", "Multinomial Naïve Bayes", "Bernoulli Naïve Bayes", "Pooled GRU ", "Stacked LSTM with Attention ", "LSTM and GRU with Attention", "2D Convolution with Pooling", "GRU with Capsule", "LSTM with Capsule and Attention", "BERT" ], "free_form_answer": "", "highlighted_evidence": [ "Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. ", "The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. ", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. ", "The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24. These models has been used in HASOC 2019 and achieved a third place finish in English task and a eighth place finish in German and Hindi subtasks BIBREF26. Parameters described in BIBREF26 were used as the default parameters in order to ease the training process. The code for the deep learning has been made available on Github ." 
], "extractive_spans": [ "Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF)", "linear classifier with Stochastic Gradient Descent (SGDC) learning", "Multinomial Naïve Bayes", "Bernoulli Naïve Bayes", "Pooled GRU BIBREF25", "Stacked LSTM with Attention BIBREF25", "LSTM and GRU with Attention BIBREF25", "2D Convolution with Pooling BIBREF26", "GRU with Capsule BIBREF27", "LSTM with Capsule and Attention BIBREF26", "BERT BIBREF24" ], "free_form_answer": "", "highlighted_evidence": [ "Every classical model was considered on the condition it could take matrices as input for fitting and was trained with the default settings because of the size of the dataset. Five models were trained: Two SVMs, one with linear kernel and the other with a radial basis function kernel (RBF), both with a value of 1 in the penalty parameter C of the error term. The gamma value of the RBF SVM which indicates how much influence a single training example has, was set to 2. The third classifier trained was another linear classifier with Stochastic Gradient Descent (SGDC) learning. The gradient of the loss is estimated each sample at a time and the SGDC is updated along the way with a decreasing learning rate. The parameters for maximum epochs and the stopping criterion were defined using the default values in scikit-learn. The final classifier was two models based on the Bayes theorem: Multinomial Naïve Bayes, which works with occurrence counts, and Bernoulli Naïve Bayes, which is designed for binary features.", "Six different deep learning models were considered. All of these models have been used in an aggression detection task. The models are Pooled GRU BIBREF25, Stacked LSTM with Attention BIBREF25, LSTM and GRU with Attention BIBREF25, 2D Convolution with Pooling BIBREF26, GRU with Capsule BIBREF27, LSTM with Capsule and Attention BIBREF26 and BERT BIBREF24." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ], "nlp_background": [ "two", "two", "two", "two", "two", "two", "two" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "question": [ "What is the performance of the best model?", "What are the models tested on the dataset?", "Which method best performs on the offensive language identification task?", "Did they use crowdsourcing for the annotations?", "How many annotators did they have?", "Is the dataset balanced?", "What models do they experiment on?" ], "question_id": [ "9465d96a1368299fd3662d91aa94ba85347b4ccd", "e8c3f59313df20db0cdd49b84a37c44da849fe17", "f61268905626c0b2a715282478a5e373adda516c", "d9949dd4865e79c53284932d868ca8fd10d55e70", "de689a17b0b9fb6bbb80e9b85fb44b36b56de2fd", "5a90871856beeefaa69a1080e1b3c8b5d4b2b937", "6cb3007a09ab0f1602cdad20cc0437fbdd4d7f3e" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "language identification", "language identification", "language identification", "", "", "", "" ], "topic_background": [ "research", "research", "research", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Distribution of labels in the OGTD v1.0.", "Figure 1: Cohen’s Kappa for each pair of annotators", "Table 2: Results for offensive language detection with TF/IDF unigram features. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold).", "Table 3: Results for offensive language detection with TF/IDF bigram features. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold).", "Table 4: Results for offensive language detection with TF/IDF unigram features, POS and dependency relation tags. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold).", "Table 5: Results for offensive language detection with TF/IDF unigram features and POS tags. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold).", "Table 6: Results for offensive language detection with TF/IDF unigram features and dependency relation tags. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold).", "Table 7: Results for offensive language detection for Deep Learning models with Greek word embeddings. For each model, Precision (P), Recall (R), and F1 are reported on all classes, and weighted averages. Macro-F1 is also listed (best in bold).", "Table 8: Distribution of labels in the OGTD v2.0." ], "file": [ "3-Table1-1.png", "3-Figure1-1.png", "4-Table2-1.png", "4-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png", "4-Table6-1.png", "5-Table7-1.png", "6-Table8-1.png" ] }
[ "What is the performance of the best model?", "What are the models tested on the dataset?", "How many annotators did they have?" ]
[ [ "2003.07459-Introduction-1", "2003.07459-5-Table7-1.png" ], [ "2003.07459-Methods ::: Models ::: Classical Machine Learning Models-0", "2003.07459-Methods ::: Models ::: Deep Learning Models-0" ], [ "2003.07459-The OGTD Dataset ::: Pre-processing and annotation-1" ] ]
[ "F1 Macro of 0.89", "linear SVM, RBF SVM, linear classifier with SGDC, multinomial naive bayes, bernoulli naive bayes, pooled GRU, stacked LSTM with attention, LSTM and GRU with attention, 2d convolution with pooling, GRU with Capsule, LSTM with Capsule and attention, and BERT", "Three, plus 2 in case of disagreement below 66%." ]
61
1803.08614
MultiBooked: A Corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification
While sentiment analysis has become an established field in the NLP community, research into languages other than English has been hindered by the lack of resources. Although much research in multi-lingual and cross-lingual sentiment analysis has focused on unsupervised or semi-supervised approaches, these still require a large number of resources and do not reach the performance of supervised approaches. With this in mind, we introduce two datasets for supervised aspect-level sentiment analysis in Basque and Catalan, both of which are under-resourced languages. We provide high-quality annotations and benchmarks with the hope that they will be useful to the growing community of researchers working on these languages.
{ "paragraphs": [ [ "Sentiment analysis has become an established field with a number of subfields (aspect-level sentiment analysis, social media sentiment analysis, cross-lingual sentiment analysis), all of which require some kind of annotated resource, either to train a machine-learning based classifier or to test the performance of proposed approaches.", "Although much research into multi-lingual and cross-lingual sentiment analysis has focused on unsupervised or semi-supervised approaches BIBREF0 , BIBREF1 , BIBREF2 , these techniques still require certain resources (linked wordnets, seed lexicon) and do not generally reach the performance of supervised approaches.", "In English the state-of-the-art for binary sentiment analysis often reaches nearly 90 percent accuracy BIBREF3 , BIBREF4 , BIBREF5 , but for other languages there is a marked drop in accuracy. This is mainly due to the lack of annotations and resources in these languages. This is especially true of corpora annotated at aspect-level. Unlike document- or tweet-level annotation, aspect-level annotation requires a large amount of effort from the annotators, which further reduces the likelihood of finding an aspect-level sentiment corpus in under-resourced languages. We are, however, aware of one corpus annotated for aspects in German BIBREF6 , although German is not a particularly low-resource language.", "The movement towards multi-lingual datasets for sentiment analysis is important because many languages offer different challenges, such as complex morphology or highly productive word formation, which can not be overcome by focusing only on English data.", "The novelty of this work lies in creating corpora which cover both Basque and Catalan languages and are annotated in such a way that they are compatible with similarly compiled corpora available in a number of languages BIBREF7 . This allows for further research into cross-lingual sentiment analysis, as well as introducing the first resource for aspect-level sentiment analysis in Catalan and Basque. The corpus is available at http://hdl.handle.net/10230/33928 or https://jbarnesspain.github.io/resources/." ], [ "In English there are many datasets available for document- and sentence-level sentiment analysis across different domains and at different levels of annotation BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . These resources have been built up over a period of more than a decade and are currently necessary to achieve state-of-the-art performance.", "Corpora annotated at fine-grained levels (opinion- or aspect-level) require more effort from annotators, but are able to capture information which is not present at document- or sentence-level, such as nested opinions or differing polarities of different aspects of a single entity. In English, the MPQA corpus BIBREF13 has been widely used in fine-grained opinion research. More recently, a number of SemEval tasks have concentrated on aspect-level sentiment analysis BIBREF14 , BIBREF15 , BIBREF16 .", "The Iberian peninsula contains two official languages (Portuguese and Spanish), as well as three co-official languages (Basque, Catalan, and Galician) and several smaller languages (Aragonese, Gascon). The two official languages do have available resources for sentiment at tweet-level BIBREF17 , BIBREF18 , as well as at aspect-level BIBREF7 , BIBREF19 , BIBREF20 . The co-official languages, however, have almost none. 
The authors are aware of a small discourse-related sentiment corpus available in Basque BIBREF21 , as well as a stance corpus in Catalan BIBREF22 . These resources, however, are limited in size and scope." ], [ "In order to improve the lack of data in low-resource languages, we introduce two aspect-level sentiment datasets to the community, available for Catalan and Basque. To collect suitable corpora, we crawl hotel reviews from www.booking.com. Booking.com allows you to search for reviews in Catalan, but it does not include Basque. Therefore, for Basque we crawled reviews from a number of other websites that allow users to comment on their stay", "Many of the reviews that we found through crawling are either 1) in Spanish, 2) include a mix of Spanish and the target language, or 3) do not contain any sentiment phrases. Therefore, we use a simple language identification method in order to remove any Spanish or mixed reviews and also remove any reviews that are shorter than 7 tokens. This finally gave us a total of 568 reviews in Catalan and 343 reviews in Basque, collected from November 2015 to January 2016.", "We preprocess them through a very light normalization, after which we perform tokenization, pos-tagging and lemmatization using Ixa-pipes Agerri2014.", "Our final documents are in KAF/NAF format BIBREF23 , BIBREF24 . This is a stand-off xml format originally from the Kyoto project BIBREF23 and allows us to enrich our documents with many layers of linguistic information, such as the pos tag of a word, its lemma, whether it is a polar word, and if so, if it has an opinion holder or target. The advantage of this format is that we do not have to change the original text in any way." ], [ "For annotation, we adopt the approach taken in the OpeNER project BIBREF7 , where annotators are free to choose both the span and label for any part of the text." ], [ "In the OpeNER annotation scheme (see Table TABREF8 for a short summary), an annotator reads a review and must first decide if there is any positive or negative attitudes in the sentence. If there are, they then decide if the sentence is on topic. Since these reviews are about hotels, we constrain the opinion targets and opinion expressions to those that deal with aspects of the hotel. Annotators should annotate the span of text which refers to:", "opinion holders,", "opinion targets,", "and opinion expressions.", "If any opinion expression is found, the annotators must then also determine the polarity of the expression, which can be strong negative, negative, positive, or strong positive. As the opinion holder and targets are often implicit, we only require that each review has at least one annotated opinion expression.", "For the strong positive and strong negative labels, annotators must use clues such as adverbial modifiers ('very bad'), inherently strong adjectives ('horrible'), and any use of capitalization, repetition, or punctuation ('BAAAAD!!!!!') in order to decide between the default polarity and the strong version." ], [ "We used the KafAnnotator Tool BIBREF7 to annotate each review. This tool allows the user to select a span of tokens and to annotate them as any of the four labels mentioned in Section SECREF3 .", "The annotation of each corpus was performed in three phases: first, each annotator annotated a small number of reviews (20-50), after which they compared annotations and discussed any differences. Second, the annotators annotated half of the remaining reviews and met again to discuss any new differences. 
Finally, they annotated the remaining reviews. For cases of conflict after the final iteration, a third annotator decided between the two.", "The final Catalan corpus contains 567 annotated reviews and the final Basque corpus 343." ], [ "The reviews are typical hotel reviews, which often mention various aspects of the hotel or experience and the polarity towards these aspects. An example is shown in Example", "Statistics for the two corpora are shown in Table TABREF12 ." ], [ "Common metrics for determining inter-annotator agreement, e.g. Cohen's Kappa BIBREF25 or Fleiss' Kappa BIBREF26 , can not be applied when annotating sequences, as the annotators are free to choose which parts of a sequence to include. Therefore, we use the agr metric BIBREF13 , which is defined as: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are annotators and INLINEFORM2 and INLINEFORM3 are the set of annotations for each annotator. If we consider INLINEFORM4 to be the gold standard, INLINEFORM5 corresponds to the recall of the system, and precision if INLINEFORM6 is the gold standard. For each pair of annotations, we report the average of the INLINEFORM7 metric with both annotators as the temporary gold standard, DISPLAYFORM0 ", "Perfect agreement, therefore, is 1.0 and no agreement whatsoever is 0.0. Similar annotation projects BIBREF13 report INLINEFORM0 scores that range between 0.6 and 0.8 in general.", "For polarity, we assign integers to each label (Strong Negative: 0, Negative: 1, Positive: 2, Strong Positive: 3). For each sentence of length INLINEFORM0 , we take the mean squared error (MSE), DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are the sets of annotations for the sentence in question. This approach punishes larger discrepancies in polarity more than small discrepancies, i.e. if annotator 1 decides an opinion expression is strong negative and annotator two that the same expression is positive, this will be reflected in a larger MSE score than if annotator 2 had chosen negative. Perfect agreement between annotators would lead to a MSE of 0.0, with the maximum depending on the length of the phrase. For a phrase of ten words, the worst MSE possible (assuming annotator 1 labeled all words strong positive and annotator 2 labeled them strong negative) would be a 9.0. We take the mean of all the MSE scores in the corpus.", "Inter-annotator agreement is reported in Table TABREF17 .", "The inter-annotator agreement for target and expressions is high and in line with previous annotation efforts BIBREF13 , given the fact that annotators could choose any span for these labels and were not limited to the number of annotations they could make. This reflects the clarity of the guidelines used to guide the annotation process.", "The agreement score for opinion holders is somewhat lower and stems from the fact that there were relatively few instances of explicit opinion holders. Additionally, Catalan and Basque both have agreement features for verbs, which could be considered an implicit mention of the opinion holder. This is not always clear, however. Finally, the mean squared error of the polarity scores shows that annotators generally agree on where and which polarity score should be given. Again, the mean squared error in this annotation scheme requires both annotators to choose the same span and the same polarity to achieve perfect agreement." ], [ "During annotation, there were certain sentences which presented a great deal of problems for the annotators. 
Many of these are difficult because of 1) nested opinions, 2) implicit opinions reported only through the presence or absence of certain aspects, or 3) the difficulty to identify the span of an expression. Here, we give examples of each difficulty and detail how these were resolved during the annotation process.", "In the Basque sentence in Example UID18 , we can see that there are two distinct levels of aspects. First, the aspect `hotel', which has a positive polarity and then the sub-aspect `workers'. We avoid the problem of deciding which is the opinion target by treating these as two separate opinions, whose targets are `hotel' and `workers'.", "If there was an implicit opinion based on the presence or absence of a desirable aspect, such as the one seen in Example UID19 , we asked annotators to identify the phrase that indicates presence or absence, i.e. `there was', as the opinion phrase.", "Finally, in order to improve overlap in span selection, we instructed annotators to choose the smallest span possible that retains the necessary information. Even after several iterations, however, there were still discrepancies with difficult examples, such as the one shown in Example UID20 , where the opinion target could be either `attention', `the attention', or `the attention that the staff gave'." ], [ "In order to provide a simple baseline, we frame the extraction of opinion holders, targets, and phrases as a sequence labeling task and map the NAF tags to BIO tags for the opinions in each review. These tags serve as the gold labels which will need to be predicted at test time. We also perform classification of the polarity of opinion expressions.", "For the extraction of opinion holders, targets, and expressions we train a Conditional Random Field (CRF) on standard features for supervised sequence labeling (word-, subword-, and part-of-speech information of the current word and previous words). For the classification of the polarity of opinion expressions, we use a Bag-of-Words approach to extract features and then train a linear SVM classifier", "For evaluation, we perform a 10-fold cross-validation with 80 percent of the data reserved for training during each fold. For extraction and classification, we report the weighted INLINEFORM0 score. The results of the benchmark experiment (shown in Table TABREF23 ) show that these simple baselines achieve results which are somewhat lower but still comparable to similar tasks in English BIBREF5 . The drop is not surprising given that we use a relatively simple baseline system and due to the fact that Catalan and Basque have richer morphological systems than English, which were not exploited." ], [ "In this paper we have presented the MultiBooked corpus – a corpus of hotel reviews annotated for aspect-level sentiment analysis available in Basque and Catalan. The aim of this annotation project is to allow researchers to enable research on supervised aspect-level sentiment analysis in Basque and Catalan, as well as provide useful data for cross- and multi-lingual sentiment analysis. We also provide inter-annotator agreement scores and benchmarks, as well as making the corpus available to the community." ], [ "lrec lit" ] ], "section_name": [ "Introduction", "Related Work", "Data Collection", "Annotation", "Guidelines", "Process", "Dataset Characteristics", "Agreement Scores", "Difficult Examples", "Benchmarks", "Conclusion", "Language Resource References" ] }
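The Benchmarks paragraphs above describe a CRF over standard token features for extracting opinion holders, targets and expressions, and a bag-of-words linear SVM for polarity, both scored with weighted F1 under 10-fold cross-validation. A minimal sketch of the polarity part, assuming scikit-learn and hypothetical `expressions`/`polarities` inputs (it is not the authors' implementation), could look like:

```python
# Sketch of the polarity-classification baseline: bag-of-words features and a linear SVM,
# scored with weighted F1 under 10-fold cross-validation.  Illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def polarity_baseline(expressions, polarities):
    # `expressions` are opinion-expression strings, `polarities` their labels (placeholders)
    pipeline = make_pipeline(CountVectorizer(), LinearSVC())
    scores = cross_val_score(pipeline, expressions, polarities,
                             cv=10, scoring="f1_weighted")
    return scores.mean()
```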
{ "answers": [ { "annotation_id": [ "0ffef76adfb6c2d887c3f296613d481fb44f3cd2", "4a6b081b035adb2cec15ad1841de6385df7c48de", "619b4781266cdf38f5a66d889d2e4b74c0228f67" ], "answer": [ { "evidence": [ "In order to improve the lack of data in low-resource languages, we introduce two aspect-level sentiment datasets to the community, available for Catalan and Basque. To collect suitable corpora, we crawl hotel reviews from www.booking.com. Booking.com allows you to search for reviews in Catalan, but it does not include Basque. Therefore, for Basque we crawled reviews from a number of other websites that allow users to comment on their stay" ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Booking.com allows you to search for reviews in Catalan, but it does not include Basque. Therefore, for Basque we crawled reviews from a number of other websites that allow users to comment on their stay" ], "unanswerable": false, "yes_no": false }, { "evidence": [ "In order to improve the lack of data in low-resource languages, we introduce two aspect-level sentiment datasets to the community, available for Catalan and Basque. To collect suitable corpora, we crawl hotel reviews from www.booking.com. Booking.com allows you to search for reviews in Catalan, but it does not include Basque. Therefore, for Basque we crawled reviews from a number of other websites that allow users to comment on their stay", "Many of the reviews that we found through crawling are either 1) in Spanish, 2) include a mix of Spanish and the target language, or 3) do not contain any sentiment phrases. Therefore, we use a simple language identification method in order to remove any Spanish or mixed reviews and also remove any reviews that are shorter than 7 tokens. This finally gave us a total of 568 reviews in Catalan and 343 reviews in Basque, collected from November 2015 to January 2016." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "To collect suitable corpora, we crawl hotel reviews from www.booking.com. Booking.com allows you to search for reviews in Catalan, but it does not include Basque. Therefore, for Basque we crawled reviews from a number of other websites that allow users to comment on their stay\n\nMany of the reviews that we found through crawling are either 1) in Spanish, 2) include a mix of Spanish and the target language, or 3) do not contain any sentiment phrases. Therefore, we use a simple language identification method in order to remove any Spanish or mixed reviews and also remove any reviews that are shorter than 7 tokens. This finally gave us a total of 568 reviews in Catalan and 343 reviews in Basque, collected from November 2015 to January 2016." ], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "5b9fe5d6bb7265e432c36d03f86c9d10090ecc95", "7b14d1e2ff363459c184548d479fb019638f0b6a", "c098b15751ef5b5b4a0663b13894740d2b0533f9" ], "answer": [ { "evidence": [ "Many of the reviews that we found through crawling are either 1) in Spanish, 2) include a mix of Spanish and the target language, or 3) do not contain any sentiment phrases. 
Therefore, we use a simple language identification method in order to remove any Spanish or mixed reviews and also remove any reviews that are shorter than 7 tokens. This finally gave us a total of 568 reviews in Catalan and 343 reviews in Basque, collected from November 2015 to January 2016." ], "extractive_spans": [], "free_form_answer": "911", "highlighted_evidence": [ "This finally gave us a total of 568 reviews in Catalan and 343 reviews in Basque, collected from November 2015 to January 2016." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The final Catalan corpus contains 567 annotated reviews and the final Basque corpus 343." ], "extractive_spans": [ "The final Catalan corpus contains 567 annotated reviews and the final Basque corpus 343." ], "free_form_answer": "", "highlighted_evidence": [ "The final Catalan corpus contains 567 annotated reviews and the final Basque corpus 343." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The final Catalan corpus contains 567 annotated reviews and the final Basque corpus 343." ], "extractive_spans": [], "free_form_answer": "910", "highlighted_evidence": [ "The final Catalan corpus contains 567 annotated reviews and the final Basque corpus 343." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "114ad0284b0e1ed0d32f344b9db46d2e056a77ca", "3251d28ef0a2ab205588c5a541befbcfb8aae59e", "993ca956d8ae23eb9bf3a96c1f7299de217607c7" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do any of their reviews contain translations for both Catalan and Basque?", "What is the size of their published dataset?", "How many annotators do they have for their dataset?" ], "question_id": [ "211c242c028b35bb9cbd5e303bb6c750f859fd34", "9b05d5f723a8a452522907778a084b52e27fd924", "21175d8853fd906266f884bced85c598c35b1cbc" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Simplified annotation guidelines.", "Figure 1: An opinion annotation following the annotation scheme detailed in Section 4.1..", "Table 2: Corpus Statistics", "Table 3: Inter-annotator agreement scores. AvgAgr score is reported for targets, expressions and holders and averaged mean squared error is reported for polarity.", "Table 4: Weighted F1 scores for extraction of opinion targets, expressions and holders, as well as the weighted F1 for classification of polarity." ], "file": [ "2-Table1-1.png", "2-Figure1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png" ] }
[ "What is the size of their published dataset?" ]
[ [ "1803.08614-Process-2", "1803.08614-Data Collection-1" ] ]
[ "910" ]
62
1909.08250
Natural Language Generation for Non-Expert Users
Motivated by the difficulty in presenting computational results, especially when the results are a collection of atoms in a logical language, to users who are not proficient in computer programming and/or the logical representation of the results, we propose a system for automatic generation of natural language descriptions for applications targeting mainstream users. Unlike many earlier systems with the same aim, the proposed system does not employ templates for the generation task. It assumes that there exist some natural language sentences in the application domain and uses this repository for the natural language description. It does not, however, require the large corpus that is often needed in machine learning approaches. The system consists of two main components. The first one analyzes the sentences and constructs a Grammatical Framework (GF) grammar for the given sentences; it is implemented using the Stanford parser and an answer set program. The second component is for sentence construction and relies on the GF library. The paper includes two use cases to demonstrate the capability of the system. As the sentence construction is done via GF, the paper also includes a use case evaluation showing that the proposed system could be utilized in addressing a challenge to create an abstract Wikipedia, which was recently discussed in the BlueSky session of the 2018 International Semantic Web Conference.
{ "paragraphs": [ [ "Natural language generation (NLG) has been one of the key topics of research in natural language processing, which was highlighted by the huge body of work on NLG surveyed in BIBREF0, BIBREF1. With the advances of several devices capable of understanding spoken language and conducting conversation with human (e.g., Google Home, Amazon Echo) and the shrinking gap created by the digital devices, it is not difficult to foresee that the market and application areas of NLG systems will continue to grow, especially in applications whose users are non-experts. In such application, a user often asks for certain information and waits for the answer and a NLG module would return the answer in spoken language instead of text such as in question-answering systems or recommendation systems. The NLG system in these two applications uses templates to generate the answers in natural language for the users. A more advanced NLG system in this direction is described in BIBREF2, which works with ontologies annotated using the Attempto language and can generate a natural language description for workflows created by the systems built in the Phylotastic project. The applications targeted by these systems are significantly different from NLG systems, whose main purpose is to generate high-quality natural language description of objects or reports, such as those reported in the recent AAAI conference BIBREF3, BIBREF4, BIBREF5.", "The present paper is motivated by the need to generate natural language description of computational results to non-expert users such as those developed in the Phylotastic project. In this project, the users are experts in evolutionary biology but are none experts in ontologies and web services. When a user places a request, he/she will receive a workflow consisting of web services, whose inputs and outputs are specified by instances of classes in the ontologies working with web services, as well as the ordering and relationships between the services. To assist the user in understanding the workflow, a natural language description of the workflow is generated. In order to accomplish the task, the NLG system in the Phylotastic project proposes to annotate elements of the ontologies using Attempto, a simple subset of English with precisely defined syntax and semantics.", "In this paper, we propose a system that addresses the limitation of the system discussed in the Phylotastic project BIBREF2. Specifically, we assume that the annotations given in an ontology are natural language sentences. This is a reasonable assumption given that the developers of an ontology are usually those who have intimate knowledge about entities described in the ontology and often have some sort of comments about classes, objects, and instances of the ontology. We then show that the system is very flexible and can be used for the same purpose with new ontologies.", "The rest of the paper is organized as follows. Section SECREF2 briefly reviews the basics of Grammatical Framework (GF)BIBREF6. Section SECREF3 describes the main modules of the system. Section SECREF4 includes two use cases of the system using an available ontologies against in the context of reasoning about ontologies. Specifically, it compares with the system used in the Phylotastic project and an ontology about people. This section also contains a use case that highlights the versatility of the proposed system by addressing a challenge to create an abstract Wikipedia BIBREF7. Related works are discussed in Section SECREF5. 
Section SECREF6 concludes the paper." ], [ "The Grammatical Framework (GF) BIBREF6 is a system used for working with grammars. The GF Resource Grammar Library (RGL), which covers the syntax of various languages, is the standard library for GF. A GF program has two main parts. The first part is the abstract syntax, which defines what meanings can be expressed by a grammar. The abstract syntax defines categories (i.e., types of meaning) and functions (i.e., meaning-building components). An example of an abstract syntax:", "Here, Message, People, Action and Entity are types of meanings. The startcat flag states that Message is the default start category for parsing and generation. simple_sent is a function accepting three parameters, of types People, Action and Entity. This function returns a meaning of the Message category. Intuitively, each function in the abstract syntax represents a rule in a grammar. The combination of rules used to construct a meaning type can be seen as a syntax tree.", "The second part is composed of one or more concrete syntax specifications. Each concrete syntax defines the representation of meanings in one output language. For example, to demonstrate the idea that one meaning can be represented by different concrete syntaxes, we create two concrete syntaxes for two different languages: English and Italian. To translate a sentence to different languages, we only need to provide the strings representing each word in the corresponding languages. The GF libraries take responsibility for concatenating the provided strings according to the language's grammar to create a complete sentence, which is the representation of the meaning in the target language. The corresponding concrete syntaxes that map functions in the abstract grammar above to strings in English and in Italian are:", "In these concrete syntaxes, the linearization type definition (lincat) states that Message, People, Action and Entity are of types Cl (clause), NP (noun phrase), V2 (two-place verb), and NP respectively. Linearization definitions (lin) indicate what strings are assigned to each of the meanings defined in the abstract syntax. To avoid repeating string declarations, the operator (oper) section defines placeholders for strings that can be used in linearizations. The mkNP, mkN, mkV2, etc. are standard constructors from the ConstructorsEng/Jpn library which return an object of type NP, N or V2 respectively.", "GF has been used in a variety of applications, such as query-answering systems, voice communication, language learning, text analysis and translation, natural language generation BIBREF8, BIBREF9, and automatic translation.", "The translation from English to Italian can be performed as follows in the GF API:", "The above command line produces a syntax tree of the sentence “Bill plays soccer” and then turns that tree into a PeopleIta sentence (in Italian), which is displayed in the second line. Figure FIGREF6 shows that the meaning in the abstract syntax is represented in Japanese and in Italian, i.e. the two strings represent the same meaning." ], [ "To generate a sentence, we need a sentence structure and vocabularies. Our system is developed to emulate the process of a person learning a new language, who has to make guesses to understand new sentences from time to time.
For example, someone who understands the sentence “Bill plays a game” would not fully understand the sentence “Bill plays a popular board game” without knowing the meaning of “popular” and “board game” but could infer that the latter sentence indicates that its subject plays a type of game.", "The overall design of our system is given in Figure FIGREF7. Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components: understanding sentences and generating a GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated by the encoder and produces a comprehensive grammar for the whole paragraph." ], [ "The sentence structure recognition process involves two modules: a natural language processing (NLP) module and logical reasoning on the result of the NLP module. In this paper, we make use of the Stanford Parser tools described in BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14.", "The NLP module tokenizes the input free text to produce a dependency-based parse tree and part-of-speech (POS) tags. The dependency-based parse tree and the POS tags are then transformed into an answer set program (ASP) BIBREF15 which contains only facts. Table TABREF13 shows the transformation of the result of the NLP module into an ASP program for the sentence “Bill plays a game”. In this table, nsubj, det, dobj and punct denote relations in the dependency-based parse tree, and mean nominal subject, determiner, direct object and punctuation respectively. A full description of all relations in a dependency-based parse tree can be found on the Universal Dependency website. The second set of notations consists of the POS tags PRP, VBP, DT and NN, corresponding to pronoun, verb, determiner and noun. Readers can find the full list of POS tags in the Penn Treebank Project.", "From the collection of dependency atoms from the dependency-based parse tree, we determine the structure of a sentence using an ASP program, called $\\Pi _1$ (Listing ).", "Each of the rules above can be read as: if the right-hand side is true, then the left-hand side must be true. These rules define five possible structures of a sentence represented by the atom structure(x,y). $x$ and $y$ in the atom structure(x,y) denote the type of the structure and the number of dependency relations applied to activate the rule generating this atom, respectively. We refer to $y$ as the $i$-value of the structure. For example, $structure(1,1)$ will be recognized if the nsubj relation is in the dependency-based parse tree; $structure(3,3)$ needs 3 dependency relations to be activated: nsubj, xcomp and dobj. We often use structure #$x$ to indicate a structure of type $x$.", "Together with the collection of the atoms encoding the relations in the dependency-based parse tree, $\\Pi _1$ generates several atoms of the form $structure(x,y)$ for a sentence. Among all these atoms, an atom with the highest $i$-value represents the structure constructed using the highest number of dependency relations. Hence, that structure is the most informative structure that is recognized for the sentence. Observe that $structure(1,1)$ is the most simplified structure of any sentence."
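As an illustration of the transformation described above (Table TABREF13), the dependency relations and POS tags for “Bill plays a game” can be written out as ASP facts before the $\\Pi _1$ program reasons over them. The predicate names and argument order below are assumptions for illustration; the paper's exact encoding appears only in Table TABREF13 and the Listing, which are not reproduced in this record.

```python
# Hypothetical sketch of converting a dependency parse and POS tags into ASP facts.
# Schema rel(head, dependent) and pos(index, tag) is an assumption, not the paper's.

def to_asp_facts(dependencies, pos_tags):
    """dependencies: (relation, head_index, dependent_index); pos_tags: (index, tag)."""
    facts = ["{}({},{}).".format(rel, head, dep) for rel, head, dep in dependencies]
    facts += ['pos({},"{}").'.format(idx, tag.lower()) for idx, tag in pos_tags]
    return "\n".join(facts)

# "Bill plays a game" with 1-based word positions, as discussed in the text
deps = [("nsubj", 2, 1), ("det", 4, 3), ("dobj", 2, 4)]
tags = [(1, "PRP"), (2, "VBP"), (3, "DT"), (4, "NN")]
print(to_asp_facts(deps, tags))
```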
], [ "The goal of this step is to identify the relationship between elements of a sentence structure and chunks of words in a sentence from the POS tags and the dependency-based parse tree. For example, the sentence “Bill plays a game” is encoded by a structure #2 and we expect that Bill, plays, and game correspond to the subject, verb, and object, respectively.", "We begin with recognizing the main words (components) that play the most important roles in the sentence based on a given sentence structure. This is achieved by program $\\Pi _2$ (Listing ). The first four rules of $\\Pi _2$ determine the main subject and verb of the sentence whose structure is #1, #2, #3, or #5. Structure #4 requires a special treatment since the components following tobe can be of different forms. For instance, in “Cathy is gorgeous,” the part after tobe is an adjective, but in “Cathy is a beautiful girl,” the part after tobe is a noun, though, with adjective beautiful. This is done using the four last rules of $\\Pi _2$.", "The result of program $\\Pi _2$ is an one-to-one mapping of some of the words in the sentence into the importaint components of a sentence, called main components, i.e. subject, object and verb. The mapping is constructed by using the core arguments in Universal Dependency Relations . Since not every word in the sentence is in a core argument relation, there are some words in the sentence that are not in the domain of the mapping that $\\Pi _2$ produces. We denote these words are complement components. To identify these words, we encode the Non-core dependents and Nominal dependents from Universal Dependency Relations into the set of rules in program $\\Pi _3$.", "Program $\\Pi _3$ (Listing ), together with the atoms extracted from the dependency-based parse tree such as $compound(P,N)$ ($N$ is compound noun at the position $P$ in the sentence), $amod(P,J)$ ($J$ is an adjective modifier), etc., is used to identify the complement components of the main components computed by $\\Pi _2$ while maintaining the structure of the sentence created by $\\Pi _1$. For example, a complement of a noun could be another noun (as “board” in “board game”), or an adjective (as “popular” in “popular board game”), or a preposition (as “for adults” in “board game for adults”).", "The input of Program $\\Pi _3$ is the position ($pos$) of the word in the sentence. Program $\\Pi _3$ is called whenever there is a new complement component discovered. That way of recursive calls is to identify the maximal chunk of the words that support the main components of the sentence. The result of this module is a list of vocabularies for the next steps." ], [ "The goal of the encoder is to identify appropriate GF rules for the construction of a GF grammar of a sentence given its structure and its components identified in the previous two modules. This is necessary since a sentence can be encoded in GF by more than one set of rules; for example, the sentence “Bill wants to play a game” can be encoded by the rules", "Bill $\\rightarrow $ NP, want $\\rightarrow $ VV, play $\\rightarrow $ V2, game $\\rightarrow $ NP and one of the sets of GF rules in the table below:", "", "In GF, NP, VV, V2, VP, and Cl stand for noun phrase, verb-phrase-complement verb, two-place verb, verb phrase and clause, respectively. Note that although the set of GF grammatical rules can be used to construct a constituency-based parse tree , the reverse direction is not always true. 
To the best of our knowledge, there exists no algorithm for converting a constituency-based parse tree to a set of GF grammar rules. We therefore need to identify the GF rules for each sentence structure.", "In our system, a GF rule is assigned to a structure initially (Table TABREF19). Each rule in Table TABREF19 represents the first level of the constituency-based parse tree. It acts as the coordinator for all other succeeding rules.", "Given the seed components identified in Section SECREF15 and the above GF rules, a GF grammar for each sentence can be constructed. However, this grammar can only be used to generate fairly simple sentences. For example, for the sentence “Bill plays a popular board game with his close friends.”, a GF grammar for structure #2 can be constructed, which can only generate the sentence “Bill plays game.” because it does not contain any complement components identified in Section SECREF15. Therefore, we assign a set of GF rules for the construction of each parameter in the GF rules in Table TABREF19. The set of GF rules has to follow two conventions. The first one is that, after applying the set of rules to some components of the sentence, the type of the production is one of the types in Table TABREF19, e.g. $NP$, $VP$, $Cl$, $V2$, .... The second convention is that the GF encoder will select the rules in order from top to bottom in Table TABREF20. Note that the encoder always has information about the type of input and output of the rule it is looking for.", "For instance, “game” is the object (a main component), and we know that we have to construct “game” in the resulting GF grammar as an NP (noun phrase). Program $\\Pi _2$ identifies that there are two complement components for the word “game”, which are “board” and “popular”, a noun and an adjective respectively. The GF encoder then selects the set of rules: N $\\rightarrow $ N $\\rightarrow $ CN and A $\\rightarrow $ AP to create the common noun “board game” and the adjective phrase first. The next rule is AP $\\rightarrow $ CN $\\rightarrow $ CN. The last rule to be applied is CN $\\rightarrow $ NP. The selection is easily decided since the input and the output of the rules are pre-determined, and there is no ambiguity in the selection process.", "The encoder uses the GF rules and the components identified by the previous subsections to produce different constructors for different components of a sentence. A part of the output of the GF encoder for the object “game” is", "The encoder will also create the operators that will be included in the oper section of the GF grammar for supporting the new constructors. For example, the following operators will be generated for serving the Game constructor above:" ], [ "The GF Grammar Exporter has the simplest job among all modules in the system. It creates a GF program for a paragraph using the GF grammars created for the sentences of the paragraph. By taking the union of the respective elements of the grammars for the individual sentences, i.e., categories, functions, linearizations and operators, the Grammar Exporter groups them into the categories, functions, linearizations and operators of the final grammar." ], [ "We describe our method of generating natural language in two applications. The first application is to generate a natural language description for workflows created by the system built in the Phylotastic project described in BIBREF2.
Instead of requiring that the ontologies are annotated using Attempto, we use natural language sentences to annotate the ontologies. To test the feasibility of the approach, we also conduct another use case with a second ontology, which is entirely different from the ontologies used in the Phylotastic project. This ontology is about people and includes descriptions of certain classes.", "The second application targets the challenge of creating an abstract Wikipedia from the BlueSky session of the 2018 International Semantic Web Conference BIBREF7. We create an intermediate representation that can be used to translate the original article in English to another language. In this use case, we translate the intermediate representation back to English and measure how the translated version stacks up against the original one. We assess the generation quality automatically with BLEU-3 and ROUGE-L (F measure). The BLEU BIBREF16 and ROUGE BIBREF17 algorithms are chosen to evaluate our generator since the central idea of both metrics is “the closer a machine translation is to a professional human translation, the better it is”; thus, they are well-aligned with our use cases' purpose. In short, the higher the BLEU and ROUGE scores are, the more similar the hypothesis text and the reference text are. In our use case, the hypothesis for BLEU and ROUGE is the English content generated from the intermediate representation, and the reference text is the original text from Wikipedia." ], [ "As described in BIBREF2, the authors' system retrieves a set of atoms from an ASP program, such as those in Listing where phylotastic FindScientificNamesFromWeb GET was shortened to service, propagates the atoms, and constructs a set of sentences having a structure similar to the sentence “The input of phylotastic FindScientificNamesFromWeb GET is a web link. Its outputs are a set of species names and a set of scientific names”. In this sentence, phylotastic FindScientificNamesFromWeb GET is the name of the service involved in the workflow of the Phylotastic project. All of the arguments of the atoms above are the names of classes and instances from the Phylotastic ontology.", "We replace the original Attempto annotations with the natural language annotations as in Table TABREF24 and test them with our system.", "With the same set of atoms as in Listing , our system generates the following description: “Input of phylotastic FindScientificNamesFromWeb GET is web link. Type of web link is url. Output of phylotastic FindScientificNamesFromWeb GET is scientific names. Output of phylotastic FindScientificNamesFromWeb GET is species names. Type of scientific names is names. Type of species name is names.”.", "We also test our system with the people ontology noted above. We extract all comments about people and replace compound sentences with simple sentences, e.g., “Mick is male and drives a white van” is replaced by the two sentences “Mick is male” and “Mick drives a white van.”, to create a collection of sample sentences. We then use our system to generate a GF program which is used to generate sentences for RDF tuples. Sample outputs for some tuples are in Table TABREF25. This shows that for targeted applications, our system could do a reasonable job." ], [ "Since our system creates a GF program for a set of sentences, it could be used as an intermediate representation of a paragraph. This intermediate representation could be used by GF for automatic translation, as GF is well-suited for cross-language translation.
On the other hand, we need to assess whether the intermediate representation is meaningful. This use case aims at checking the adequacy of the representation. To do so, we generate the English sentences from the GF program and evaluate the quality of these sentences against the original ones. We randomly select 5 articles from 3 Wikipedia portals: People, Mathematics and Food & Drink.", "With the small set of rules introduced in this paper to recognize sentence structure, there would be very few 4-grams in the generated text that appear in the original Wikipedia corpus. Therefore, we use BLEU-3 with an equal weight distribution instead of BLEU-4 to assess the generated content. Table TABREF27 summarizes the number of assessable sentences from our system. Out of 62 sentences from the 3 portals, the system cannot determine the structure of 2 sentences in Mathematics due to their complexity. This low number of failures shows that our 5 proposed sentence structures effectively act as a lower bound for the sentence recognition module.", "In terms of quality, Table TABREF28 shows the average BLEU and ROUGE scores for each portal. Note that the average BLEU score is calculated only on BLEU-assessable sentences, while the average ROUGE score is calculated on the sentences whose structure can be recognized and encoded by our system. We note that the BLEU or ROUGE score might not be sufficiently high for a good quality translation. We believe that two reasons contribute to this low score. First, the present system uses fairly simple sentence structures. Second, it does not consider the use of relative clauses to enrich the sentences. This feature will be added to the next version of the system.", "Table TABREF32 summarizes the result of this use case. On the left are the paragraphs extracted from the Wikipedia pages about Rice in Food & Drink, Decimal in Mathematics, and Alieu Ebrima Cham Joof from People. As we can see, the main points of the paragraphs are maintained." ], [ "The systems developed in BIBREF18, BIBREF19, BIBREF3 use statistical generation methods to produce descriptions of tables or explanations and recommendations from users' reviews of an item. All three systems are capable of generating high-quality descriptions and/or explanations. Compared to these systems, our system does not use a statistical generation method. Instead, we use Grammatical Framework for the generation task. A key difference between these systems and our system lies in the requirement of a large corpus of text in a specific domain for the training and generation of these systems. Our system can work with very limited data and a wide range of domains.", "Another method for generating natural language explanations for a question-answering system is proposed in BIBREF20, BIBREF4. BIBREF20 (BIBREF20) describes a system that can give reasonable and supportive evidence for the answer to a question asked about an image, while BIBREF4 (BIBREF4) generates explanations for scheduling problems using argumentation. BIBREF21 (BIBREF21) use ASP to develop a system answering questions in the do-it-yourself domain. These papers use templates to generate answers. The GF program generated by our system, which is used for the NLG task, is automatically created from the provided input.", "The sophisticated system presented by BIBREF5 translates both the question and the given natural language text to a logical representation, and uses logical reasoning to produce the answer.
Our system is similar to their system in that both employ recent developments in NLP in solving NLG problems." ], [ "We propose a system, implemented using answer set programming (ASP) and Grammatical Framework (GF), for the automatic generation of natural language descriptions in applications targeting mainstream users. The system does not require a large corpus for the generation task and can be used in different types of applications.", "In the first type of applications, the system can work with annotated ontologies to translate a set of atoms, representing the answer to a query to the ontology, to a set of sentences. To do so, the system extracts the annotations related to the atoms in the answer and creates a GF program that is then used to generate a natural language description of the given set of atoms. In the second type of applications, the system receives a paragraph of text and generates an intermediate representation, as a GF program, for the paragraph, which can be used for different purposes such as cross-translation, addressing a need identified in BIBREF7.", "Our use cases with different ontologies and Wikipedia portals provide encouraging results. They also point to possible improvements that we plan to introduce in the next version of the system. We will focus on processing relative clauses and enriching the set of sentence structures, especially for compound and complex sentences." ] ], "section_name": [ "Introduction", "Background: Grammatical Framework", "Method", "Method ::: Sentence Structure Recognition", "Method ::: Sentence Components Recognition", "Method ::: GF Grammar Encoder", "Method ::: GF Grammar Exporter", "Experiments", "Experiments ::: NLG for Annotated Ontologies", "Experiments ::: Intermediate Representation for Wiki Pages", "Related Works", "Conclusions and Future Work" ] }
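The Wiki-pages use case above scores the regenerated English text with BLEU-3 (equal weights) and ROUGE-L (F measure) against the original Wikipedia text. A small sketch of such an evaluation is shown below; the choice of NLTK and the rouge-score package, and the smoothing function, are assumptions added here for illustration, since the record does not say which implementations were used.

```python
# Sketch of BLEU-3 (equal weights) and ROUGE-L (F measure) scoring between a regenerated
# hypothesis and the original reference text.  Library choices are assumptions.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def evaluate(reference: str, hypothesis: str):
    bleu3 = sentence_bleu([reference.split()], hypothesis.split(),
                          weights=(1/3, 1/3, 1/3),
                          smoothing_function=SmoothingFunction().method1)
    rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, hypothesis)["rougeL"].fmeasure
    return bleu3, rouge_l
```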
{ "answers": [ { "annotation_id": [ "102eddfaa57b6458e9a4add52953d3f32966d55a", "40482b576a1e0dd1544488d5be40da0d4882df79", "b6e751bfdd0a494a762d4173663d031d2b63d3f9" ], "answer": [ { "evidence": [ "The overall design of our system is given in Figure FIGREF7. Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph." ], "extractive_spans": [ "Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation" ], "free_form_answer": "", "highlighted_evidence": [ "Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To generate a sentence, we need a sentence structure and vocabularies. Our system is developed to emulate the process of a person learning a new language and has to make guesses to understand new sentences from time to time. For example, someone, who understands the sentence “Bill plays a game” would not fully understand the sentence “Bill plays a popular board game” without knowing the meaning of “popular” and “board game” but could infer that the latter sentence indicates that its subject plays a type of game.", "The overall design of our system is given in Figure FIGREF7. Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph." 
], "extractive_spans": [ "Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation" ], "free_form_answer": "", "highlighted_evidence": [ "To generate a sentence, we need a sentence structure and vocabularies.", "Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To generate a sentence, we need a sentence structure and vocabularies. Our system is developed to emulate the process of a person learning a new language and has to make guesses to understand new sentences from time to time. For example, someone, who understands the sentence “Bill plays a game” would not fully understand the sentence “Bill plays a popular board game” without knowing the meaning of “popular” and “board game” but could infer that the latter sentence indicates that its subject plays a type of game.", "The overall design of our system is given in Figure FIGREF7. Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph.", "Method ::: Sentence Structure Recognition", "The sentence structure recognition process involves 2 modules: natural language processing (NLP) module and logical reasoning on result from NLP module. In this paper, we make use of the Stanford Parser tools described in BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14", "The NLP module tokenizes the input free text to produce a dependency-based parse tree and part-of-speech tag (POS tag). The dependency-based parse tree and the POS tag are then transform into an answer set program (ASP) BIBREF15 which contains only facts. Table TABREF13 shows the transformation of the result of NLP module into an ASP program for the sentence “Bill plays a game”. In this table, nsubj, det, dobj and punct denote relations in the dependency-based parse tree, and mean nominal subject, determiner, direct object and punctuation respectively. Full description of all relations in a dependency-based parse tree can be found in the Universal Dependency website. The second set of notations are the POS tag PRP, VBP, DT and NN corresponding to pronoun, verb, determiner and noun. 
Readers can find the full list of POS tag in Penn Treebank Project.", "From the collection of the dependency atoms from the dependency-based parse tree, we determine the structure of a sentence using an ASP program, called $\\Pi _1$ (Listing ).", "Each of the rule above can be read as if the right-hand side is true then the left-hand side must be true. These rules define five possible structures of a sentence represented by the atom structure(x,y). $x$ and $y$ in the atom structure(x,y) denote the type of the structure and the number of dependency relations applied to activate the rule generating this atom, respectively. We refer to $y$ as the $i$-value of the structure. For example, $structure(1,1)$ will be recognized if the nsubj relation is in the dependency-based parse tree; $structure(3,3)$ needs 3 dependency relations to be actived: nsubj, xcomp and dobj. We often use structure #$x$ to indicate a structure of type $x$.", "Together with the collection of the atoms encoding the relations in the dependency-based parse tree, $\\Pi _1$ generates several atoms of the form $structure(x,y)$ for a sentence. Among all these atoms, an atom with the highest $i$-value represents the structure constructed using the highest number of dependency relations. And hence, that structure is the most informative structure that is recoginized for the sentence. Observe that $structure(1,1)$ is the most simplified structure of any sentence.", "Method ::: Sentence Components Recognition", "The goal of this step is to identify the relationship between elements of a sentence structure and chunks of words in a sentence from the POS tags and the dependency-based parse tree. For example, the sentence “Bill plays a game” is encoded by a structure #2 and we expect that Bill, plays, and game correspond to the subject, verb, and object, respectively.", "We begin with recognizing the main words (components) that play the most important roles in the sentence based on a given sentence structure. This is achieved by program $\\Pi _2$ (Listing ). The first four rules of $\\Pi _2$ determine the main subject and verb of the sentence whose structure is #1, #2, #3, or #5. Structure #4 requires a special treatment since the components following tobe can be of different forms. For instance, in “Cathy is gorgeous,” the part after tobe is an adjective, but in “Cathy is a beautiful girl,” the part after tobe is a noun, though, with adjective beautiful. This is done using the four last rules of $\\Pi _2$.", "The result of program $\\Pi _2$ is an one-to-one mapping of some of the words in the sentence into the importaint components of a sentence, called main components, i.e. subject, object and verb. The mapping is constructed by using the core arguments in Universal Dependency Relations . Since not every word in the sentence is in a core argument relation, there are some words in the sentence that are not in the domain of the mapping that $\\Pi _2$ produces. We denote these words are complement components. 
To identify these words, we encode the Non-core dependents and Nominal dependents from Universal Dependency Relations into the set of rules in program $\\Pi _3$.", "Program $\\Pi _3$ (Listing ), together with the atoms extracted from the dependency-based parse tree such as $compound(P,N)$ ($N$ is compound noun at the position $P$ in the sentence), $amod(P,J)$ ($J$ is an adjective modifier), etc., is used to identify the complement components of the main components computed by $\\Pi _2$ while maintaining the structure of the sentence created by $\\Pi _1$. For example, a complement of a noun could be another noun (as “board” in “board game”), or an adjective (as “popular” in “popular board game”), or a preposition (as “for adults” in “board game for adults”).", "The input of Program $\\Pi _3$ is the position ($pos$) of the word in the sentence. Program $\\Pi _3$ is called whenever there is a new complement component discovered. That way of recursive calls is to identify the maximal chunk of the words that support the main components of the sentence. The result of this module is a list of vocabularies for the next steps.", "Method ::: GF Grammar Encoder", "The goal of the encoder is to identify appropriate GF rules for the construction of a GF grammar of a sentence given its structure and its components identified in the previous two modules. This is necessary since a sentence can be encoded in GF by more than one set of rules; for example, the sentence “Bill wants to play a game” can be encoded by the rules", "In GF, NP, VV, V2, VP, and Cl stand for noun phrase, verb-phrase-complement verb, two-place verb, verb phrase and clause, respectively. Note that although the set of GF grammatical rules can be used to construct a constituency-based parse tree , the reverse direction is not always true. To the best of our knowledge, there exists no algorithm for converting a constituency-based parse tree to a set GF grammar rules. We therefore need to identify the GF rules for each sentence structure.", "In our system, a GF rule is assigned to a structure initially (Table TABREF19). Each rule in Table TABREF19 represents the first level of the constituency-based parse tree. It acts as the coordinator for all other succeeding rules.", "Given the seed components identified in Section SECREF15 and the above GF rules, a GF grammar for each sentence can be constructed. However, this grammar can only be used to generate fairly simple sentences. For example, for the sentence “Bill plays a popular board game with his close friends.”, a GF grammar for structure #2 can be constructed, which can only generate the sentence “Bill plays game.” because it does not contain any complement components identified in Section SECREF15. Therefore, we assgin a set of GF rules for the construction of each parameter in the GF rules in Table TABREF19. The set of GF rules has to follow two conventions. The first one is after applying the set of rules to some components of the sentence, the type of the production is one of the type in Table TABREF19, e.g. $NP$, $VP$, $Cl$, $V2$, .... The second convention is that the GF encoder will select the rules as the order from top to bottom in Table TABREF20. Note that the encoder always has information of what type of input and output for the rule it is looking for.", "For instance, we have “game” is the object (main components), and we know that we have to construct “game” in the result GF grammar to be a NP (noun phrase). 
Program $\\Pi _2$ identifies that there are two complement components for the word “game”, which are “board” and “popular”, a noun and an adjective respectively. The GF encoder then select the set of rules: N $\\rightarrow $ N $\\rightarrow $ CN and A $\\rightarrow $ AP to create the common noun “board game” and the adjective phrase first. The next rule is AP $\\rightarrow $ CN $\\rightarrow $ CN. The last rule to be applied is CN $\\rightarrow $ NP. The selection is easily decided since the input and the output of the rules are pre-determined, and there is no ambiguity in the selection process.", "The encoder uses the GF rules and the components identified by the previous subsections to produce different constructors for different components of a sentence. A part of the output of the GF encoder for the object “game” is", "The encoder will also create the operators that will be included in the oper section of the GF grammar for supporting the new constructor. For example, the following operators will be generated for serving the Game constructor above:", "Method ::: GF Grammar Exporter", "The GF Grammar Exporter has the simplest job among all modules in the system. It creates a GF program for a paragraph using the GF grammars created for the sentences of the paragraph. By taking the union of all respective elements of each grammar for each sentence, i.e., categories, functions, linearizations and operators, the Grammar Exporter will group them into the set of categories (respectively, categories, functions, linearizations, operators) of the final grammar." ], "extractive_spans": [ "Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph." ], "free_form_answer": "", "highlighted_evidence": [ "To generate a sentence, we need a sentence structure and vocabularies. Our system is developed to emulate the process of a person learning a new language and has to make guesses to understand new sentences from time to time. For example, someone, who understands the sentence “Bill plays a game” would not fully understand the sentence “Bill plays a popular board game” without knowing the meaning of “popular” and “board game” but could infer that the latter sentence indicates that its subject plays a type of game.\n\nThe overall design of our system is given in Figure FIGREF7. Given a paragraph, our system produces a GF program (a pair of an abstract and a concrete syntax), which can be used for sentence generation. The system consists of two components, understanding sentences and generating GF grammar. The first component is divided into two sub-components, one for recognizing the sentence structure and one for recognizing the sentence components. The second component consists of a GF grammar encoder and a GF grammar exporter. 
The encoder is responsible for generating a GF grammar for each sentence, while the exporter aggregates the grammars generated from the encoder, and produces a comprehensive grammar for the whole paragraph.", "Method ::: Sentence Structure Recognition\nThe sentence structure recognition process involves 2 modules: natural language processing (NLP) module and logical reasoning on result from NLP module. In this paper, we make use of the Stanford Parser tools described in BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14\n\nThe NLP module tokenizes the input free text to produce a dependency-based parse tree and part-of-speech tag (POS tag). The dependency-based parse tree and the POS tag are then transform into an answer set program (ASP) BIBREF15 which contains only facts. Table TABREF13 shows the transformation of the result of NLP module into an ASP program for the sentence “Bill plays a game”. In this table, nsubj, det, dobj and punct denote relations in the dependency-based parse tree, and mean nominal subject, determiner, direct object and punctuation respectively. Full description of all relations in a dependency-based parse tree can be found in the Universal Dependency website. The second set of notations are the POS tag PRP, VBP, DT and NN corresponding to pronoun, verb, determiner and noun. Readers can find the full list of POS tag in Penn Treebank Project.\n\nFrom the collection of the dependency atoms from the dependency-based parse tree, we determine the structure of a sentence using an ASP program, called $\\Pi _1$ (Listing ).\n\nEach of the rule above can be read as if the right-hand side is true then the left-hand side must be true. These rules define five possible structures of a sentence represented by the atom structure(x,y). $x$ and $y$ in the atom structure(x,y) denote the type of the structure and the number of dependency relations applied to activate the rule generating this atom, respectively. We refer to $y$ as the $i$-value of the structure. For example, $structure(1,1)$ will be recognized if the nsubj relation is in the dependency-based parse tree; $structure(3,3)$ needs 3 dependency relations to be actived: nsubj, xcomp and dobj. We often use structure #$x$ to indicate a structure of type $x$.\n\nTogether with the collection of the atoms encoding the relations in the dependency-based parse tree, $\\Pi _1$ generates several atoms of the form $structure(x,y)$ for a sentence. Among all these atoms, an atom with the highest $i$-value represents the structure constructed using the highest number of dependency relations. And hence, that structure is the most informative structure that is recoginized for the sentence. Observe that $structure(1,1)$ is the most simplified structure of any sentence.\n\nMethod ::: Sentence Components Recognition\nThe goal of this step is to identify the relationship between elements of a sentence structure and chunks of words in a sentence from the POS tags and the dependency-based parse tree. For example, the sentence “Bill plays a game” is encoded by a structure #2 and we expect that Bill, plays, and game correspond to the subject, verb, and object, respectively.\n\nWe begin with recognizing the main words (components) that play the most important roles in the sentence based on a given sentence structure. This is achieved by program $\\Pi _2$ (Listing ). The first four rules of $\\Pi _2$ determine the main subject and verb of the sentence whose structure is #1, #2, #3, or #5. 
Structure #4 requires a special treatment since the components following tobe can be of different forms. For instance, in “Cathy is gorgeous,” the part after tobe is an adjective, but in “Cathy is a beautiful girl,” the part after tobe is a noun, though, with adjective beautiful. This is done using the four last rules of $\\Pi _2$.\n\nThe result of program $\\Pi _2$ is an one-to-one mapping of some of the words in the sentence into the importaint components of a sentence, called main components, i.e. subject, object and verb. The mapping is constructed by using the core arguments in Universal Dependency Relations . Since not every word in the sentence is in a core argument relation, there are some words in the sentence that are not in the domain of the mapping that $\\Pi _2$ produces. We denote these words are complement components. To identify these words, we encode the Non-core dependents and Nominal dependents from Universal Dependency Relations into the set of rules in program $\\Pi _3$.\n\nProgram $\\Pi _3$ (Listing ), together with the atoms extracted from the dependency-based parse tree such as $compound(P,N)$ ($N$ is compound noun at the position $P$ in the sentence), $amod(P,J)$ ($J$ is an adjective modifier), etc., is used to identify the complement components of the main components computed by $\\Pi _2$ while maintaining the structure of the sentence created by $\\Pi _1$. For example, a complement of a noun could be another noun (as “board” in “board game”), or an adjective (as “popular” in “popular board game”), or a preposition (as “for adults” in “board game for adults”).\n\nThe input of Program $\\Pi _3$ is the position ($pos$) of the word in the sentence. Program $\\Pi _3$ is called whenever there is a new complement component discovered. That way of recursive calls is to identify the maximal chunk of the words that support the main components of the sentence. The result of this module is a list of vocabularies for the next steps.\n\nMethod ::: GF Grammar Encoder\nThe goal of the encoder is to identify appropriate GF rules for the construction of a GF grammar of a sentence given its structure and its components identified in the previous two modules. This is necessary since a sentence can be encoded in GF by more than one set of rules; for example, the sentence “Bill wants to play a game” can be encoded by the rules", "Note that although the set of GF grammatical rules can be used to construct a constituency-based parse tree , the reverse direction is not always true. To the best of our knowledge, there exists no algorithm for converting a constituency-based parse tree to a set GF grammar rules. We therefore need to identify the GF rules for each sentence structure.\n\nIn our system, a GF rule is assigned to a structure initially (Table TABREF19). Each rule in Table TABREF19 represents the first level of the constituency-based parse tree. It acts as the coordinator for all other succeeding rules.\n\nGiven the seed components identified in Section SECREF15 and the above GF rules, a GF grammar for each sentence can be constructed. However, this grammar can only be used to generate fairly simple sentences. For example, for the sentence “Bill plays a popular board game with his close friends.”, a GF grammar for structure #2 can be constructed, which can only generate the sentence “Bill plays game.” because it does not contain any complement components identified in Section SECREF15. 
Therefore, we assgin a set of GF rules for the construction of each parameter in the GF rules in Table TABREF19. The set of GF rules has to follow two conventions. The first one is after applying the set of rules to some components of the sentence, the type of the production is one of the type in Table TABREF19, e.g. $NP$, $VP$, $Cl$, $V2$, .... The second convention is that the GF encoder will select the rules as the order from top to bottom in Table TABREF20. Note that the encoder always has information of what type of input and output for the rule it is looking for.", "For instance, we have “game” is the object (main components), and we know that we have to construct “game” in the result GF grammar to be a NP (noun phrase). Program $\\Pi _2$ identifies that there are two complement components for the word “game”, which are “board” and “popular”, a noun and an adjective respectively. The GF encoder then select the set of rules: N $\\rightarrow $ N $\\rightarrow $ CN and A $\\rightarrow $ AP to create the common noun “board game” and the adjective phrase first. The next rule is AP $\\rightarrow $ CN $\\rightarrow $ CN. The last rule to be applied is CN $\\rightarrow $ NP. The selection is easily decided since the input and the output of the rules are pre-determined, and there is no ambiguity in the selection process.\n\nThe encoder uses the GF rules and the components identified by the previous subsections to produce different constructors for different components of a sentence. A part of the output of the GF encoder for the object “game” is\n\nThe encoder will also create the operators that will be included in the oper section of the GF grammar for supporting the new constructor. For example, the following operators will be generated for serving the Game constructor above:\n\nMethod ::: GF Grammar Exporter\nThe GF Grammar Exporter has the simplest job among all modules in the system. It creates a GF program for a paragraph using the GF grammars created for the sentences of the paragraph. By taking the union of all respective elements of each grammar for each sentence, i.e., categories, functions, linearizations and operators, the Grammar Exporter will group them into the set of categories (respectively, categories, functions, linearizations, operators) of the final grammar.", "For instance, we have “game” is the object (main components), and we know that we have to construct “game” in the result GF grammar to be a NP (noun phrase). Program $\\Pi _2$ identifies that there are two complement components for the word “game”, which are “board” and “popular”, a noun and an adjective respectively. The GF encoder then select the set of rules: N $\\rightarrow $ N $\\rightarrow $ CN and A $\\rightarrow $ AP to create the common noun “board game” and the adjective phrase first. The next rule is AP $\\rightarrow $ CN $\\rightarrow $ CN. The last rule to be applied is CN $\\rightarrow $ NP. The selection is easily decided since the input and the output of the rules are pre-determined, and there is no ambiguity in the selection process.\n\nThe encoder uses the GF rules and the components identified by the previous subsections to produce different constructors for different components of a sentence. A part of the output of the GF encoder for the object “game” is\n\nThe encoder will also create the operators that will be included in the oper section of the GF grammar for supporting the new constructor. 
For example, the following operators will be generated for serving the Game constructor above:\n\nMethod ::: GF Grammar Exporter\nThe GF Grammar Exporter has the simplest job among all modules in the system. It creates a GF program for a paragraph using the GF grammars created for the sentences of the paragraph. By taking the union of all respective elements of each grammar for each sentence, i.e., categories, functions, linearizations and operators, the Grammar Exporter will group them into the set of categories (respectively, categories, functions, linearizations, operators) of the final grammar." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] }, { "annotation_id": [ "43ebc2d9edc303d284df660bacdda40cc3b8f2ef", "99d32acecde92827b18f32a4b6804bccbb367561", "9fba0066c7471c32b436f9a62637a0883b877be0" ], "answer": [ { "evidence": [ "We describe our method of generating natural language in two applications. The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project described in BIBREF2. Instead of requiring that the ontologies are annotated using Attempto, we use natural language sentences to annotate the ontologies. To test the feasibility of the approach, we also conduct another use case with the second ontology, that is entirely different from the ontologies used in the Phylotastic project. The ontology is about people and includes descriptions for certain class.", "The second application targets the challenge of creating an abstract Wikipedia from the BlueSky session of 2018 International Semantic Web Conference BIBREF7. We create an intermediate representation that can be used to translate the original article in English to another language. In this use case, we translate the intermediate representation back to English and measure how the translated version stacks up again the original one. We assess the generation quality automatically with BLEU-3 and ROUGE-L (F measure). BLEU BIBREF16 and ROUGE BIBREF17 algorithms are chosen to evaluate our generator since the central idea of both metrixes is “the closer a machine translation is to a professional human translation, the better it is”, thus, they are well-aligned with our use cases' purpose. In short, the higher BLUE and ROUGE score are, the more similar the hypothesis text and the reference text is. In our use case, the hypothesis for BLEU and ROUGE is the generated English content from the intermediate representation, and the reference text is the original text from Wikipedia." ], "extractive_spans": [ "The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project", "The second application targets the challenge of creating an abstract Wikipedia from the BlueSky session of 2018 International Semantic Web Conference" ], "free_form_answer": "", "highlighted_evidence": [ "The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project described in BIBREF2.", "The second application targets the challenge of creating an abstract Wikipedia from the BlueSky session of 2018 International Semantic Web Conference BIBREF7." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We describe our method of generating natural language in two applications. 
The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project described in BIBREF2. Instead of requiring that the ontologies are annotated using Attempto, we use natural language sentences to annotate the ontologies. To test the feasibility of the approach, we also conduct another use case with the second ontology, that is entirely different from the ontologies used in the Phylotastic project. The ontology is about people and includes descriptions for certain class." ], "extractive_spans": [ "natural language description for workflow created by the system built in the Phylotastic project", "about people and includes descriptions for certain class" ], "free_form_answer": "", "highlighted_evidence": [ "We describe our method of generating natural language in two applications. The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project described in BIBREF2. Instead of requiring that the ontologies are annotated using Attempto, we use natural language sentences to annotate the ontologies. To test the feasibility of the approach, we also conduct another use case with the second ontology, that is entirely different from the ontologies used in the Phylotastic project. The ontology is about people and includes descriptions for certain class." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We describe our method of generating natural language in two applications. The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project described in BIBREF2. Instead of requiring that the ontologies are annotated using Attempto, we use natural language sentences to annotate the ontologies. To test the feasibility of the approach, we also conduct another use case with the second ontology, that is entirely different from the ontologies used in the Phylotastic project. The ontology is about people and includes descriptions for certain class.", "The present paper is motivated by the need to generate natural language description of computational results to non-expert users such as those developed in the Phylotastic project. In this project, the users are experts in evolutionary biology but are none experts in ontologies and web services. When a user places a request, he/she will receive a workflow consisting of web services, whose inputs and outputs are specified by instances of classes in the ontologies working with web services, as well as the ordering and relationships between the services. To assist the user in understanding the workflow, a natural language description of the workflow is generated. In order to accomplish the task, the NLG system in the Phylotastic project proposes to annotate elements of the ontologies using Attempto, a simple subset of English with precisely defined syntax and semantics.", "The second application targets the challenge of creating an abstract Wikipedia from the BlueSky session of 2018 International Semantic Web Conference BIBREF7. We create an intermediate representation that can be used to translate the original article in English to another language. In this use case, we translate the intermediate representation back to English and measure how the translated version stacks up again the original one. We assess the generation quality automatically with BLEU-3 and ROUGE-L (F measure). 
BLEU BIBREF16 and ROUGE BIBREF17 algorithms are chosen to evaluate our generator since the central idea of both metrixes is “the closer a machine translation is to a professional human translation, the better it is”, thus, they are well-aligned with our use cases' purpose. In short, the higher BLUE and ROUGE score are, the more similar the hypothesis text and the reference text is. In our use case, the hypothesis for BLEU and ROUGE is the generated English content from the intermediate representation, and the reference text is the original text from Wikipedia.", "Experiments ::: Intermediate Representation for Wiki Pages", "Since our system creates a GF program for a set of sentences, it could be used as an intermediate representation of a paragraph. This intermediate representation could be used by GF for automatic translation as GF is well-suited for cross-languages translation. On the other hand, we need to assess whether the intermediate representation is meaningful. This use case aims at checking the adequacy of the representation. To do so, we generate the English sentences from the GF program and evaluate the quality of these sentences against the original ones. We randomly select 5 articles from 3 Wikipedia portals: People, Mathematics and Food & Drink." ], "extractive_spans": [], "free_form_answer": "The first application is to build a natural language description of the ontologies built in an evolutionary biology project called Phylotastic, so that biologists can understand the output, without knowledge of ontologies. The second aims to create an abstract or intermediate representation of the Wikipedia pages from the BlueSky session in 2018.", "highlighted_evidence": [ "The first application is to generate a natural language description for workflow created by the system built in the Phylotastic project described in BIBREF2. Instead of requiring that the ontologies are annotated using Attempto, we use natural language sentences to annotate the ontologies. ", "The present paper is motivated by the need to generate natural language description of computational results to non-expert users such as those developed in the Phylotastic project. In this project, the users are experts in evolutionary biology but are none experts in ontologies and web services.", "The second application targets the challenge of creating an abstract Wikipedia from the BlueSky session of 2018 International Semantic Web Conference BIBREF7. We create an intermediate representation that can be used to translate the original article in English to another language. In this use case, we translate the intermediate representation back to English and measure how the translated version stacks up again the original one. ", "Experiments ::: Intermediate Representation for Wiki Pages\nSince our system creates a GF program for a set of sentences, it could be used as an intermediate representation of a paragraph. This intermediate representation could be used by GF for automatic translation as GF is well-suited for cross-languages translation. On the other hand, we need to assess whether the intermediate representation is meaningful. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ] } ], "nlp_background": [ "zero", "zero" ], "paper_read": [ "no", "no" ], "question": [ "How does sentence construction component works?", "What are two use cases that demonstrate capability of created system?" ], "question_id": [ "87c00edc497274ae6a972c3097818de85b1b384f", "de4e949c6917ff6933f5fa2a3062ba703aba014c" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "search_query": [ "computer vision", "computer vision" ], "topic_background": [ "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Google translation for the Japanese sentence generated by GF. The two sentences in English and in Italian are the two representations of the meaning encoded in the abstract syntax.", "Figure 2: System Overview", "Table 1: Transformation from NLP result to asp program", "Table 2: GF Rules Assigned to Each Structure", "Table 3: Extended GF Rules", "Table 4: Atoms from Phylotastic project and its annotation", "Table 5: Sample outputs for the people ontology.", "Table 6: BLEU assessable sentences", "Table 7: BLUE and ROUGE score", "Table 8: Original sentences extracted from Wikipedia and corresponding generated sentences" ], "file": [ "4-Figure1-1.png", "4-Figure2-1.png", "5-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png", "10-Table4-1.png", "10-Table5-1.png", "11-Table6-1.png", "11-Table7-1.png", "12-Table8-1.png" ] }
[ "What are two use cases that demonstrate capability of created system?" ]
[ [ "1909.08250-Experiments ::: Intermediate Representation for Wiki Pages-0", "1909.08250-Experiments-1", "1909.08250-Introduction-1", "1909.08250-Experiments-0" ] ]
[ "The first application is to build a natural language description of the ontologies built in an evolutionary biology project called Phylotastic, so that biologists can understand the output, without knowledge of ontologies. The second aims to create an abstract or intermediate representation of the Wikipedia pages from the BlueSky session in 2018." ]
63
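The GF encoder walkthrough quoted above (lifting the object "game" with the complements "board" and "popular" into a noun phrase) amounts to a deterministic chain of typed constructors. The Python sketch below mirrors that category path (N to CN, AP applied to CN, CN to NP) purely for illustration; the function names and the determiner choice are hypothetical, and the real encoder emits GF rules rather than strings.

```python
# Hypothetical illustration of the encoder's rule chain for "a popular board game".

def mk_cn(head_noun, modifier_noun=None):
    """N (-> N) -> CN: a common noun, optionally with a compound modifier."""
    return f"{modifier_noun} {head_noun}" if modifier_noun else head_noun

def mk_ap(adjective):
    """A -> AP: an adjective phrase."""
    return adjective

def mod_cn(ap, cn):
    """AP -> CN -> CN: attach an adjective phrase to a common noun."""
    return f"{ap} {cn}"

def mk_np(cn, determiner="a"):
    """CN -> NP: close the common noun into a noun phrase (determiner assumed)."""
    return f"{determiner} {cn}"

np_phrase = mk_np(mod_cn(mk_ap("popular"), mk_cn("game", modifier_noun="board")))
print(np_phrase)  # a popular board game
```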
1612.07486
Continuous multilinguality with language vectors
Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.
{ "paragraphs": [ [ "Neural language models BIBREF0 , BIBREF1 , BIBREF2 have become an essential component in several areas of natural language processing (NLP), such as machine translation, speech recognition and image captioning. They have also become a common benchmarking application in machine learning research on recurrent neural networks (RNN), because producing an accurate probabilistic model of human language is a very challenging task which requires all levels of linguistic analysis, from pragmatics to phonology, to be taken into account.", "A typical language model is trained on text in a single language, and if one needs to model multiple languages the standard solution is to train a separate model for each language. This presupposes large quantities of monolingual data in each of the languages that needs to be covered and each model with its parameters is completely independent of any of the other models.", "We propose instead to use a single model with real-valued vectors to indicate the language used, and to train this model with a large number of languages. We thus get a language model whose predictive distribution INLINEFORM0 is a continuous function of the language vector INLINEFORM1 , a property that is trivially extended to other neural NLP models. In this paper, we explore the “language space” containing these vectors, and in particular explore what happens when we move beyond the points representing the languages of the training corpus.", "The motivation of combining languages into one single model is at least two-fold: First of all, languages are related and share many features and properties, a fact that is ignored when using independent models. The second motivation is data sparseness, an issue that heavily influences the reliability of data-driven models. Resources are scarce for most languages in the world (and also for most domains in otherwise well-supported languages), which makes it hard to train reasonable parameters. By combining data from many languages, we hope to mitigate this issue.", "In contrast to related work, we focus on massively multilingual data sets to cover for the first time a substantial amount of the linguistic diversity in the world in a project related to data-driven language modeling. We do not presuppose any prior knowledge about language similarities and evolution and let the model discover relations on its own purely by looking at the data. The only supervision that is giving during training is a language identifier as a one-hot encoding. From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself." ], [ "Multilingual language models is not a new idea BIBREF3 , the novelty of our work lies primarily in the use of language vectors and the empirical evaluation using nearly a thousand languages.", "Concurrent with this work, Johnson2016zeroshot conducted a study using neural machine translation (NMT), where a sub-word decoder is told which language to generate by means of a special language identifier token in the source sentence. 
This is close to our model, although beyond a simple interpolation experiment (as in our sec:generating) they did not further explore the language vectors, which would have been challenging to do given the small number of languages used in their study.", "Ammar2016manylanguages used one-hot language identifiers as input to a multilingual word-based dependency parser, based on multilingual word embeddings. Given that they report that this resulted in higher accuracy than using features from a typological database, it is a reasonable guess that their system learned language vectors which were able to encode syntactic properties relevant to the task. Unfortunately, they also did not look closer at the language vector space, which would have been interesting given the relatively large and diverse sample of languages represented in the Universal Dependencies treebanks.", "Our evaluation in sec:clustering calls to mind previous work on automatic language classification, by Wichmann2010evaluating among others. However, our purpose is not to detect genealogical relationships, even though we use the strong correlation between such classifications and our language vectors as evidence that the vector space captures sensible information about languages." ], [ "We base our experiments on a large collection of Bible translations crawled from the web, coming from various sources and periods of time. Any other multilingual data collection would work as well, but with the selected corpus we have the advantage that we cover the same genre and achieve roughly the same coverage for each language involved. It is also easy to divide the data into training and test sets by using Bible verse numbers, which allows us to control for semantic similarity between languages in a way that would have been difficult in a corpus that is not multi-parallel. Altogether we have 1,303 translations in 990 languages that we can use for our purposes. These were chosen so that the model alphabet size is below 1000 symbols, which was satisfied by choosing only translations in Latin, Cyrillic or Greek script.", "Certainly, there are disadvantages as well, such as the limited size (roughly 500 million tokens in total, with most languages having only one translation of the New Testament each, with roughly 200 thousand tokens), the narrow domain and the high overlap of named entities. The latter can lead to some unexpected effects when using nonsensical language vectors, as the model will then generate a sequence of random names.", "The corpus deviates in some ways from an ideal multi-parallel corpus. Most translations are of the complete New Testament, whereas around 300 also contain the Old Testament (thus several times longer), and around ten contain only portions of the New Testament. Additionally, several languages have multiple translations, which are then concatenated. These translations may vary in age and style, but historical versions of languages (with their own ISO 639-3 code) are treated as distinct languages. During training we enforce a uniform distribution between languages when selecting training examples." ], [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations.
The only modification made to accommodate the fact that we train the model with text in nearly a thousand languages, rather than one, is that language embedding vectors are concatenated to the inputs of the LSTMs at each time step and to the hidden layer before the softmax. We used three separate embeddings for these levels, in an attempt to capture different types of information about languages. The model structure is summarized in fig:model.", "In our experiments we use 1024-dimensional LSTMs, 128-dimensional character embeddings, and 64-dimensional language embeddings. Layer normalization BIBREF5 is used, but no dropout or other regularization, since the amount of data is very large (about 3 billion characters) and training examples are seen at most twice. For smaller models early stopping is used. We use Adam BIBREF6 for optimization. Training takes between an hour and a few days on a K40 GPU, depending on the data size." ], [ "In this section, we present several experiments with the model described. For exploring the language vector space, we use hierarchical agglomerative clustering for visualization. For measuring performance, we use cross-entropy on held-out data. For this, we use a set of the 128 most commonly translated Bible verses, to ensure that the held-out set is as large and overlapping as possible among languages." ], [ "Our first experiment tries to answer what happens when more and more languages are added to the model. There are two settings: adding languages in a random order, or adding the most closely related languages first. Cross-entropy plots for these settings are shown in fig:random and fig:swe.", "In both cases, the model degrades gracefully (or even improves) for a number of languages, but then degrades linearly (i.e. exponential growth of perplexity) with an exponentially increasing number of languages.", "fig:swesize compares this to the effect of decreasing the number of parameters in the LSTM by successively halving the hidden state size. Here the behavior is similar, but unlike the Swedish model, which got somewhat better when closely related languages were added, the increase in cross-entropy is monotone. It would be interesting to investigate how the number of model parameters needs to be scaled up in order to accommodate the additional languages, but unfortunately the computational resources for such an experiment increase with the number of languages, and it would not be practical to carry out with our current equipment." ], [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserve similarity properties between languages.", "In additional experiments we found the overall structure of these clusterings to be relatively stable across models, but for very similar languages (such as Danish and the two varieties of Norwegian) the hierarchy might differ, and the same holds for languages or groups that are significantly different from the major groups. An example from fig:germanic is English, which is traditionally classified as a West Germanic language with strong influences from North Germanic as well as Romance languages.
In the figure, English is (weakly) grouped with the West Germanic languages, but in other experiments it is instead weakly grouped with North Germanic." ], [ "Since our language model is conditioned on a language vector, we can gain some intuitive understanding of the language space by generating text from different points in it. These points could be either one of the vectors learned during training, or some arbitrary other point. tab:interpolation shows text samples from different points along the line between Modern English [eng] and Middle English [enm]. Consistent with the results of Johnson2016zeroshot, it appears that the interesting region lies rather close to 0.5. Compare also to our fig:eng-deu, which shows that up until about a third of the way between English and German, the language model is nearly perfectly tuned to English." ], [ "By means of cross-entropy, we can also visualize the relation between languages in the multilingual space. Figure FIGREF12 plots the interpolation results for two relatively dissimilar languages, English and German. As expected, once the language vector moves too close to the German one, model performance drops drastically.", "More interesting results can be obtained if we interpolate between two language variants and compute the cross-entropy of a text that represents an intermediate form. fig:eng-enm shows the cross-entropy of the King James Version of the Bible (published 1611), when interpolating between Modern English (1500–) and Middle English (1050–1500). The optimal point turns out to be close to the midway point between them." ], [ "If we have a sample of an unknown language or language variant, it is possible to estimate its language vector by backpropagating through the language model with all parameters except the language vector fixed. We found that a very small set of sentences is enough to give a considerable improvement in cross-entropy on held-out sentences. In this experiment, we used 32 sentences from the King James Version of the Bible. Using the resulting language vector, test set cross-entropy improved from 1.39 (using the Modern English language vector as the initial value) to 1.35. This is comparable to the result obtained in sec:interpolation, except that here we do not restrict the search space to points on a straight line between two language vectors." ], [ "We have shown that language vectors, dense vector representations of natural languages, can be learned efficiently from raw text and possess several interesting properties. First, they capture language similarity to the extent that language family trees can be reconstructed by clustering the vectors. Second, they allow us to interpolate between languages in a sensible way, and even allow adapting the model using a very small amount of text, simply by optimizing the language vector." ] ], "section_name": [ "Introduction", "Related Work", "Data", "Methods", "Results", "Model capacity", "Structure of the language space", "Generating Text", "Mixing and Interpolating Between Languages", "Language identification", "Conclusions" ] }
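The Methods paragraphs above give enough detail to sketch the architecture. The code below is a minimal sketch assuming PyTorch, not the authors' released implementation: it shares one 64-dimensional language embedding across levels where the paper uses three separate ones, folds the hidden layer before the softmax into a single output projection, and uses made-up class and variable names; the layer sizes (1024, 128, 64) follow the paper.

```python
import torch
import torch.nn as nn

class MultilingualCharLM(nn.Module):
    """Two-layer character LSTM whose inputs and pre-softmax features are
    concatenated with a language embedding vector (simplified sketch)."""

    def __init__(self, n_chars, n_langs, char_dim=128, lang_dim=64, hidden=1024):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.lang_emb = nn.Embedding(n_langs, lang_dim)
        self.lstm = nn.LSTM(char_dim + lang_dim, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden + lang_dim, n_chars)

    def forward(self, chars, lang_ids):
        # chars: (batch, time) character ids; lang_ids: (batch,) language ids
        lang = self.lang_emb(lang_ids)                          # (batch, lang_dim)
        lang_t = lang.unsqueeze(1).expand(-1, chars.size(1), -1)
        x = torch.cat([self.char_emb(chars), lang_t], dim=-1)   # language vector at every step
        h, _ = self.lstm(x)
        return self.out(torch.cat([h, lang_t], dim=-1))         # next-character logits

model = MultilingualCharLM(n_chars=1000, n_langs=990)
logits = model(torch.randint(0, 1000, (2, 50)), torch.tensor([3, 700]))
print(logits.shape)  # torch.Size([2, 50, 1000])
```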
{ "answers": [ { "annotation_id": [ "35a52b94300bc1ef86915f6c453ed28e95f65fa8", "3d997e92d027b0bbdaec1eb72b757998dbdaa3e7", "6d03fa2b0e09db6e32d531d7e9c9e7cab3ec353a" ], "answer": [ { "evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserves similarity properties between languages." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserves similarity properties between languages." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "In contrast to related work, we focus on massively multilingual data sets to cover for the first time a substantial amount of the linguistic diversity in the world in a project related to data-driven language modeling. We do not presuppose any prior knowledge about language similarities and evolution and let the model discover relations on its own purely by looking at the data. The only supervision that is giving during training is a language identifier as a one-hot encoding. From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself." ], "unanswerable": false, "yes_no": true }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "6136adec513ce0a862b5194ef9edd10b13b2c9c7", "900e0d3e5cf38daadcc9b41b3504453440c371b3", "e731a06cc832c60432ca696b800c99e5d9ede7bf" ], "answer": [ { "evidence": [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations. The only modification made to accommodate the fact that we train the model with text in nearly a thousand languages, rather than one, is that language embedding vectors are concatenated to the inputs of the LSTMs at each time step and the hidden layer before the softmax. We used three separate embeddings for these levels, in an attempt to capture different types of information about languages. 
The model structure is summarized in fig:model.", "In contrast to related work, we focus on massively multilingual data sets to cover for the first time a substantial amount of the linguistic diversity in the world in a project related to data-driven language modeling. We do not presuppose any prior knowledge about language similarities and evolution and let the model discover relations on its own purely by looking at the data. The only supervision that is giving during training is a language identifier as a one-hot encoding. From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself." ], "extractive_spans": [ "character-level RNN" ], "free_form_answer": "", "highlighted_evidence": [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations", "From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations. The only modification made to accommodate the fact that we train the model with text in nearly a thousand languages, rather than one, is that language embedding vectors are concatenated to the inputs of the LSTMs at each time step and the hidden layer before the softmax. We used three separate embeddings for these levels, in an attempt to capture different types of information about languages. The model structure is summarized in fig:model." ], "extractive_spans": [ "standard stacked character-based LSTM BIBREF4" ], "free_form_answer": "", "highlighted_evidence": [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations. The only modification made to accommodate the fact that we train the model with text in nearly a thousand languages, rather than one, is that language embedding vectors are concatenated to the inputs of the LSTMs at each time step and the hidden layer before the softmax. We used three separate embeddings for these levels, in an attempt to capture different types of information about languages. The model structure is summarized in fig:model." ], "extractive_spans": [ "LSTM" ], "free_form_answer": "", "highlighted_evidence": [ "Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "106c794d1d17efe07aa4e4077dbb1175163d0953", "10c42232749b9ecd29962d001436fb730172d623", "c18fb45fdec3f54cd26299f9659104e9fc55b31e" ], "answer": [ { "evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserves similarity properties between languages." ], "extractive_spans": [ "hierarchical clustering" ], "free_form_answer": "", "highlighted_evidence": [ "fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserves similarity properties between languages.", "In additional experiments we found the overall structure of these clusterings to be relatively stable across models, but for very similar languages (such as Danish and the two varieties of Norwegian) the hierarchy might differ, and the some holds for languages or groups that are significantly different from the major groups. An example from fig:germanic is English, which is traditionally classified as a West Germanic language with strong influences from North Germanic as well as Romance languages. In the figure English is (weakly) grouped with the West Germanic languages, but in other experiments it is instead weakly grouped with North Germanic.", "FLOAT SELECTED: Figure 5: Hierarchical clustering of language vectors of Germanic languages." ], "extractive_spans": [], "free_form_answer": "By doing hierarchical clustering of word vectors", "highlighted_evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserves similarity properties between languages.\n\nIn additional experiments we found the overall structure of these clusterings to be relatively stable across models, but for very similar languages (such as Danish and the two varieties of Norwegian) the hierarchy might differ, and the some holds for languages or groups that are significantly different from the major groups. 
An example from fig:germanic is English, which is traditionally classified as a West Germanic language with strong influences from North Germanic as well as Romance languages. In the figure English is (weakly) grouped with the West Germanic languages, but in other experiments it is instead weakly grouped with North Germanic.", "FLOAT SELECTED: Figure 5: Hierarchical clustering of language vectors of Germanic languages." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. While our experiments indicate that finding more remote relationships (say, connecting the Germanic languages to the Celtic) is difficult for the model, it is clear that the language vectors preserves similarity properties between languages." ], "extractive_spans": [], "free_form_answer": "By applying hierarchical clustering on language vectors found during training", "highlighted_evidence": [ "We now take a look at the language vectors found during training with the full model of 990 languages. fig:germanic shows a hierarchical clustering of the subset of Germanic languages, which closely matches the established genetic relationships in this language family. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "infinity", "infinity", "infinity" ], "paper_read": [ "no", "no", "no" ], "question": [ "Do they explore how their word representations vary across languages?", "Which neural language model architecture do they use?", "How do they show genetic relationships between languages?" ], "question_id": [ "4cf05da602669a4c09c91ff5a1baae6e30adefdf", "7380e62edcb11f728f6d617ee332dc8b5752b185", "f37b01e0c366507308fca44c20d3f69621b94a6e" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "search_query": [ "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Schematic of our model. The three parts of the language vector are concatenated with the inputs to the two LSTM:s and the final softmax layer.", "Figure 2: Cross-entropy of the test sets from the first four languages added to our model. At the leftmost point (x = 1), only Chayahuita is used for training the model so no results are available for the other languages.", "Figure 3: Cross-entropy of the test sets from Scandinavian languages. The languages added at each step are: Swedish, Norwegian+Danish, Icelandic+Faroese, remaining Germanic, remaining Indo-European, all remaining languages.", "Figure 4: Cross-entropy of the Swedish test set, given two conditions: increasing number of languages by the given factor (adding the most similar languages first) or decreasing number of parameters by the same factor (for a monolingual model, which is why the curves meet at x = 1).", "Figure 5: Hierarchical clustering of language vectors of Germanic languages.", "Figure 6: Cross-entropy of interpolated language models for English and German measured on English held-out text.", "Figure 7: Cross-entropy of interpolated language models for modern and middle English tested on data from the King James Bible." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "3-Figure4-1.png", "4-Figure5-1.png", "4-Figure6-1.png", "5-Figure7-1.png" ] }
[ "How do they show genetic relationships between languages?" ]
[ [ "1612.07486-Structure of the language space-1", "1612.07486-4-Figure5-1.png", "1612.07486-Structure of the language space-0" ] ]
[ "By applying hierarchical clustering on language vectors found during training" ]
64
1612.03762
From narrative descriptions to MedDRA: automagically encoding adverse drug reactions
The collection of narrative spontaneous reports is an irreplaceable source for the prompt detection of suspected adverse drug reactions (ADRs): qualified domain experts manually revise a huge amount of narrative descriptions and then encode texts according to MedDRA standard terminology. The manual annotation of narrative documents with medical terminology is a subtle and expensive task, since the number of reports is growing up day-by-day. MagiCoder, a Natural Language Processing algorithm, is proposed for the automatic encoding of free-text descriptions into MedDRA terms. MagiCoder procedure is efficient in terms of computational complexity (in particular, it is linear in the size of the narrative input and the terminology). We tested it on a large dataset of about 4500 manually revised reports, by performing an automated comparison between human and MagiCoder revisions. For the current base version of MagiCoder, we measured: on short descriptions, an average recall of $86\%$ and an average precision of $88\%$; on medium-long descriptions (up to 255 characters), an average recall of $64\%$ and an average precision of $63\%$. From a practical point of view, MagiCoder reduces the time required for encoding ADR reports. Pharmacologists have simply to review and validate the MagiCoder terms proposed by the application, instead of choosing the right terms among the 70K low level terms of MedDRA. Such improvement in the efficiency of pharmacologists' work has a relevant impact also on the quality of the subsequent data analysis. We developed MagiCoder for the Italian pharmacovigilance language. However, our proposal is based on a general approach, not depending on the considered language nor the term dictionary.
{ "paragraphs": [ [ "Pharmacovigilance includes all activities aimed to systematically study risks and benefits related to the correct use of marketed drugs. The development of a new drug, which begins with the production and ends with the commercialization of a pharmaceutical product, considers both pre-clinical studies (usually tests on animals) and clinical studies (tests on patients). After these phases, a pharmaceutical company can require the authorization for the commercialization of the new drug. Notwithstanding, whereas at this stage drug benefits are well-know, results about drug safety are not conclusive BIBREF0 . The pre-marketing tests cited above have some limitations: they involve a small number of patients; they exclude relevant subgroups of population such as children and elders; the experimentation period is relatively short, less than two years; the experimentation does not deal with possibly concomitant pathologies, or with the concurrent use of other drugs. For all these reasons, non-common Adverse Drug Reactions (ADRs), such as slowly-developing pathologies (e.g., carcinogenesis) or pathologies related to specific groups of patients, are hardly discovered before the commercialization. It may happen that drugs are withdrawn from the market after the detection of unexpected collateral effects. Thus, it stands to reason that the post-marketing control of ADRs is a necessity, considering the mass production of drugs. As a consequence, pharmacovigilance plays a crucial role in human healthcare improvement BIBREF0 .", "Spontaneous reporting is the main method pharmacovigilance adopts in order to identify adverse drug reactions. Through spontaneous reporting, health care professionals, patients, and pharmaceutical companies can voluntarily send information about suspected ADRs to the national regulatory authority. The spontaneous reporting is an important activity. It provides pharmacologists and regulatory authorities with early alerts, by considering every drug on the market and every patient category.", "The Italian system of pharmacovigilance requires that in each local healthcare structure (about 320 in Italy) there is a qualified person responsible for pharmacovigilance. Her/his assignment is to collect reports of suspected ADRs and to send them to the National Network of Pharmacovigilance (RNF, in Italian) within seven days since they have been received. Once reports have been notified and sent to RNF they are analysed by both local pharmacovigilance centres and by the Drug Italian Agency (AIFA). Subsequently, they are sent to Eudravigilance BIBREF1 and to VigiBase BIBREF2 (the European and the worldwide pharmacovigilance network RNF is part of, respectively). In general, spontaneous ADR reports are filled out by health care professionals (e.g., medical specialists, general practitioners, nurses), but also by citizens. In last years, the number of ADR reports in Italy has grown rapidly, going from approximately ten thousand in 2006 to around sixty thousand in 2014 BIBREF3 , as shown in Figure FIGREF3 .", "Since the post-marketing surveillance of drugs is of paramount importance, such an increase is certainly positive. At the same time, the manual review of the reports became difficult and often unbearable both by people responsible for pharmacovigilance and by regional centres. 
Indeed, each report must be checked, in order to control its quality; it is consequently encoded and transferred to RNF via “copy by hand” (actually, a printed copy).", "Recently, to increase the efficiency in collecting and managing ADR reports, a web application, called VigiFarmaco, has been designed and implemented for the Italian pharmacovigilance. Through VigiFarmaco, a spontaneous report can be filled out online by both healthcare professionals and citizens (through different user-friendly forms), as anonymous or registered users. The user is guided in compiling the report, since it has to be filled step-by-step (each phase corresponds to a different report section, i.e., “Patient”, “Adverse Drug Reaction”, “Drug Treatments”, and “Reporter”, respectively). At each step, data are validated and only when all of them have been correctly inserted the report can be successfully submitted.", "Once ADR reports are submitted, they need to be validated by a pharmacovigilance supervisor. VigiFarmaco provides support also in this phase and is useful also for pharmacovigilance supervisors. Indeed, VigiFarmaco reports are high-quality documents, since they are automatically validated (the presence, the format, and the consistency of data are validated at the filling time). As a consequence, they are easier to review (especially with respect to printed reports). Moreover, thanks to VigiFarmaco, pharmacologists can send reports (actually, XML files BIBREF4 ) to RNF by simply clicking a button, after reviewing it.", "Online reports have grown up to become the 30% of the total number of Italian reports. As expected, it has been possible to observe that the average time between the dispatch of online reports and the insertion into RNF is sensibly shorter with respect to the insertion from printed reports. Notwithstanding, there is an operation which still requires the manual intervention of responsibles for pharmacovigilance also for online report revisions: the encoding in MedDRA terminology of the free text, through which the reporter describes one or more adverse drug reactions. MedDRA (Medical Dictionary for Regulatory Activities) is a medical terminology introduced with the purpose to standardize and facilitate the sharing of information about medicinal products in particular with respect to regulatory activities BIBREF5 . The description of a suspected ADR through narrative text could seem redundant/useless. Indeed, one could reasonably imagine sound solutions based either on an autocompletion form or on a menu with MedDRA terms. In these solutions, the description of ADRs would be directly encoded by the reporter and no expert work for MedDRA terminology extraction would be required. However, such solutions are not completely suited for the pharmacovigilance domain and the narrative description of ADRs remains a desirable feature, for at least two reasons. First, the description of an ADR by means of one of the seventy thousand MedDRA terms is a complex task. In most cases, the reporter who points out the adverse reaction is not an expert in MedDRA terminology. This holds in particular for citizens, but it is still valid for several professionals. Thus, describing ADRs by means of natural language sentences is simpler. Second, the choice of the suitable term(s) from a given list or from an autocompletion field can influence the reporter and limit her/his expressiveness. As a consequence, the quality of the description would be also in this case undermined. 
Therefore, VigiFarmaco offers a free-text field for specifying the ADR with all the possible details, without any restriction about the content or strict limits to the length of the written text. Consequently, MedDRA encoding has then to be manually implemented by qualified people responsible for pharmacovigilance, before the transmission to RNF. As this work is expensive in terms of time and attention required, a problem about the accuracy of the encoding may occur given the continuous growing of the number of reports.", "According to the described scenario, in this paper we propose INLINEFORM0 , an original Natural Language Processing (NLP) BIBREF6 algorithm and related software tool, which automatically assigns one or more terms from a dictionary to a narrative text. A preliminary version of INLINEFORM1 has been proposed in BIBREF7 . MagiCoder has been first developed for supporting pharmacovigilance supervisors in using VigiFarmaco, providing them with an initial automatic MedDRA encoding of the ADR descriptions in the online reports collected by VigiFarmaco, that the supervisors check and may correct or accept as it is. In this way, the encoding task, previously completely manual, becomes semi-automatic, reducing errors and the required time for accomplishing it. In spite of its first goal, MagiCoder has now evolved in an autonomous algorithm and software usable in all contexts where terms from a dictionary have to be recognized in a free narrative text. With respect to other solutions already available in literature and market, MagiCoder has been designed to be efficient and less computationally expensive, unsupervised, and with no need of training. MagiCoder uses stemming to be independent from singular/plural and masculine/feminine forms. Moreover, it uses string distance and other techniques to find best matching terms, discarding similar and non optimal terms.", "With respect to the first version BIBREF7 , we extended our proposal following several directions. First of all, we refined the procedure: MagiCoder has been equipped with some heuristic criteria and we started to address the problem of including auxiliary dictionaries (e.g., in order to deal with synonyms). MagiCoder computational complexity has been carefully studied and we will show that it is linear in the size of the dictionary (in this case, the number of LLTs in MedDRA) and the text description. We performed an accurate test of MagiCoder performances: by means of well-known statistical measures, we collected a significant set of quantitative information about the effective behavior of the procedure. We largely discuss some crucial key-points we met in the development of this version of MagiCoder, proposing short-time solutions we are addressing as work in progress, such as changes in stemming algorithm, considering synonyms, term filtering heuristics.", "The paper is organized as follows. In Section SECREF2 we provide some background notions and we discuss related work. In Section SECREF3 we present the algorithm MagiCoder, by providing both a qualitative description and the pseudocode. In Section SECREF4 we spend some words about the user interface of the related software tool. In Section SECREF5 we explain the benchmark we developed to test INLINEFORM0 performances and its results. Section SECREF6 is devoted to some discussions. Finally, in Section SECREF7 we summarize the main features of our work and sketch some future research lines." 
], [ "Automatic detection of adverse drug reactions from text has recently received an increasing interest in pharmacovigilance research. Narrative descriptions of ADRs come from heterogeneous sources: spontaneous reporting, Electronic Health Records, Clinical Reports, and social media. In BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 some NLP approaches have been proposed for the extraction of ADRs from text. In BIBREF13 , the authors collect narrative discharge summaries from the Clinical Information System at New York Presbyterian Hospital. MedLEE, an NLP system, is applied to this collection, for identifing medication events and entities, which could be potential adverse drug events. Co-occurrence statistics with adjusted volume tests were used to detect associations between the two types of entities, to calculate the strengths of the associations, and to determine their cutoff thresholds. In BIBREF14 , the authors report on the adaptation of a machine learning-based system for the identification and extraction of ADRs in case reports. The role of NLP approaches in optimised machine learning algorithms is also explored in BIBREF15 , where the authors address the problem of automatic detection of ADR assertive text segments from several sources, focusing on data posted by users on social media (Twitter and DailyStrenght, a health care oriented social media). Existing methodologies for NLP are discussed and an experimental comparison between NLP-based machine learning algorithms over data sets from different sources is proposed. Moreover, the authors address the issue of data imbalance for ADR description task. In BIBREF16 the authors propose to use association mining and Proportional Reporting Ratio (PRR, a well-know pharmacovigilance statistical index) to mine the associations between drugs and adverse reactions from the user contributed content in social media. In order to extract adverse reactions from on-line text (from health care communities), the authors apply the Consumer Health Vocabulary to generate ADR lexicon. ADR lexicon is a computerized collection of health expressions derived from actual consumer utterances, linked to professional concepts and reviewed and validated by professionals and consumers. Narrative text is preprocessed following standard NLP techniques (such as stop word removal, see Section SECREF12 ). An experiment using ten drugs and five adverse drug reactions is proposed. The Food and Drug Administration alerts are used as the gold standard, to test the performance of the proposed techniques. The authors developed algorithms to identify ADRs from threads of drugs, and implemented association mining to calculate leverage and lift for each possible pair of drugs and adverse reactions in the dataset. At the same time, PRR is also calculated.", "Other related papers about pharmacovigilance and machine learning or data mining are BIBREF17 , BIBREF18 . In BIBREF19 , a text extraction tool is implemented on the .NET platform for preprocessing text (removal of stop words, Porter stemming BIBREF20 and use of synonyms) and matching medical terms using permutations of words and spelling variations (Soundex, Levenshtein distance and Longest common subsequence distance BIBREF21 ). Its performance has been evaluated on both manually extracted medical terms from summaries of product characteristics and unstructured adverse effect texts from Martindale (a medical reference for information about drugs and medicines) using the WHO-ART and MedDRA medical terminologies. 
Many linguistic features have been considered and a careful analysis of performances has been provided. In BIBREF22 the authors develop an algorithm in order to help coders in the subtle task of auto-assigning ICD-9 codes to clinical narrative descriptions. Similarly to MagiCoder, input descriptions are provided as free text. The test experiment takes into account a reasoned data set of manually annotated radiology reports, chosen to cover all coding classes according to the ICD-9 hierarchy and classification: the test obtains an accuracy of INLINEFORM0 ." ], [ "The Medical Dictionary for Regulatory Activities (MedDRA) BIBREF5 is a medical terminology used to classify adverse event information associated with the use of biopharmaceuticals and other medical products (e.g., medical devices and vaccines). Coding these data to a standard set of MedDRA terms allows health authorities and the biopharmaceutical industry to exchange and analyze data related to the safe use of medical products BIBREF23 . It has been developed by the International Conference on Harmonization (ICH); it belongs to the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA); it is controlled and periodically revised by the MedDRA Maintenance and Service Organization (MSSO). MedDRA is available in eleven European languages and in Chinese and Japanese too. It is updated twice a year (in March and in September), following a collaboration-based approach: everyone can propose reasonable new updates or changes (due to events such as the onset of new pathologies) and a team of experts eventually decides about the publication of updates. MedDRA terms are organised into a hierarchy: the SOC (System Organ Class) level includes the most general terms; the LLT (Low Level Terms) level includes more specific terminologies. Between SOC and LLT there are three intermediate levels: HLGT (High Level Group Terms), HLT (High Level Terms), and PT (Preferred Terms).", "The encoding of ADRs through MedDRA is extremely important for report analysis as well as for the prompt detection of problems related to drug-based treatments. Thanks to MedDRA it is possible to group similar/analogous cases described in different ways (e.g., by synonyms) or with different details/levels of abstraction.", "Table TABREF8 shows an example of the hierarchy: the reaction Itch is described starting from Skin disorders (SOC), Epidermal conditions (HLGT), Dermatitis and Eczema (HLT), and Asteatotic Eczema (PT). Preferred Terms are Low Level Terms chosen to be representative of a group of terms. It should be stressed that the hierarchy is multiaxial: for example, a PT can be grouped into one or more HLTs, but it belongs to only one primary SOC term." ], [ "A natural language ADR description is a completely free text. The user has no limitations: she/he can potentially write anything, and a number of online ADR descriptions actually contain information not directly related to drug effects. Thus, NLP software has to face and solve many issues: trivial orthographical errors; use of singular versus plural nouns; the so-called “false positives”, i.e., syntactically retrieved inappropriate results, which closely resemble correct solutions; the structure of the sentence, i.e., the way an assertion is built up in a given language. Also the “intelligent” detection of linguistic connectives is a crucial issue. 
For example, the presence of a negation can potentially change the overall meaning of a description.", "In general, providing satisfactory automatic support to human reasoning and work is a subtle task: for example, the uncontrolled extension of the dictionary with auxiliary synonyms (see Section SECREF66 ) or the naive ad hoc management of particular cases can limit the efficiency and the effectiveness of the algorithm. For these reasons, we carefully designed INLINEFORM0 through a side-by-side collaboration between pharmacologists and computer scientists, in order to yield an efficient tool, capable of effectively supporting pharmacovigilance activities.", "In the literature, several NLP algorithms already exist, and several interesting approaches (such as the so-called morpho-analysis of natural language) have been studied and proposed BIBREF24 , BIBREF6 , BIBREF25 . Given the pharmacovigilance domain described above, we considered morpho-analysis and part-of-speech (PoS) extraction techniques BIBREF24 , BIBREF6 too powerful and general-purpose for the solution of our problem. Indeed, in most cases ADR descriptions are written in a very succinct way, without using verbs, punctuation, or other lexical items, and introducing acronyms. Moreover, clinical and technical words are often not recognized correctly because they are not included in usual dictionaries. All these considerations limit the benefits of using morpho-analysis and PoS for our purposes.", "Thus, we decided to design and develop an ad hoc algorithm for the problem we are facing, namely that of deriving MedDRA terms from narrative text and mapping segments of text into actual LLTs. This task has to be done in a very short time (we want each user/MagiCoder interaction to require less than a second) and the solution offered to the expert has to be readable and useful. Therefore, we decided to ignore the structure of the narrative description and address the issue in a simpler way. The main features of MagiCoder can be summarized as follows:", "In this paper we consider the Italian context of pharmacovigilance and, as a consequence, we will consider and process with MagiCoder textual descriptions written in the Italian language. We will discuss the potential of MagiCoder for other languages and some preliminary results in Section SECREF7 ." ], [ "The main idea of INLINEFORM0 is that a single linear scan of the free text is sufficient in order to recognize INLINEFORM1 terms.", "From an abstract point of view, we try to recognize, in the narrative description, single words belonging to LLTs, which do not necessarily occupy consecutive positions in the text. This way, we try to “reconstruct” MedDRA terms, taking into account the fact that in a description the reporter can permute or omit words. 
As we will show, MagiCoder does not have to deal with computationally expensive tasks, such as taking into account subroutines for permutations and combinations of words (as, for example, in BIBREF19 ).", "We can distinguish five phases in the procedure, which will be discussed in detail in Sections UID18 , UID19 , UID20 , UID23 , UID28 , respectively.", "Definition of ad hoc data structures: the design of data structures is central to performing an efficient computation; our main data structures are hash tables, in order to guarantee efficient access both to MedDRA terms and to words belonging to MedDRA terms.", "Preprocessing of the original text: tokenization (i.e., segmentation of the text into syntactical units), stemming (i.e., reduction of words to a particular root form), elimination of computationally irrelevant words.", "Word-by-word linear scan of the description and “voting task”: a word “votes” the LLTs it belongs to. For each term voted by one or more words, we store some information about the retrieved syntactical matching.", "Weights calculation: recognized terms are weighted depending on information about the syntactical matching.", "Sorting of voted terms and winning terms release: the set of voted terms is pruned, terms are sorted and finally a solution (a set of winning terms) is released.", "The algorithm proceeds with a word-by-word comparison. We iterate on the preprocessed text and we test whether a single word INLINEFORM0 , a token, occurs in one or more LLTs.", "In order to efficiently test if a token belongs to one or more LLTs, we need to know which words belong to each term. The LLT level of MedDRA is actually a set of phrases, i.e., sequences of words. By scanning these sequences, we build a meta-dictionary of all the words which compose LLTs. As we will describe in Section SECREF48 , in INLINEFORM0 time units (where INLINEFORM1 and INLINEFORM2 are the cardinality of the set of LLTs and the length of the longest LLT in MedDRA, respectively) we build a hash table having all the words occurring in MedDRA as keys, where the value associated to key INLINEFORM3 contains information about the set of LLTs containing INLINEFORM4 . This way, we can verify the presence in MedDRA of a word INLINEFORM5 encountered in the ADR description in constant time. We call this meta-dictionary INLINEFORM6 . We also build a meta-dictionary from a stemmed version of MedDRA, to verify the presence of stemmed descriptions. We call it INLINEFORM7 . Finally, the MedDRA dictionary itself is loaded into a hash table indexed by LLT identifiers and, in general, all our main data structures are hash tables.", "We wish to stress that, to retain efficiency, we preferred exact string matching over approximate string matching when looking for a word in the meta-dictionary. Approximate string matching would allow us to retrieve terms that would be lost in exact string matching (e.g., we could recognize misspelled words in the ADR description), but it would worsen the performance of the text recognition tool, since direct access to the dictionary would not be possible. We discuss the problem of retrieving syntactical variations of the same words and the problem of addressing orthographical errors in Section SECREF7 .", "Given a natural language ADR description, the text has to be preprocessed in order to perform an efficient computation. We adopt a well-known technique, tokenization BIBREF26 : a phrase is reduced to tokens, i.e., syntactical units which often, as in our case, correspond to words. 
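As a purely illustrative aid (not taken from the MagiCoder code), the preprocessing pipeline described in this and the following paragraphs — tokenization, stop-word removal, and the light stemming discussed later in the paper — could be sketched in Python roughly as follows; the stop-word list and the vowel-eliding stemmer are simplified stand-ins for the actual resources used by the tool.

import re

# Hypothetical, heavily reduced Italian stop-word list (the real tool uses a fuller one).
STOP_WORDS = {"il", "lo", "la", "i", "gli", "le", "di", "a", "da", "in",
              "con", "su", "per", "e", "ed", "o", "che", "al", "del", "della"}

def light_stem(word):
    # Rough stand-in for a "light" Italian stemmer: elide trailing vowels.
    stripped = re.sub(r"[aeiou]+$", "", word)
    return stripped if stripped else word

def preprocess(description):
    # Tokenization: lowercase the text and keep alphabetic runs only.
    tokens = re.findall(r"[a-zàèéìíòóù]+", description.lower())
    # Stop-word removal: drop computationally irrelevant words.
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # Keep both the exact token and its stemmed form for the later voting phase.
    return [(t, light_stem(t)) for t in tokens]

# Example call on a hypothetical description:
# preprocess("shock anafilattico (ipotensione + rash cutaneo) 1 ora dopo il farmaco")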
A tokenized text can be easily manipulated as an enumerable object, e.g., an array. A stop word is a word that can be considered irrelevant for the text analysis (e.g., an article or an interjection). Words classified as stop-words are removed from the tokenized text. In particular, in this release of our software we decided to not take into account connectives, e.g., conjunctions, disjunctions, negations. The role of connectives, in particular of negation, is discussed in Section SECREF6 .", "A fruitful preliminary work is the extraction of the corresponding stemmed version from the original tokenized and stop-word free text. Stemming is a linguistic technique that, given a word, reduces it to a particular kind of root form BIBREF20 , BIBREF26 . It is useful in text analysis, in order to avoid problems such as missing word recognition due to singular/plural forms (e.g., hand/hands). In some cases, stemming procedures are able to recognize the same root both for the adjectival and the noun form of a word. Stemming is also potentially harmful, since it can generate so called “false positives” terms. A meaningful example can be found in Italian language. The plural of the word mano (in English, hand) is mani (in English, hands), and their stemmed root is man, which is also the stemmed version of mania (in English, mania). Several stemming algorithms exist, and their impact on the performances of MagiCoder is discussed in Section SECREF6 .", " INLINEFORM0 scans the text word-by-word (remember that each word corresponds to a token) once and performs a “voting task”: at the INLINEFORM1 -th step, it marks (i.e., “votes”) with index INLINEFORM2 each LLT INLINEFORM3 containing the current ( INLINEFORM4 -th) word of the ADR description. Moreover, it keeps track of the position where the INLINEFORM5 -th word occurs in INLINEFORM6 .", " INLINEFORM0 tries to find a word match both for the exact and the stemmed version of the meta dictionary and keeps track of the kind of match it has eventually found. It updates a flag, initially set to 0, if at least a stemmed matching is found in an LLT. If a word INLINEFORM1 has been exactly recognized in a term INLINEFORM2 , the match between the stemmed versions of INLINEFORM3 and INLINEFORM4 is not considered. At the end of the scan, the procedure has built a sub-dictionary containing only terms “voted” at least by one word. We call INLINEFORM5 the sub-dictionary of voted terms.", "Each voted term INLINEFORM0 is equipped with two auxiliary data structures, containing, respectively:", "the positions of the voting words in the ADR description; we call INLINEFORM0 this sequence of indexes;", "the positions of the voted words in the MedDRA term INLINEFORM0 ; we call INLINEFORM1 this sequence of indexes.", "Moreover, we endow each voted term INLINEFORM0 with a third structure that will contain the sorting criteria we define below; we will call it INLINEFORM1 .", "Let us now introduce some notations we will use in the following. We denote as INLINEFORM0 the function that, given an LLT INLINEFORM1 , returns the number of words contained in INLINEFORM2 (excluding the stop words). We denote as INLINEFORM3 (resp. INLINEFORM4 ) the function that returns the number of indexes belonging to INLINEFORM5 (resp. INLINEFORM6 ). 
We denote as INLINEFORM7 and INLINEFORM8 the functions that return the maximum and the minimum indexes in INLINEFORM9 , respectively.", "From now on, we sometimes explicitly list the complete denomination of a term: we will use the notation “name”(id), where “name” is the MedDRA description and id is its identifier, which is possibly used to refer to the term. Let us illustrate these notions with an example. Consider the following ADR description: “anaphylactic shock (hypotension + cutaneous rash) 1 hour after taking the drug”. Words in it are numbered from 0 (anaphylactic) to 9 (drug). The complete set of data structures coming from the task is too big to be reported here, thus we focus only on two LLTs. At the end of the voting task, INLINEFORM0 will include, among others, “Anaphylactic shock” (10002199) and “Anaphylactic reaction to drug” (10054844). We will have that INLINEFORM1 (i.e., “anaphylactic” and “shock”) while INLINEFORM2 (i.e., “anaphylactic” and “drug”). On the other hand, INLINEFORM3 , revealing that both words in the term have been voted, while INLINEFORM4 , suggesting that only two out of three words in the term have been voted (in particular, “reaction” has not been voted). In this example all words in the description have been voted without using stemming.", "After the voting task, selected terms have to be ordered. Notice that a purely syntactical recognition of words in LLTs potentially generates a large number of voted terms. For example, in the Italian version of MedDRA, the word “male” (in English, “pain”) occurs 3385 times.", "So we have to: i) filter a subset of highly feasible solutions, by means of quantitative weights we assign to candidate solutions; ii) choose a good final selection strategy in order to release a small set of final “winning” MedDRA terms (this latter point will be discussed in Section UID28 ).", "For this purpose, we define four criteria to assign “weights” to voted terms accordingly.", "In the following, INLINEFORM0 is a normalization factor (w.r.t. the length, in terms of words, of the LLT INLINEFORM1 ). The first three criteria have 0 as optimum value and 1 as worst value, while the fourth criterion has 1 as optimum value and grows in worse cases.", "", "First, we consider what fraction of the words of each voted LLT has not been recognized. INLINEFORM0 ", "In the example we introduced before, we have that INLINEFORM0 (i.e., all words of the term have been recognized in the description) while INLINEFORM1 (i.e., one word out of three has not been recognized in the description).", "", "The algorithm considers whether a perfect matching has been obtained with or without stemmed words. INLINEFORM0 is simply a flag. INLINEFORM1 is valued 1 if stemming has been used at least once in the voting procedure of INLINEFORM2 , and 0 otherwise.", "For example, INLINEFORM0 and INLINEFORM1 .", "", "The use of stemming allows one to find a number of (otherwise lost) matches. As a side effect, we often obtain quite a large set of joint winning candidate terms. In this phase, we introduce a string distance comparison between recognized words in the original text and voted LLTs. Among the possible string metrics, we use the so-called pair distance BIBREF27 , which is robust with respect to word permutation. 
Thus, INLINEFORM0 ", "where INLINEFORM0 is the pair distance function (between strings INLINEFORM1 and INLINEFORM2 ) and INLINEFORM3 is the term “rebuilt” from the words in ADR description corresponding to indexes in INLINEFORM4 .", "For example, INLINEFORM0 (i.e., the concatenation of the voters and the term are equal) and INLINEFORM1 .", "", "We want to estimate how an LLT has been covered. INLINEFORM0 ", "The intuitive meaning of the criterion is to quantify the “quality” of the coverage. If an LLT has been covered by nearby words, it will be considered a good candidate for the solution. This criterion has to be carefully implemented, taking into account possible duplicated voted words.", "After computing (and storing) the weights related to the above criteria, for each voted term INLINEFORM0 we have the data structure INLINEFORM1 , containing the weights corresponding to the four criteria. These weights will be used, after a first heuristic selection, to sort a subset of the syntactically retrieved terms.", "Continuing the example introduced before, we have that INLINEFORM0 while INLINEFORM1 . Thus, concluding, we obtain that INLINEFORM2 while INLINEFORM3 .", "In order to provide an effective support to pharmacovigilance experts' work, it is important to offer only a small set of good candidate solutions. As previously said, the pure syntactical recognition of MedDRA terms into a free-text generates a possibly large set of results. Therefore, the releasing strategy has to be carefully designed in order to select onlt best suitable solutions. We will provide an heuristic selection, followed by a sorting of the survived voted terms; then we propose a release phase of solutions, further refined by a final heuristic criterium.", "As a first step, we provide an initial pruning of the syntactically retrieved terms guided by the ordered-phrases heuristic criterium. In the ordered-phrases criterium we reintroduce the order of words in the narrative description as a selection discriminating factor. From the set of selected LLTs, we remove those terms where voters (i.e., tokens in the original free text) appear in the ADR description in a relative order different from that of the corresponing voted tokens in the LLT. We do that only for those LLTs having voters that voted for more than one term.", "Let us consider the following example. On the (Italian) narrative description “edema della glottide-lingua, parestesia al volto, dispnea” (in English, “edema glottis-tongue, facial paresthesia, dyspnoea”), the voting procedure of MagiCoder finds, among the solutions, the MedDRA terms “Edema della glottide” (“Edema glottis”), “Edema della lingua” (“Edema tongue”), “Edema del volto” (“Edema face”), “Parestesia della lingua” (“Paresthesia tongue”), and “Dispnea” (“Dyspnoea”). The ordererd-phrase criterium removes LLT “Parestesia della lingua” from the set of candidate solutions because “lingua” votes for two terms but in the narrative text it appears before than “parestesia” while in the LLT it appears after.", "We call INLINEFORM0 the set of voted terms after the selection by the ordered-phrases criterium. We proceed then by ordering INLINEFORM1 : we use a multiple-value sorting on elements in INLINEFORM2 , for each INLINEFORM3 . The obtained subdictionary is dubbed as INLINEFORM4 and it has possibly most suitable solutions on top.", "After this phase, the selection of the “winning terms” takes place. The main idea is to select and return a subset of voted terms which “covers” the ADR description. 
We create the set INLINEFORM0 as follows. We iterate on the ordered dictionary and for each INLINEFORM1 we select INLINEFORM2 if all the following conditions hold:", " INLINEFORM0 is completely covered, i.e., INLINEFORM1 ;", " INLINEFORM0 does not already belong to INLINEFORM1 ;", " INLINEFORM0 is not a prefix of another selected term INLINEFORM1 ;", " INLINEFORM0 has been voted without stemming (i.e., INLINEFORM1 ) or, for any INLINEFORM2 , INLINEFORM3 has not been covered (i.e., no term voted by INLINEFORM4 has already been selected) or INLINEFORM5 has not been exactly covered (i.e., only its stem has been recognized in some term INLINEFORM6 ).", "At this stage, we have a set of MedDRA terms which “covers” the narrative description. We further select a subset INLINEFORM0 of INLINEFORM1 with a second heuristic, the maximal-set-of-voters criterion.", "The maximal-set-of-voters criterion deletes from the solution those terms which can be considered “extensions” of other ones. For each pair of terms INLINEFORM0 and INLINEFORM1 , it checks whether INLINEFORM2 is a subset of INLINEFORM3 (considered as sets of indexes). If this is the case, INLINEFORM4 is removed from INLINEFORM5 .", "In INLINEFORM0 we do not need to consider ad hoc subroutines to address permutations and combinations of words (as is done, for example, in BIBREF19 ). In Natural Language Processing, permutations and combinations of words are important, since in spoken language the order of words can change w.r.t. the formal structure of the sentences. Moreover, some words can be omitted, while the sentence still retains the same meaning. These aspects come for free from our voting procedure: after the scan, we retrieve the information that a set of words covers a term INLINEFORM1 , but the order of the words does not necessarily matter." ], [ "Figure SECREF34 depicts the pseudocode of MagiCoder. We represent dictionaries either as sets of words or as sets of functions. We describe the main procedures and functions used in the pseudocode.", "Procedure INLINEFORM0 takes the narrative description, performs tokenization and stop-word removal, and puts the result into an array of words.", "Procedures INLINEFORM0 and INLINEFORM1 get the LLTs and create a dictionary of the words, and of their stemmed versions, respectively, which belong to LLTs, retaining the information about the set of terms containing each word.", "By the functional notation INLINEFORM0 (resp., INLINEFORM1 ), we refer to the set of LLTs containing the word INLINEFORM2 (resp., the stem of INLINEFORM3 ).", "Function INLINEFORM0 returns the stemmed version of word INLINEFORM1 .", "Function INLINEFORM0 returns the position of word INLINEFORM1 in term INLINEFORM2 .", " INLINEFORM0 is a flag, initially set to 0, which is set to 1 if at least one stemmed match with the MedDRA term INLINEFORM1 is found.", " INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are arrays and INLINEFORM3 appends INLINEFORM4 to array INLINEFORM5 , where INLINEFORM6 may be an element or a sequence of elements.", " INLINEFORM0 ( INLINEFORM1 ) are the weights related to the criteria defined in Section UID23 .", "Procedure INLINEFORM0 performs the multi-value sorting of the array INLINEFORM1 based on the values of the properties INLINEFORM2 of its elements.", "Procedure INLINEFORM0 , where INLINEFORM1 is a set of terms and INLINEFORM2 is a term, tests whether INLINEFORM3 (considered as a string) is a prefix of a term in INLINEFORM4 . 
Dually, procedure INLINEFORM5 tests if in INLINEFORM6 there are one or more prefixes of INLINEFORM7 , and eventually remove them from INLINEFORM8 .", "Function INLINEFORM0 specifies whether a word INLINEFORM1 has been already covered (i.e., a term voted by INLINEFORM2 has been selected) in the (partial) solution during the term release: INLINEFORM3 holds 1 if INLINEFORM4 has been covered (with or without stemming) and it holds 0 otherwise. We assume that before starting the final phase of building the solution (i.e., the returned set of LLTs), INLINEFORM5 for any word INLINEFORM6 belonging to the description.", "Procedures INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 is a set of terms, implement ordered-phrases and maximal-set-of-voters criteria (defined in Section UID28 ), respectively.", "Function INLINEFORM0 , returns the first INLINEFORM1 elements of an ordered set INLINEFORM2 . If INLINEFORM3 , the function returns the complete list of ordered terms and INLINEFORM4 nil values.", "[!t] MagiCoder( INLINEFORM0 text, INLINEFORM1 dictionary, INLINEFORM2 integer)", " INLINEFORM0 : the narrative description;", " INLINEFORM0 : a data structure containing the MedDRA INLINEFORM1 s;", " INLINEFORM0 : the maximum number of winning terms that have to be released by the procedure an ordered set of LLTs INLINEFORM1 = CreateMetaDict( INLINEFORM2 ) INLINEFORM3 = CreateStemMetaDict( INLINEFORM4 ) adr_clear = Preprocessing( INLINEFORM5 ) adr_length = adr_clear.length INLINEFORM6 = INLINEFORM7 for each non-stop-word in the description (i INLINEFORM8 test whether the current word belongs to MedDRA adr_clear[i] INLINEFORM9 for each term containing the word t INLINEFORM10 (adr_clear[i]) keep track of the index of the voting word INLINEFORM11 [ INLINEFORM12 ,i] keep track of the index of the recognized word in INLINEFORM13 INLINEFORM14 [ INLINEFORM15 , INLINEFORM16 (adr_clear[i])]", " INLINEFORM0 = INLINEFORM1 test if the current (stemmed) word belongs the stemmed MedDRA stem(adr_clear[i]) INLINEFORM2 t INLINEFORM3 (stem(adr_clear[i])) test if the current term has not been exactly voted by the same word i INLINEFORM4 INLINEFORM5 [ INLINEFORM6 , i] INLINEFORM7 [ INLINEFORM8 , INLINEFORM9 (adr_clear[i])] keep track that INLINEFORM10 has been covered by a stemmed word INLINEFORM11 = true INLINEFORM12 = INLINEFORM13 for each voted term, calculate the four weights of the corresponding criteria t INLINEFORM14 INLINEFORM15 [ INLINEFORM16 ] filtering of the voted terms by the first heuristic criterium INLINEFORM17 multiple value sorting of the voted terms INLINEFORM18 = sortby( INLINEFORM19 ) t INLINEFORM20 index INLINEFORM21 select a term INLINEFORM22 if it has been completely covered, its i-th voting word has not been covered or if its i-th voting word has been perfectly recognized in INLINEFORM23 and if INLINEFORM24 is not prefix of another already selected terms INLINEFORM25 AND (( INLINEFORM26 = false OR (mark(adr_clear(index))=0)) AND t INLINEFORM27 AND prefix( INLINEFORM28 ,t)=false) mark(adr_clear(index))=1 remove from the selected term set all terms which are prefix of INLINEFORM29 INLINEFORM30 = remove_prefix( INLINEFORM31 ,t) INLINEFORM32 = INLINEFORM33 filtering of the finally selected terms by the second heuristic criterium INLINEFORM34 INLINEFORM35 INLINEFORM36 Pseudocode of MagiCoder" ], [ "Let us now conclude this section by sketching the analysis of the computational complexity of MagiCoder.", "Let INLINEFORM0 be the input size (the length, in terms of words, of the narrative description). 
Let INLINEFORM1 be the cardinality of the dictionary (i.e., the number of terms). Moreover, let INLINEFORM2 be the number of distinct words occurring in the dictionary and let INLINEFORM3 be the length of the longest term in the dictionary. For MedDRA, we have about 75K terms ( INLINEFORM4 ) and 17K unique words ( INLINEFORM5 ). Notice that, reasonably, INLINEFORM6 is a small constant for any dictionary; in particular, for MedDRA we have INLINEFORM7 . We assume that all update operations on auxiliary data structures require constant time INLINEFORM8 .", "Building the meta-dictionaries INLINEFORM0 and INLINEFORM1 requires INLINEFORM2 time units. In fact, the simplest procedure to build these hash tables is to scan the LLT dictionary and, for each term INLINEFORM3 , to verify for each word INLINEFORM4 belonging to INLINEFORM5 whether INLINEFORM6 is a key in the hash table (this can be done in constant time). If INLINEFORM7 is a key, then we have to update the values associated to INLINEFORM8 , i.e., we add INLINEFORM9 to the set of terms containing INLINEFORM10 . Otherwise, we add the new key INLINEFORM11 and the associated term INLINEFORM12 to the hash table. We note that these meta-dictionaries are computed only once, when the MedDRA dictionary changes (twice per year); after that, as many narrative texts as we want can be encoded without the need to rebuild them.", "It can be easily verified that the voting procedure requires in the worst case INLINEFORM0 steps: this is a totally conservative bound, since this worst case would imply that each word of the description appears in all the terms of the dictionary. A simple analysis of the occurrences of the words in MedDRA shows that this worst case never occurs: in fact, the maximal absolute frequency of a MedDRA word is 3937, and the average of the frequencies of the words is 19.1. Thus, the real computational complexity is usually much lower than this worst case.", "The computation of the criteria-related weights requires INLINEFORM0 time units. In particular: both criterion one and criterion two require INLINEFORM1 time steps; criterion three requires INLINEFORM2 (we assume that the complexity of the pair distance function is absorbed); criterion four requires INLINEFORM3 time units.", "The subsequent multi-value sorting based on the computed weights is a sorting algorithm whose complexity can be approximated by INLINEFORM0 , based on the comparison of objects of four elements (i.e., the weights of the four criteria). Since the number of the criteria-related weights involved in the multi-sorting is constant, it can be neglected. Thus, the complexity of the multi-value sorting can be considered to be INLINEFORM1 .", "Finally, deriving the best solutions requires INLINEFORM0 steps. The ordered-phrases criterion requires INLINEFORM1 ; the maximal-set-of-voters criterion takes INLINEFORM2 time units.", "Thus, we conclude that MagiCoder requires in the worst case INLINEFORM0 computational steps. We again highlight that this is a (very) worst case scenario, while on average it performs much better. Moreover, we did not take into account that each phase works on a subset of the terms of the previous phase, and the size of these subsets rapidly decreases in common applications. For instance, the selection phase works only on voted terms, thus, in common applications, on a subset of the original dictionary." 
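To make the construction analysed above concrete, here is a minimal sketch (illustrative only; variable names and data layout are assumptions, not the actual MagiCoder data structures) of the meta-dictionary construction and of the word-by-word voting scan, written in Python:

from collections import defaultdict

def build_meta_dictionaries(llt_dict, stem):
    # llt_dict: hypothetical mapping LLT id -> list of (non-stop) words of the term.
    # Builds two inverted indexes: word -> set of LLT ids, stemmed word -> set of LLT ids.
    meta, meta_stem = defaultdict(set), defaultdict(set)
    for llt_id, words in llt_dict.items():      # one scan of the N terms, each of length <= K
        for w in words:
            meta[w].add(llt_id)                 # constant-time hash-table updates
            meta_stem[stem(w)].add(llt_id)
    return meta, meta_stem

def vote(tokens, llt_dict, meta, meta_stem, stem):
    # tokens: preprocessed description, a list of (word, stemmed_word) pairs.
    # For each voted LLT we track voter positions in the text, voted positions in
    # the term, and whether at least one match required stemming.
    voters_in_text = defaultdict(list)
    voted_in_term = defaultdict(list)
    used_stem = defaultdict(bool)
    for i, (word, stemmed) in enumerate(tokens):
        exact_hits = meta.get(word, set())
        for llt_id in exact_hits:
            voters_in_text[llt_id].append(i)
            voted_in_term[llt_id].append(llt_dict[llt_id].index(word))
        # A stemmed match is considered only for terms not already matched exactly.
        for llt_id in meta_stem.get(stemmed, set()) - exact_hits:
            voters_in_text[llt_id].append(i)
            term_stems = [stem(w) for w in llt_dict[llt_id]]
            voted_in_term[llt_id].append(term_stems.index(stemmed))
            used_stem[llt_id] = True
    return voters_in_text, voted_in_term, used_stem

The subsequent weighting, multi-value sorting, and release phases would then operate only on the keys of voters_in_text, i.e., on the voted terms, which is why in practice they touch a small subset of the dictionary.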
], [ "MagiCoder has been implemented as a VigiFarmaco plug-in: people responsible for pharmacovigilance can consider the results of the auto-encoding of the narrative description and then revise and validate it. Figure FIGREF50 shows a screenshot of VigiFarmaco during this task. In the top part of the screen it is possible to observe the five sections composing a report. The screenshot actually shows the result of a human-MagiCoder interaction: by pressing the button “Autocodifica in MedDRA” (in English, “MedDRA auto-encoding”), the responsible for pharmacovigilance obtains a MedDRA encoding corresponding to the natural language ADR in the field “Descrizione\" (in English, “Description”). Up to six solutions are proposed as the best MedDRA term candidates returned by MagiCoder: the responsible can refuse a term (through the trash icon), change one or more terms (by an option menu), or simply validate the automatic encoding and switch to the next section “Farmaci” (in English, “Drugs”). The maximum number of six terms to be shown has been chosen in order to supply pharmacovigilance experts with a set of terms extended enough to represent the described adverse drug reaction but not so large to be redundant or excessive.", "We are testing MagiCoder performances in the daily pharmacovigilance activities. Preliminary qualitative results show that MagiCoder drastically reduces the amount of work required for the revision of a report, allowing the pharmacovigilance stakeholders to provide high quality data about suspected ADRs." ], [ "In this section we describe the experiments we performed to evaluate MagiCoder performances. The test exploits a large amount of manually revised reports we obtained from VigiSegn BIBREF3 .", "We briefly recall two metrics we used to evaluate MagiCoder: precision and recall.", "In statistical hypothesis and in particular in binary classification BIBREF28 , two main kinds of errors are pointed out: false positive errors (FP) and false negative errors (FN). In our setting, these errors can be viewed as follows: a false positive error is the inopportune retrieval of a “wrong” LLT, i.e., a term which does not correctly encode the textual description; a false negative error is the failure in the recognition of a “good” LLT, i.e., a term which effectively encode (a part of) the narrative description and that would have been selected by a human expert. As dual notions of false positive and false negative, one can define correct results, i.e., true positive (TP) and true negative (TN): in our case, a true positive is a correctly returned LLT, and a true negative is an LLT which, correctly, has not been recognized as a solution.", "Following the information retrieval tradition, the standard approach to system evaluation revolves around the notion of relevant and non-relevant solution (in information retrieval, a solution is represented by a document BIBREF28 ). We provide here a straightforward definition of relevant solution. A relevant solution is a MedDRA term which correctly encode the narrative description provided to MagiCoder. A retrieved solution is trivially defined as an output term, independently from its relevance. 
We dub the sets of relevant solutions and retrieved solutions as INLINEFORM0 and INLINEFORM1 , respectively.", "The evaluation of the false positive and false negative rates, and in particular of the impact of relevant solutions within the whole set of retrieved solutions, is crucial in order to estimate the quality of the automatic encoding.", "The precision (P), also called positive predictive value, is the percentage of retrieved solutions that are relevant. The recall (R), also called sensitivity, is the percentage of all relevant solutions returned by the system.", "Table TABREF51 summarizes the formulas for precision and recall. We provide formulas both in terms of relevant/retrieved solutions and in terms of false positives, true positives, and false negatives.", "It is worth noting that the binary classification of solutions as relevant or non-relevant is referred to as the gold standard judgment of relevance. In our case, the gold standard is represented by a human encoding of a narrative description, i.e., a set of MedDRA terms chosen by a pharmacovigilance expert. Such a set is assumed to be correct (only correct solutions are returned) and complete (all correct solutions have been returned)." ], [ "To evaluate MagiCoder's performance, we developed a benchmark which automatically compares MagiCoder's behavior with the human encoding on already manually revised and validated ADR reports.", "For this purpose, we exploited VigiSegn, a data warehouse and OLAP system that has been developed for the Italian Pharmacovigilance National Center BIBREF3 . This system is based on the open source business intelligence suite Pentaho. VigiSegn offers a large number of encoded ADRs. The encoding has been manually performed and validated by experts working at pharmacovigilance centres. Encoding results have then been sent to the national regulatory authority, AIFA.", "We performed a test composed of the following steps.", "We launch an ETL procedure through Pentaho Data Integration. Reports are transferred from VigiSegn to an ad hoc database, TestDB. The dataset covers all the 4445 reports received, revised, and validated during the year 2014 for the Italian region Veneto.", "The ETL procedure extracts the narrative descriptions from the reports stored in TestDB. For each description, the procedure calls MagiCoder from VigiFarmaco; the output, i.e., a list of MedDRA terms, is stored in a table of TestDB.", "Manual and automatic encodings of each report are finally compared through an SQL query. In order to have two uniform data sets, we compared only those reports where MagiCoder recognized at most six terms, i.e., the maximum number of terms that human experts are allowed to select through the VigiFarmaco user interface. Moreover, we map each LLT recognized by both the human experts and MagiCoder to its corresponding preferred term. Results are discussed below in Section UID57 .", "Table TABREF58 shows the results of this first performance test. We group narrative descriptions by increasing length (in terms of characters). We note that the reported results are computed considering terms at the PT level. By moving to the PT level, instead of using the LLT level, we group together terms that represent the same medical concept (i.e., the same adverse reaction). In this way, we do not consider it an error when MagiCoder and the human expert use two different LLTs for representing the same adverse event. 
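Purely for illustration (the actual comparison is performed with an SQL query, as described above), the per-report comparison at the PT level and the derived precision and recall could be computed along the following lines; llt_to_pt and the two input sets are hypothetical names, not part of the original benchmark code.

def score_report(human_llts, magicoder_llts, llt_to_pt):
    # Map both encodings to the PT level, so that two different LLTs sharing
    # the same preferred term are not counted as errors.
    gold = {llt_to_pt[t] for t in human_llts}        # relevant solutions (gold standard)
    pred = {llt_to_pt[t] for t in magicoder_llts}    # retrieved solutions
    tp = len(gold & pred)                            # true positives
    fp = len(pred - gold)                            # false positives
    fn = len(gold - pred)                            # false negatives
    precision = tp / len(pred) if pred else 1.0
    recall = tp / len(gold) if gold else 1.0
    return tp, fp, fn, precision, recall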
The use of the LLT level for reporting purposes and the PT level for analysis purposes is also suggested by MedDRA BIBREF5 . With common PT we mean the percentage of preferred terms retrieved by human reviewers that have also been recognized by MagiCoder. Reported performances are also summarized in FIGREF59 . Note that false positive and false negative errors are required to be as small as possible, while common PT, recall, and precision have to be as large as possible.", "MagiCoder behaves very well on very short descriptions (class 1) and on short ones (class 2). Recall and precision remain greater than 50% up to class 4. Notice that very long descriptions (class 5), on which performances drastically decrease, represent a negligible percentage of the whole set (less than 0.3%). Some remarks are in order. It is worth noting that this test simply estimates how similar, for each report, the MagiCoder behavior is to the manual work, without considering the effective quality of the manual encoding. Clearly, as these are official reports, revised and sent to RNF, we assume we are dealing with a high-quality encoding; nevertheless, some errors in the human encoding may occur. Moreover, the query we perform to compare manual and automatic encoding is, obviously, quantitative. For each VigiSegn report, the query is able to detect common retrieved terms and terms returned either by the human expert or by MagiCoder. It is not able to fairly test redundancy errors: human experts make some encoding choices in order to avoid repetitions. Thus, an LLT INLINEFORM0 returned by MagiCoder that has not been selected by the expert because it is redundant is not truly a false positive. As a significant counterpart, as previously said, we notice that some reports contain slight human omissions/errors. This suggests that we are underestimating MagiCoder's performance. See the next section for some simple but significant examples." ], [ "Table TABREF61 provides some examples of the behavior of MagiCoder. We propose some free-text ADR descriptions from TestDB and we provide both the manual and the automatic encodings into LLTs. We also provide the English translation of the natural language texts (we actually provide a quite straightforward literal translation).", "", "In Table TABREF61 we use the following notation: INLINEFORM0 and INLINEFORM1 are two identical LLTs retrieved both by the human and the automatic encoding; INLINEFORM2 and INLINEFORM3 are two semantically equivalent or similar LLTs (i.e., LLTs with the same PT) retrieved by the human and the automatic encoding, respectively; we use bold type to denote terms that have been recognized by MagiCoder but that have not been encoded by the reviewer; we use italic type in D1, D2, D3 to denote text recognized only by MagiCoder. For example, in description D3, “cefalea” (in English, “headache”) is retrieved and encoded both by the human reviewer and by MagiCoder; in description D2, the ADR “febbre” (in English, “fever”) has been encoded with the term itself by the algorithm, whereas the reviewer encoded it with its synonym “piressia”; in D1, the ADR “ipotensione” (in English, “hypotension”) has been retrieved only by MagiCoder.", "To exemplify how the ordered-phrases heuristic works, we can notice that in D2 MagiCoder did not retrieve the MedDRA term “Vescicole in sede di vaccinazione” (10069623), Italian for “Vaccination site vesicles”. 
It belongs to the set of the voted solutions (since INLINEFORM0 ), but it has been pruned from the list of the winning terms by the ordered-phrase heuristic criterion." ], [ "We discuss here some interesting issues we encountered while developing MagiCoder. We explain the choices we made and consider some open questions." ], [ "Stemming is a useful tool for natural language processing and text searching and classification. The extraction of the stemmed form of a word is a non-trivial operation, and algorithms for stemming are very efficient. In particular, stemming for the Italian language is especially critical: this is due to the complexity of the language and the number of linguistic variations and exceptions.", "For the first implementation of MagiCoder as a VigiFarmaco plug-in, we used a robust implementation of the Italian stemming procedure. The procedure takes into account subtle properties of the language; in addition to the simple recognition of words up to plurals and genders, it is able, in the majority of cases, to recognize an adjectival form of a noun by extracting the same syntactical root.", "Despite the efficiency of this auxiliary algorithm, we noticed that the recognition of some MedDRA terms was lost: in some sense, this stemming algorithm is too “aggressive” and, in some cases, counterintuitive. For example, the Italian adjective “psichiatrico” (in English, “psychiatric”) and its plural form “psichiatrici” have two different stems, “psichiatr” and “psichiatric”, respectively. Thus, in this case the stemmer fails to recognize the singular and plural forms of the same word.", "We then decided to adopt the stemming algorithm also used in Apache Lucene, an open source text search engine library. This procedure is less refined than the stemming algorithm cited above, and can be considered a “light” stemmer: it simply elides the final vowels of a word. This induces a conservative approach and a uniform processing of the whole set of MedDRA words. This is unsatisfactory for a general text processing problem, but it is fruitful in our setting. We repeated the MagiCoder testing both with the classical and the light stemmer: in the latter case, we measured a global improvement of MagiCoder performance. Regarding common retrieved preferred terms, we observed an average improvement of about INLINEFORM0 : percentages for classes 1, 2, 3, 4 and 5 move from INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , respectively, to the values in Table TABREF58 . It is reasonable to think that a simple stemming algorithm maintains the recognition of words up to plurals and genders, but in most cases, the recognition up to noun or adjectival form is potentially lost. Notwithstanding, we claim that it is possible to reduce this disadvantage thanks to the embedding in the dictionary of a reasonable set of synonyms of LLTs (see Section SECREF66 )." ], [ "MagiCoder performs a purely syntactical recognition (up to stemming) of words in the narrative description: no semantic information is used in the current version of the algorithm. In written informal language, synonyms are frequently used. A natural evolution of our NLP software may be the addition of an Italian thesaurus dictionary. This might appear a trivial extension: one could try to match MedDRA both with the original words and their synonyms, and try to maximize the set of retrieved terms.
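Purely as an illustration of this naive idea (not MagiCoder's actual behaviour), the following Python fragment expands each description token with thesaurus entries before looking it up in the term dictionary; the tiny `thesaurus` and `meddra_words` sets are invented examples.

```python
# Naive synonym expansion before dictionary matching (illustration only).
# Both resources below are invented toy examples.

thesaurus = {
    "faccia": ["viso", "espressione"],   # figurative senses slip in as well
    "febbre": ["piressia"],
}

meddra_words = {"piressia", "viso", "cefalea"}  # words occurring in LLTs

def expanded_matches(tokens):
    """Return, for each token, the dictionary words matched by the token
    itself or by one of its synonyms."""
    matches = {}
    for tok in tokens:
        candidates = [tok] + thesaurus.get(tok, [])
        hits = [c for c in candidates if c in meddra_words]
        if hits:
            matches[tok] = hits
    return matches

print(expanded_matches(["febbre", "faccia"]))
# {'febbre': ['piressia'], 'faccia': ['viso']}
```

Even in this toy example the candidate list per token grows with every synonym, and figurative senses such as “espressione” enter the candidate set, which anticipates the precision and complexity problems discussed next.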
We performed a preliminary test, and we observed a drastic deterioration of MagiCoder performance (both in terms of correctness and completeness): on average, the common PT percentage decreases by 24%. The main reason is related to the nature of the Italian language: synonym groups include words related by figurative meaning. For example, among the synonyms of the word “faccia” (in English, “face”), one finds “viso” (in English, “visage”), which is semantically related, but also “espressione” (in English, “expression”), which is not relevant in the considered medical context. Moreover, the use of synonyms of words in ADR text leads to an uncontrolled growth of the voted terms, which can hardly be dropped later in the final term release. Furthermore, the word-by-word recognition performed by MagiCoder, with the uncontrolled increase of the processed tokens (original words plus synonyms plus possible combinations), could induce a serious worsening of the computational complexity. Thus, we claim that this is not the most suitable way to address the problem, and designing an efficient strategy to solve it is not trivial.", "We are developing a different solution, working side-by-side with the pharmacovigilance experts. The idea, loosely inspired by the Consumer Health Vocabulary (recalled in Section SECREF2 and used in BIBREF16 ), is to collect a set of pseudo-LLTs, in order to enlarge the official MedDRA terminology and to generate a new ADR lexicon. This will be done on the basis of frequently retrieved locutions which are semantically equivalent to LLTs. A pseudo-LLT will be regularly voted and sorted by MagiCoder and, if selected, the software will release the official (semantically equivalent) MedDRA term. Notice that, in contrast to the single-word synonym solution, each pseudo-LLT is related to one and only one official term: this clearly keeps the complexity deterioration under control. Up to now, we have added to the official MedDRA terminology a set of about 1300 locutions. We automatically generated such a lexicon by considering three nouns that frequently occur in MedDRA, “aumento”, “diminuzione”, and “riduzione” (in English “increase”, “decrease”, and “reduction”, respectively) and their adjectival forms. For each LLT containing one of these nouns (resp., adjectives) we generate an equivalent term taking into account the corresponding adjective (resp., noun).", "This small set of synonyms induces a global improvement of MagiCoder performance on classes 4 and 5. For Class 4, the common retrieved PT percentage, precision and recall all increase by INLINEFORM0 . For Class 5, we observe a significant increase: common retrieved PT moves from INLINEFORM1 to INLINEFORM2 ; precision moves from INLINEFORM3 to INLINEFORM4 ; recall moves from INLINEFORM5 to INLINEFORM6 .", "The false negative and false positive rates also suggest that building the MedDRA-thesaurus is a promising extension. False negatives move from INLINEFORM0 to INLINEFORM1 for Class 4 and from INLINEFORM2 to INLINEFORM3 for Class 5. The false positive percentage decreases by INLINEFORM4 for both Class 4 and Class 5.", "Class 5, which benefits particularly from the introduction of the pseudo-LLTs, represents a small slice of the set of reports. Nevertheless, these cases are very arduous to address, and we have, at least, good evidence of the validity of our approach." ], [ "As previously said, in MagiCoder we do not take into account the structure of written sentences.
In this sense, our procedure is radically different from those based on so-called part-of-speech (PoS) tagging BIBREF29 , powerful methodologies able to perform the morpho-syntactical analysis of texts, labeling each lexical item with its grammatical properties. PoS-based text analyzers are also able to detect and deal with logical connectives such as conjunctions, disjunctions and negations. Even if connectives generally play a central role in the logical foundation of natural languages, they are of minor relevance to the problem we are addressing: ADR reports are on average badly/hurriedly written, or they do not have a complex structure (we empirically noted this also for long descriptions). Nevertheless, negation deserves distinct consideration, since the presence of a negation can drastically change the meaning of a phrase. First, we evaluated the frequency of negation connectives in ADR reports: we considered the same sample exploited in Section SECREF52 , and we counted the occurrences of the words “non” (Italian for “not”) and “senza” (Italian for “without”): we detected potential negations in 162 reports (i.e., only in the INLINEFORM0 of the total number, 4445). Even though negative sentences seem to be uncommon in ADR descriptions, the detection of negative forms is a short-term issue we plan to address. As a first step, we plan to recognize words that may represent negations and to signal them to the reviewer through the graphical UI. In this way, the software alerts the report reviewer about the (possible) failure of the syntactical recognition." ], [ "As previously said, in order to provide effective support to the human revision work, it is necessary to provide only a small set of possible solutions. To this end, in the selection phase (described in Section UID28 ), we performed drastic cuts on voted LLTs. For example, only completely covered LLTs can contribute to the set of winning terms. This is clearly a restrictive threshold, which makes complete sense in a context where at most six solutions can be returned. In a less restrictive setting, one can relax the threshold above and try to understand how to filter more “promising” solutions among partially covered terms. In this perspective, we developed a further criterion, the Coverage Distribution, based on assumptions we made about the structure of (Italian) sentences. The following formula simply sums the indexes of the covered words for INLINEFORM0 : INLINEFORM1 ", "If INLINEFORM0 is small, it means that words in the first positions of term INLINEFORM1 have been covered. We defined INLINEFORM2 to discriminate between possibly joint winning terms. Indeed, an Italian medical description of a pathology frequently has the following shape: name of the pathology + “location” or adjective. Intuitively, we privilege terms for which the recognized words are probably the ones describing the pathology. The addition of INLINEFORM3 (with the discard of condition INLINEFORM4 in the final selection) could improve the quality of the solution if a larger set of winning terms is admissible or in case the complete ordered list of voted terms is returned." ]
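As a minimal illustration of the Coverage Distribution criterion just described (a sketch under the assumption, stated above, that the criterion simply sums the positions of the covered words of a term; the example term, the 0-based indexing, and the coverage data are invented):

```python
# Sketch of the Coverage Distribution criterion: the sum of the positions of
# the covered words of a candidate term (0-based indexing assumed here).

def coverage_distribution(covered_positions):
    """Sum of the positions of the covered words within a candidate term."""
    return sum(covered_positions)

# Hypothetical candidate term "dolore addominale superiore" (upper abdominal pain):
# coverage of the leading words (which typically name the pathology) scores lower
# than coverage of the trailing qualifiers.
print(coverage_distribution([0, 1]))  # first two words covered    -> 1
print(coverage_distribution([1, 2]))  # only the qualifiers covered -> 3
```

The smaller the value, the more the recognized words sit at the beginning of the term, which is the situation the criterion is meant to privilege.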
, [ "In this paper we proposed MagiCoder, a simple and efficient NLP software able to provide concrete support to the pharmacovigilance task in the revision of ADR spontaneous reports. MagiCoder takes as input a narrative description of a suspected ADR and produces as output a list of MedDRA terms that “covers” the medical meaning of the free-text description. Differently from other BioNLP software proposed in the literature, we developed an original text processing procedure. Preliminary results on MagiCoder performance are encouraging. Let us sketch here some ongoing and future work.", "We are addressing the task of including ad hoc knowledge, such as the MedDRA-thesaurus described in Section SECREF66 . We are also verifying that MagiCoder is robust with respect to language (and dictionary) changes. The way the algorithm has been developed suggests that MagiCoder can be a valid tool also for narrative descriptions written in English. Indeed, the algorithm retrieves a set of words, which covers an LLT INLINEFORM0 , from a free-text description, only marginally considering the order between words or the structure of the sentence. This way, we avoid the problem of “specializing” MagiCoder for any given language. We plan to test MagiCoder on the English MedDRA and, moreover, we aim to test our procedure on different dictionaries (e.g., the ICD-9 classification, WHO-ART, SNOMED CT). We are collecting several sources of manually annotated corpora as potential testing platforms. Moreover, we plan to address the management of orthographical errors possibly contained in narrative ADR descriptions. We did not take this issue into account in the current version of MagiCoder. A solution could include an ad hoc (medical term-oriented) spell checker in VigiFarmaco, to point out to the user that she/he is making an error while writing the current word in the free description field. This should drastically reduce users' orthographical errors without heavy side effects on MagiCoder development and performance. Finally, we aim to apply MagiCoder (and its refinements) to different sources for ADR detection, such as drug information leaflets and social media BIBREF16 , BIBREF30 ." ] ], "section_name": [ "Introduction", "Natural language processing and text mining in medicine", "MedDRA Dictionary", "MagiCoder: an NLP software for ADR automatic encoding", "MagiCoder: overview", "MagiCoder: structure of the algorithm", "MagiCoder complexity analysis", "Software implementation: the user interface", "Testing MagiCoder performances", "Experiment about MagiCoder performances", "Examples", "Discussion", "Stemming and performance of the NLP software", "Synonyms", "Connectives in the narrative descriptions", "On the selection of voted terms", "Conclusions and future work" ] }
{ "answers": [ { "annotation_id": [ "4ee9a4dfe6fbdeba295baad2c405c2ac8f676329", "5250a250a7bcfa33dbe9e4c4827185a257330be0", "a9553739783c00e14782ce4714c31ce07dbfce67" ], "answer": [ { "evidence": [ "Let us now conclude this section by sketching the analysis of the computational complexity of MagiCoder.", "Finally, to derive the best solutions actually requires INLINEFORM0 steps. The ordered-phrases criterium requires INLINEFORM1 ; the maximal set of voters criterium takes INLINEFORM2 time units.", "Thus, we conclude that MagiCoder requires in the worst case INLINEFORM0 computational steps. We again highlight that this is a (very) worst case scenario, while in average it performs quite better. Moreover, we did not take into account that each phase works on a subset of terms of the previous phase, and the size of these subset rapidly decreases in common application.", "the selection phase works only on voted terms, thus, in common applications, on a subset of the original dictionary." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Let us now conclude this section by sketching the analysis of the computational complexity of MagiCoder.\n\n", "Finally, to derive the best solutions actually requires INLINEFORM0 steps. The ordered-phrases criterium requires INLINEFORM1 ; the maximal set of voters criterium takes INLINEFORM2 time units.\n\nThus, we conclude that MagiCoder requires in the worst case INLINEFORM0 computational steps. We again highlight that this is a (very) worst case scenario, while in average it performs quite better. Moreover, we did not take into account that each phase works on a subset of terms of the previous phase, and the size of these subset rapidly decreases in common application.\n\nthe selection phase works only on voted terms, thus, in common applications, on a subset of the original dictionary." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "To evaluate MagiCoder performances, we developed a benchmark, which automatically compares MagiCoder behavior with human encoding on already manually revised and validated ADR reports." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "To evaluate MagiCoder performances, we developed a benchmark, which automatically compares MagiCoder behavior with human encoding on already manually revised and validated ADR reports." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "We are testing MagiCoder performances in the daily pharmacovigilance activities. Preliminary qualitative results show that MagiCoder drastically reduces the amount of work required for the revision of a report, allowing the pharmacovigilance stakeholders to provide high quality data about suspected ADRs." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Preliminary qualitative results show that MagiCoder drastically reduces the amount of work required for the revision of a report, allowing the pharmacovigilance stakeholders to provide high quality data about suspected ADRs." 
], "unanswerable": false, "yes_no": true } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "301ff950574869195d5b861ccb6fd95efb9c26c4", "66e1e8bd498a039b311a2b25481b59983e9afd71", "b6ec90d8d2a711ff94f588ba4c05a8b03c583ec9" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false }, { "evidence": [ "MagiCoder behaves very well on very short descriptions (class 1) and on short ones (class 2). Recall and precision remain greater than 50% up to class 4. Notice that very long descriptions (class 5), on which performances drastically decrease, represent a negligible percentage of the whole set (less than 0.3%). Some remarks are mandatory. It is worth noting that this test simply estimates how much, for each report, the MagiCoder behavior is similar to the manual work, without considering the effective quality of the manual encoding. Clearly, as a set of official reports, revised and sent to RNF, we assume to deal with an high-quality encoding: notwithstanding, some errors in the human encoding possibly occur. Moreover, the query we perform to compare manual and automatic encoding is, obviously, quantitative. For each VigiSegn report, the query is able to detect common retrieved terms and terms returned either by the human expert or by MagiCoder. It is not able to fairly test redundancy errors: human experts make some encoding choices in order to avoid repetitions. Thus, an LLT INLINEFORM0 returned by MagiCoder that has not been selected by the expert because redundant is not truly a false positive. As a significative counterpart, as previously said, we notice that some reports contain slightly human omissions/errors. This suggest the evidence that we are underestimating MagiCoder performances. See the next section for some simple but significative examples.", "Class 5, which enjoys a particular advantage from the introduction of the pseudo-LLTs, represents a small slice of the set of reports. Notwithstanding, these cases are very arduous to address, and we have, at least, a good evidence of the validity of our approach." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "MagiCoder behaves very well on very short descriptions (class 1) and on short ones (class 2). Recall and precision remain greater than 50% up to class 4. Notice that very long descriptions (class 5), on which performances drastically decrease, represent a negligible percentage of the whole set (less than 0.3%).", "Class 5, which enjoys a particular advantage from the introduction of the pseudo-LLTs, represents a small slice of the set of reports. Notwithstanding, these cases are very arduous to address, and we have, at least, a good evidence of the validity of our approach." 
], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "12e5a46d0f95d23ba0becc321d9a662b7908e772", "4dad974f16b98a0b20f39bdcf8a63fff94b1fea7", "917b69cdf70b023421bc9077839fc612ea559c28" ], "answer": [ { "evidence": [ "MagiCoder: overview", "The main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms.", "We can distinguish five phases in the procedure that will be discussed in detail in Sections UID18 , UID19 , UID20 , UID23 , UID28 , respectively.", "Preprocessing of the original text: tokenization (i.e., segmentation of the text into syntactical units), stemming (i.e., reduction of words to a particular root form), elimination of computationally irrelevant words.", "Word-by-word linear scan of the description and “voting task”: a word “votes” LLTs it belongs to. For each term voted by one or more words, we store some information about the retrieved syntactical matching.", "Weights calculation: recognized terms are weighted depending on information about syntactical matching.", "Sorting of voted terms and winning terms release: the set of voted term is pruned, terms are sorted and finally a solution (a set of winning terms) is released." ], "extractive_spans": [ "Preprocessing of the original text", "Word-by-word linear scan of the description and “voting task”", "Weights calculation", "Sorting of voted terms and winning terms release" ], "free_form_answer": "", "highlighted_evidence": [ "MagiCoder: overview\nThe main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms.", "We can distinguish five phases in the procedure that will be discussed in detail in Sections UID18 , UID19 , UID20 , UID23 , UID28 , respectively.", "Preprocessing of the original text: tokenization (i.e., segmentation of the text into syntactical units), stemming (i.e., reduction of words to a particular root form), elimination of computationally irrelevant words.\n\nWord-by-word linear scan of the description and “voting task”: a word “votes” LLTs it belongs to. For each term voted by one or more words, we store some information about the retrieved syntactical matching.\n\nWeights calculation: recognized terms are weighted depending on information about syntactical matching.\n\nSorting of voted terms and winning terms release: the set of voted term is pruned, terms are sorted and finally a solution (a set of winning terms) is released." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Figure SECREF34 depicts the pseudocode of MagiCoder. We represent dictionaries either as sets of words or as sets of functions. 
We describe the main procedures and functions used in the pseudocode.", "Procedure INLINEFORM0 takes the narrative description, performs tokenization and stop-word removal and puts it into an array of words.", "Procedures INLINEFORM0 and INLINEFORM1 get LLTs and create a dictionary of words and of their stemmed versions, respectively, which belong to LLTs, retaining the information about the set of terms containing each word.", "By the functional notation INLINEFORM0 (resp., INLINEFORM1 ), we refer to the set of LLTs containing the word INLINEFORM2 (resp., the stem of INLINEFORM3 ).", "Function INLINEFORM0 returns the stemmed version of word INLINEFORM1 .", "Function INLINEFORM0 returns the position of word INLINEFORM1 in term INLINEFORM2 .", "INLINEFORM0 is a flag, initially set to 0, which holds 1 if at least a stemmed matching with the MedDRA term INLINEFORM1 is found.", "INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are arrays and INLINEFORM3 appends INLINEFORM4 to array INLINEFORM5 , where INLINEFORM6 may be an element or a sequence of elements.", "INLINEFORM0 ( INLINEFORM1 ) are the weights related to the criteria defined in Section UID23 .", "Procedure INLINEFORM0 performs the multi-value sorting of the array INLINEFORM1 based on the values of the properties INLINEFORM2 of its elements.", "Procedure INLINEFORM0 , where INLINEFORM1 is a set of terms and INLINEFORM2 is a term, tests whether INLINEFORM3 (considered as a string) is prefix of a term in INLINEFORM4 . Dually, procedure INLINEFORM5 tests if in INLINEFORM6 there are one or more prefixes of INLINEFORM7 , and eventually remove them from INLINEFORM8 .", "Function INLINEFORM0 specifies whether a word INLINEFORM1 has been already covered (i.e., a term voted by INLINEFORM2 has been selected) in the (partial) solution during the term release: INLINEFORM3 holds 1 if INLINEFORM4 has been covered (with or without stemming) and it holds 0 otherwise. We assume that before starting the final phase of building the solution (i.e., the returned set of LLTs), INLINEFORM5 for any word INLINEFORM6 belonging to the description.", "Procedures INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 is a set of terms, implement ordered-phrases and maximal-set-of-voters criteria (defined in Section UID28 ), respectively.", "Function INLINEFORM0 , returns the first INLINEFORM1 elements of an ordered set INLINEFORM2 . 
If INLINEFORM3 , the function returns the complete list of ordered terms and INLINEFORM4 nil values.", "[!t] MagiCoder( INLINEFORM0 text, INLINEFORM1 dictionary, INLINEFORM2 integer)", "INLINEFORM0 : the narrative description;", "INLINEFORM0 : a data structure containing the MedDRA INLINEFORM1 s;", "INLINEFORM0 : the maximum number of winning terms that have to be released by the procedure an ordered set of LLTs INLINEFORM1 = CreateMetaDict( INLINEFORM2 ) INLINEFORM3 = CreateStemMetaDict( INLINEFORM4 ) adr_clear = Preprocessing( INLINEFORM5 ) adr_length = adr_clear.length INLINEFORM6 = INLINEFORM7 for each non-stop-word in the description (i INLINEFORM8 test whether the current word belongs to MedDRA adr_clear[i] INLINEFORM9 for each term containing the word t INLINEFORM10 (adr_clear[i]) keep track of the index of the voting word INLINEFORM11 [ INLINEFORM12 ,i] keep track of the index of the recognized word in INLINEFORM13 INLINEFORM14 [ INLINEFORM15 , INLINEFORM16 (adr_clear[i])]", "INLINEFORM0 = INLINEFORM1 test if the current (stemmed) word belongs the stemmed MedDRA stem(adr_clear[i]) INLINEFORM2 t INLINEFORM3 (stem(adr_clear[i])) test if the current term has not been exactly voted by the same word i INLINEFORM4 INLINEFORM5 [ INLINEFORM6 , i] INLINEFORM7 [ INLINEFORM8 , INLINEFORM9 (adr_clear[i])] keep track that INLINEFORM10 has been covered by a stemmed word INLINEFORM11 = true INLINEFORM12 = INLINEFORM13 for each voted term, calculate the four weights of the corresponding criteria t INLINEFORM14 INLINEFORM15 [ INLINEFORM16 ] filtering of the voted terms by the first heuristic criterium INLINEFORM17 multiple value sorting of the voted terms INLINEFORM18 = sortby( INLINEFORM19 ) t INLINEFORM20 index INLINEFORM21 select a term INLINEFORM22 if it has been completely covered, its i-th voting word has not been covered or if its i-th voting word has been perfectly recognized in INLINEFORM23 and if INLINEFORM24 is not prefix of another already selected terms INLINEFORM25 AND (( INLINEFORM26 = false OR (mark(adr_clear(index))=0)) AND t INLINEFORM27 AND prefix( INLINEFORM28 ,t)=false) mark(adr_clear(index))=1 remove from the selected term set all terms which are prefix of INLINEFORM29 INLINEFORM30 = remove_prefix( INLINEFORM31 ,t) INLINEFORM32 = INLINEFORM33 filtering of the finally selected terms by the second heuristic criterium INLINEFORM34 INLINEFORM35 INLINEFORM36 Pseudocode of MagiCoder" ], "extractive_spans": [ "Procedure INLINEFORM0 takes the narrative description, performs tokenization and stop-word removal and puts it into an array of words.", "Procedures INLINEFORM0 and INLINEFORM1 get LLTs and create a dictionary of words and of their stemmed versions, respectively", "By the functional notation INLINEFORM0 (resp., INLINEFORM1 ), we refer to the set of LLTs containing the word INLINEFORM2 (resp., the stem of INLINEFORM3 ).", "Function INLINEFORM0 returns the stemmed version of word INLINEFORM1 .\n\n", "Function INLINEFORM0 returns the position of word INLINEFORM1 in term INLINEFORM2 .\n\n", "INLINEFORM0 is a flag, initially set to 0, which holds 1 if at least a stemmed matching with the MedDRA term INLINEFORM1 is found.", "INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are arrays and INLINEFORM3 appends INLINEFORM4 to array INLINEFORM5 , where INLINEFORM6 may be an element or a sequence of elements.", "INLINEFORM0 ( INLINEFORM1 ) are the weights related to the criteria defined in Section UID23 .\n\n", "Procedure INLINEFORM0 performs the multi-value sorting of the array INLINEFORM1 based 
on the values of the properties INLINEFORM2 of its elements.", "Procedure INLINEFORM0 , where INLINEFORM1 is a set of terms and INLINEFORM2 is a term, tests whether INLINEFORM3 (considered as a string) is prefix of a term in INLINEFORM4 .", "Dually, procedure INLINEFORM5 tests if in INLINEFORM6 there are one or more prefixes of INLINEFORM7 , and eventually remove them from INLINEFORM8 .", "Function INLINEFORM0 specifies whether a word INLINEFORM1 has been already covered (i.e., a term voted by INLINEFORM2 has been selected) in the (partial) solution during the term release: INLINEFORM3 holds 1 if INLINEFORM4 has been covered (with or without stemming) and it holds 0 otherwise.", "Procedures INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 is a set of terms, implement ordered-phrases and maximal-set-of-voters criteria (defined in Section UID28 ), respectively.", "Function INLINEFORM0 , returns the first INLINEFORM1 elements of an ordered set INLINEFORM2 . If INLINEFORM3 , the function returns the complete list of ordered terms and INLINEFORM4 nil values." ], "free_form_answer": "", "highlighted_evidence": [ "Figure SECREF34 depicts the pseudocode of MagiCoder. We represent dictionaries either as sets of words or as sets of functions. We describe the main procedures and functions used in the pseudocode.\n\nProcedure INLINEFORM0 takes the narrative description, performs tokenization and stop-word removal and puts it into an array of words.\n\nProcedures INLINEFORM0 and INLINEFORM1 get LLTs and create a dictionary of words and of their stemmed versions, respectively, which belong to LLTs, retaining the information about the set of terms containing each word.\n\nBy the functional notation INLINEFORM0 (resp., INLINEFORM1 ), we refer to the set of LLTs containing the word INLINEFORM2 (resp., the stem of INLINEFORM3 ).\n\nFunction INLINEFORM0 returns the stemmed version of word INLINEFORM1 .\n\nFunction INLINEFORM0 returns the position of word INLINEFORM1 in term INLINEFORM2 .\n\nINLINEFORM0 is a flag, initially set to 0, which holds 1 if at least a stemmed matching with the MedDRA term INLINEFORM1 is found.\n\nINLINEFORM0 , INLINEFORM1 , INLINEFORM2 are arrays and INLINEFORM3 appends INLINEFORM4 to array INLINEFORM5 , where INLINEFORM6 may be an element or a sequence of elements.\n\nINLINEFORM0 ( INLINEFORM1 ) are the weights related to the criteria defined in Section UID23 .\n\nProcedure INLINEFORM0 performs the multi-value sorting of the array INLINEFORM1 based on the values of the properties INLINEFORM2 of its elements.\n\nProcedure INLINEFORM0 , where INLINEFORM1 is a set of terms and INLINEFORM2 is a term, tests whether INLINEFORM3 (considered as a string) is prefix of a term in INLINEFORM4 . Dually, procedure INLINEFORM5 tests if in INLINEFORM6 there are one or more prefixes of INLINEFORM7 , and eventually remove them from INLINEFORM8 .\n\nFunction INLINEFORM0 specifies whether a word INLINEFORM1 has been already covered (i.e., a term voted by INLINEFORM2 has been selected) in the (partial) solution during the term release: INLINEFORM3 holds 1 if INLINEFORM4 has been covered (with or without stemming) and it holds 0 otherwise. 
We assume that before starting the final phase of building the solution (i.e., the returned set of LLTs), INLINEFORM5 for any word INLINEFORM6 belonging to the description.\n\nProcedures INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 is a set of terms, implement ordered-phrases and maximal-set-of-voters criteria (defined in Section UID28 ), respectively.\n\nFunction INLINEFORM0 , returns the first INLINEFORM1 elements of an ordered set INLINEFORM2 . If INLINEFORM3 , the function returns the complete list of ordered terms and INLINEFORM4 nil values.\n\n[!t] MagiCoder( INLINEFORM0 text, INLINEFORM1 dictionary, INLINEFORM2 integer)\n\nINLINEFORM0 : the narrative description;\n\nINLINEFORM0 : a data structure containing the MedDRA INLINEFORM1 s;\n\nINLINEFORM0 : the maximum number of winning terms that have to be released by the procedure an ordered set of LLTs INLINEFORM1 = CreateMetaDict( INLINEFORM2 ) INLINEFORM3 = CreateStemMetaDict( INLINEFORM4 ) adr_clear = Preprocessing( INLINEFORM5 ) adr_length = adr_clear.length INLINEFORM6 = INLINEFORM7 for each non-stop-word in the description (i INLINEFORM8 test whether the current word belongs to MedDRA adr_clear[i] INLINEFORM9 for each term containing the word t INLINEFORM10 (adr_clear[i]) keep track of the index of the voting word INLINEFORM11 [ INLINEFORM12 ,i] keep track of the index of the recognized word in INLINEFORM13 INLINEFORM14 [ INLINEFORM15 , INLINEFORM16 (adr_clear[i])]\n\nINLINEFORM0 = INLINEFORM1 test if the current (stemmed) word belongs the stemmed MedDRA stem(adr_clear[i]) INLINEFORM2 t INLINEFORM3 (stem(adr_clear[i])) test if the current term has not been exactly voted by the same word i INLINEFORM4 INLINEFORM5 [ INLINEFORM6 , i] INLINEFORM7 [ INLINEFORM8 , INLINEFORM9 (adr_clear[i])] keep track that INLINEFORM10 has been covered by a stemmed word INLINEFORM11 = true INLINEFORM12 = INLINEFORM13 for each voted term, calculate the four weights of the corresponding criteria t INLINEFORM14 INLINEFORM15 [ INLINEFORM16 ] filtering of the voted terms by the first heuristic criterium INLINEFORM17 multiple value sorting of the voted terms INLINEFORM18 = sortby( INLINEFORM19 ) t INLINEFORM20 index INLINEFORM21 select a term INLINEFORM22 if it has been completely covered, its i-th voting word has not been covered or if its i-th voting word has been perfectly recognized in INLINEFORM23 and if INLINEFORM24 is not prefix of another already selected terms INLINEFORM25 AND (( INLINEFORM26 = false OR (mark(adr_clear(index))=0)) AND t INLINEFORM27 AND prefix( INLINEFORM28 ,t)=false) mark(adr_clear(index))=1 remove from the selected term set all terms which are prefix of INLINEFORM29 INLINEFORM30 = remove_prefix( INLINEFORM31 ,t) INLINEFORM32 = INLINEFORM33 filtering of the finally selected terms by the second heuristic criterium INLINEFORM34 INLINEFORM35 INLINEFORM36 Pseudocode of MagiCoder" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We can distinguish five phases in the procedure that will be discussed in detail in Sections UID18 , UID19 , UID20 , UID23 , UID28 , respectively.", "Definition of ad hoc data structures: the design of data structures is central to perform an efficient computation; our main data structures are hash tables, in order to guarantee an efficient access both to MedDRA terms and to words belonging to MedDRA terms.", "Preprocessing of the original text: tokenization (i.e., segmentation of the text into syntactical units), stemming (i.e., reduction of words to a particular root form), elimination of 
computationally irrelevant words.", "Word-by-word linear scan of the description and “voting task”: a word “votes” LLTs it belongs to. For each term voted by one or more words, we store some information about the retrieved syntactical matching.", "Weights calculation: recognized terms are weighted depending on information about syntactical matching.", "Sorting of voted terms and winning terms release: the set of voted term is pruned, terms are sorted and finally a solution (a set of winning terms) is released." ], "extractive_spans": [ "Definition of ad hoc data structures", "Preprocessing of the original text", "Word-by-word linear scan of the description and “voting task”", "Weights calculation", "Sorting of voted terms and winning terms release" ], "free_form_answer": "", "highlighted_evidence": [ "We can distinguish five phases in the procedure that will be discussed in detail in Sections UID18 , UID19 , UID20 , UID23 , UID28 , respectively.\n\nDefinition of ad hoc data structures: the design of data structures is central to perform an efficient computation; our main data structures are hash tables, in order to guarantee an efficient access both to MedDRA terms and to words belonging to MedDRA terms.\n\nPreprocessing of the original text: tokenization (i.e., segmentation of the text into syntactical units), stemming (i.e., reduction of words to a particular root form), elimination of computationally irrelevant words.\n\nWord-by-word linear scan of the description and “voting task”: a word “votes” LLTs it belongs to. For each term voted by one or more words, we store some information about the retrieved syntactical matching.\n\nWeights calculation: recognized terms are weighted depending on information about syntactical matching.\n\nSorting of voted terms and winning terms release: the set of voted term is pruned, terms are sorted and finally a solution (a set of winning terms) is released." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "53f1fc970507ab3d8d454bf2c796f5e5da0102e9", "b593f19fee6cf3984d5531d738186d6c6386fda9", "c45603e614a5585bd65cfced191a212c05349507" ], "answer": [ { "evidence": [ "INLINEFORM0 scans the text word-by-word (remember that each word corresponds to a token) once and performs a “voting task”: at the INLINEFORM1 -th step, it marks (i.e., “votes”) with index INLINEFORM2 each LLT INLINEFORM3 containing the current ( INLINEFORM4 -th) word of the ADR description. Moreover, it keeps track of the position where the INLINEFORM5 -th word occurs in INLINEFORM6 ." ], "extractive_spans": [], "free_form_answer": "The system scans the text word-by-word once and performs a voting task for each word. It also keeps track of the position of the previous words.", "highlighted_evidence": [ "NLINEFORM0 scans the text word-by-word (remember that each word corresponds to a token) once and performs a “voting task”: at the INLINEFORM1 -th step, it marks (i.e., “votes”) with index INLINEFORM2 each LLT INLINEFORM3 containing the current ( INLINEFORM4 -th) word of the ADR description. Moreover, it keeps track of the position where the INLINEFORM5 -th word occurs in INLINEFORM6 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms." 
], "extractive_spans": [ "main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms" ], "free_form_answer": "", "highlighted_evidence": [ "The main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "With respect to the first version BIBREF7 , we extended our proposal following several directions. First of all, we refined the procedure: MagiCoder has been equipped with some heuristic criteria and we started to address the problem of including auxiliary dictionaries (e.g., in order to deal with synonyms). MagiCoder computational complexity has been carefully studied and we will show that it is linear in the size of the dictionary (in this case, the number of LLTs in MedDRA) and the text description. We performed an accurate test of MagiCoder performances: by means of well-known statistical measures, we collected a significant set of quantitative information about the effective behavior of the procedure. We largely discuss some crucial key-points we met in the development of this version of MagiCoder, proposing short-time solutions we are addressing as work in progress, such as changes in stemming algorithm, considering synonyms, term filtering heuristics.", "The main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms.", "INLINEFORM0 scans the text word-by-word (remember that each word corresponds to a token) once and performs a “voting task”: at the INLINEFORM1 -th step, it marks (i.e., “votes”) with index INLINEFORM2 each LLT INLINEFORM3 containing the current ( INLINEFORM4 -th) word of the ADR description. Moreover, it keeps track of the position where the INLINEFORM5 -th word occurs in INLINEFORM6 ." ], "extractive_spans": [ "The main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms.", "INLINEFORM0 scans the text word-by-word (remember that each word corresponds to a token) once and performs a “voting task”: at the INLINEFORM1 -th step, it marks (i.e., “votes”) with index INLINEFORM2 each LLT INLINEFORM3 containing the current ( INLINEFORM4 -th) word of the ADR description." ], "free_form_answer": "", "highlighted_evidence": [ "MagiCoder computational complexity has been carefully studied and we will show that it is linear in the size of the dictionary (in this case, the number of LLTs in MedDRA) and the text description. We performed an accurate test of MagiCoder performances: by means of well-known statistical measures, we collected a significant set of quantitative information about the effective behavior of the procedure.", "The main idea of INLINEFORM0 is that a single linear scan of the free-text is sufficient, in order to recognize INLINEFORM1 terms.", "INLINEFORM0 scans the text word-by-word (remember that each word corresponds to a token) once and performs a “voting task”: at the INLINEFORM1 -th step, it marks (i.e., “votes”) with index INLINEFORM2 each LLT INLINEFORM3 containing the current ( INLINEFORM4 -th) word of the ADR description. Moreover, it keeps track of the position where the INLINEFORM5 -th word occurs in INLINEFORM6 ." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "", "", "", "" ], "question": [ "Did they test the idea that the system reduces the time needed to encode ADR reports on real pharmacologists? ", "Do the authors offer a hypothesis as to why the system performs better on short descriptions than longer ones?", "What are the steps in the MagiCoder algorithm?", "How is the system constructed to be linear in the size of the narrative input and the terminology?" ], "question_id": [ "95af7aaea3ce9dab4cf64e2229ce9b98381dd050", "ab37ae82e38f64d3fa95782f2c791488f26cd43f", "6c9b3b2f2e5aac1de1cbd916dc295515301ee2a2", "71413505d7d6579e2a453a1f09f4efd20197ab4b" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ] }
{ "caption": [ "Figure 1: The yearly increasing number of reports about suspected adverse reactions induced by drugs in Italy.", "Table 1: MedDRA Hierarchy - an Example", "Figure 3: A partial screenshot of VigiFarmaco User Interface", "Table 2: Performance and correctness measures", "Table 3: First results of MagiCoder performances", "Figure 4: Graphical representation of MagiCoder performances", "Table 4: Examples of MagiCoder behavior" ], "file": [ "3-Figure1-1.png", "7-Table1-1.png", "19-Figure3-1.png", "20-Table2-1.png", "21-Table3-1.png", "22-Figure4-1.png", "23-Table4-1.png" ] }
[ "How is the system constructed to be linear in the size of the narrative input and the terminology?" ]
[ [ "1612.03762-Introduction-8", "1612.03762-MagiCoder: overview-0" ] ]
[ "The system scans the text word-by-word once and performs a voting task for each word. It also keeps track of the position of the previous words." ]
65
1906.01010
A computational linguistic study of personal recovery in bipolar disorder
Mental health research can benefit increasingly fruitfully from computational linguistics methods, given the abundant availability of language data in the internet and advances of computational tools. This interdisciplinary project will collect and analyse social media data of individuals diagnosed with bipolar disorder with regard to their recovery experiences. Personal recovery - living a satisfying and contributing life along symptoms of severe mental health issues - so far has only been investigated qualitatively with structured interviews and quantitatively with standardised questionnaires with mainly English-speaking participants in Western countries. Complementary to this evidence, computational linguistic methods allow us to analyse first-person accounts shared online in large quantities, representing unstructured settings and a more heterogeneous, multilingual population, to draw a more complete picture of the aspects and mechanisms of personal recovery in bipolar disorder.
{ "paragraphs": [ [ "Recent years have witnessed increased performance in many computational linguistics tasks such as syntactic and semantic parsing BIBREF0 , BIBREF1 , emotion classification BIBREF2 , and sentiment analysis BIBREF3 , BIBREF4 , BIBREF5 , especially concerning the applicability of such tools to noisy online data. Moreover, the field has made substantial progress in developing multilingual models and extending semantic annotation resources to languages beyond English BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .", "Concurrently, it has been argued for mental health research that it would constitute a `valuable critical step' BIBREF10 to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums. Several severe mental health difficulties, e.g., bipolar disorder (BD) and schizophrenia are considered as chronic and clinical recovery, defined as being relapse and symptom free for a sustained period of time BIBREF11 , is considered difficult to achieve BIBREF12 , BIBREF13 , BIBREF14 . Moreover, clinically recovered individuals often do not regain full social and educational/vocational functioning BIBREF15 , BIBREF16 . Therefore, research originating from initiatives by people with lived experience of mental health issues has been advocating emphasis on the individual's goals in recovery BIBREF17 , BIBREF18 . This movement gave rise to the concept of personal recovery BIBREF19 , BIBREF20 , loosely defined as a `way of living a satisfying, hopeful, and contributing life even with limitations caused by illness' BIBREF18 . The aspects of personal recovery have been conceptualised in various ways BIBREF21 , BIBREF22 , BIBREF23 . According to the frequently used CHIME model BIBREF24 , its main components are Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment. Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12 . Bipolar spectrum disorders were estimated to affect approximately 2% of the UK population BIBREF13 with rates ranging from 0.1%-4.4% across 11 other European, American and Asian countries BIBREF26 . Moreover, BD is associated with a high risk of suicide BIBREF27 , making its prevention and treatment important tasks for society. BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression BIBREF25 . Second, unlike for some other severe mental health difficulties, return to normal functioning is achievable given appropriate treatment BIBREF28 , BIBREF16 , BIBREF29 .", "A substantial body of qualitative and quantitative research has shown the importance of personal recovery for individuals diagnosed with BD BIBREF22 , BIBREF25 , BIBREF30 , BIBREF31 , BIBREF23 . Qualitative evidence mainly comes from (semi-)structured interviews and focus groups and has been criticised for small numbers of participants BIBREF10 , lacking complementary quantitative evidence from larger samples BIBREF32 . Some quantitative evidence stems from the standardised bipolar recovery questionnaire BIBREF30 and a randomised control trial for recovery-focused cognitive-behavioural therapy BIBREF31 . Critically, previous research has taken place only in structured settings. 
What is more, the recovery concept emerged from research primarily conducted in English-speaking countries, mainly involving researchers and participants of Western ethnicity. This might have led to a lack of non-Western notions of wellbeing in the concept, such as those found in indigenous peoples BIBREF32 , limiting its applicability to a general population. Indeed, the variation in BD prevalence rates from 0.1% in India to 4.4% in the US is striking. It has been shown that culture is an important factor in the diagnosis of BD BIBREF33 , as well as in the causes attributed to mental health difficulties in general and in the treatments considered appropriate BIBREF34 , BIBREF35 . While approaches to mental health classification from texts have long ignored the cultural dimension BIBREF36 , first studies show that the online language of individuals affected by depression or related mental health difficulties differs significantly across cultures BIBREF37 , BIBREF36 .", "Hence, it seems timely to take into account the wealth of accounts of mental health difficulties and recovery stories from individuals of diverse ethnic and cultural backgrounds that are available in a multitude of languages on the internet. Corpus and computational linguistic methods are explicitly designed for processing large amounts of linguistic data BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , and as discussed above, recent advances have made it feasible to apply them to noisy user-generated texts from diverse domains, including mental health BIBREF42 , BIBREF43 . Computer-aided analysis of public social media data enables us to address several shortcomings in the scientific underpinning of personal recovery in BD by overcoming the small sample sizes of lab-collected data and including accounts from a more heterogeneous population.", "In sum, our research questions are as follows: (1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do we get about personal recovery and factors that facilitate or hinder it? We will investigate these questions in two parts, looking at English-language data by westerners and at multilingual data by individuals of diverse ethnicities." ], [ "Previous work in computational linguistics and clinical psychology has tended to focus on the detection of mental health issues as classification tasks BIBREF44 . Datasets have been collected for various conditions including BD using publicly available social-media data from Twitter BIBREF45 and Reddit BIBREF46 , BIBREF47 . Unfortunately, the Twitter dataset is unavailable for further research. In both Reddit datasets, mental health-related content was deliberately removed. This allows the training of classifiers that try to predict the mental health of authors from excerpts that do not explicitly address mental health, yet it renders the data useless for analyses of how mental health is talked about online. Due to this lack of appropriate existing publicly accessible datasets, we will create such resources and make them available to subsequent researchers.", "We plan to collect data relevant for BD in general as well as for personal recovery in BD from three sources varying in their available amount versus depth of the accounts we expect to find: 1) Twitter, 2) Reddit (focusing on mental health-related content, unlike previous work), 3) blogs authored by affected individuals.
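Purely as an illustrative sketch of how such Reddit data might be gathered (an assumption on our side, not the project's actual collection pipeline; the credentials and the subreddit choice are placeholders), one could use the PRAW library:

```python
# Illustrative sketch of collecting candidate posts from a BD-related subreddit
# with PRAW (https://praw.readthedocs.io). Credentials and the subreddit are
# placeholders, not the project's actual pipeline.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # hypothetical credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="bd-recovery-research/0.1",
)

posts = []
for submission in reddit.subreddit("bipolar").new(limit=500):
    posts.append({
        "author": str(submission.author),
        "created_utc": submission.created_utc,
        "title": submission.title,
        "text": submission.selftext,
    })

print(len(posts), "posts collected")
```

Comparable collection steps would be needed for the Twitter API and for the manually identified blogs.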
Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. To do so, we will extend on the diagnosis patterns and terms for BD provided by BIBREF47 . Implicit consent is assumed from users on these platforms to use their public tweets and posts. SECREF3 Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts.", "Since language and culture are important factors in our research questions, we need information on the language of the texts and the country of residence of their authors, which is not provided in a structured format in the three data sources. For language identification, Twitter employs an automatic tool BIBREF48 , which can be used to filter tweets according to 60 language codes, and there are free, fairly accurate tools such as the Google Compact Language Detector, which can be applied to Reddit and blog posts. The location of Twitter users can be automatically inferred from their tweets BIBREF49 or the (albeit noisy) location field in their user profiles BIBREF50 . Only one attempt to classify the location of Reddit users has been published so far BIBREF51 showing meagre results, indicating that the development of robust location classification approaches on this platform would constitute a valuable contribution. Some companies collect mental health-related online data and make them available to researchers subject to approval of their internal review boards, e.g., OurDataHelps by Qntfy or the peer-support forum provider 7 Cups. Unlike `raw' social media data, these datasets have richer user-provided metadata and explicit consent for research usage. On the other hand, less data is available, the process to obtain access might be tedious within the short timeline of a PhD project and it might be impossible to share the used portions of the data with other researchers. Therefore, we will follow up the possibilities of obtaining access to these datasets, but in parallel also collect our own datasets to avoid dependence on external data providers." ], [ "As explained in the introduction, the overarching aim of this project is to investigate in how far information conveyed in social media posts can complement more traditional research methods in clinical psychology to get insights into the recovery experience of individuals with a BD diagnosis. Therefore, we will first conduct a systematic literature review of qualitative evidence to establish a solid base of what is already known about personal recovery experiences in BD for the subsequent social media studies.", "Our research questions, which regard the experiences of different populations, lend themselves to several subprojects. First, we will collect and analyse English-language data from westerners. Then, we will address ethnically diverse English-speaking populations and finally multilingual accounts. This has the advantage that we can build data processing and methodological workflows along an increase in complexity of the data collection and analysis throughout the project.", "In each project phase, we will employ a mixed-methods approach to combine the advantages of quantitative and qualitative methods BIBREF52 , BIBREF53 , which is established in mental health research BIBREF54 , BIBREF55 , BIBREF56 , BIBREF57 and specifically recommended to investigate personal recovery BIBREF58 . 
Quantitative methods are suitable to study observable behaviour such as language and yield more generalisable results by taking into account large samples. However, they fall short of capturing the subjective, idiosyncratic meaning of socially constructed reality, which is important when studying individuals' recovery experience BIBREF59 , BIBREF22 , BIBREF23 , BIBREF60 . Therefore, we will apply an explanatory sequential research design BIBREF53 , starting with statistical analysis of the full dataset followed by a manual investigation of fewer examples, similar to `distant reading' BIBREF61 in digital humanities.", "Since previous research mainly employed (semi-)structured interviews and we do not expect to necessarily find the same aspects emphasised in unstructured settings, even less so when looking at a more diverse and non-English speaking population, we will not derive hypotheses from existing recovery models for testing on the online data. Instead, we will start off with exploratory quantitative research using comparative analysis tools such as Wmatrix BIBREF62 to uncover important linguistic features, e.g., on keywords and key concepts that occur with unexpected frequency in our collected datasets relative to reference corpora. The underlying assumption is that keywords and key concepts are indicative of certain aspects of personal recovery, such as those specified in the CHIME model BIBREF24 , other previous research BIBREF22 , BIBREF23 , BIBREF60 , or novel ones. Comparing online sources with transcripts of structured interviews or subcorpora originating from different cultural backgrounds might uncover aspects that were not prominently represented in the accounts studied in prior research.", "A specific challenge will be to narrow down the data to parts relevant for personal recovery, since there is no control over the discussed topics compared to structured interviews. To investigate how individuals discuss personal recovery online and what (potentially unrecorded) aspects they associate with it, without a priori narrowing down the search-space to specific known keywords seems like a chicken-and-egg problem. We propose to address this challenge by an iterative approach similar to the one taken in a corpus linguistic study of cancer metaphors BIBREF63 . Drawing on results from previous qualitative research BIBREF24 , BIBREF23 , we will compile an initial dictionary of recovery-related terms. Next, we will examine a small portion of the dataset manually, which will be partly randomly sampled and partly selected to contain recovery-related terms. Based on this, we will be able to expand the dictionary and additionally automatically annotate semantic concepts of the identified relevant text passages using a semantic tagging approach such as the UCREL Semantic Analysis System (USAS) BIBREF64 . Crucially for the multilingual aspect of the project, USAS can tag semantic categories in eight languages BIBREF8 . Then, semantic tagging will be applied to the full corpus to retrieve all text passages mentioning relevant concepts. Furthermore, distributional semantics methods BIBREF65 , BIBREF66 can be used to find terms that frequently co-occur with words from our keyword dictionary. 
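To illustrate how distributional similarity could suggest additions to the keyword dictionary, here is a minimal sketch using gensim; the toy corpus and the seed terms are invented placeholders, not the project's data or final dictionary.

```python
# Illustrative sketch: expanding a seed dictionary of recovery-related terms
# with distributionally similar words. Corpus and seeds are toy placeholders.
from gensim.models import Word2Vec

# Toy corpus standing in for the collected posts (each post = list of tokens).
corpus = [
    ["recovery", "means", "hope", "and", "stability", "for", "me"],
    ["medication", "and", "routine", "support", "my", "recovery"],
    ["hope", "and", "connection", "keep", "me", "going"],
    ["finding", "meaning", "and", "purpose", "after", "the", "diagnosis"],
] * 50  # repeated only so that min_count is satisfied in this toy example

model = Word2Vec(corpus, vector_size=50, window=5, min_count=5, workers=1, seed=1)

seed_terms = ["recovery", "hope", "meaning"]   # hypothetical seed dictionary
candidates = {
    term: [w for w, _ in model.wv.most_similar(term, topn=5)]
    for term in seed_terms
    if term in model.wv
}
print(candidates)  # nearest neighbours as candidate dictionary additions
```

On a realistic corpus the nearest neighbours would be inspected manually before being added to the dictionary.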
Occurrences of the identified keywords or concepts can be quantified in the full corpus to identify the importance of the related personal recovery aspects.", "Linguistic Inquiry and Word Count (LIWC) BIBREF67 is a frequently used tool in social-science text analysis to analyse emotional and cognitive components of texts and derive features for classification models BIBREF47 , BIBREF46 , BIBREF68 , BIBREF69 . LIWC counts target words organised in a manually constructed hierarchical dictionary without contextual disambiguation in the texts under analysis and has been psychometrically validated and developed for English exclusively. While translations for several languages exist, e.g., Dutch BIBREF9 , and it is questionable to what extent LIWC concepts can be transferred to other languages and cultures by mere translation. We therefore aim to apply and develop methods that require less manual labour and are applicable to many languages and cultures. One option constitute unsupervised methods, such as topic modelling, which has been applied to explore cultural differences in mental-health related online data already BIBREF37 , BIBREF36 . The Differential Language Analysis ToolKit (DLATK) BIBREF70 facilitates social-scientific language analyses, including tools for preprocessing, such as emoticon-aware tokenisers, filtering according to meta data, and analysis, e.g. via robust topic modelling methods.", "Furthermore, emotion and sentiment analysis constitute useful tools to investigate the emotions involved in talking about recovery and identify factors that facilitate or hinder it. There are many annotated datasets to train supervised classifiers BIBREF71 , BIBREF3 for these actively researched NLP tasks. Machine learning methods were found to usually outperform rule-based approaches based on look-ups in dictionaries such as LIWC. Again, most annotated resources are English, but state of the art approaches based on multilingual embeddings allow transferring models between languages BIBREF4 ." ], [ "Ethical considerations are established as essential part in planning mental health research and most research projects undergo approval by an ethics committee. On the contrary, the computational linguistics community has started only recently to consider ethical questions BIBREF72 , BIBREF73 . Likely, this is because computational linguistics was traditionally concerned with publicly available, impersonal texts such as newspapers or texts published with some temporal distance, which left a distance between the text and author. Conversely, recent social media research often deals with highly personal information of living individuals, who can be directly affected by the outcomes BIBREF72 .", " BIBREF72 discuss issues that can arise when constructing datasets from social media and conducting analyses or developing predictive models based on these data, which we review here in relation to our project: Demographic bias in sampling the data can lead to exclusion of minority groups, resulting in overgeneralisation of models based on these data. As discussed in the introduction, personal recovery research suffers from a bias towards English-speaking Western individuals of white ethnicity. By studying multilingual accounts of ethnically diverse populations we explicitly address the demographic bias of previous research. Topic overexposure is tricky to address, where certain groups are perceived as abnormal when research repeatedly finds that their language is different or more difficult to process. 
Unlike previous research BIBREF45 , BIBREF47 , BIBREF46 our goal is not to reveal particularities in the language of individuals affected by mental health problems. Instead, we will compare accounts of individuals with BD from different settings (structured interviews versus informal online discourse) and of different backgrounds. While the latter bears the risk to overexpose certain minority groups, we will pay special attention to this in the dissemination of our results.", "Lastly, most research, even when conducted with the best intentions, suffers from the dual-use problem BIBREF74 , in that it can be misused or have consequences that affect people's life negatively. For this reason, we refrain from publishing mental health classification methods, which could be used, for example, by health insurance companies for the risk assessment of applicants based on their social media profiles.", "If and how informed consent needs to be obtained for research on social media data is a debated issue BIBREF75 , BIBREF76 , BIBREF77 , mainly because it is not straightforward to determine if posts are made in a public or private context. From a legal point of view, the privacy policies of Twitter and Reddit, explicitly allow analysis of the user contents by third party, but it is unclear to what extent users are aware of this when posting to these platforms BIBREF78 . However, in practice it is often infeasible to seek retrospective consent from hundreds or thousands of social media users. According to current ethical guidelines for social media research BIBREF79 , BIBREF80 and practice in comparable research projects BIBREF81 , BIBREF78 , it is regarded as acceptable to waive explicit consent if the anonymity of the users is preserved. Therefore, we will not ask the account holders of Twitter and Reddit posts included in our datasets for their consent.", " BIBREF79 formulate guidelines for ethical social media health research that pertain especially to data collection and sharing. In line with these, we will only share anonymised and paraphrased excerpts from the texts, as it is often possible to recover a user name via a web search for the verbatim text of a post. However, we will make the original texts available as datasets to subsequent research under a data usage agreement. Since the (automatic) annotation of demographic variables in parts of our dataset constitutes especially sensitive information on minority status in conjunction with mental health, we will only share these annotations with researchers that demonstrate a genuine need for them, i.e. to verify our results or to investigate certain research questions.", "Another important question is in which situations of encountering content indicative of a risk of self-harm or harm to others it would be appropriate or even required by duty of care for the research team to pass on information to authorities. Surprisingly, we could only find two mentions of this issue in social media research BIBREF81 , BIBREF82 . Acknowledging that suicidal ideation fluctuates BIBREF83 , we accord with the ethical review board's requirement in BIBREF81 to only analyse content posted at least three months ago. 
If the research team, which includes clinical psychologists, still perceives users at risk we will make use of the reporting facilities of Twitter and Reddit.", "As a central component we consider the involvement of individuals with lived experience in our project, an aspect which is missing in the discussion of ethical social media health research so far. The proposal has been presented to an advisory board of individuals with a BD diagnosis and was received positively. The advisory board will be consulted at several stages of the project to inform the research design, analysis, and publication of results. We believe that board members can help to address several of the raised ethical problems, e.g., shaping the research questions to avoid feeding into existing biases or overexposing certain groups and highlighting potentially harmful interpretations and uses of our results." ], [ "The importance of the recovery concept in the design of mental health services has recently been prominently reinforced, suggesting ‘recovery-oriented social enterprises as key component of the integrated service’ BIBREF20 . We think that a recovery approach as leading principle for national or global health service strategies, should be informed by voices of individuals as diverse as those it is supposed to serve. Therefore, we expect the proposed investigations of views on recovery by previously under-researched ethnic, language, and cultural groups to yield valuable insights on the appropriateness of the recovery approach for a wider population. The datasets collected in this project can serve as useful resources for future research. More generally, our social-media data-driven approach could be applied to investigate other areas of mental health if it proves successful in leading to relevant new insights.", "Finally, this project is an interdisciplinary endeavour, combining clinical psychology, input from individuals with lived experience of BD, and computational linguistics. While this comes with the challenges of cross-disciplinary research, it has the potential to apply and develop state-of-the-art NLP methods in a way that is psychologically and ethically sound as well as informed and approved by affected people to increase our knowledge of severe mental illnesses such as BD." ], [ "I would like to thank my supervisors Steven Jones, Fiona Lobban, and Paul Rayson for their guidance in this project. My heartfelt thanks go also to Chris Lodge, service user researcher at the Spectrum Centre, and the members of the advisory panel he coordinates that offer feedback on this project based on their lived experience of BD. Further, I would like to thank Masoud Rouhizadeh for his helpful comments during pre-submission mentoring and the anonymous reviewers. This project is funded by the Faculty of Health and Medicine at Lancaster University as part of a doctoral scholarship." ] ], "section_name": [ "Introduction and background", "Data", "Methodology and Resources", "Ethical considerations", "Impact and conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "1b84d85c7a7112ede0a8853a349f44cb33b2e72b", "26fba430d2fc0785e7af2cb20fd60b2ddbe6f6d4", "db94e9c5f75c898a47b7b88dbf3541872f0515d0" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "A specific challenge will be to narrow down the data to parts relevant for personal recovery, since there is no control over the discussed topics compared to structured interviews. To investigate how individuals discuss personal recovery online and what (potentially unrecorded) aspects they associate with it, without a priori narrowing down the search-space to specific known keywords seems like a chicken-and-egg problem. We propose to address this challenge by an iterative approach similar to the one taken in a corpus linguistic study of cancer metaphors BIBREF63 . Drawing on results from previous qualitative research BIBREF24 , BIBREF23 , we will compile an initial dictionary of recovery-related terms. Next, we will examine a small portion of the dataset manually, which will be partly randomly sampled and partly selected to contain recovery-related terms. Based on this, we will be able to expand the dictionary and additionally automatically annotate semantic concepts of the identified relevant text passages using a semantic tagging approach such as the UCREL Semantic Analysis System (USAS) BIBREF64 . Crucially for the multilingual aspect of the project, USAS can tag semantic categories in eight languages BIBREF8 . Then, semantic tagging will be applied to the full corpus to retrieve all text passages mentioning relevant concepts. Furthermore, distributional semantics methods BIBREF65 , BIBREF66 can be used to find terms that frequently co-occur with words from our keyword dictionary. Occurrences of the identified keywords or concepts can be quantified in the full corpus to identify the importance of the related personal recovery aspects." ], "extractive_spans": [ "Occurrences of the identified keywords or concepts can be quantified in the full corpus to identify the importance of the related personal recovery aspects" ], "free_form_answer": "", "highlighted_evidence": [ "Occurrences of the identified keywords or concepts can be quantified in the full corpus to identify the importance of the related personal recovery aspects." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The importance of the recovery concept in the design of mental health services has recently been prominently reinforced, suggesting ‘recovery-oriented social enterprises as key component of the integrated service’ BIBREF20 . We think that a recovery approach as leading principle for national or global health service strategies, should be informed by voices of individuals as diverse as those it is supposed to serve. Therefore, we expect the proposed investigations of views on recovery by previously under-researched ethnic, language, and cultural groups to yield valuable insights on the appropriateness of the recovery approach for a wider population. The datasets collected in this project can serve as useful resources for future research. More generally, our social-media data-driven approach could be applied to investigate other areas of mental health if it proves successful in leading to relevant new insights." 
], "extractive_spans": [ "a recovery approach as leading principle for national or global health service strategies, should be informed by voices of individuals", "expect the proposed investigations of views on recovery by previously under-researched ethnic, language, and cultural groups to yield valuable insights on the appropriateness of the recovery approach for a wider population", "The datasets collected in this project can serve as useful resources for future research" ], "free_form_answer": "", "highlighted_evidence": [ " We think that a recovery approach as leading principle for national or global health service strategies, should be informed by voices of individuals as diverse as those it is supposed to serve. Therefore, we expect the proposed investigations of views on recovery by previously under-researched ethnic, language, and cultural groups to yield valuable insights on the appropriateness of the recovery approach for a wider population. The datasets collected in this project can serve as useful resources for future research. More generally, our social-media data-driven approach could be applied to investigate other areas of mental health if it proves successful in leading to relevant new insights." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "42a8db3f5bb659cbc61cd8d4405994cb2d4653ab", "bb6e8f0ecd51485b8ed650cf5407db4b5ac5caa8", "ddcb71e081eaf5a94f187d8f5ce0f7a572065836" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "65f49820b02c8079504ce0a9dd3fddb7f7cb366e", "71ef0566e04869a27ecea3126b0685620c23e8d1", "e72ff48d5949f43a1202fb831b5fa217ca4de0c3" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Since language and culture are important factors in our research questions, we need information on the language of the texts and the country of residence of their authors, which is not provided in a structured format in the three data sources. For language identification, Twitter employs an automatic tool BIBREF48 , which can be used to filter tweets according to 60 language codes, and there are free, fairly accurate tools such as the Google Compact Language Detector, which can be applied to Reddit and blog posts. The location of Twitter users can be automatically inferred from their tweets BIBREF49 or the (albeit noisy) location field in their user profiles BIBREF50 . Only one attempt to classify the location of Reddit users has been published so far BIBREF51 showing meagre results, indicating that the development of robust location classification approaches on this platform would constitute a valuable contribution. 
Some companies collect mental health-related online data and make them available to researchers subject to approval of their internal review boards, e.g., OurDataHelps by Qntfy or the peer-support forum provider 7 Cups. Unlike `raw' social media data, these datasets have richer user-provided metadata and explicit consent for research usage. On the other hand, less data is available, the process to obtain access might be tedious within the short timeline of a PhD project and it might be impossible to share the used portions of the data with other researchers. Therefore, we will follow up the possibilities of obtaining access to these datasets, but in parallel also collect our own datasets to avoid dependence on external data providers." ], "extractive_spans": [ "language identification" ], "free_form_answer": "", "highlighted_evidence": [ " For language identification, Twitter employs an automatic tool BIBREF48 , which can be used to filter tweets according to 60 language codes, and there are free, fairly accurate tools such as the Google Compact Language Detector, which can be applied to Reddit and blog posts. T" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "b384c0bedc87fb149625320c6eb00f5428281528", "e5a12a494b24c7d624add2b95c3a1c51ee67c2d0", "f0044da47e45ec9a3efc80b1eabee0249907af0b" ], "answer": [ { "evidence": [ "We plan to collect data relevant for BD in general as well as for personal recovery in BD from three sources varying in their available amount versus depth of the accounts we expect to find: 1) Twitter, 2) Reddit (focusing on mental health-related content unlike previous work), 3) blogs authored by affected individuals. Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. To do so, we will extend on the diagnosis patterns and terms for BD provided by BIBREF47 . Implicit consent is assumed from users on these platforms to use their public tweets and posts. SECREF3 Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts." ], "extractive_spans": [], "free_form_answer": "For Twitter and Reddit users , implicit consent is assumed to use their public tweets. Blog users are contacted to obtain consent for using their texts.", "highlighted_evidence": [ "Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. To do so, we will extend on the diagnosis patterns and terms for BD provided by BIBREF47 . Implicit consent is assumed from users on these platforms to use their public tweets and posts. SECREF3 Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "If and how informed consent needs to be obtained for research on social media data is a debated issue BIBREF75 , BIBREF76 , BIBREF77 , mainly because it is not straightforward to determine if posts are made in a public or private context. 
From a legal point of view, the privacy policies of Twitter and Reddit, explicitly allow analysis of the user contents by third party, but it is unclear to what extent users are aware of this when posting to these platforms BIBREF78 . However, in practice it is often infeasible to seek retrospective consent from hundreds or thousands of social media users. According to current ethical guidelines for social media research BIBREF79 , BIBREF80 and practice in comparable research projects BIBREF81 , BIBREF78 , it is regarded as acceptable to waive explicit consent if the anonymity of the users is preserved. Therefore, we will not ask the account holders of Twitter and Reddit posts included in our datasets for their consent." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "If and how informed consent needs to be obtained for research on social media data is a debated issue BIBREF75 , BIBREF76 , BIBREF77 , mainly because it is not straightforward to determine if posts are made in a public or private context. From a legal point of view, the privacy policies of Twitter and Reddit, explicitly allow analysis of the user contents by third party, but it is unclear to what extent users are aware of this when posting to these platforms BIBREF78 . However, in practice it is often infeasible to seek retrospective consent from hundreds or thousands of social media users. According to current ethical guidelines for social media research BIBREF79 , BIBREF80 and practice in comparable research projects BIBREF81 , BIBREF78 , it is regarded as acceptable to waive explicit consent if the anonymity of the users is preserved. Therefore, we will not ask the account holders of Twitter and Reddit posts included in our datasets for their consent.\n\n" ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "annotation_id": [ "12ea170b5d3e6adfb3dcd93a4188de89530b530e", "31564e7d01e28cf9eb6b0975627c88d969656769", "e14745fa320391a2019c9baa2677dffd9339baeb" ], "answer": [ { "evidence": [ "Concurrently, it has been argued for mental health research that it would constitute a `valuable critical step' BIBREF10 to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums. Several severe mental health difficulties, e.g., bipolar disorder (BD) and schizophrenia are considered as chronic and clinical recovery, defined as being relapse and symptom free for a sustained period of time BIBREF11 , is considered difficult to achieve BIBREF12 , BIBREF13 , BIBREF14 . Moreover, clinically recovered individuals often do not regain full social and educational/vocational functioning BIBREF15 , BIBREF16 . Therefore, research originating from initiatives by people with lived experience of mental health issues has been advocating emphasis on the individual's goals in recovery BIBREF17 , BIBREF18 . This movement gave rise to the concept of personal recovery BIBREF19 , BIBREF20 , loosely defined as a `way of living a satisfying, hopeful, and contributing life even with limitations caused by illness' BIBREF18 . The aspects of personal recovery have been conceptualised in various ways BIBREF21 , BIBREF22 , BIBREF23 . According to the frequently used CHIME model BIBREF24 , its main components are Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment. 
Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12 . Bipolar spectrum disorders were estimated to affect approximately 2% of the UK population BIBREF13 with rates ranging from 0.1%-4.4% across 11 other European, American and Asian countries BIBREF26 . Moreover, BD is associated with a high risk of suicide BIBREF27 , making its prevention and treatment important tasks for society. BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression BIBREF25 . Second, unlike for some other severe mental health difficulties, return to normal functioning is achievable given appropriate treatment BIBREF28 , BIBREF16 , BIBREF29 ." ], "extractive_spans": [ "characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12" ], "free_form_answer": "", "highlighted_evidence": [ "Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We plan to collect data relevant for BD in general as well as for personal recovery in BD from three sources varying in their available amount versus depth of the accounts we expect to find: 1) Twitter, 2) Reddit (focusing on mental health-related content unlike previous work), 3) blogs authored by affected individuals. Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. To do so, we will extend on the diagnosis patterns and terms for BD provided by BIBREF47 . Implicit consent is assumed from users on these platforms to use their public tweets and posts. SECREF3 Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts." ], "extractive_spans": [ " Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'." ], "free_form_answer": "", "highlighted_evidence": [ "We plan to collect data relevant for BD in general as well as for personal recovery in BD from three sources varying in their available amount versus depth of the accounts we expect to find: 1) Twitter, 2) Reddit (focusing on mental health-related content unlike previous work), 3) blogs authored by affected individuals. Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. To do so, we will extend on the diagnosis patterns and terms for BD provided by BIBREF47 . Implicit consent is assumed from users on these platforms to use their public tweets and posts. SECREF3 Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "We plan to collect data relevant for BD in general as well as for personal recovery in BD from three sources varying in their available amount versus depth of the accounts we expect to find: 1) Twitter, 2) Reddit (focusing on mental health-related content unlike previous work), 3) blogs authored by affected individuals. Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. To do so, we will extend on the diagnosis patterns and terms for BD provided by BIBREF47 . Implicit consent is assumed from users on these platforms to use their public tweets and posts. SECREF3 Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts." ], "extractive_spans": [], "free_form_answer": "Twitter and Reddit users are identified automatically via self-reported diagnosis statements. Blog users are identified manually.", "highlighted_evidence": [ "Relevant blogs will be manually identified, and their authors will be contacted to obtain informed consent for using their texts.", " Twitter and Reddit users with a BD diagnosis will be identified automatically via self-reported diagnosis statements, such as `I was diagnosed with BD-I last week'. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "", "", "", "", "" ], "question": [ "What conclusions do the authors draw about the aspects and mechanisms of personal recovery in bipolar disorder?", "What languages were included in this multilingual population?", "What computational linguistic methods were used for the analysis?", "Was permission sought from the bipolar patients to use this data?", "How are the individuals with bipolar disorder identified?" ], "question_id": [ "3e6b6820e7843209495b4f9a72177573afaa4bc3", "a926d71e6e58066d279d9f7dc3210cd43f410164", "3d547a7dda18a2dd5dc89f12d25d7fe782d66450", "4a32adb0d54da90434d5bd1c66cc03a7956d12a0", "c17ece1dad42d92c78fca2e3d8afa9a20ff19598" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [], "file": [] }
[ "Was permission sought from the bipolar patients to use this data?", "How are the individuals with bipolar disorder identified?" ]
[ [ "1906.01010-Data-1", "1906.01010-Ethical considerations-3" ], [ "1906.01010-Data-1", "1906.01010-Introduction and background-1" ] ]
[ "For Twitter and Reddit users , implicit consent is assumed to use their public tweets. Blog users are contacted to obtain consent for using their texts.", "Twitter and Reddit users are identified automatically via self-reported diagnosis statements. Blog users are identified manually." ]
66
2003.11528
Generating Major Types of Chinese Classical Poetry in a Uniformed Framework
Poetry generation is an interesting research topic in the field of text generation. As one of the most valuable literary and cultural heritages of China, Chinese classical poetry is very familiar and loved by Chinese people from generation to generation. It has many particular characteristics in its language structure, ranging from form, sound to meaning, thus is regarded as an ideal testing task for text generation. In this paper, we propose a GPT-2 based uniformed framework for generating major types of Chinese classical poems. We define a unified format for formulating all types of training samples by integrating detailed form information, then present a simple form-stressed weighting method in GPT-2 to strengthen the control to the form of the generated poems, with special emphasis on those forms with longer body length. Preliminary experimental results show this enhanced model can generate Chinese classical poems of major types with high quality in both form and content, validating the effectiveness of the proposed strategy. The model has been incorporated into Jiuge, the most influential Chinese classical poetry generation system developed by Tsinghua University (Guo et al., 2019).
{ "paragraphs": [ [ "1.1em" ], [ "1.1.1em" ], [ "1.1.1.1em", "Jinyi Hu, Maosong Sun$^{*}$ $*$ Corresponding author", "Department of Computer Science and Technology, Tsinghua University, Beijing, China", "Institute for Artificial Intelligence, Tsinghua University, Beijing, China", "State Key Lab on Intelligent Technology and Systems, Tsinghua University, Beijing, China", "[email protected], [email protected]", "Poetry generation is an interesting research topic in the field of text generation. As one of the most valuable literary and cultural heritages of China, Chinese classical poetry is very familiar and loved by Chinese people from generation to generation. It has many particular characteristics in its language structure, ranging from form, sound to meaning, thus is regarded as an ideal testing task for text generation. In this paper, we propose a GPT-2 based uniformed framework for generating major types of Chinese classical poems. We define a unified format for formulating all types of training samples by integrating detailed form information, then present a simple form-stressed weighting method in GPT-2 to strengthen the control to the form of the generated poems, with special emphasis on those forms with longer body length. Preliminary experimental results show this enhanced model can generate Chinese classical poems of major types with high quality in both form and content, validating the effectiveness of the proposed strategy. The model has been incorporated into Jiuge, the most influential Chinese classical poetry generation system developed by Tsinghua University BIBREF0." ], [ "Chinese poetry is a rich treasure in Chinese traditional culture. For thousands of years, poetry is always considered as the crystallization of human wisdom and erudition by Chinese people and deeply influences the Chinese history from the mental and cultural perspective.", "In general, a Chinese classical poem is a perfect combination of three aspects, i.e., form, sound, and meaning. Firstly, it must strictly obey a particular form which specifies the number of lines (i.e., sentences) in the poem and the number of characters in each line. Secondly, it must strictly obey a particular sound pattern which specifies the sound requirement for each character in every position of the poem. Lastly, it must be meaningful, i.e., with grammatical and semantic well-formedness for each line and, with thematic coherence and integrity throughout the poem. These three points form the universal principles for human poets to create Chinese classical poems.", "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows." ], [ "The majority of SHI has a fixed number of lines and a fixed and identical number of characters for all lines. Two major forms of SHI are Jueju and Lvshi with four lines and eight lines accordingly. Jueju and Lvshi are further divided into Wuyan Jueju and Qiyan Jueju as well as Wuyan Lvshi and Qiyan Lvshi where Wuyan means five characters each line and Qiyan means seven characters. Figure 1 is a famous classical poem of Wuyan Jueju. 
In addition, Lvshi has a strict requirement for the two-sentence pairs composed of $<$the third line, the fourth line$>$ and $<$the fifth line, the sixth line$>$: they must satisfy the requirement of Duizhang, this is, a strict parallel matching for both part of speech and sense of every character in two lines. This obviously increases the difficulty of poem composition.", "According to CCPC1.0, Wuyan Jueju, Qiyan Jueju, Wuyan Lvshi, and Qiyan Lvshi constitute 67.96% of SHI, with 4.26%, 22.57%, 15.99%, and 25.14% respectively." ], [ "CI is another primary type of Chinese poetry. In contrast to SHI, CI has nearly one thousand forms. Each form of CI (it is called Cipai scholarly) is defined by a fixed number of lines for the poem and, a fixed number of characters for a particular line which usually varies for different lines. The above settings for different Cipai are very distinct, for instance, the Cipai of Busuanzi contains 8 lines and 44 characters, as shown in Figure 2, whereas the Cipai of Manjianghong contains 22 lines and 94 characters. The high diversity regarding the forms of CI further significantly increases the difficulty of poem composition.", "We observe the statistical distribution of all the forms (Cipai) of CI over CCPC1.0. It roughly follows Zipf’s law BIBREF1. There exists a long tail in the distribution where a lot of Cipai only has a few instances which are far less enough for a computational model (algorithm) to learn its forms. So we choose the top frequent 121 forms of CI, constituting 80% of CCPC1.0, as the focus for CI in this research.", "As can be seen from the above analysis, the greatest challenge for machine generation of Chinese classical poems lies in how to make machine capable of following the universal principles underlying the writing of Chinese classical poems. The to-date research cannot deal with this challenge well. Most of the work so far mainly targeted at automatic generation of Jueju (including Wuyan Jueju and Qiyan Jueju), for an obvious reason that it is much easier for an algorithm to handle the requirements of form, thematic coherence and integrity in the scenario of four lines than that in the scenario of Lvshi with eight lines, let alone much more complicated scenarios, i.e., CI, are taken into account. In fact, the research on the automatic generation of CI is just at the very beginning stage.", "In this paper, we propose a uniformed computational framework that tries to generate major types of Chinese classical poems with two major forms of SHI, Jueju, and Lvshi, as well as 121 major forms (Cipai) of CI using a single model. Preliminary experimental results validate the effectiveness of the proposed framework. The implemented model has been incorporated into Jiuge BIBREF0, the most influential Chinese classical poetry generation system developed by Tsinghua University (refer to http://jiuge.thunlp.cn/)." ], [ "With the development of deep learning, the mainstream of poem generation research has been shifted from traditional statistical models to neural network methods in recent years. Most existing works are based on the Encoder-Decoder architecture BIBREF2. In Chinese classical poetry generation, yan2013poet proposed a model using the Encoder-Decoder architecture and wang2016chinese further used attention-based sequence-to-sequence model.", "The key factor in designing the model architecture is how to treat the generated context so far in the process of generating a poem. 
The input to the encoder could be as short as a single poetic line or all the previously generated lines (whole history). Theoretically, considering the whole history is more appropriate for keeping the thematic coherence and integrity of the generated poem than considering the short history, at the expense that may hurt the fluency of the generated sentences due to the data sparseness problem possibly caused by the more sophisticated model.", "Thus we have two basic ways to figure out the history. One is to consider the whole history. zhang2014chinese first introduced the neural network method into poetry generation by proposing the so-called incremental Recurrent Neural Network, where every sentence (line) is embedded into a sentence vector by a Convolutional Sentence Model and then all are packed into a history vector. yi2018chinesea presented a working memory mechanism in LSTM, designing three kinds of memory to address the whole history. Another is to select part of history. yi2018chineseb observed that considering the full context may not lead to good performance in LSTM, and proposed salient clue mechanism where only salient characters in partial history are under consideration.", "The Transformer BIBREF3 architecture and other models based on this, including GPT BIBREF4, Bert BIBREF5, show much better results in various NLP tasks. Transformer utilizes the self-attention mechanism in which any pair of tokens in the sequence can attend to each other, making it possible to generate much longer SHI or CI while keeping the coherence throughout the poem.", "liao2019gpt applied GPT to Chinese classical poetry generation. They pre-trained the model on a Chinese news corpus with 235M sentences and then fine-tuning the model on Chinese poem corpus with 250,000 Jueju and Lvshi, 20,000 CIs, 700,000 pairs of couplets. A key point is they defined a unified format to formulate different types of training samples, as [form, identifier 1, theme, identifier 2, body], where “body” accommodates the full content of an SHI, CI, or couplet in corresponding “form” with “theme” as its title. Experiments demonstrated GPT-based poem generation gained promising performance, meanwhile still faced some limitations, for instance, only 70% of the generated CIs for the Cipai Shuidiaogetou, a sort of CI with quite long body, are correct in form.", "Regarding this, we think the work of liao2019gpt could be improved in the following three respects. First, there is a large improving room for better fitting the form requirement of CI in the process of generation, especially for those with relatively long body length. Second, their formulation format for training samples can be supplemented, for example, the stanza structure of CI is missing. Third, using contemporary Chinese news corpus to pre-train the model may not be necessary, owing to distinctive differences in both meaning and form between contemporary Chinese and Chinese classical poetry language.", "For the above considerations, we give up the pre-training on the news corpus and add a separation label to indicate the stanza structure of CI. Then we make use of GPT-2 to train the model. Furthermore, we propose a form-stressed weighting method in GPT-2 to strengthen the control in particular to the form of CI." ], [ "We present a unified format for formulating all types of training samples of SHI and CI by extending the format given in liao2019gpt. 
First, we change various punctuations between lines into the comma ‘,’, serving as a uniform separation label between two lines. Second, we utilize three separation labels, $[label_1]$ and $[label_2]$ to separate between form, title, and body of the poem respectively, and $[label_3]$ to separate two stanzas of CI if needed. Third, we enclose $[EOS]$ at the end of the body. Thus, the format for SHI is as follows:", "where n is the number of lines in the poem.", "The format of CI will be enriched with $[label_3]$ if it has two stanzas in the body:", "Here, $[label_1]$, $[label_2]$ and $[label_3]$ are set as ‘$\\#$’, ‘$*$’ and ‘$\\&$’.", "After pre-processing, all the formatted poem samples will be sent to the poetry generation model for training, as illustrated in Figure 3." ], [ "We leverage the Transformer-based GPT-2, which is often used to train a robust language model, as the basic model of poetry generation. Compared to previous neural network-based language models such as RNN and LSTM, it is reported that GPT-2 exhibits good performance in the quality of generated texts given quite a long history BIBREF6. To weaken the so-called degeneration problem in generation and increase the diversity of generated texts, we use the top-k stochastic sampling strategy BIBREF7 (k is set as 15 in our experiment) to choose the next tokens to generate. In addition, our poetry generation model takes the Chinese character rather than the word as a basic linguistic unit, so word segmentation is not needed.", "With this naive GPT-2 model, we see from the experimental results that the generated poems appear pretty good in both meaning and sound(including rhyme), though if being observed carefully, there still exist some in-depth problems in sentence fluency and thematic coherence of the whole poem which are uneasy to solve. As for form, the model can perform well in generating Jueju and Lvshi of SHI whereas rather poorly in generating various Cipai of CI, with quite high form errors. Figure 4(a) is an example of a generated CI by this model, under Cipai of Busuanzi, where two characters are mistakenly missing which obviously violates the form requirement." ], [ "In the basic model, the loss function for training with respect to the $i$th token in the text is conventionally defined as the cross-entropy:", "where $x[i]$ is the vector of $i$th token, $j$ is over all possible token types.", "To address the form problem, we simply add a weighting factor into the loss function with particular stress on the aforementioned three types of form-related tokens, i.e., the line separation label ‘,’, the stanza separation label ‘$\\&$’, and $[EOF]$, as in:", "where $weight[i]$ is set as 1 for any Chinese character, 2 for ‘,’ and ‘$\\&$’, and 3 for $[EOF]$.", "This simple method (we thus call it the form-stressed weighting method) enhances the model’s capability to form control quite significantly. Figure 4(b) shows an example that contrasts the case in Figure 4(a)." ], [ "We implement the GPT-2 model based on the transformers library BIBREF8. The model configuration is 8 attention heads per layer, 8 layers, 512 embedding dimensions, and 1024 feed-forward layer dimensions. We employ the OpenAIAdam optimizer and train the model with 400,000 steps in total on 4 NVIDIA 1080Ti GPUs. The characters with frequency less than 3 in CCPC1.0 are treated as UNK and a vocabulary with 11259 tokens (characters) is finally built up." 
], [ "For Jueju and Lvshi of SHI, because of their simplicity in form, the two models hardly make form errors. We generate 500 poems for each type using the two models accordingly. All of these poems are in the right form. This demonstrates that both models are very powerful in generating Jueju and Lvshi with almost perfect performance in form.", "For CI, we select 6 Cipais, with the body length varying from 33 to 114 characters and with relatively sufficient training samples in CCPC, as our observation target. We generate 300 poems with the two models accordingly. Table 1 summarizes the correct rates of the two models under these 6 Cipais (a generated poem is considered to be correct in form if and only if its form fully matches the expected form). As can be seen, a tendency is the longer the body of CI, the worse the performance of the two models in form, and the more significant the gain in the form correct rate for the enhanced model (an extreme is in the case of Qinyuanchun where the correct rate is raised from 12.0% to 55.0%)." ], [ "The preliminary observation on the generated poems suggests that the inclusion of the stanza separation into the unified format of training samples is beneficial in some degree for meeting the form requirement. For instance, we input the same title to the enhanced model and to a model trained under the same condition except without the stanza separation, asking them to generate a number of CIs with Cipai of Busuanzi, a task similar to that in Figure 4. We find that about 20% of CIs generated by the latter suffer from some errors in form, as illustrated in Figure 5, meanwhile all the CIs generated by the former ideally match the expected form." ], [ "According to our observation, the enhanced model is likely to generate poems with both high quality and diversity. We present two examples generated by the model and give some comments on the meaning of each poem.", "七律 · 远望", "江上微茫一叶舟,天涯芳草满汀洲", "数声渔唱隔船过,几点人家落帆游", "春色不从莺语到,夕阳空度客心愁", "何时重向长桥饮,同泛溪光共白头", "The example above is a Qiyan Lvshi. The title of this poem means “look far around”. In this poem, the first four lines depict a view seen from the river bank: misty and rolling waters, a drifting boat, lush vanillas, melodies from passing boats and cottages on the bank, creating a tranquil and halcyon atmosphere. However, the poet is still overcome by solitude and nostalgia because of the lonely trip, which is vividly revealed in the second four sentences. The poem adopts a typical semantic structure of Qiyan Lvshi with its first-half delineating a view and then conveying the poet’s feeling in the second-half (the contrast between the view and the feeling is one of the appreciated artistic methods in Chinese classical poems). In addition, for Lvshi, the pairs of $<$the third line, the fourth line$>$ and $<$the fifth line, the sixth line$>$ must satisfy the requirement of Duizhang, a correspondence in both part-of-speech (POS) and word sense between two parallel lines. This point is perfectly reflected in the generated poem, as shown in Table 2.", "满江红 · 塞外", "风急秋空,天欲暮,黄云飞处。", "人不见,沙堤野戍,乱鸦啼苦。", "万里胡笳吹雁断,三更羌笛愁如许。", "甚关河、征妇泪痕多,无行路。", "青狼火,荒烟树。", "白露草,残阳度。", "但寒山远近,故乡千古。", "一角斜晖归梦绕,满江红叶西陵去。", "待明年,又到汉家城,重回顾。", "", "The example above is a CI in the form of Manjianghong and the title means “beyond the Great Wall”.
It vividly depicts a typical view of Northwestern China: howling wind, clouds of dust, crying crows and the lugubrious sound of flutes. The poem is saturated with nostalgia, solitude and desolate feelings of life, which is not only embodied in the bleak scenery but also overtly revealed in the last three sentences. The combination of visual and audio feelings and of reality and imagination is tactfully employed in the poem and makes it even more impressive and resonating." ], [ "In this paper, we propose a GPT-2 based uniformed framework for generating major types of Chinese classical poems, including SHI and CI. To this end, we at first define a unified format for formulating all types of training samples by integrating more detailed form information, then present a simple form-stressed weighting method in GPT-2 to strengthen the control to the form of CI. Preliminary experiments validate the effectiveness of our method. Nevertheless, we also find that enabling GPT-2 to have a strong capability in form manipulation for the generated texts remains a difficult challenge, particularly for those forms with longer body length and fewer training samples. We plan to figure out a more sophisticated way to make the model better learn the form structure and hope to enrich the general GPT-2 from this special perspective." ], [ "We would like to thank Zhipeng Guo, Xiaoyuan Yi, Xinran Gu and anonymous reviewers for their insightful comments. This work is supported by the project Text Analysis and Studies on Chinese Classical Literary Canons with Big Data Technology under grant number 18ZDA238 from the Major Program of the National Social Science Fund of China. Hu is also supported by the Initiative Scientific Research Program and Academic Training Program of the Department of Computer Science and Technology, Tsinghua University." ] ], "section_name": [ "", " ::: ", " ::: ::: ", "Introduction", "Introduction ::: SHI", "Introduction ::: CI", "Related Work", "Model ::: Pre-processing", "Model ::: Basic Model", "Model ::: Enhanced Model", "Experiment ::: Experiment Setup", "Experiment ::: Performance Comparison of the Two Models in Form", "Experiment ::: Effect of the Stanza Separation", "Experiment ::: Case Observation", "Conclusion and Future Works", "Acknowledgements" ] }
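The form-stressed weighting described in the Enhanced Model section of this paper amounts to a token-dependent reweighting of the standard cross-entropy loss: ordinary characters keep weight 1, the line separator ',' and stanza separator '&' get weight 2, and the end-of-body token gets weight 3. The sketch below is a minimal illustration of that idea, not the authors' code; it assumes a PyTorch-style setup, and the three token ids are placeholders whose real values depend on the 11,259-token vocabulary built from CCPC1.0.

```python
import torch
import torch.nn.functional as F

# Placeholder ids for the three form-related tokens; the actual ids depend on
# the vocabulary built from the poetry corpus.
LINE_SEP_ID = 3    # ','  line separator
STANZA_SEP_ID = 4  # '&'  stanza separator
EOS_ID = 5         # end-of-body token

def form_stressed_loss(logits, targets):
    """Cross-entropy over next-token predictions with form tokens up-weighted.

    logits:  (batch, seq_len, vocab_size) unnormalised scores from the LM head
    targets: (batch, seq_len) gold token ids
    """
    flat_logits = logits.reshape(-1, logits.size(-1))
    flat_targets = targets.reshape(-1)
    # Per-token cross-entropy, kept unreduced so it can be reweighted.
    per_token = F.cross_entropy(flat_logits, flat_targets, reduction="none")
    # weight = 1 for ordinary characters, 2 for ',' and '&', 3 for the end token.
    weights = torch.ones_like(per_token)
    weights[flat_targets == LINE_SEP_ID] = 2.0
    weights[flat_targets == STANZA_SEP_ID] = 2.0
    weights[flat_targets == EOS_ID] = 3.0
    return (weights * per_token).mean()
```

Taking the plain mean of the weighted per-token losses slightly inflates the overall scale relative to unweighted cross-entropy; dividing by the sum of the weights instead would keep the magnitude comparable, which is a design choice the paper does not specify.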
{ "answers": [ { "annotation_id": [ "b2cf6d3f57698251eec990df3ce42f4271b95225", "b329fcbcd2132fba8fa55b6c320ef06031b059d5", "db1daaf10fae01579492cc9f2ccc956126b70816" ], "answer": [ { "evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows.", "We implement the GPT-2 model based on the transformers library BIBREF8. The model configuration is 8 attention heads per layer, 8 layers, 512 embedding dimensions, and 1024 feed-forward layer dimensions. We employ the OpenAIAdam optimizer and train the model with 400,000 steps in total on 4 NVIDIA 1080Ti GPUs. The characters with frequency less than 3 in CCPC1.0 are treated as UNK and a vocabulary with 11259 tokens (characters) is finally built up." ], "extractive_spans": [ "CCPC1.0" ], "free_form_answer": "", "highlighted_evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows.", "We implement the GPT-2 model based on the transformers library BIBREF8. The model configuration is 8 attention heads per layer, 8 layers, 512 embedding dimensions, and 1024 feed-forward layer dimensions. We employ the OpenAIAdam optimizer and train the model with 400,000 steps in total on 4 NVIDIA 1080Ti GPUs. The characters with frequency less than 3 in CCPC1.0 are treated as UNK and a vocabulary with 11259 tokens (characters) is finally built up." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows.", "In this paper, we propose a uniformed computational framework that tries to generate major types of Chinese classical poems with two major forms of SHI, Jueju, and Lvshi, as well as 121 major forms (Cipai) of CI using a single model. Preliminary experimental results validate the effectiveness of the proposed framework. The implemented model has been incorporated into Jiuge BIBREF0, the most influential Chinese classical poetry generation system developed by Tsinghua University (refer to http://jiuge.thunlp.cn/)." 
], "extractive_spans": [], "free_form_answer": "Two major forms(Jueju and Lvshi) of SHI and 121 major forms of CI from Chinese Classical Poerty Corpus (CCPC1.0)", "highlighted_evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems).", "In this paper, we propose a uniformed computational framework that tries to generate major types of Chinese classical poems with two major forms of SHI, Jueju, and Lvshi, as well as 121 major forms (Cipai) of CI using a single model." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "liao2019gpt applied GPT to Chinese classical poetry generation. They pre-trained the model on a Chinese news corpus with 235M sentences and then fine-tuning the model on Chinese poem corpus with 250,000 Jueju and Lvshi, 20,000 CIs, 700,000 pairs of couplets. A key point is they defined a unified format to formulate different types of training samples, as [form, identifier 1, theme, identifier 2, body], where “body” accommodates the full content of an SHI, CI, or couplet in corresponding “form” with “theme” as its title. Experiments demonstrated GPT-based poem generation gained promising performance, meanwhile still faced some limitations, for instance, only 70% of the generated CIs for the Cipai Shuidiaogetou, a sort of CI with quite long body, are correct in form.", "Regarding this, we think the work of liao2019gpt could be improved in the following three respects. First, there is a large improving room for better fitting the form requirement of CI in the process of generation, especially for those with relatively long body length. Second, their formulation format for training samples can be supplemented, for example, the stanza structure of CI is missing. Third, using contemporary Chinese news corpus to pre-train the model may not be necessary, owing to distinctive differences in both meaning and form between contemporary Chinese and Chinese classical poetry language." ], "extractive_spans": [ "Chinese poem corpus with 250,000 Jueju and Lvshi, 20,000 CIs, 700,000 pairs of couplets" ], "free_form_answer": "", "highlighted_evidence": [ "They pre-trained the model on a Chinese news corpus with 235M sentences and then fine-tuning the model on Chinese poem corpus with 250,000 Jueju and Lvshi, 20,000 CIs, 700,000 pairs of couplets.", "Third, using contemporary Chinese news corpus to pre-train the model may not be necessary, owing to distinctive differences in both meaning and form between contemporary Chinese and Chinese classical poetry language." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "132abe092239e725d2a23ad7ae1fb12f103fd0ef", "2bcd3b995877f0b3d55dd88e16208fdffdea73d4", "3c0da5f76611957c796e10409bcd8b6246b269e3" ], "answer": [ { "evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. 
SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows.", "With this naive GPT-2 model, we see from the experimental results that the generated poems appear pretty good in both meaning and sound(including rhyme), though if being observed carefully, there still exist some in-depth problems in sentence fluency and thematic coherence of the whole poem which are uneasy to solve. As for form, the model can perform well in generating Jueju and Lvshi of SHI whereas rather poorly in generating various Cipai of CI, with quite high form errors. Figure 4(a) is an example of a generated CI by this model, under Cipai of Busuanzi, where two characters are mistakenly missing which obviously violates the form requirement." ], "extractive_spans": [ "SHI ", "CI " ], "free_form_answer": "", "highlighted_evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI", "As for form, the model can perform well in generating Jueju and Lvshi of SHI whereas rather poorly in generating various Cipai of CI, with quite high form errors. Figure 4(a) is an example of a generated CI by this model, under Cipai of Busuanzi, where two characters are mistakenly missing which obviously violates the form requirement." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows.", "In this paper, we propose a uniformed computational framework that tries to generate major types of Chinese classical poems with two major forms of SHI, Jueju, and Lvshi, as well as 121 major forms (Cipai) of CI using a single model. Preliminary experimental results validate the effectiveness of the proposed framework. The implemented model has been incorporated into Jiuge BIBREF0, the most influential Chinese classical poetry generation system developed by Tsinghua University (refer to http://jiuge.thunlp.cn/)." ], "extractive_spans": [ "two major forms of SHI, Jueju, and Lvshi,", "121 major forms (Cipai) of CI " ], "free_form_answer": "", "highlighted_evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. ", "SHI and CI can be further divided into many different types in terms of their forms. ", "In this paper, we propose a uniformed computational framework that tries to generate major types of Chinese classical poems with two major forms of SHI, Jueju, and Lvshi, as well as 121 major forms (Cipai) of CI using a single model." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. 
According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms. We briefly introduce the related background knowledge as follows." ], "extractive_spans": [ "two primary categories, SHI and CI", "SHI and CI can be further divided into many different types" ], "free_form_answer": "", "highlighted_evidence": [ "Chinese Classical poetry can be classified into two primary categories, SHI and CI. According to the statistical data from CCPC1.0, a Chinese Classical Poetry Corpus consisting of 834,902 poems in total (We believe it is almost a full collection of Chinese Classical poems). 92.87% poems in CCPC1.0 fall into the category of SHI and 7.13% fall into the category of CI. SHI and CI can be further divided into many different types in terms of their forms." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "two", "two" ], "paper_read": [ "somewhat", "somewhat" ], "question": [ "What is the source of the training/testing data?", "What are the types of chinese poetry that are generated?" ], "question_id": [ "c2ce25878a17760c79031a426b6f38931cd854b2", "1d263356692ed8cdee2a13f103a82d98f43d66eb" ], "question_writer": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ], "search_query": [ "", "" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: An example of SHI with Wuyan Jueju as its form. The array of small boxes, usually each surrounds a Chinese character, illustrates the form requirement in the number of lines and the number of characters per line for a poem. A full correspondence between character and box, no more and no less, indicates this basic form requirement is satisfied by the given poem.", "Figure 2: An example of CI with the form(Cipai) Busuanzi. In contrast to the case of SHI in Figure 1, the array of small boxes here shows the predefined number of characters per line of CI tends to be variable.", "Figure 3: Format pre-processing of poem samples for training.", "Figure 4: Comparison of two generated poems by the basic model and the enhanced model.", "Table 1: Comparison between two models on the control to the form of CI.", "Figure 5: Two example poems generated by the model without considering the stanza separation. Both have errors in form. Refer to Figure 4(b) for comparison.", "Table 2: Illustration of Duizhang." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "5-Table1-1.png", "5-Figure5-1.png", "6-Table2-1.png" ] }
[ "What is the source of the training/testing data?" ]
[ [ "2003.11528-Introduction-2", "2003.11528-Experiment ::: Experiment Setup-0", "2003.11528-Introduction ::: CI-3", "2003.11528-Related Work-4", "2003.11528-Related Work-5" ] ]
[ "Two major forms(Jueju and Lvshi) of SHI and 121 major forms of CI from Chinese Classical Poerty Corpus (CCPC1.0)" ]
67
1801.03615
Improved English to Russian Translation by Neural Suffix Prediction
Neural machine translation (NMT) suffers a performance deficiency when a limited vocabulary fails to cover the source or target side adequately, which happens frequently when dealing with morphologically rich languages. To address this problem, previous work focused on adjusting translation granularity or expanding the vocabulary size. However, morphological information remains relatively under-considered in NMT architectures, even though exploiting it may further improve translation quality. We propose a novel method, which can not only reduce data sparsity but also model morphology through a simple but effective mechanism. By predicting the stem and suffix separately during decoding, our system achieves an improvement of up to 1.98 BLEU compared with previous work on English to Russian translation. Our method is orthogonal to different NMT architectures and consistently gains improvements on various domains.
{ "paragraphs": [ [ "Neural machine translation (NMT) BIBREF0 has shown better performance compared with statistic machine translation BIBREF1 . Such methods encode a source sentence into hidden states and generate target words sequentially by calculating a probability distribution on the target-side vocabulary. Most NMT systems limit target side vocabulary to a fixed size, considering the limit of graphics memory size and high computing complexity when predicting a word over the whole target side vocabulary (e.g., 30K or 50K). In addition, a larger target-side vocabulary can also make the prediction task more difficult. Word-level NMT systems suffer the problem of out of vocabulary (OOV) words, particularly for morphologically rich languages. For example, English to Russian machine translation faces a big challenge due to rich morphology of Russian words, which leads to much more OOV words than some other languages. Typically a specific tag is used to represent all OOV words, which is then translated during a post process BIBREF2 . This can be harmful to the translation quality.", "There has been several methods to address this problem. Some focused on translation granularity ( BIBREF3 , BIBREF3 ; BIBREF4 , BIBREF4 ; BIBREF5 , BIBREF5 ), while others ( BIBREF6 , BIBREF6 ; BIBREF7 , BIBREF7 ) effectively expand target side vocabulary. However, though those methods can avoid OOV, none of them has explicitly modeled the target side morphology. When dealing with language pairs such as English-Russian, the number of different target side words is large due to the rich suffixes in Russian. The above methods are limited in distinguishing one suffix from another.", "Since the total number of different stems in a morphologically rich language is much less than the number of words, a natural perspective to make a better translation on a morphologically-rich target-side language is to model stems and suffixes separately. We design a simple method, which takes a two-step approach for the decoder. In particular, stem is first generated at each decoding step, before suffix is predicted. Two types of target side sequences are used during training, namely stem sequence and suffix sequence, which are extracted from the original target side word sequence, as shown in Figure FIGREF1 . Sparsity is relieved since the number of stem types is much smaller than word types, and suffix types can be as small as several hundreds. Another advantage of this structure is that during the prediction of suffix, the previously generated stem sequence can be considered, which can further improve the accuracy of suffix prediction.", "We empirically study this method and compare it with previous work on reducing OOV rates ( BIBREF3 , BIBREF3 ; BIBREF4 , BIBREF4 ). Results show that our method gives significant improvement on the English to Russian translation task on two different domains and two popular NMT architectures. We also verify our method on training data consisting of 50M bilingual sentences, which proves that this method works effectively on large-scale corpora." ], [ "Subword based BIBREF3 and character-based ( BIBREF4 , BIBREF4 ; BIBREF5 , BIBREF5 ) NMT are the two directions of adjusting translation granularity, which can be helpful to our problem.", "In BIBREF3 ( BIBREF3 )'s work, commonly appearing words remain unchanged, while others are segmented into several subword units, which are from a fixed set. Both source and target side sentences can be changed into subword sequences. 
More specifically, some rare words are split into and represent as some more frequent units, base on a data compression technique, namely Byte Pair Encoding (BPE). The vocabulary built on common words and these frequent subword units can successfully improve the coverage of training data. In fact, a fixed size vocabulary can cover all the training data as long as the granularity of subword units is small enough. The main limitation of this method is the absence of morphology boundary. Some subword units may not be a word suffix which can represent a morphological meaning, and the subword units are treated in the same way as complete words. Subword units and complete words are predicted during a same sequence generation procedure. This may lead to two problems:", "The sequence length can increase, especially on a morphologically rich language, which can lead to low NMT performance.", "A subword unit cannot represent a linguistic unit, and suffix is not modeled explicitly.", " BIBREF5 ( BIBREF5 ) proposed a hybrid architecture to deal with the OOV words in source side and any generated unknown tag in the target side. In their system, any OOV words on the source side are encoded at the character level, and if an unknown tag is predicted during decoding, another LSTM will be used to generate a sequence of target-side characters, which will be used as the replacement of the target side unknown word for the translation of a source OOV. However, their model may not work well when the target side is morphologically rich and the source side is not, because their hybrid network on the target side will only be used when an unknown tag is generated, which is always corresponding to a source unknown word. If most of the source side tokens are covered by the source vocabulary, the hybrid network may not have advantage on a morphologically rich target side language.", "In BIBREF4 ( BIBREF4 )'s work, source side and target side sequence are all character-based, which eliminates OOV on the source side, and can generate any target side word theoretically. Character-based NMT may potentially improve the translation accuracy of morphologically rich language on the source side, but the training and decoding latency increase linearly with the sequence length, which is several times to the original word based NMT. Another disadvantage of character-based NMT is that character embedding lost the ability to represent a linguistic unit. Long-distance dependences are more difficult to be modeled in a character-based NMT. BIBREF4 ( BIBREF4 ) use convolutional and pooling layers on the source side to make the source sequence shorter. However, the target side sequence remains much longer than the original word sequence, and suffix boundary of the target side is not specifically considered in their model. This work may more helpful if a morphologically rich language is on the source side, but it is not designed to overcome the problem brought by a morphologically rich target side language.", "There is another way which can effectively reduce target-side OOV. Both BIBREF6 ( BIBREF6 ) and BIBREF7 ( BIBREF7 ) use a large target-side vocabulary. To overcome the problem of GPU memory limitation and increasing computational complexity, instead of the original vocabulary, a selected subset is actually used both during the training and decoding time. Their model can generate any of the words in the large vocabulary, but data sparsity still remains, the low frequent words in the training data is not fully trained." 
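To make the byte-pair-encoding segmentation discussed above concrete, below is a minimal sketch of BPE merge learning; the toy vocabulary, the number of merge operations, and the helper names are illustrative assumptions rather than the settings of the cited subword work.

```python
# Minimal BPE sketch (assumption: toy corpus and merge count for illustration only).
from collections import Counter

def get_pair_stats(vocab):
    """Count how often each adjacent symbol pair occurs, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in vocab.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the chosen pair with a single merged symbol."""
    merged = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Words start as character sequences with an end-of-word marker.
vocab = {tuple("low") + ("</w>",): 5,
         tuple("lower") + ("</w>",): 2,
         tuple("newest") + ("</w>",): 6,
         tuple("widest") + ("</w>",): 3}

merges = []
for _ in range(10):  # the number of merge operations controls subword granularity
    stats = get_pair_stats(vocab)
    if not stats:
        break
    best = stats.most_common(1)[0][0]
    vocab = merge_pair(best, vocab)
    merges.append(best)

print(merges)  # learned merge operations, applied greedily at segmentation time
```

At segmentation time the learned merges are applied in the order they were learned, so rare words decompose into frequent subword units while common words remain unsplit.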
], [ "Previous work considered morphological information for both SMT and NMT. BIBREF8 ( BIBREF8 ) proposed an effective way to integrate word-level annotation in SMT, which can be morphological, syntactic, or semantic. Morphological information can be utilized not only on source side, but also the target side. Although these annotation can help to improve the translation procedure, data sparsity still exists. BIBREF9 ( BIBREF9 ) decompose the process of translating a word into two steps. Firstly a stem is produced, then a feature-rich discriminative model selects an appropriate inflection for the stem. Target-side morphological features and source-side context features are utilized in their inflection prediction model.", " BIBREF10 ( BIBREF10 ) use distributed representations for words and soft morphological tags in their neural inflection model, which can effectively reduce lexical sparsity, leading to less morphological ambiguity. This is the first try of modeling inflection through a neural method, integrated in a SMT architecture.", "For NMT, BIBREF11 ( BIBREF11 ) make use of various source side features (such as morphological features, part-of-speech tags, and syntactic dependency labels) to enhance encoding in NMT. This is the first time morphological information is leveraged in NMT architecture. Target-side morphology is not considered in their work. BIBREF12 ( BIBREF12 ) predict a sequence of interleaving morphological tags and lemmas, followed by a morphological generator. They used a external model to synthesize words given tags and lemmas. Our method is the first to explicitly consider the generation of morphological suffixes within a neural translation model. Our work is motivated by a line of work that generates morphology during text generation ( BIBREF13 , BIBREF13 ; BIBREF14 , BIBREF14 ; BIBREF10 , BIBREF10 )." ], [ "Morphology Russian has rich morphology, which includes number (singular or plural), case (nominative, accusative etc.), gender (feminine, masculine or neuter) and tense mood. Figure FIGREF2 shows one example for Russian. A noun word “ball” is always masculine, but the suffix differs when the case and number changes, resulting in 10 different forms. Some other nouns can be feminine or neuter, and their adjectives will agree with them. Both adjectives and verbs have different forms according to their case, tense mood and the form of words they modify. Such morphological changes bring a challenge to machine translation task.", "Stemming A Russian word can be split into two parts, namely the stem and the suffix. Suffix contains morphological information of a Russian word, including gender, number and case etc. In this paper, we use a deterministic rule-based stemmer to obtain stem and suffix for a Russian word. The process of stemming is shown in Figure FIGREF1 ." ], [ "We experiment with two different types of Neural Machine Translation (NMT) systems, one using a recurrent encoder-decoder structure BIBREF0 , the other leveraging the attention mechanism on the encoder BIBREF15 .", "Recurrent Neural Network Based NMT We use an encoder-decoder network proposed by BIBREF16 ( BIBREF16 ). The encoder uses a bi-directional recurrent neural network (RNN) to encode the source sentence, the decoder uses a uni-directional RNN to predict the target translation. Formally, the source sentence can be expressed as INLINEFORM0 , where INLINEFORM1 is the length of the sentence. 
It is encoded into a sequence of hidden states INLINEFORM2 , each INLINEFORM3 is the result of a concat operation on a forward (left-to-right) hidden state INLINEFORM4 and a backword (right-to-left) hidden state INLINEFORM5 : DISPLAYFORM0 DISPLAYFORM1 ", " INLINEFORM0 is a variation of LSTM BIBREF17 , namely Gated Recurrent Unit (GRU) BIBREF18 : DISPLAYFORM0 DISPLAYFORM1 ", "where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are weight matrices which are learned.", "During decoding, at each time step INLINEFORM0 , an attention probability INLINEFORM1 to the source word INLINEFORM2 is first calculated by: DISPLAYFORM0 ", "and DISPLAYFORM0 ", "is an attention model that gives a probability distribution on source words INLINEFORM0 , which indicates how much the source word INLINEFORM1 is considered during the decoding step INLINEFORM2 to generate target side word INLINEFORM3 . The attention layer INLINEFORM4 can be as simple as a feed-forward network. INLINEFORM5 is a weighted sum of the encoding hidden state at each position of input sentence: DISPLAYFORM0 ", " INLINEFORM0 is then fed into a feed-forward network together with previous target word embedding INLINEFORM1 and the current decoding hidden state INLINEFORM2 to generate the output intermediate state INLINEFORM3 : DISPLAYFORM0 ", "and DISPLAYFORM0 ", "where INLINEFORM0 is GRU, which is mentioned before. The output intermediate state INLINEFORM1 is then used to predict the current target word by generating a probability distribution on target side vocabulary. In our implementation, maxout BIBREF19 mechanism is used in both training and decoding. Dropout BIBREF20 is used in training time.", "Transformer BIBREF15 is a recently proposed model for sequence to sequence tasks. It discards the RNN structure for building the encoder and decoder blocks. Instead, only the attention mechanism is used to calculate the source and target hidden states.", "The encoder is composed of stacked neural layers. In particularly, for the time step INLINEFORM0 in layer INLINEFORM1 , the hidden state INLINEFORM2 is calculated as follows: First, a self-attention sub-layer is employed to encode the context. For this end, the hidden states in the previous layer are projected into a tuple of queries( INLINEFORM3 ), keys( INLINEFORM4 ) and values( INLINEFORM5 ), where INLINEFORM6 in the following function denotes a feed forward layer: DISPLAYFORM0 ", "Then attention weights are computed as scaled dot product between current query and all keys, normalized with a softmax function. After that, the context vector is represented as weighted sum of the values projected from hidden states in the previous layer. The hidden state in the previous layer and the context vector are then connected by residual connection, followed by a layer normalization function BIBREF21 , to produce a candidate hidden state INLINEFORM0 . Finally, another sub-layer including a feed forward layer, followed by another residual connection and layer normalization, are used to obtain the hidden state INLINEFORM1 : DISPLAYFORM0 ", "The decoder is also composed of stacked layers. The hidden states are calculated in a similar way, except for the following two differences: First, only those target positions before the current one are used to calculate the target side self-attention. Second, attention is applied in both target-to-target and target-to-source. The target-to-source attention sub-layer is inserted between the target self-attention sub-layer and the feed-forward sub-layer. 
Different from self-attention, the queries( INLINEFORM0 ) are projected from target hidden states in the previous layer, and the keys( INLINEFORM1 ) and values( INLINEFORM2 ) are projected from the source hidden states in the last layer.", "The rest of the calculation is exactly the same with self-attention. Compared to RNN based sequence to sequence models, transformer allows significantly more parallelization, since all the hidden states in the same layer can be calculated simultaneously, whereas the hidden states in RNN can only be calculated sequentially from left to right. In consideration of translation quality, BIBREF15 ( BIBREF15 ) use multi-head attention instead of single-head attention as mentioned above, and positional encoding is also used to compensate the missing of position information in this model." ], [ "We take a two-step approach for the decoder, yielding a stem at each time step before predicting the suffix of the stem. Since we only make use of source hidden states, target hidden states, target to source attention weights and target predicted tokens, these are universal in all sequence to sequence models, our method can be implemented into any of these models.", "Figure FIGREF23 shows a more detailed procedure. Decoding target stems is exactly the same as decoding target words in normal sequence to sequence model, which is predicted through a softmax layer based on the target output layer. All we need is to replace target words with target stems: DISPLAYFORM0 ", "where INLINEFORM0 is a weight matrix to transfer the output layer INLINEFORM1 from a dimension of hidden size to target side vocabulary size. INLINEFORM2 is target side hidden state at time step INLINEFORM3 when generating the stem. INLINEFORM4 is the output state: DISPLAYFORM0 ", " INLINEFORM0 is a single layer feed-forward neural network.", "After the prediction of INLINEFORM0 , the target suffix INLINEFORM1 on decoding step INLINEFORM2 is immediately predicted from the target suffix hidden state INLINEFORM3 : DISPLAYFORM0 ", " INLINEFORM0 is generated from a single layer feed-forward neural network by using the stem embedding INLINEFORM1 , stem hidden state INLINEFORM2 , and source context vector INLINEFORM3 : DISPLAYFORM0 ", "Since we consider that the attention degree towards each word in the source sequence is useful to the generation of suffix, the aligned source context is also used during the prediction of suffix. Note that the source context vector INLINEFORM0 is shared between the generation of stem hidden state INLINEFORM1 and suffix hidden state INLINEFORM2 .", "In addition, the embedding of the predicted suffix is not further fed into the hidden state of the next stem, because we think suffix information can provide little information for predicting the next stem from a linguistic perspective." ], [ "During the training stage, the objective function INLINEFORM0 consists of two components: DISPLAYFORM0 ", "where: DISPLAYFORM0 ", "and DISPLAYFORM0 ", " INLINEFORM0 verifies from 0 to 1, and INLINEFORM1 can also be modeled in the whole architecture, which will be studied in our future work. In our experiments, we set INLINEFORM2 to 0.1 empirically. We use Adam BIBREF22 as our optimizing function." ], [ "Beam search is adopted as our decoding algorithm. At each time step, the search space can be infeasible large if we take all the combinations of stems and suffixes into consideration. So we use cube pruning BIBREF23 to obtain n-best candidates. 
First, the top INLINEFORM0 stems with the highest scores are pushed to the stack. Then for each stem, we predict the top INLINEFORM1 suffixes, which will result in INLINEFORM2 complete candidates. The candidates will be inserted to a priority queue, which keeps records of the top INLINEFORM3 complete candidates. After all the stems are expanded, the final n-best candidates are obtained." ], [ "We run our experiments on English to Russian (En-RU) data under two significantly different domain, namely the news domain and the e-commerce domain. We verify our method on both RNN based NMT architecture and Transformer based NMT architecture." ], [ "News We select 5.3M sentences from the bilingual training corpus released by WMT2017 shared task on the news translation domain as our training data. We use 3 test set, which are published by WMT2017 news translation task, namely “News2014”, “News2015”, “News2016”.", "E-commerce We collect 50M bilingual sentences as our training corpus:", "10M sentences are crawled and automatic aligned from some international brand's English and Russian websites.", "20M are back translated corpus: First we crawled the Russian sentences from websites of certain Russian's Brands. Then translated them to English through a machine translation system trained on limited RU-EN corpus BIBREF24 .", "The last 20M bilingual sentences are crawled from the web, and are not domain specific.", "We typically use the following 3 types of data as test set, which are named title, description and comment, these sentences are all extracted from e-commerce websites. Title are the goods' titles showed on a listing page when some buyers type in some keywords in a searching bar under an e-commerce website. Description refers to the information in a commodities' detail page. Comment include the review or feedback from some buyers. Example sentences are shown in Table TABREF33 . For each kind of test set, we randomly select 1K English sentences and translate it by human.", "Pre-Processing Both the training set and the test set are lowercased, and some entity words appeared in the data are generalized into specific symbols, such as “_date_”, “_time_”, “_number_”. When selecting our training data, we keep the sentences which has length between 1 to 30. We use a bilingual sentence scorer to discard some low-quality bilingual sentences. The scorer is simply trained under algorithm of IBM Model 1 BIBREF25 on a very large bilingual corpus.", "Target Side Word Stemming We use snowball to create stems from words. Because stem created from snowball is always a substring of the original word, we can obtain suffixes by simply applying a string cut operation. By applying snowball to a target side word sequence, we split a target side sentence into a stem sequence and a suffix sequence. The stemming accuracy of snowball is 83.3% on our human labeled test set.", "Applying BPE to Target Side Stem Sequence We also use the Byte-pair encoding (BPE algorithm) on the target side stem sequence, which will further reduce data sparsity. Some stems will be split into “sub-stem” units. The stem sequence is transferred to “sub-stem” sequence at this step. Suffix sequence should also be adjusted according to the “sub-stem” sequence simultaneously. 
More specifically, as shown in Figure FIGREF36 , if a stem is split into INLINEFORM0 “sub-stem” units, then INLINEFORM1 “N” (refers to “N” in Figure FIGREF1 ) will be inserted into the suffix sequence, and these tags will be located in front of the suffix which is corresponding to the original complete stem. The sub-stem sequence and the adjusted suffix sequence are the final training corpus on target side." ], [ "Our RNN and Transformer baseline systems utilize BPE BIBREF3 to transfer the original word sequence to subword sequence on both the source and the target sides, since the subword method had a stable improvement compared with word based system, especially on morphologically rich languages.", "Besides, we compared our system with a fully character-based baseline system, which is an implementation of BIBREF4 ( BIBREF4 )'s work, and is available on github.", "We limit the source and target vocabularies to the most frequent 30K tokens for both English and Russian. For news domain, about 99.7% tokens are covered by the source side vocabulary, about 97.0% target tokens are covered by the target side vocabulary." ], [ "For our system, the source token coverage is the same as the baselines. On the other hand, 100% target tokens are covered by the target-side vocabulary, which consists of “sub-stem” units generated from target side stem sequence by applying BPE algorithm. There are totally 752 types of suffixes, which are calculated from the suffix sequences generated from target side sentences." ], [ "For the experiments on the e-commerce domain, the training data is large. We use a distributed training framework for both the baseline system and our system. Training data are split into several parts, each being trained on a single worker node. A parameter server averages the model parameters from each worker node after every 100 training batchs and then synchronizes the averaged model to every worker node. Each worker continues with the training process based on the averaged model." ], [ "We use BLEU BIBREF26 as our evaluation metric. The performance of different systems are shown in Table TABREF34 and TABREF35 . On both the news and e-commerce domains, our system performs better than baseline systems.", "On news domain, the average improvement of our method is 1.75 and 0.97 BLEU score when implemented on RNN-based NMT, compared with subword BIBREF3 method and fully character-based BIBREF4 method, respectively. When implemented on Transformer BIBREF15 , average improvement is 1.47 BLEU compared with subword method. On the e-commerce domain, which use 50M sentences as training corpus, the average improvement of our method is 0.68 BLEU compared with the subword method.", "We evaluate stem accuracies and suffix accuracies separately. For stem, we use BLEU as evaluation metric, Table TABREF34 shows stem BLEU of different methods on “News2014” test set, our method can gain significant improvement compared with baselines, since our method can reduce data sparsity better than baselines. Our method can effectively reduce suffix error, Figure FIGREF43 gives some examples both on e-commerce and news domains:", "For the first sample, the suffix of the translation words (tagged by 1 and 2) from two different baseline systems means a reflexive verb, whose direct object is the same as its subject. In other words, a reflexive verb has the same semantic agent and patient. 
It is an incorrect translation according to the source meaning, because we can infer from the source sentence that the agent is a person and the patient is an object (some goods bought by a customer). In our system, the suffix of the translation word (tagged by 3) is correct. It represents an infinitive verb which may take objects, other complements and modifiers to form a verb phrase.", "In the second sample, the translation word (tagged by 1) is not accurate, its suffix represents a plural form, but the correct form is singular, because the corresponding source word “positive” is singular form. Character-based system can correctly translate source word “stars” into a Russian word with plural form. However, the translation of “positive” (tagged by 2) is still with wrong form. Both the translation of “positive” and “stars” from our system are with the correct forms.", "In the third sample, the translation word tagged by 3 represents past tense; However, the translation words tagged by 1 and 2 represent present tense. Our system successfully predicted the tense moods." ], [ "We proposed a simple but effective method to improve English-Russian NMT, for which a morphologically rich language is on the target side. We take a two-step approach in the decoder. At each step, a stem is first generated, then its suffix is generated. We empirically compared our method with two previous methods (namely subword and fully character-based), which can also to some extent address our problem. Our method gives an improvement on two encoder-decoder NMT architectures on two domains. To our knowledge, we are the first to explicitly model suffix for morphologically-rich target translation." ], [ "We thank the anonymous reviewers for their detailed and constructed comments. Yue Zhang and Min Zhang are the corresponding authors. The research work is supported by the National Natural Science Foundation of China (61525205, 61432013, 61373095). Thanks for Xiaoqing Li, Heng Yu and Zhdanova Liubov for their useful discussion. " ] ], "section_name": [ "Introduction", "Translation Granularity", "Morphology and MT", "Russian Morphology and Stemming", "Neural Machine Translation Baselines", "Target-Side Suffix Prediction", "Training", "Decoding", "Experiments", "Data", "Baselines", "Our System", "Distributed Training", "Results and Analysis", "Conclusion", "Acknowledgments" ] }
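As a rough illustration of the two-step decoding procedure described above (top-k stems first, then top-k suffixes per stem, with a priority queue keeping the best complete candidates), here is a minimal Python sketch; the scoring callables stand in for the trained stem and suffix softmax layers, and their interface is an assumption made only for illustration.

```python
# Sketch of one decoding step of the stem-then-suffix model (assumptions:
# `stem_logprobs` and `suffix_logprobs` are stand-ins for the decoder's softmax
# layers; beam and candidate sizes are illustrative).
import heapq

def expand_step(beam, stem_logprobs, suffix_logprobs, k_stem=5, k_suffix=5, beam_size=5):
    """Expand each hypothesis by the top stems, then the top suffixes per stem,
    keeping only the overall best complete (stem, suffix) continuations."""
    candidates = []
    for score, stems, suffixes in beam:
        # Step 1: top-k stems for this hypothesis.
        stem_scores = stem_logprobs(stems, suffixes)            # dict: stem -> log prob
        best_stems = heapq.nlargest(k_stem, stem_scores.items(), key=lambda x: x[1])
        for stem, s_score in best_stems:
            # Step 2: top-k suffixes conditioned on the predicted stem.
            suf_scores = suffix_logprobs(stems + [stem], suffixes)  # dict: suffix -> log prob
            best_sufs = heapq.nlargest(k_suffix, suf_scores.items(), key=lambda x: x[1])
            for suf, f_score in best_sufs:
                candidates.append((score + s_score + f_score,
                                   stems + [stem], suffixes + [suf]))
    # Cube-pruning-style selection: keep only the beam_size best complete candidates.
    return heapq.nlargest(beam_size, candidates, key=lambda x: x[0])
```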
{ "answers": [ { "annotation_id": [ "132f382fc3f9b9db1abc162078d50f865a62d2f8", "85e33ebd8533d4f457f39a62a4d608ffa108890a", "42ba6f4847c52b36a85a95d8b529cbd44c0e5787" ], "answer": [ { "evidence": [ "Our RNN and Transformer baseline systems utilize BPE BIBREF3 to transfer the original word sequence to subword sequence on both the source and the target sides, since the subword method had a stable improvement compared with word based system, especially on morphologically rich languages.", "Besides, we compared our system with a fully character-based baseline system, which is an implementation of BIBREF4 ( BIBREF4 )'s work, and is available on github." ], "extractive_spans": [ "RNN and Transformer baseline systems utilize BPE BIBREF3", "fully character-based baseline system, which is an implementation of BIBREF4 ( BIBREF4 )'s work" ], "free_form_answer": "", "highlighted_evidence": [ "Our RNN and Transformer baseline systems utilize BPE BIBREF3 to transfer the original word sequence to subword sequence on both the source and the target sides, since the subword method had a stable improvement compared with word based system, especially on morphologically rich languages.\n\nBesides, we compared our system with a fully character-based baseline system, which is an implementation of BIBREF4 ( BIBREF4 )'s work, and is available on github." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We empirically study this method and compare it with previous work on reducing OOV rates ( BIBREF3 , BIBREF3 ; BIBREF4 , BIBREF4 ). Results show that our method gives significant improvement on the English to Russian translation task on two different domains and two popular NMT architectures. We also verify our method on training data consisting of 50M bilingual sentences, which proves that this method works effectively on large-scale corpora.", "Subword based BIBREF3 and character-based ( BIBREF4 , BIBREF4 ; BIBREF5 , BIBREF5 ) NMT are the two directions of adjusting translation granularity, which can be helpful to our problem." ], "extractive_spans": [], "free_form_answer": "Subword based NMT, Character-based NMT", "highlighted_evidence": [ "We empirically study this method and compare it with previous work on reducing OOV rates ( BIBREF3 , BIBREF3 ; BIBREF4 , BIBREF4 ). ", "Subword based BIBREF3 and character-based ( BIBREF4 , BIBREF4 ; BIBREF5 , BIBREF5 ) NMT are the two directions of adjusting translation granularity, which can be helpful to our problem." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our RNN and Transformer baseline systems utilize BPE BIBREF3 to transfer the original word sequence to subword sequence on both the source and the target sides, since the subword method had a stable improvement compared with word based system, especially on morphologically rich languages." ], "extractive_spans": [ "RNN and Transformer baseline systems utilize BPE BIBREF3 to transfer the original word sequence to subword sequence on both the source and the target sides" ], "free_form_answer": "", "highlighted_evidence": [ "Our RNN and Transformer baseline systems utilize BPE BIBREF3 to transfer the original word sequence to subword sequence on both the source and the target sides, since the subword method had a stable improvement compared with word based system, especially on morphologically rich languages." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "" ], "paper_read": [ "" ], "question": [ "what is the previous work they are comparing to?" ], "question_id": [ "68f1df3fb0703ff694a055d23e7ec3f6fb449b8d" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "" ], "topic_background": [ "" ] }
{ "caption": [ "Figure 1: Russian word sequence to stem sequence and suffix sequence, “N” is a special tag used in suffix sequence, which means “no suffix” for corresponding stem.", "Figure 2: Different forms of the word “ball”.", "Figure 3: Improved rnn-based NMT architecture.", "Table 1: Example of the e-commerce test set.", "Figure 4: Adjust the suffix sequence according to the “substem” sequence.", "Table 2: Evaluation on the news domain: “Subword” refers to Sennrich, Haddow, and Birch (2015b), “Fully Character-based” refers to Lee, Cho, and Hofmann (2016), “Suffix Prediction” refers to our work. Scores in brackets are BLEU of stem, which means that the output sentence and reference are both transformed into stem sequence.", "Table 3: Evaluation on the e-commerce domain: “Subword” refers to Sennrich, Haddow, and Birch (2015b), “Suffix Prediction” refers to our work.", "Figure 5: “RNN+Subword” refers to Sennrich, Haddow, and Birch (2015b), “Character-based” refers to Lee, Cho, and Hofmann (2016), “RNN+Suffix” refers to our work." ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "4-Figure3-1.png", "5-Table1-1.png", "5-Figure4-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Figure5-1.png" ] }
[ "what is the previous work they are comparing to?" ]
[ [ "1801.03615-Baselines-0", "1801.03615-Translation Granularity-0", "1801.03615-Baselines-1", "1801.03615-Introduction-3" ] ]
[ "Subword based NMT, Character-based NMT" ]
68
1910.09362
Improving Word Representations: A Sub-sampled Unigram Distribution for Negative Sampling
Word2Vec is the most popular model for word representation and has been widely investigated in the literature. However, its noise distribution for negative sampling is decided by empirical trials and its optimality has always been ignored. We suggest that the distribution is a sub-optimal choice, and propose to use a sub-sampled unigram distribution for better negative sampling. Our contributions include: (1) proposing the concept of semantics quantification and deriving a suitable sub-sampling rate for the proposed distribution adaptive to different training corpora; (2) demonstrating the advantages of our approach in both negative sampling and noise contrastive estimation by extensive evaluation tasks; and (3) proposing a semantics weighted model for the MSR sentence completion task, resulting in considerable improvements. Our work not only improves the quality of word vectors but also benefits current understanding of Word2Vec.
{ "paragraphs": [ [ "The recent decade has witnessed the great success achieved by word representation in natural language processing (NLP). It proves to be an integral part of most other NLP tasks, in which words have to be vectorized before input to the models. High quality word vectors have boosted the performance of many tasks, such as named entity recognition BIBREF0, BIBREF1, sentence completion BIBREF2, BIBREF3, part-of-speech tagging BIBREF4, BIBREF5, sentiment analysis BIBREF6, BIBREF7, and machine translation BIBREF8, BIBREF9. In a conventional way, word vectors are obtained from word-context co-occurrence matrices by either cascading the row and column vectors BIBREF10 or applying singular value decomposition (SVD) BIBREF11. However, these approaches are limited by their sub-optimal linear structure of vector space and the highly increased memory requirement when confronting huge vocabularies. Both problems have been solved by a popular model called Word2Vec BIBREF12, which utilizes two shallow neural networks, i.e., skip-gram and continuous bag-of-words, to learn word vectors from large corpora. The model is also capable of capturing interesting linear relationships between word vectors.", "While Word2Vec makes a breakthrough in word representation, it has not been fully understood and its theoretical exploitation is still in demand. One aspect, which has always been ignored, is the choice of noise distribution for negative sampling. Word2Vec employs a smoothed unigram distribution with a power rate of 3/4 as the noise distribution. The decision is made by empirical trials but has been widely adopted in subsequent work BIBREF13, BIBREF4, BIBREF14, BIBREF15. However, the quality of learned word vectors is sensitive to the choice of noise distribution BIBREF16, BIBREF13 when using a moderate number (5 to 15) of negative samples, which is a common strategy for the tradeoff between vector quality and computation costs.", "In this paper, we propose to employ a sub-sampled unigram distribution for negative sampling and demonstrate its capability of improving the linear relationships between word vectors. Our contributions include three aspects: (1) We propose the concept of semantics quantification and derive a suitable sub-sampling rate for the proposed distribution. (2) We demonstrate the advantages of our noise distribution in both negative sampling and noise contrastive estimation by extensive experiments. (3) We propose a semantics weighted model for the MSR sentence completion task, resulting in considerable improvements." ], [ "Firstly, we briefly introduce the two architectures, i.e., skip-gram (SG) and continuous bag-of-words (CBOW) in Word2Vec BIBREF12. For a corpus with a word sequence $w_{1}, w_{2}, \\cdots , w_{T}$, skip-gram predicts the context word $w_{t+j}$ given the center word $w_t$, and maximizes the average log probability,", "where $c$ is the size of context window, and $p(w_{t+j}|w_{t})$ is defined by the full softmax function,", "where $v_{w}$ and $v_{w}^{\\prime }$ are the vectors of the “input” and “output” words, and $|V|$ is the size of vocabulary.", "As for CBOW, it predicts the center word based on the context words. The input vector is usually the average of the context words' vectors, i.e., $v_{w_{I}} = \\frac{1}{2c} \\sum _{-c \\le j \\le c, j \\ne 0} v_{w_{t+j}}$." ], [ "For large vocabularies, it is inefficient to compute the full softmax function in Eq. (DISPLAY_FORM3). 
To tackle this problem, Word2Vec utilizes negative sampling to distinguish the real output word from $k$ noise words,", "where $\\sigma (x) = \\frac{1}{1 + \\exp (-x)}$, and $P_n$ is the so-called noise distribution, representing the probability for a word to be sampled as a noise word. The smoothed unigram distribution used in Word2Vec is expressed as,", "where $f(w_i)$ is the frequency of word $w_i$." ], [ "Sub-sampling is a process in Word2Vec for randomly deleting the most frequent words during training, since they are usually stop words with less information than infrequent ones. During sub-sampling, the probability that a word $w_i$ should be kept is defined as,", "where $\\hat{f}(w_i)$ is the normalized word frequency of $w_i$, and $t$ is called the sub-sampling rate typically between $10^{-5}$ and $10^{-3}$. The process does not delete infrequent words." ], [ "Unigram. A noise distribution is recommended to be close to the distribution of the real data in noise contrastive estimation (NCE) BIBREF16. Such guidance finds its earliest application for training language models by BIBREF17, demonstrating that the unigram distribution works better than a uniform distribution. This choice is also adopted in some other work BIBREF18, BIBREF19, BIBREF20, BIBREF21. However, the performance of models is limited due to the inadequate training of infrequent words BIBREF22, BIBREF23.", "Smoothed Unigram. The smoothed unigram distribution in Word2Vec BIBREF12 solves this problem because it gives more chances for infrequent words to be sampled. However, the required power rate is decided empirically, and may need adjustment for different scenarios BIBREF24, BIBREF25. BIBREF23 even propose to use a bigram distribution after studying the power rate, but it is infeasible for large corpora. Besides, the smoothed unigram distribution also changes the lexical structure of infrequent words, which could be a reason for the limited quality of word vectors." ], [ "We believe a sub-sampled unigram distribution is better for negative sampling since it reduces the amount of frequent words and also maintains the lexical structure of infrequent words. To our best knowledge, we are the first to employ such a noise distribution for negative sampling. Beyond this, we propose a approach to derive the sub-sampling rate that is adaptive to different corpora (Table TABREF35)." ], [ "We start our analysis by recalling the probability in Eq. (DISPLAY_FORM9) of a word to be kept during sub-sampling. Obviously, we need to choose the sub-sampling rate $t$ to decide the noise distribution. Although empirically selecting a sub-sampling rate can result in improvements (Table TABREF55), we aim to derive the sub-sampling rate adaptive to different corpora. To accomplish this, we firstly introduce a concept critical word denoted by $w_{crt}$, which is the word with $P_{keep}(w_{crt})=1$. The critical word indicates that words with frequencies lower than it will not be deleted during sub-sampling. It is uniquely decided by the sub-sampling rate. Thus, if we select the critical word with certain properties at first, we are able to obtain a suitable sub-sampling rate in return.", "The basic rule for us to select the critical word is to find a word with balanced semantic and syntactic information. We prefer not to delete words with relatively more semantic information. Now, the problem is how to measure these two kinds of information a word possesses." 
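A small sketch of how the proposed sub-sampled unigram noise distribution could be built from raw counts is given below; it assumes the usual word2vec keep probability of the form sqrt(t / f_hat) clipped at 1 (the exact expression is not reproduced in this text) and uses toy counts only for illustration.

```python
# Sketch of building a sub-sampled unigram noise distribution (assumptions: the
# keep probability follows the common word2vec form sqrt(t / f_hat) clipped to 1,
# and the noise probability is the renormalized expected frequency after
# sub-sampling with rate t_c; the counts below are illustrative).
import math

def keep_prob(f_hat, t):
    """Probability of keeping a word with normalized frequency f_hat under rate t."""
    return min(1.0, math.sqrt(t / f_hat))

def subsampled_unigram(counts, t_c):
    """counts: word -> raw corpus count; returns word -> noise probability."""
    total = sum(counts.values())
    weights = {w: c * keep_prob(c / total, t_c) for w, c in counts.items()}
    z = sum(weights.values())
    return {w: v / z for w, v in weights.items()}

counts = {"the": 60000, "of": 35000, "model": 900, "gradient": 120, "zipf": 8}
noise = subsampled_unigram(counts, t_c=1e-4)
```

Compared with the smoothed unigram distribution, this construction reduces the sampling weight of very frequent words while leaving the relative weights of infrequent words unchanged.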
], [ "In order to quantify the semantic and syntactic information of words, we consider two observations: (1) frequent words are more likely to be function words with more syntactic information; (2) infrequent words are more likely to be content words with more semantic information BIBREF26. Thus, for the $r$-th most frequent word $w$, the quantity of its semantic and syntactic information $I_{sem}^{w}$ and $I_{syn}^{w}$, can be described as,", "where $F_1(r)$ and $F_2(f_r)$ are monotonically increasing functions of the ranking $r$ and the frequency $f_r$, respectively. One can tell that the functions capture the properties of the observations.", "On the other hand, we require that the total quantity of semantic and syntactic information, denoted by $I_{tot}^{w}$ is fixed for all words, i.e.,", "where $\\mathrm {const}_1$ is a constant. We rewrite Eq. (DISPLAY_FORM14) into an exponential form as the following,", "This expression leads us to a well known power law called Zipf's law BIBREF27, which approximates the relationship between $f_r$ and $r$ as,", "where $\\gamma , \\beta $ are constants and $\\beta \\approx 1$. Consequently, we can decide the form of the functions $F_1(r)$ and $F_2(f_r)$ as,", "Obviously, the $\\log $ form functions satisfy the definition we made before. As a results, the total information becomes $\\log \\gamma $ given $\\beta \\approx 1$." ], [ "Now, given the quantified information, we are able to decide the critical word satisfying the condition", "Combined with Eq. (DISPLAY_FORM16), we obtain the frequency of the critical word", "where $r_c$ is the ranking of the critical word. Meanwhile, we know the probability of the critical word $w_{crt}$ to be kept should be exactly $P_{keep}^{t_c} (w_{crt})=1$. Thus, with Eq. (DISPLAY_FORM9) and Eq. (DISPLAY_FORM20), the sub-sampling rate for our noise distribution is expressed as", "Note that we use $t_c$ to distinguish from the sub-sampling rate $t$ applied for the training corpus." ], [ "As for the estimation of constants $\\gamma $ and $\\beta $, we provide two choices:", "(1) wLSE-1. We use weighted least squares estimation (wLSE) to estimate the two constants. Since more data are located at higher positions in $\\log r$ axis, wLSE with a weight of $\\frac{1}{r}$ for the r-th most frequent word makes sure the trend of line can be well fit. The estimated constants are", "where $\\left\\langle x \\right\\rangle $ denotes the weighted average of $x$ such that $\\left\\langle x \\right\\rangle = \\sum _{r=1}^{|V|}\\frac{x}{r} / \\sum _{r=1}^{|V|}\\frac{1}{r}$.", "(2) wLSE-2. We use wLSE with a condition that the fitting line passes through the point $(\\log 1, \\log f_1)$. This method engages the most frequent word to further control the trend of the line. As a result, $\\hat{\\gamma }= f_1$ and", "Now, we can write down the expression of the sub-sampled unigram distribution", "where $\\alpha _i$ satisfies", "Note that we use $P_n^{sub}$ to distinguish from the original noise distribution $P_n$ in Word2Vec." ], [ "In semantics quantification, the modeling of word distribution is not limited to zipf's law. We adopt it because of its popularity and conciseness. There could be other choices BIBREF28, BIBREF29, and the expression of $t_c$ needs modification accordingly. Besides, one can either use the chosen law to decide the critical word or just search through the unigram distribution to find it." ], [ "To show the advantages of our noise distribution, we conduct experiments on three evaluation tasks. 
While the word analogy task BIBREF12 is our focus for testing the linear relationships between word vectors, we also evaluate the learned word vectors on the word similarity task BIBREF0 and the synonym selection task BIBREF3.", "In the following, we firstly describe the experimental setup including baselines, training corpora and details. Next, we report experimental results for the three NLP tasks. At last, we introduce the semantics weighted model proposed for the MSR sentence completion task BIBREF30." ], [ "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach, specifically,", "(1) Uni$^{3/4}$. The smoothed unigram distribution proposed by BIBREF12.", "(2) Sub$^{L1}$. The sub-sampled uinigram distribution, of which the threshold $t_c$ is estimated by wLSE-1.", "(3) Sub$^{L2}$. The sub-sampled uinigram distribution, of which the threshold $t_c$ is estimated by wLSE-2." ], [ "Our training corpora come from four sources, described as below:", "(1) BWLM. The “One Billion Word Language Modeling Benchmark”, which is already pre-processed and has almost 1 billion tokens.", "(2) Wiki10. The April 2010 snapshot of the Wikipedia corpus with a total of about 2 million articles and 1 billion tokens.", "(3) UMBC. The UMBC WebBase corpus from the Stanford WebBase project’s February 2007 Web crawl, with over 3 billion tokens.", "(4) MSR. The MSR corpus containing 5 Conan Doyle Sherlock Holmes novels with about 50 million tokens.", "The first three large corpora are used for word similarity, synonym selection, and word analogy tasks. The MSR corpus is designated for the MSR sentence completion task. We pre-process the corpora by converting all words into lowercase and removing all the non-alphanumeric. The number of remaining tokens for each corpus is listed in the column Size of Table TABREF35. Vocabularies are built by discarding words whose occurrences are less than the threshold shown in the column Mcn. The column Vocab represents the sizes of the resulted vocabularies. The rightmost two columns are the sub-sampling rates for our noise distribution by the wLSE-1 and wLSE-2 estimations, respectively. The values are $10^6$ times of the true ones for readability." ], [ "We implement the training of word vectors with the word2vec tool, in which the part of noise distribution is modified to support several choices. For SG and CBOW, we set the vector dimensionality to 100, and the size of the context window to 5. We choose 10 negative samples for each training sample in the models. The models are trained using the stochastic gradient decent (SGD) algorithm with a linear decaying learning rate with an initial value of 0.025 in SG and 0.05 in CBOW. We train the models on the three large corpora for 2 epochs, and for MSR's Holmes novels the value may vary. Results in this paper are shown in percentages and each of them is the average result of 4 repeated experiments, unless otherwise stated." ], [ "The task computes the correlation between the word similarity scores by human judgment and the word distances in vector space. We use Pearson correlation coefficient $\\rho _p$ as the metric, the higher of which the better the word vectors are. The expression of $\\rho _p$ is", "where $\\phi $ and $\\hat{\\phi }$ are random variables for the word similarity scores by human judgment and the cosine distances between word vectors, respectively. Benchmark datasets for this task include RG BIBREF31, MC BIBREF32, WS BIBREF33, MEN BIBREF34, and RW BIBREF35." 
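The evaluation metric just defined is straightforward to compute; the sketch below scores toy vectors against toy human judgments using cosine similarity and the Pearson coefficient (all values are illustrative, not taken from the benchmark datasets).

```python
# Word-similarity evaluation sketch: Pearson correlation between human judgments
# and cosine similarities of word vectors (toy vectors and scores assumed).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pearson(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

vectors = {w: np.random.RandomState(i).randn(100) for i, w in
           enumerate(["car", "automobile", "coast", "shore", "noon", "string"])}
pairs = [("car", "automobile", 9.2), ("coast", "shore", 8.8), ("noon", "string", 0.5)]

human = [s for _, _, s in pairs]
model = [cosine(vectors[a], vectors[b]) for a, b, _ in pairs]
print(pearson(human, model))
```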
], [ "We implement the task on the mentioned 5 datasets and show the results in the column Word Similarity of Table TABREF42. At the first glance, our noise distributions Sub$^{L1}$ and Sub$^{L2}$ perform slightly better than Uni$^{3/4}$. Significant improvements can be achieved on two small datasets RG and MC, because they are more sensitive to the vector quality. Another observation is that CBOW is more affected by Sub$^{L1}$ and Sub$^{L2}$ than SG, if comparing results on RG and MC with Wiki10 corpus. These results show that our noise distributions have the potential as high as or even higher than the smoothed unigram distribution in learning good word vectors." ], [ "This task attempts to select the semantically closest word, from the candidate answers, to the stem word. For example, given the stem word “costly” and the candidate answers “expensive, beautiful, popular, complicated”, the most similar word should be “expensive”. For each candidate answer, we compute the cosine similarity score between its word vector and that of the stem word. The candidate answer with the highest score is our final answer for a question. Here we use the TOEFL dataset BIBREF36 with 80 synonym questions and the LEX dataset with 303 questions collected by ourselves." ], [ "We report the results of this task in the Synonym Selection column of Table TABREF42. For all the noise distributions, the results are not stable on TOEFL dataset since it is quite small. Still, Sub$^{L1}$ and Sub$^{L2}$ have comparable performance with Uni$^{3/4}$. In particular, Sub$^{L1}$ makes considerable improvements with Wiki10 corpus. As for LEX dataset, Sub$^{L1}$ and Sub$^{L2}$ outperform Uni$^{3/4}$ in both SG and CBOW models with BWLM corpus. With the other two corpora, Sub$^{L2}$ performs better than Sub$^{L1}$ and Uni$^{3/4}$ using CBOW model. But again, the SG model appears to be less boosted by Sub$^{L1}$ and Sub$^{L2}$ in terms of the corresponding results. Considering the unbalanced number of questions in these two datasets, we provide the total results on TOEFL+LEX and conclude that our noise distributions are better than Uni$^{3/4}$." ], [ "The task comes from the idea that arithmetic operations in a word vector space can be predicted: given three words $w_a$, $w_b$, and $w_c$, the goal is to find a word $w_d$ such that the relation $w_d:w_c$ is the same as the relation $w_b:w_a$. Semantic questions are in the form of “Athens:Greece is as Berlin:German” and syntactic ones are like “dance:dancing is as fly:flying”. Here we choose the fourth word $\\hat{w}_d$ by maximizing the cosine similarity such that $\\hat{w}_d = \\operatornamewithlimits{arg\\,max}_{w\\in V} \\,\\cos \\left( v_{w_b}-v_{w_a}+v_{w_c}, v_w\\right)$ BIBREF37. We test the learned word vectors on the Google analogy dataset BIBREF12, which contains 8,869 semantic questions and 10,675 syntactic ones." ], [ "This task is our primary focus because it exposes interesting linear relationships between word vectors. Thus we conduct four sub-experiments to investigate four aspects of our noise distributions.", "Model Responses. The two models SG and CBOW respond differently to our noise distributions as shown in Table TABREF42. When applying CBOW model on the three corpora, our noise distributions Sub$^{L1}$ and Sub$^{L2}$ can result in significant improvements compared with Uni$^{3/4}$, especially on semantic questions. Specifically, the accuracy of semantic questions is improved by 2 to 6 points, and for syntactic questions it is 1.5 to 2 points. 
As for the SG model, the improvements on semantic questions by Sub$^{L1}$ and Sub$^{L2}$ are still considerable (2 to 5 points). But on syntactic questions, Uni$^{3/4}$ becomes competitive with Sub$^{L1}$ and Sub$^{L2}$ and is slightly better with the BWLM and Wiki10 corpora. The reason may be that the SG model is better at capturing semantic relationships between words than the CBOW model. Still, it is safe to say that our noise distributions are better for SG in terms of the total accuracy.", "Number of Negative Samples. Increasing the number of negative samples does not necessarily reduce the advantages of our noise distributions. We report the results of the task using various numbers of negative samples in Fig. FIGREF48 (a) for CBOW and Fig. FIGREF48 (b) for SG. Note that we only train the models on Wiki10 and compare Sub$^{L2}$ with Uni$^{3/4}$. For CBOW, Sub$^{L2}$ outperforms Uni$^{3/4}$ consistently with significant margins on both semantic and syntactic questions. For SG, though the two distributions are competitive with each other on syntactic questions, Sub$^{L2}$ always performs better than Uni$^{3/4}$ on semantic ones.", "Optimality. Since our approach is built on assumptions and new concepts, we wonder whether the resulting $t_c$ is optimal. We select several values around $t_c$-2 and show the word analogy results in Fig. FIGREF48 (c). For CBOW, $t_c$-2 approaches the optimal point given the accuracy on semantic questions and the total dataset. For SG, the optimal point lies between $0.1\,t_c$-2 and $t_c$-2, with negligible advantages relative to Sub$^{L2}$. Notice that the point $3.57\,t_c$-2 corresponds to $10^{-5}$, showing much worse performance than Sub$^{L2}$. This indicates that simply adopting a commonly used sub-sampling rate is inappropriate, and that our approach is better.", "Scalability. We apply our noise distributions in NCE, from which negative sampling originates, to train word vectors. The implementation comes from wang2vec by BIBREF4, and we report the results of this task using CBOW. We include the unigram distribution Uni BIBREF18 and the sub-sampled unigram distribution Sub$^{1e-5}$ with a manually chosen threshold $10^{-5}$ for comparison. We draw three conclusions: (1) Uni$^{3/4}$ indeed works much better than Uni, as claimed in BIBREF12; (2) Sub$^{1e-5}$ results in considerable improvements compared with Uni$^{3/4}$, especially on semantic questions; (3) our Sub$^{L2}$ achieves the best performance consistently, even with a larger vector size of 300. Note that even though Sub$^{1e-5}$ or Uni$^{3/4}$ performs better on syntactic questions with the UMBC corpus, its results on semantic questions and the total dataset are much worse than Sub$^{L2}$. To this end, we believe that our approach is also scalable to NCE-related work." ], [ "The task deals with incomplete sentences, e.g., “A few faint ___ were gleaming in a violet sky.” with candidate answers “tragedies, stars, rumours, noises, explanations”, and aims to choose a word (e.g., “stars”) that best completes the sentence. Several works evaluate word vectors on this task BIBREF38, BIBREF18, BIBREF3 since it requires a combination of semantics and occasional logical reasoning. Most of them follow the same implementation procedure described in BIBREF17. Specifically, we can calculate the probabilities that the words in a set $\mathcal {S}$ surrounding the blank appear as the context of each candidate answer $w_{cd}$.
Then the score of the candidate answer is the sum of these probabilities, $s(w_{cd}) = \sum _{w \in \mathcal {S}} p(w \mid w_{cd})$,", "and the highest score corresponds to the final answer for the question.", "Since the conventional method ignores the syntactic structure of sentences, it should be biased toward semantics. Thus, we modify the method with two steps: (1) applying sub-sampling on the words in the sentences (CM$^s$); and (2) using quantified semantics as weights to form a semantics weighted model (SWM) based on (1). Then we have" ], [ "The setup of the models is a little different: the size of the context window for SG and CBOW is 10 and 5, respectively; the number of negative samples is 20 in both models; we train SG for 5 and 10 epochs when the size of word vectors is 100 and 300, while the number of epochs is 10 and 20 in CBOW; we use all the remaining words in a sentence to form $\mathcal {S}$.", "Our focus here is to promote SWM rather than to compare the noise distributions. We show the results of this task by previous word representation models and our approach in Table TABREF60. The bottom three previous models follow the conventional method. Accordingly, we draw two conclusions: (1) sub-sampling on the words in sentences results in significant improvements over the conventional method; and (2) SWM further improves CM$^s$ and beats previous word representation models with a vector size of 300, indicating the success of semantics quantification." ], [ "We propose to employ a sub-sampled unigram distribution for better negative sampling, and design an approach to derive the required sub-sampling rate. Experimental results show that our noise distribution captures better linear relationships between words than the baselines. It adapts to different corpora and is scalable to NCE-related work. The proposed semantics weighted model also achieves success on the MSR sentence completion task. In summary, our work not only improves the quality of word vectors, but also sheds light on the understanding of Word2Vec." ] ], "section_name": [ "Introduction", "Word2Vec ::: Architectures", "Word2Vec ::: Negative Sampling", "Word2Vec ::: Sub-sampling", "Related Work", "Sub-sampled Unigram Distribution", "Sub-sampled Unigram Distribution ::: Critical Word", "Sub-sampled Unigram Distribution ::: Semantics Quantification", "Sub-sampled Unigram Distribution ::: Expression of Sub-sampling Rate", "Sub-sampled Unigram Distribution ::: Constants Estimation", "Sub-sampled Unigram Distribution ::: Discussions", "Experiments", "Experiments ::: Experimental Setup ::: Baselines", "Experiments ::: Experimental Setup ::: Training Corpora", "Experiments ::: Experimental Setup ::: Training details", "Experiments ::: Task 1: Word Similarity Task ::: Task Description", "Experiments ::: Task 1: Word Similarity Task ::: Results", "Experiments ::: Task 2: Synonym Selection Task ::: Task Description", "Experiments ::: Task 2: Synonym Selection Task ::: Results", "Experiments ::: Task 3: Word Analogy Task ::: Task Description", "Experiments ::: Task 3: Word Analogy Task ::: Results", "Experiments ::: Extension of Semantics Quantification ::: MSR Sentence Completion Task", "Experiments ::: Extension of Semantics Quantification ::: Results", "Conclusions" ] }
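For reference, the conventional scoring procedure used in the MSR sentence completion task above (summing, over the words in $\mathcal {S}$, the probability of each context word given a candidate answer) can be sketched as follows; the softmax over dot products used here is only an illustrative stand-in for the probabilities produced by the trained SG/CBOW model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def completion_score(candidate, context_words, vectors, vocab):
    """Sum over context words of an (illustrative) probability p(context | candidate)."""
    v_cand = vectors[candidate]
    score = 0.0
    for ctx in context_words:
        # probability of ctx under a softmax over the whole vocabulary,
        # using dot products with the candidate's vector as logits
        logits = np.array([np.dot(vectors[w], v_cand) for w in vocab])
        probs = softmax(logits)
        score += probs[vocab.index(ctx)]
    return score

# toy usage: pick the candidate answer with the highest score
vocab = ["stars", "rumours", "sky", "violet", "gleaming"]
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=8) for w in vocab}
context = ["sky", "violet", "gleaming"]
candidates = ["stars", "rumours"]
best = max(candidates, key=lambda c: completion_score(c, context, vectors, vocab))
print(best)
```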
{ "answers": [ { "annotation_id": [ "6ed4824b13d6d27bed97ce689fcad83013171d06", "a58b94fd14b714ae45384060ad924d725048f3ed", "d5bbaab3acb647e566d6bed937842408502ec1e9" ], "answer": [ { "evidence": [ "Firstly, we briefly introduce the two architectures, i.e., skip-gram (SG) and continuous bag-of-words (CBOW) in Word2Vec BIBREF12. For a corpus with a word sequence $w_{1}, w_{2}, \\cdots , w_{T}$, skip-gram predicts the context word $w_{t+j}$ given the center word $w_t$, and maximizes the average log probability,", "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach, specifically," ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Firstly, we briefly introduce the two architectures, i.e., skip-gram (SG) and continuous bag-of-words (CBOW) in Word2Vec BIBREF12. ", "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach" ], "unanswerable": false, "yes_no": true }, { "evidence": [ "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach, specifically,", "(1) Uni$^{3/4}$. The smoothed unigram distribution proposed by BIBREF12." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach, specifically,\n\n(1) Uni$^{3/4}$. The smoothed unigram distribution proposed by BIBREF12." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach, specifically," ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We train the two models, SG and CBOW, using the original noise distribution and other two obtained by our approach, specifically," ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "13370930a6963282bff5b6a4604c259ff0808235", "91cf5b5494bbc6a8a649e67184bb49d5ff5b3543", "c4e195960929ec170dae79b5ccae9e4341655085" ], "answer": [ { "evidence": [ "Experiments ::: Task 1: Word Similarity Task ::: Task Description", "The task computes the correlation between the word similarity scores by human judgment and the word distances in vector space. We use Pearson correlation coefficient $\\rho _p$ as the metric, the higher of which the better the word vectors are. The expression of $\\rho _p$ is", "Experiments ::: Task 2: Synonym Selection Task ::: Task Description", "This task attempts to select the semantically closest word, from the candidate answers, to the stem word. For example, given the stem word “costly” and the candidate answers “expensive, beautiful, popular, complicated”, the most similar word should be “expensive”. For each candidate answer, we compute the cosine similarity score between its word vector and that of the stem word. The candidate answer with the highest score is our final answer for a question. Here we use the TOEFL dataset BIBREF36 with 80 synonym questions and the LEX dataset with 303 questions collected by ourselves." 
], "extractive_spans": [ "correlation between the word similarity scores by human judgment and the word distances in vector space", "select the semantically closest word, from the candidate answers" ], "free_form_answer": "", "highlighted_evidence": [ "Experiments ::: Task 1: Word Similarity Task ::: Task Description\nThe task computes the correlation between the word similarity scores by human judgment and the word distances in vector space. We use Pearson correlation coefficient $\\rho _p$ as the metric, the higher of which the better the word vectors are.", "Experiments ::: Task 2: Synonym Selection Task ::: Task Description\nThis task attempts to select the semantically closest word, from the candidate answers, to the stem word." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "To show the advantages of our noise distribution, we conduct experiments on three evaluation tasks. While the word analogy task BIBREF12 is our focus for testing the linear relationships between word vectors, we also evaluate the learned word vectors on the word similarity task BIBREF0 and the synonym selection task BIBREF3.", "The task computes the correlation between the word similarity scores by human judgment and the word distances in vector space. We use Pearson correlation coefficient $\\rho _p$ as the metric, the higher of which the better the word vectors are. The expression of $\\rho _p$ is", "where $\\phi $ and $\\hat{\\phi }$ are random variables for the word similarity scores by human judgment and the cosine distances between word vectors, respectively. Benchmark datasets for this task include RG BIBREF31, MC BIBREF32, WS BIBREF33, MEN BIBREF34, and RW BIBREF35." ], "extractive_spans": [], "free_form_answer": "They evaluate it on the word analogy, word similarity and synonym selection tasks using Pearson correlation coefficient as the metric.", "highlighted_evidence": [ "While the word analogy task BIBREF12 is our focus for testing the linear relationships between word vectors, we also evaluate the learned word vectors on the word similarity task BIBREF0 and the synonym selection task BIBREF3.", "We use Pearson correlation coefficient $\\rho _p$ as the metric, the higher of which the better the word vectors are. The expression of $\\rho _p$ is\n\nwhere $\\phi $ and $\\hat{\\phi }$ are random variables for the word similarity scores by human judgment and the cosine distances between word vectors, respectively." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "Do they use skip-gram word2vec?", "How is quality of the word vectors measured?" ], "question_id": [ "c7f43c95db3d0c870407cd0e7becdd802463683b", "4e2b12cfc530a4682b06f8f5243bc9f64bd41135" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "", "" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Illustration of the skip-gram and continuous bag-of-words (CBOW) architectures.", "Figure 2: Illustration of a unigram distribution, the fitting line, and the sub-sampled version.", "Table 1: Information of the training corpora.", "Table 2: Results of evaluation tasks on the learned word vectors, i.e., word similarity, synonym selection, and word analogy. The sub-sampling rate for the training corpora is 10−4.", "Figure 3: Word analogy results (a) and (b) for number of negative samples and (c) for optimality. Smoothed and wLSE-2 represent Uni3/4 and SubL2, tc-2 means the sub-sampling rate of SubL2.", "Table 3: The results of word analogy task using NCE for the training of word vectors. Each entry is the average result of 2 repeated experiments.", "Table 4: The results of MSR sentence completion task by previous word representation models and our approach." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "6-Table2-1.png", "7-Figure3-1.png", "8-Table3-1.png", "8-Table4-1.png" ] }
[ "How is quality of the word vectors measured?" ]
[ [ "1910.09362-Experiments ::: Task 1: Word Similarity Task ::: Task Description-0", "1910.09362-Experiments ::: Task 2: Synonym Selection Task ::: Task Description-0", "1910.09362-Experiments ::: Task 1: Word Similarity Task ::: Task Description-1", "1910.09362-Experiments-0" ] ]
[ "They evaluate it on the word analogy, word similarity and synonym selection tasks using Pearson correlation coefficient as the metric." ]
69
1911.12559
KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents
Keyphrase generation is the task of predicting a set of lexical units that conveys the main content of a source text. Existing datasets for keyphrase generation are only readily available for the scholarly domain and include non-expert annotations. In this paper we present KPTimes, a large-scale dataset of news texts paired with editor-curated keyphrases. Exploring the dataset, we show how editors tag documents, and how their annotations differ from those found in existing datasets. We also train and evaluate state-of-the-art neural keyphrase generation models on KPTimes to gain insights into how well they perform on the news domain. The dataset is available online at https://github.com/ygorg/KPTimes.
{ "paragraphs": [ [ "Keyphrases are single or multi-word lexical units that best summarise a document BIBREF0. As such, they are of great importance for indexing, categorising and browsing digital libraries BIBREF1. Yet, very few documents have keyphrases assigned, thus raising the need for automatic keyphrase generation systems. This task falls under the umbrella of automatic keyphrase extraction, which can also refer to the narrower subtask of finding keyphrases that appear verbatim in the input document. Generating keyphrases can be seen as a particular instantiation of text summarization, where the goal is not to produce a well-formed piece of text, but a coherent set of phrases that convey the most salient information. Those phrases may or may not appear in the document, the latter requiring some form of abstraction to be generated. State-of-the-art systems for this task rely on recurrent neural networks BIBREF2, BIBREF3, BIBREF4, and hence require large amounts of annotated training data to achieve good performance. As gold annotated data is expensive and difficult to obtain BIBREF5, previous works focused on readily available scientific abstracts and used author-assigned keyphrases as a proxy for expert annotations. However, this poses two major issues: 1) neural models for keyphrase generation do not generalize well across domains, thus limiting their use in practice; 2) author-assigned keyphrases exhibit strong consistency issues that negatively impact the model's performance. There is therefore a great need for annotated data from different sources that is both sufficiently large to support the training of neural-based models and comprises gold-standard labels provided by experts. In this study, we address this need by providing KPTimes, a dataset made of 279 923 news articles that comes with editor-assigned keyphrases.", "Online news articles are particularly relevant to keyphrase generation since they are a natural fit for faceted navigation BIBREF6 or topic detection and tracking BIBREF7. Also, and no less importantly, they are available in large quantities and are sometimes accompanied by metadata containing human-assigned keyphrases initially intended for search engines. Here, we divert these annotations from their primary purpose, and use them as gold-standard labels to automatically build our dataset. More precisely, we collect data by crawling selected news websites and use heuristics to extract texts paired with gold keyphrases. We then explore the resulting dataset to better understand how editors tag documents, and how these expert annotations differ from author-assigned keyphrases found in scholarly documents. Finally, we analyse the performance of state-of-the-art keyphrase generation models and investigate their transferability to the news domain and the impact of domain shift." ], [ "Frequently used datasets for keyphrase generation have a common characteristic that they are, by and large, made from scholarly documents (abstracts or full texts) paired with non-expert (mostly from authors) annotations. Notable examples of such datasets are SemEval-2010 BIBREF8 and KP20k BIBREF2, which respectively comprise scientific articles and paper abstracts, both about computer science and information technology. Detailed statistics are listed in Table . Only two publicly available datasets that we are aware of contain news documents: DUC-2001 BIBREF9 and KPCrowd BIBREF10.
Originally created for the DUC evaluation campaign on text summarization BIBREF11, the former is composed of 308 news articles annotated by graduate students. The latter includes 500 news articles annotated by crowdsourcing. Both datasets are very small and contain newswire articles from various online sources labelled by non-expert annotators, in this case readers, which is not without issues.", "Unlike author annotations, those produced by readers exhibit a significantly lower proportion of missing keyphrases, that is, gold keyphrases that do not occur in the content of the document. In the DUC-2001 dataset for example, more than 96% of the gold keyphrases actually appear in the documents. This confirms previous observations that readers tend to assign keyphrases in an extractive fashion BIBREF12, which makes these datasets less suitable for the task at hand (keyphrase generation) but rather relevant for a purely extractive task (keyphrase extraction). Yet, author-assigned keyphrases commonly found in scientific paper datasets are not perfect either, as they are less constrained BIBREF13 and include seldom-used variants or misspellings that negatively impact performance. One can see there is an apparent lack of sizeable expert-annotated data that enables the development of neural keyphrase generation models in a domain other than scholarly texts. Here, we fill this gap and propose a large-scale dataset that includes news texts paired with manually curated gold standard annotations." ], [ "To create the KPTimes dataset, we collected over half a million newswire articles by crawling selected online news websites. We applied heuristics to identify the content (title, headline and body) of each article and regarded the keyphrases provided in the HTML metadata as the gold standard. A cherry-picked sample document is showcased in Figure ; it illustrates present and absent keyphrases, as well as keyphrase variants (in this example, News media and journalism).", "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way: first, the editors revise a set of tags proposed by an algorithm. They then provide additional tags, which will be used by a taxonomy team to improve the algorithm.", "We first retrieved the URLs of the free-to-read articles from 2006 to 2017, and collected the corresponding archived HTML pages using the Internet Archive. Doing so allows the distribution of our dataset using a thin, URL-only list. We then extracted the HTML body content using beautifulsoup and devised heuristics to extract the main content and title of each article while excluding extraneous HTML markup and inline ads. Gold standard keyphrases are obtained from the metadata (field types news_keywords and keywords) available in the HTML page of each article. Surface form variants of gold keyphrases (e.g. “AIDS; HIV”, “Driverless Cars; Self-Driving Cars” or “Fatalities; Casualties”), which are sometimes present in the metadata, are kept to be used for evaluation purposes.", "We further cleansed and filtered the dataset by removing duplicates, articles without content, and those with too few (fewer than 2) or too many (more than 10) keyphrases. This process resulted in a set of 279 923 article-keyphrase pairs.
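The metadata-extraction step described above could look roughly like the following sketch; the exact heuristics for isolating the article body are not reproduced here, and the handling of the news_keywords and keywords meta fields below is a simplified assumption rather than the actual crawling code.

```python
import re
from bs4 import BeautifulSoup

def extract_article(html):
    """Return (title, body_text, keyphrases) from one archived news page.

    This is a simplified illustration: real pages need extra heuristics to
    strip navigation, inline ads and other extraneous markup.
    """
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""

    # gold keyphrases from the metadata fields mentioned in the paper
    keyphrases = []
    for name in ("news_keywords", "keywords"):
        tag = soup.find("meta", attrs={"name": name})
        if tag and tag.get("content"):
            keyphrases = [k.strip() for k in re.split(r"[;,]", tag["content"]) if k.strip()]
            break

    # naive body extraction: concatenate paragraph text
    body = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
    return title, body, keyphrases

# toy usage on a tiny hand-made page
html = """<html><head><title>Example</title>
<meta name="news_keywords" content="Mad Cow Disease (BSE); Cattle"/></head>
<body><p>First paragraph.</p><p>Second paragraph.</p></body></html>"""
print(extract_article(html))
```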
We randomly divided this dataset into training (92.8%), development (3.6%) and test (3.6%) splits.", "Restricting ourselves to one source of data ensures the uniformity and consistency of annotation that is missing in the other datasets, but it may also make the trained model source-dependent and harm generalization. To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset.", "Although in this study we concentrate only on the textual content of the news articles, it is worth noting that the HTML pages also provide additional information that can be helpful in generating keyphrases such as text style properties (e.g. bold, italic), links to related articles, or news categorization (e.g. politics, science, technology)." ], [ "We explored the KPTimes dataset to better understand how it stands out from the existing ones. First, we looked at how editors tag news articles. Figure illustrates the difference between the annotation behaviour of readers, authors and editors through the number of times that each unique keyphrase is used in the gold standard. We see that non-expert annotators use a larger, less controlled indexing vocabulary, in part because they lack the higher level of domain expertise that editors have. For example, we observe that frequent keyphrases in KPTimes are close to topic descriptors (e.g. “Baseball”, “Politics and Government”) while those appearing only once are very precise (e.g. “Marley's Cafe”, “Catherine E. Connelly”). Annotations in KPTimes are arguably more uniform and consistent, through the use of tag suggestions, which, as we will soon discuss in §SECREF12, makes it easier for supervised approaches to learn a good model.", "Next, we further looked at the characteristics of the gold keyphrases in KPTimes. Table shows that the number of gold keyphrases per document is similar to the one observed for KP20k while the number of missing keyphrases is higher. This indicates that editors are more likely to generalize and assign keyphrases that do not occur in the document ($\approx 55\%$). It is therefore this ability to generalize that models should mimic in order to perform well on KPTimes. We also note that keyphrases are on average shorter in news datasets ($1.5$ words) than those in scientific paper datasets ($2.4$ words). This may be due to the abundant use of longer, more specific phrases in scholarly documents BIBREF14.", "Variants of keyphrases recovered from the metadata occur in 8% of the documents and represent 810 sets of variants in the KPTimes test split. These variants often refer to the same concept (e.g. “Marijuana; Pot; Weed”), but can sometimes be simply semantically related (e.g. “Bridges; Tunnels”). Thereafter, keyphrase variants will be used during model evaluation for reducing the number of mismatches associated with commonly used lexical overlap metrics." ], [ "We train and evaluate several keyphrase generation models to understand the challenges of KPTimes and its usefulness for training models." ], [ "We follow the common practice and evaluate the performance of each model in terms of f-measure (F$_1$) at the top $N=10$ keyphrases, and apply stemming to reduce the number of mismatches. We also report the Mean Average Precision (MAP) scores of the ranked lists of keyphrases."
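The two measures can be sketched as follows (a simplified re-implementation rather than the exact evaluation script); it assumes keyphrases are already stemmed and lowercased and that gold variants have been collapsed.

```python
def f1_at_k(predicted, gold, k=10):
    """F1 of the top-k predicted keyphrases against the gold set."""
    top_k = predicted[:k]
    correct = len(set(top_k) & set(gold))
    if correct == 0:
        return 0.0
    p = correct / len(top_k)
    r = correct / len(gold)
    return 2 * p * r / (p + r)

def average_precision(predicted, gold):
    """Average precision of a ranked keyphrase list against the gold set."""
    hits, total = 0, 0.0
    for i, kp in enumerate(predicted, start=1):
        if kp in gold:
            hits += 1
            total += hits / i
    return total / len(gold) if gold else 0.0

def mean_average_precision(all_predicted, all_gold):
    return sum(average_precision(p, g)
               for p, g in zip(all_predicted, all_gold)) / len(all_gold)

# toy usage with already-stemmed keyphrases
pred = ["mad cow diseas", "cattl", "beef export", "brussel"]
gold = ["mad cow diseas", "beef export", "scrapi"]
print(f1_at_k(pred, gold, k=10), average_precision(pred, gold))
```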
], [ "Position is a strong feature for keyphrase extraction, simply because texts are usually written so that the most important ideas go first BIBREF15. In news summarization, for example, the lead baseline (that is, the first sentences of the document), while incredibly simple, is still a competitive baseline BIBREF16. Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document." ], [ "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction. It relies on a multipartite graph representation to enforce topical diversity while ranking keyphrase candidates. Like FirstPhrases, this model is bound to the content of the document and cannot generate missing keyphrases. We use the implementation of MultipartiteRank available in pke BIBREF18." ], [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur. When properly trained, this model was shown to be very effective in extracting keyphrases from scientific abstracts. CopyRNN has been further extended by BIBREF3 to include correlation constraints among keyphrases, which we do not include here as they yield comparable results.", "Two models were trained to demonstrate the need for datasets from multiple domains. CopySci was trained using scientific abstracts (KP20k) and CopyNews using newspaper articles (KPTimes); the two models use the same architecture." ], [ "Model performances for each dataset are reported in Table . Extractive baselines show the best results for KPCrowd and DUC-2001, which is not surprising given that these datasets exhibit the lowest ratio of absent keyphrases. Neural-based models obtain the greatest performance, but only for the dataset on which they were trained. We therefore see that these models do not generalize well across domains, confirming previous preliminary findings BIBREF2 and exacerbating the need for further research on this topic. Interestingly, CopyNews outperforms the other models on JPTimes and achieves very low scores for KPCrowd and DUC-2001, although all these datasets are from the same domain. This emphasizes the differences that exist between the reader- and editor-assigned gold standards. The score difference may be explained by the ratio of absent keyphrases that differs greatly between the reader-annotated datasets and JPTimes (see Table ), and thus questions the use of these rather extractive datasets for evaluating keyphrase generation.", "Finally, we note that the performance of CopyNews on KPTimes is significantly higher than that of CopySci on KP20k, proving that a more uniform and consistent annotation makes it easier to learn a good model."
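For reference, the typical call sequence for MultipartiteRank in pke looks roughly like the following; parameter names and default values may differ across pke versions, so this should be read as an illustration rather than the exact configuration used in the experiments (it also assumes the English models pke relies on are installed).

```python
import pke

# unsupervised, extractive baseline: MultipartiteRank as implemented in pke
extractor = pke.unsupervised.MultipartiteRank()

# load the raw text of one news article
extractor.load_document(input="Mad cow disease has led to a ban on British beef exports.",
                        language="en")

# select candidate phrases (noun phrases by default for this model)
extractor.candidate_selection()

# build the multipartite graph and rank the candidates
extractor.candidate_weighting(alpha=1.1, threshold=0.74, method="average")

# top-10 keyphrases with their scores
for keyphrase, score in extractor.get_n_best(n=10):
    print(keyphrase, score)
```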
] ], "section_name": [ "Introduction", "Existing datasets", "Building the KPTimes dataset", "Data analysis", "Performance of existing models", "Performance of existing models ::: Evaluation metrics", "Performance of existing models ::: Models ::: Baseline: FirstPhrase", "Performance of existing models ::: Models ::: Baseline, unsupervised: MultipartiteRank", "Performance of existing models ::: Models ::: State-of-the-art, supervised: CopyRNN", "Performance of existing models ::: Results", "Conclusion" ] }
{ "answers": [ { "annotation_id": [ "7a9f96067e761e6fb86486d2548e824f56d6d47d", "9554b4081d9bc1bd2bd8c4a40871460041a9249a", "afc445ffbcc140a3be00b45d65b2e06da4042b3e" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Restricting ourselves to one source of data ensures the uniformity and consistency of annotation that is missing in the other datasets, but it may also make the trained model source-dependent and harm generalization. To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ " To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way, first the editors revise a set of tags proposed by an algorithm. They then provide additional tags which will be used by a taxonomy team to improve the algorithm.", "We first retrieved the URLs of the free-to-read articles from 2006 to 2017, and collected the corresponding archived HTML pages using the Internet Archive. Doing so allows the distribution of our dataset using a thin, URL-only list. We then extracted the HTML body content using beautifulsoup and devised heuristics to extract the main content and title of each article while excluding extraneous HTML markup and inline ads. Gold standard keyphrases are obtained from the metadata (field types news_keywords and keywords) available in the HTML page of each article. Surface form variants of gold keyphrases (e.g. “AIDS; HIV”, “Driverless Cars; Self-Driving Cars” or “Fatalities; Casualties”), which are sometimes present in the metadata, are kept to be used for evaluation purposes.", "Restricting ourselves to one source of data ensures the uniformity and consistency of annotation that is missing in the other datasets, but it may also make the trained model source-dependent and harm generalization. To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset.", "Although in this study we concentrate only on the textual content of the news articles, it is worth noting that the HTML pages also provide additional information that can be helpful in generating keyphrases such as text style properties (e.g. bold, italic), links to related articles, or news categorization (e.g. politics, science, technology)." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented.", "Gold standard keyphrases are obtained from the metadata (field types news_keywords and keywords) available in the HTML page of each article. Surface form variants of gold keyphrases (e.g. 
“AIDS; HIV”, “Driverless Cars; Self-Driving Cars” or “Fatalities; Casualties”), which are sometimes present in the metadata, are kept to be used for evaluation purposes.", "To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset.\n\nAlthough in this study we concentrate only on the textual content of the news articles, it is worth noting that the HTML pages also provide additional information that can be helpful in generating keyphrases such as text style properties (e.g. bold, italic), links to related articles, or news categorization (e.g. politics, science, technology)." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "137be3470225276b3da1c65345d05a43e2708b77", "d14056c8a6ec9682f84e5fc9c981707ad01589cc", "f4412d098c8d09e20859171aa1421fc035b305a4" ], "answer": [ { "evidence": [ "To create the KPTimes dataset, we collected over half a million newswire articles by crawling selected online news websites. We applied heuristics to identify the content (title, headline and body) of each article and regarded the keyphrases provided in the HTML metadata as the gold standard. A cherry-picked sample document is showcased in Figure , it allows to show present and absent keyphrases, as well as keyphrase variants (in this example News media and journalism).", "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way, first the editors revise a set of tags proposed by an algorithm. They then provide additional tags which will be used by a taxonomy team to improve the algorithm.", "Restricting ourselves to one source of data ensures the uniformity and consistency of annotation that is missing in the other datasets, but it may also make the trained model source-dependent and harm generalization. To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset." ], "extractive_spans": [ "online news websites", "New York Times", "Japan Times" ], "free_form_answer": "", "highlighted_evidence": [ "To create the KPTimes dataset, we collected over half a million newswire articles by crawling selected online news websites.", "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented.", "We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way, first the editors revise a set of tags proposed by an algorithm. They then provide additional tags which will be used by a taxonomy team to improve the algorithm." 
], "extractive_spans": [ "the New York Times" ], "free_form_answer": "", "highlighted_evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way, first the editors revise a set of tags proposed by an algorithm. They then provide additional tags which will be used by a taxonomy team to improve the algorithm.", "Restricting ourselves to one source of data ensures the uniformity and consistency of annotation that is missing in the other datasets, but it may also make the trained model source-dependent and harm generalization. To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset.", "Although in this study we concentrate only on the textual content of the news articles, it is worth noting that the HTML pages also provide additional information that can be helpful in generating keyphrases such as text style properties (e.g. bold, italic), links to related articles, or news categorization (e.g. politics, science, technology)." ], "extractive_spans": [ "New York Times", "Japan Times" ], "free_form_answer": "", "highlighted_evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented.", "To monitor the model's ability to generalize, we gather a secondary source of data. We collected HTML pages from the Japan Times and processed them the same way as described above. 10K more news articles were gathered as the JPTimes dataset.\n\nAlthough in this study we concentrate only on the textual content of the news articles, it is worth noting that the HTML pages also provide additional information that can be helpful in generating keyphrases such as text style properties (e.g. bold, italic), links to related articles, or news categorization (e.g. politics, science, technology)." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "a00a15ea2f5667fde32d3b31feae75915149f62b", "ade6b65e49f201b4625df7e4a33b94b7ce0ba7bf", "c13f2256df441460baebe10b133083dabcd25628" ], "answer": [ { "evidence": [ "Position is a strong feature for keyphrase extraction, simply because texts are usually written so that the most important ideas go first BIBREF15. In news summarization for example, the lead baseline –that is, the first sentences from the document–, while incredibly simple, is still a competitive baseline BIBREF16. Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document.", "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction. It relies on a multipartite graph representation to enforce topical diversity while ranking keyphrase candidates. Just as FirstPhrases, this model is bound to the content of the document and cannot generate missing keyphrases. 
We use the implementation of MultipartiteRank available in pke BIBREF18." ], "extractive_spans": [ "FirstPhrases baseline", "MultipartiteRank BIBREF17" ], "free_form_answer": "", "highlighted_evidence": [ "Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document.", "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Position is a strong feature for keyphrase extraction, simply because texts are usually written so that the most important ideas go first BIBREF15. In news summarization for example, the lead baseline –that is, the first sentences from the document–, while incredibly simple, is still a competitive baseline BIBREF16. Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document.", "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction. It relies on a multipartite graph representation to enforce topical diversity while ranking keyphrase candidates. Just as FirstPhrases, this model is bound to the content of the document and cannot generate missing keyphrases. We use the implementation of MultipartiteRank available in pke BIBREF18." ], "extractive_spans": [ " FirstPhrases baseline", "MultipartiteRank" ], "free_form_answer": "", "highlighted_evidence": [ "Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document.", "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Performance of existing models ::: Models ::: Baseline: FirstPhrase", "Position is a strong feature for keyphrase extraction, simply because texts are usually written so that the most important ideas go first BIBREF15. In news summarization for example, the lead baseline –that is, the first sentences from the document–, while incredibly simple, is still a competitive baseline BIBREF16. Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document.", "Performance of existing models ::: Models ::: Baseline, unsupervised: MultipartiteRank", "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction. It relies on a multipartite graph representation to enforce topical diversity while ranking keyphrase candidates. Just as FirstPhrases, this model is bound to the content of the document and cannot generate missing keyphrases. We use the implementation of MultipartiteRank available in pke BIBREF18." ], "extractive_spans": [ "FirstPhrase", "MultipartiteRank" ], "free_form_answer": "", "highlighted_evidence": [ "Performance of existing models ::: Models ::: Baseline: FirstPhrase", "Similar to the lead baseline, we compute the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document.", "Performance of existing models ::: Models ::: Baseline, unsupervised: MultipartiteRank", "The second baseline we consider, MultipartiteRank BIBREF17, represents the state-of-the-art in unsupervised graph-based keyphrase extraction. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "89c9f3d87bb309e35ad93fc0f5f0e45fde15e7b9", "986cec2a016c9ed46ccc0d97a3290d2a099434c1", "c5b716a919b0914104426c5f1cc4255c83aea604" ], "answer": [ { "evidence": [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur. When properly trained, this model was shown to be very effective in extracting keyphrases from scientific abstracts. CopyRNN has been further extended by BIBREF3 to include correlation constraints among keyphrases which we do not include here as it yields comparable results.", "Two models were trained to bring evidence on the necessity to have datasets from multiple domains. CopySci was trained using scientific abstracts (KP20k) and CopyNews using newspaper articles (KPTimes), the two models use the same architecture." ], "extractive_spans": [ "CopySci was trained using scientific abstracts (KP20k) and CopyNews using newspaper articles (KPTimes)" ], "free_form_answer": "", "highlighted_evidence": [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur. ", "Two models were trained to bring evidence on the necessity to have datasets from multiple domains. CopySci was trained using scientific abstracts (KP20k) and CopyNews using newspaper articles (KPTimes), the two models use the same architecture." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur. When properly trained, this model was shown to be very effective in extracting keyphrases from scientific abstracts. CopyRNN has been further extended by BIBREF3 to include correlation constraints among keyphrases which we do not include here as it yields comparable results." ], "extractive_spans": [ "encoder-decoder model" ], "free_form_answer": "", "highlighted_evidence": [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur. When properly trained, this model was shown to be very effective in extracting keyphrases from scientific abstracts. CopyRNN has been further extended by BIBREF3 to include correlation constraints among keyphrases which we do not include here as it yields comparable results.", "Two models were trained to bring evidence on the necessity to have datasets from multiple domains. CopySci was trained using scientific abstracts (KP20k) and CopyNews using newspaper articles (KPTimes), the two models use the same architecture." 
], "extractive_spans": [ "CopyRNN BIBREF2" ], "free_form_answer": "", "highlighted_evidence": [ "The generative neural model we include in this study is CopyRNN BIBREF2, an encoder-decoder model that incorporates a copying mechanism BIBREF19 in order to be able to generate phrases that rarely occur. When properly trained, this model was shown to be very effective in extracting keyphrases from scientific abstracts. CopyRNN has been further extended by BIBREF3 to include correlation constraints among keyphrases which we do not include here as it yields comparable results.\n\nTwo models were trained to bring evidence on the necessity to have datasets from multiple domains." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "7ed620ee70f694bb288277995291be873471447f", "9594760ce27c75a9fec3a9bcccc87f59b937a0dd", "d70d84a6ca0212ba5ca3e9a7f675547076226e35" ], "answer": [ { "evidence": [ "Frequently used datasets for keyphrase generation have a common characteristic that they are, by and large, made from scholarly documents (abstracts or full texts) paired with non-expert (mostly from authors) annotations. Notable examples of such datasets are SemEval-2010 BIBREF8 and KP20k BIBREF2, which respectively comprises scientific articles and paper abstracts, both about computer science and information technology. Detailed statistics are listed in Table . Only two publicly available datasets, that we are aware of, contain news documents: DUC-2001 BIBREF9 and KPCrowd BIBREF10. Originally created for the DUC evaluation campaign on text summarization BIBREF11, the former is composed of 308 news annotated by graduate students. The latter includes 500 news annotated by crowdsourcing. Both datasets are very small and contain newswire articles from various online sources labelled by non-expert annotators, in this case readers, which is not without issues.", "We explored the KPTimes dataset to better understand how it stands out from the existing ones. First, we looked at how editors tag news articles. Figure illustrates the difference between the annotation behaviour of readers, authors and editors through the number of times that each unique keyphrase is used in the gold standard. We see that non-expert annotators use a larger, less controlled indexing vocabulary, in part because they lack the higher level of domain expertise that editors have. For example, we observe that frequent keyphrases in KPTimes are close to topic descriptors (e.g. “Baseball“, “Politics and Government“) while those appearing only once are very precise (e.g. “Marley's Cafe“, “Catherine E. Connelly“). Annotations in KPTimes are arguably more uniform and consistent, through the use of tag suggestions, which, as we will soon discuss in §SECREF12, makes it easier for supervised approaches to learn a good model." 
], "extractive_spans": [], "free_form_answer": "Existing datasets are annotated by non-experts who use a larger, less controlled indexed vocabulary lacking the domain expertise shown by the editors", "highlighted_evidence": [ "Frequently used datasets for keyphrase generation have a common characteristic that they are, by and large, made from scholarly documents (abstracts or full texts) paired with non-expert (mostly from authors) annotations.", "We see that non-expert annotators use a larger, less controlled indexing vocabulary, in part because they lack the higher level of domain expertise that editors have. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way, first the editors revise a set of tags proposed by an algorithm. They then provide additional tags which will be used by a taxonomy team to improve the algorithm." ], "extractive_spans": [ " news articles are annotated in a semi-automatic way", "first the editors revise a set of tags proposed by an algorithm", "provide additional tags which will be used by a taxonomy team to improve the algorithm" ], "free_form_answer": "", "highlighted_evidence": [ "We use the New York Times as our primary source of data, since the content tagging policy that it applies is rigorous and well-documented. The news articles are annotated in a semi-automatic way, first the editors revise a set of tags proposed by an algorithm. They then provide additional tags which will be used by a taxonomy team to improve the algorithm." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We explored the KPTimes dataset to better understand how it stands out from the existing ones. First, we looked at how editors tag news articles. Figure illustrates the difference between the annotation behaviour of readers, authors and editors through the number of times that each unique keyphrase is used in the gold standard. We see that non-expert annotators use a larger, less controlled indexing vocabulary, in part because they lack the higher level of domain expertise that editors have. For example, we observe that frequent keyphrases in KPTimes are close to topic descriptors (e.g. “Baseball“, “Politics and Government“) while those appearing only once are very precise (e.g. “Marley's Cafe“, “Catherine E. Connelly“). Annotations in KPTimes are arguably more uniform and consistent, through the use of tag suggestions, which, as we will soon discuss in §SECREF12, makes it easier for supervised approaches to learn a good model." ], "extractive_spans": [], "free_form_answer": "Exper annotators use a smaller, more controlled indexing vocabulary.", "highlighted_evidence": [ "We see that non-expert annotators use a larger, less controlled indexing vocabulary, in part because they lack the higher level of domain expertise that editors have." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "", "", "", "", "" ], "question": [ "Do they report results only on English data?", "Where do the news texts come from?", "What baseline is used for this task?", "What type of nerual keyphrase generation models are trained?", "How do the editors' annotations differ from those in existing datasets?" ], "question_id": [ "bc7081aaa207de2362e0bea7bc8108d338aee36f", "c72e05dd41ed5a85335ffeca5a03e71514e60e84", "07edc082eb86aecef3db5cad2534459c1310d6e8", "eaacee4246f003d29a108fe857b5dd317287ecf1", "3ea82a5ca495ffbd1e30e8655aef1be4ba423efe" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "", "", "", "", "" ] }
{ "caption": [ "Table 1: Statistics of available datasets for keyphrase generation. Gold annotation is performed by authors (A), readers (R) or editors (E). The number of documents in the training (#Train), validation (dev) and testing (#Test) splits are shown. The average number of keyphrases (#kp) and words (#words) per document, the average length of keyphrases (len kp) and the ratio of keyphrases in the reference that do not appear in the document (%abs) are computed on the test set.", "Figure 2: Distributions of gold keyphrase assignments.", "Table 2: Performance on benchmark datasets composed of newspaper article, full scientific article and scientific article abstract. The generation models CopySci and CopyNews were trained respectively on KP20k and KPTimes. The dataset presented in this work are written in italic." ], "file": [ "3-Table1-1.png", "4-Figure2-1.png", "5-Table2-1.png" ] }
[ "How do the editors' annotations differ from those in existing datasets?" ]
[ [ "1911.12559-Data analysis-0", "1911.12559-Building the KPTimes dataset-1", "1911.12559-Existing datasets-0" ] ]
[ "Exper annotators use a smaller, more controlled indexing vocabulary." ]
70
1803.09000
WikiRank: Improving Keyphrase Extraction Based on Background Knowledge
Keyphrases are an efficient representation of the main ideas of documents. While background knowledge can provide valuable information about documents, it is rarely incorporated in keyphrase extraction methods. In this paper, we propose WikiRank, an unsupervised method for keyphrase extraction based on background knowledge from Wikipedia. First, we construct a semantic graph for the document. Then we transform the keyphrase extraction problem into an optimization problem on the graph. Finally, we output the optimal keyphrase set. Our method obtains improvements over other state-of-the-art models by more than 2% in F1-score.
{ "paragraphs": [ [ "As the amount of published material rapidly increases, the problem of managing information becomes more difficult. Keyphrases, as concise representations of the main idea of a text, facilitate the management, categorization, and retrieval of information. Automatic keyphrase extraction concerns “the automatic selection of important and topical phrases from the body of a document”. Its goal is to extract a set of phrases that are related to the main topics discussed in a given document BIBREF0.", "Existing methods of keyphrase extraction can be divided into two categories: supervised and unsupervised. Since supervised approaches require human labeling and need various kinds of training data to achieve good generalization performance, more and more researchers focus on unsupervised methods.", "Traditional methods of unsupervised keyphrase extraction mostly focus on obtaining information about the document from word frequency and document structure BIBREF0; however, after years of attempts, the performance seems very hard to improve any further. Based on this observation, it is reasonable to suspect that the document itself possibly cannot provide enough information for the keyphrase extraction task.", "To get good coverage of the main topics of the document, Topical PageRank BIBREF1 started to adopt topical information in automatic keyphrase extraction. The main idea of Topical PageRank is to extract the top topics of the document using LDA, and then sum the scores of a candidate phrase under each topic to obtain the final score. The main problems with Topical PageRank are: First, the topics are too general. Second, since it uses LDA, it only assigns words to several topics without knowing what the topics exactly are. However, the topical information we need for keyphrase extraction should be precise. As shown in Figure , the difference between a correct keyphrase sheep disease and an incorrect keyphrase incurable disease could be small, which is hard to capture with a rough topical categorization approach.", "To overcome the limitations of the aforementioned approaches, we propose WikiRank, an unsupervised automatic keyphrase extraction approach that links semantic meaning to the text.", "The key contributions of this paper can be summarized as follows:" ], [ "Figure shows part of an example document. In this figure, the gold keyphrases are marked in bold, and the keyphrases extracted by the TextRank system are marked with parentheses. Using this example, we illustrate the errors that exist in most present keyphrase extraction systems. Overgeneration errors occur when a system correctly predicts a candidate as a keyphrase because it contains a word that frequently appears in the associated document, but at the same time erroneously outputs other candidates as keyphrases because they contain the same word BIBREF0. It is not easy to reject a non-keyphrase containing a word with a high term frequency: many unsupervised systems score a candidate by summing the score of each of its component words, and many supervised systems use unigrams as features to represent a candidate. To be more concrete, consider the news article in Figure . The word Cattle has a significant presence in the document.
Consequently, the system not only correctly predict British cattle as a keyphrase, but also erroneously predict cattle industry, cattle feed, and cattle brain as keyphrases, yielding overgeneration errors.", "Redundancy errors occur when a system correctly identifies a candidate as a keyphrase, but at the same time outputs a semantically equivalent candidate (e.g., its alias) as a keyphrase. This type of error can be attributed to the failure of a system to determine that two candidates are semantically equivalent. Nevertheless, some researchers may argue that a system should not be penalized for redundancy errors because the extracted candidates are in fact keyphrases. In our example, bovine spongiform encephalopathy and bse refer to the same concept. If a system predicts both of them as keyphrases, it commits a redundancy error.", "Infrequency errors occur when a system fails to identify a keyphrase owing to its infrequent presence in the associated document. Handling infrequency errors is a challenge because state-of-the-art keyphrase extractors rarely predict candidates that appear only once or twice in a document. In the Mad cow disease example, the keyphrase extractor fails to identify export and scrapie as keyphrases, resulting in infrequency errors." ], [ "The WikiRank algorithm includes three steps: (1) Construct the semantic graph including concepts and candidate keyphrases; (2)(optional) Prune the graph with heuristic to filter out candidates which are likely to be erroneously produced; (3) Generate the best set of keyphrases as output." ], [ "This is one of the crucial steps in our paper that connects the plain text with human knowledge, facilitating the understanding of semantics. In this step, we adopt TAGME BIBREF2 to obtain the underlying concepts in documents.", "TAGME is a powerful topic annotator. It identifies meaningful sequences of words in a short text and link them to a pertinent Wikipedia page, as shown in Figure . These links add a new topical dimension to the text that enable us to relate, classify or cluster short texts.", "This step is to filter out unnecessary word tokens from the input document and generate a list of potential keywords using heuristics. As reported in BIBREF3 , most manually assigned keyphrases turn out to be noun groups. We follow BIBREF4 and select candidates lexical unit with the following Penn Treebank tags: NN, NNS, NNP, NNPS, and JJ, which are obtained using the Stanford POS tagger BIBREF5 , and then extract the noun groups whose pattern is zero or more adjectives followed by one or more nouns. The pattern can be represented using regular expressions as follows INLINEFORM0 ", "where JJ indicates adjectives and various forms of nouns are represented using NN, NNS and NNP .", "We build a semantic graph INLINEFORM0 in which the set of vertices INLINEFORM1 is the union of the concept set INLINEFORM2 and the candidate keyphrase set INLINEFORM3 —i.e., INLINEFORM4 . In the graph, each unique concept INLINEFORM5 or candidate keyphrase INLINEFORM6 for document INLINEFORM7 corresponds to a node. The node corresponds to a concept INLINEFORM8 and the node corresponds to a candidate keyphrase INLINEFORM9 are connected by an edge INLINEFORM10 , if the candidate keyphrase INLINEFORM11 contains concept INLINEFORM12 according to the annotation of TAGME. Part of the semantic graph of the sample document is shown in Figure . Concepts corresponding to are shown in Table ." 
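As a concrete illustration of the candidate extraction step above: the exact regular expression is not reproduced here (the INLINEFORM0 placeholder), so the pattern below is a reconstruction of the prose description, zero or more adjectives followed by one or more nouns over the Penn Treebank tags JJ, NN, NNS, NNP and NNPS. A minimal Python sketch assuming pre-tagged tokens; the function name and the one-letter tag encoding are illustrative choices rather than the authors' code.

```python
import re
from typing import List, Tuple

def extract_candidates(tagged_tokens: List[Tuple[str, str]]) -> List[str]:
    """Candidate keyphrases: zero or more adjectives followed by one or more
    nouns, following the noun-group pattern described for WikiRank."""
    # Encode each tag as one letter so a regex can scan the sequence:
    # J = adjective (JJ), N = noun (NN/NNS/NNP/NNPS), O = anything else.
    code = "".join(
        "J" if tag == "JJ" else "N" if tag in ("NN", "NNS", "NNP", "NNPS") else "O"
        for _, tag in tagged_tokens
    )
    candidates = []
    for match in re.finditer(r"J*N+", code):
        start, end = match.span()
        candidates.append(" ".join(tok for tok, _ in tagged_tokens[start:end]))
    return candidates

# Toy usage with already-tagged tokens (a real pipeline would run a POS tagger first).
tagged = [("incurable", "JJ"), ("sheep", "NN"), ("disease", "NN"),
          ("spreads", "VBZ"), ("across", "IN"), ("British", "JJ"),
          ("cattle", "NN"), ("farms", "NNS")]
print(extract_candidates(tagged))   # ['incurable sheep disease', 'British cattle farms']
```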
], [ "According to BIBREF1 , good keyphrases should be relevant to the major topics of the given document, at the same time should also have good coverage of the major topics of the document. Since we represent the topical information with concepts annotated with TAGME, the goal of our approach is to find the set INLINEFORM0 consisting of INLINEFORM1 keyphrases, to cover concepts (1) as important as possible (2) as much as possible.", "Let INLINEFORM0 denote the weight of concept INLINEFORM1 . We compute INLINEFORM2 as the frequency INLINEFORM3 exists in the whole document INLINEFORM4 . To quantify how good the coverage of a keyphrase set INLINEFORM5 is, we compute the overall score of the concepts that INLINEFORM6 contains.", "Consider a subgraph of INLINEFORM0 , INLINEFORM1 , which captures all the concepts connected to INLINEFORM2 . In INLINEFORM3 , the set of vertices INLINEFORM4 is the union of the candidate keyphrase set INLINEFORM5 , and the set INLINEFORM6 of concepts that nodes in INLINEFORM7 connect to. The set of edges INLINEFORM8 of INLINEFORM9 is constructed with the edges connect nodes in INLINEFORM10 with nodes in INLINEFORM11 .", "We set up the score of a concept INLINEFORM0 in the subgraph INLINEFORM1 as following: DISPLAYFORM0 ", "where INLINEFORM0 is the weight of INLINEFORM1 as we defined before, and INLINEFORM2 is the degree of INLINEFORM3 in the subgraph INLINEFORM4 . Essentially, INLINEFORM5 is equal to the frequency that concept INLINEFORM6 is annotated in the keyphrase set INLINEFORM7 .", "The optimization problem is defined as: The goal of the optimization problem is to find the candidate keyphrase set INLINEFORM0 , such that the sum of the scores of the concepts annotated from the phrases in INLINEFORM1 is maximized.", "We propose an algorithm to solve the optimization problem, as shown in Algorithm . In each iteration, we compute the score INLINEFORM0 for all candidate keyphrases INLINEFORM1 and include the INLINEFORM2 with highest score into INLINEFORM3 , in which INLINEFORM4 evaluates the score of concepts added to the new set INLINEFORM5 by adding INLINEFORM6 into INLINEFORM7 ." ], [ "In practice, computing score for all the candidate keyphrases is not always necessary, because some of the candidates are very unlikely to be gold keyphrase that we can remove them from our graph before applying the algorithm to reduce the complexity.", "In this section, we introduce three heuristic pruning steps that significantly reduces the complexity of the optimization problem without reducing much of the accuracy.", "Step 1. Remove the candidate keyphrase INLINEFORM0 from original graph INLINEFORM1 , if it is not connected to any concept.", "The intuition behind this heuristic is straightforward. Since our objective function is constructed over concepts, if a candidate keyphrase INLINEFORM0 doesn't contain any concept, adding it to INLINEFORM1 doesn't bring any improvement to the objective function, so INLINEFORM2 is irrelevant to our optimization process. Pruning INLINEFORM3 would be a wise decision.", "Step 2. Remove the candidate keyphrase INLINEFORM0 from original graph INLINEFORM1 , if it is only connected to one concept that only exists once in the document", "If a candidate keyphrase contains fewer concepts, or the concepts connects to it barely exist in the document, we think this candidate keyphrase contributes less valuable information to the document. 
In practice, there are numerous INLINEFORM0 pairs in graph INLINEFORM1 that are isolated from the center of the graph. We believe they are irrelevant to the major topic of the document.", "Step 3. For a concept INLINEFORM0 connecting to more than INLINEFORM1 candidate keyphrases, remove any candidate keyphrase INLINEFORM2 which (1) does not connect to any other concept, AND (2) ranks lower than INLINEFORM3 th among all candidate keyphrases connected to INLINEFORM4 (in practice, INLINEFORM5 is usually 3 or 4).", "According to equation EQREF10 , if there are already INLINEFORM0 instances of concept INLINEFORM1 in the INLINEFORM2 , adding the INLINEFORM3 th instance of INLINEFORM4 will only contribute INLINEFORM5 to INLINEFORM6 . At the same time, among all the candidate keyphrases connected to concept INLINEFORM7 , our optimization process always chooses the ones that connect to other concepts as well over the ones that do not connect to any other concept. Combining these two observations, a candidate satisfying the constraints of Step 3 is not likely to be picked in the best keyphrase set INLINEFORM8 , so we can prune it before the optimization process." ], [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .", "The Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title. This is a relatively popular dataset for automatic keyphrase extraction, as it was first used by BIBREF3 and later by Mihalcea and BIBREF8 and BIBREF9 .", "The NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages. Each paper has one or more sets of keyphrases assigned by its authors and other annotators. The number of candidate keyphrases that can be extracted is potentially large, making this corpus the most challenging of the four.", "Finally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions. Unlike the other three datasets, the gold standard keys for the ICSI corpus are mostly unigrams." ], [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank. Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. Note that we predict the same number of phrase ( INLINEFORM0 ) for each document while testing all three methods.", "The results show that our method achieves consistent improvements over SingleRank and Topical PageRank on all four corpora." ], [ "We proposed an unsupervised graph-based keyphrase extraction method, WikiRank. This method connects the text with concepts in Wikipedia, thus incorporating the background information into the semantic graph, and finally constructs a set of keyphrases that has optimal coverage of the concepts of the document. Experimental results show the method outperforms two related keyphrase extraction methods.", "We suggest that future work could incorporate other semantic approaches to investigate the keyphrase extraction task. Introducing the results of dependency parsing or semantic parsing (e.g., OntoUSP) in intermediate steps could be helpful." ] ], "section_name": [ "Introduction", "Existing Error Illustration with Example", "Proposed Model", "Graph Construction", "WikiRank", "Approximation Approach with Pre-pruning", "Corpora", "Result", "Conclusion and Future Work" ] }
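A sketch of the greedy selection behind the optimization problem above. The per-concept scoring equation is not reproduced here (the DISPLAYFORM0 placeholder), so the diminishing-returns factor 1/2**(j-1) for the j-th covered occurrence of a concept is an illustrative assumption that only mirrors the prose; the function names and the toy concepts and weights are likewise hypothetical.

```python
from collections import Counter
from typing import Dict, List

def marginal_gain(candidate: str, covered: Counter,
                  phrase_concepts: Dict[str, List[str]],
                  weight: Dict[str, float]) -> float:
    """Extra score obtained by adding `candidate` to the current set; each further
    occurrence of an already-covered concept contributes less (assumed 1/2**(j-1))."""
    gain = 0.0
    for concept, count in Counter(phrase_concepts.get(candidate, [])).items():
        already = covered[concept]
        for j in range(already + 1, already + count + 1):
            gain += weight.get(concept, 0.0) * 0.5 ** (j - 1)
    return gain

def wikirank_greedy(candidates: List[str], phrase_concepts: Dict[str, List[str]],
                    weight: Dict[str, float], k: int) -> List[str]:
    """Greedily pick k keyphrases that maximize (approximate) concept coverage."""
    selected, covered, pool = [], Counter(), list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda p: marginal_gain(p, covered, phrase_concepts, weight))
        selected.append(best)
        pool.remove(best)
        covered.update(phrase_concepts.get(best, []))
    return selected

# Toy example: concepts per candidate phrase, weights = concept frequency in the document.
phrase_concepts = {
    "british cattle": ["Cattle"], "cattle industry": ["Cattle"],
    "mad cow disease": ["Bovine_spongiform_encephalopathy"],
    "bse": ["Bovine_spongiform_encephalopathy"],
    "sheep disease": ["Sheep", "Disease"],
}
weight = {"Cattle": 5.0, "Bovine_spongiform_encephalopathy": 3.0, "Sheep": 2.0, "Disease": 4.0}
print(wikirank_greedy(list(phrase_concepts), phrase_concepts, weight, k=3))
# Diminishing returns steer the selection away from piling up 'cattle ...' variants.
```

The pre-pruning heuristics described above would simply shrink the candidate pool before this greedy loop runs.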
{ "answers": [ { "annotation_id": [ "4549aa4cb737ec8f4b70129d8d2c30b020a63396", "6ecf504f7cd59d37af69baa8550cad539dae637a", "78da5e99bda73a9dee63324e9adcc8bb0ac5873d" ], "answer": [ { "evidence": [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .", "The Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title. This is a relatively popular dataset for automatic keyphrase extraction, as it was first used by BIBREF3 and later by Mihalcea and BIBREF8 and BIBREF9 .", "The NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages. Each paper has one or more sets of keyphrases assigned by its authors and other annotators. The number of candidate keyphrases that can be extracted is potentially large, making this corpus the most challenging of the four.", "Finally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions. Unlike the other three datasets, the gold standard keys for the ICSI corpus are mostly unigrams." ], "extractive_spans": [ "DUC-2001 dataset BIBREF6", "Inspec dataset", "NUS Keyphrase Corpus BIBREF10", "ICSI Meeting Corpus" ], "free_form_answer": "", "highlighted_evidence": [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .\n\n", "The Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title.", "The NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages.", "Finally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .", "The Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title. This is a relatively popular dataset for automatic keyphrase extraction, as it was first used by BIBREF3 and later by Mihalcea and BIBREF8 and BIBREF9 .", "The NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages. Each paper has one or more sets of keyphrases assigned by its authors and other annotators. The number of candidate keyphrases that can be extracted is potentially large, making this corpus the most challenging of the four.", "Finally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions. Unlike the other three datasets, the gold standard keys for the ICSI corpus are mostly unigrams." ], "extractive_spans": [ "DUC-2001", "Inspec ", " NUS Keyphrase Corpus", " ICSI Meeting Corpus " ], "free_form_answer": "", "highlighted_evidence": [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .\n\nThe Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title. This is a relatively popular dataset for automatic keyphrase extraction, as it was first used by BIBREF3 and later by Mihalcea and BIBREF8 and BIBREF9 .\n\nThe NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages. Each paper has one or more sets of keyphrases assigned by its authors and other annotators. 
The number of candidate keyphrases that can be extracted is potentially large, making this corpus the most challenging of the four.\n\nFinally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions. Unlike the other three datasets, the gold standard keys for the ICSI corpus are mostly unigrams." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .", "The NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages. Each paper has one or more sets of keyphrases assigned by its authors and other annotators. The number of candidate keyphrases that can be extracted is potentially large, making this corpus the most challenging of the four.", "The Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title. This is a relatively popular dataset for automatic keyphrase extraction, as it was first used by BIBREF3 and later by Mihalcea and BIBREF8 and BIBREF9 .", "Finally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions. Unlike the other three datasets, the gold standard keys for the ICSI corpus are mostly unigrams." ], "extractive_spans": [ "DUC-2001 dataset", "Inspec dataset", "NUS Keyphrase Corpus", "ICSI Meeting Corpus" ], "free_form_answer": "", "highlighted_evidence": [ "The DUC-2001 dataset BIBREF6 , which is a collection of 308 news articles, is annotated by BIBREF7 .", "The NUS Keyphrase Corpus BIBREF10 includes 211 scientific conference papers with lengths between 4 to 12 pages. ", "The Inspec dataset is a collection of 2,000 abstracts from journal papers including the paper title. ", "Finally, the ICSI Meeting Corpus (Janin et al., 2003), which is annotated by Liu et al. (2009a), includes 161 meeting transcriptions. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "13a7f0e7312e1f758eb63dbdbc35ad5f1cdfdf10", "91e6a5a3d8663512e7fbe92712ac38afbb9ec6fe", "ba4ad8fa3507981e30adc8b1a0a37aecd37f4f98" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora", "For comparing with our system, we reimplemented SingleRank and Topical PageRank. Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. Note that we predict the same number of phrase ( INLINEFORM0 ) for each document while testing all three methods." ], "extractive_spans": [], "free_form_answer": "On DUC 27.53, on Inspec 27.01, on ICSI 4.30, and on Nus 9.10", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora", "Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. 
" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora" ], "extractive_spans": [], "free_form_answer": "27.53, 27.01, 4.30 and 9.10 for DUC, Inspec, ICSI and Nus datasets respectively.", "highlighted_evidence": [ "FLOAT SELECTED: Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank. Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. Note that we predict the same number of phrase ( INLINEFORM0 ) for each document while testing all three methods.", "FLOAT SELECTED: Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora" ], "extractive_spans": [], "free_form_answer": "F1 score their system achieved is 27.53, 27.01, 4.30 and 9.10 on DUC, Inspec, ICSI and NUS dataset respectively.", "highlighted_evidence": [ "Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system.", "FLOAT SELECTED: Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "519460b39aec0ab0ece7357e21c49b9f62463269", "9efe835b45597dfe19f9fa158b5aa8ca7e3a1db6", "b4f0dd9cb950626573a71d82982d7a832cada055" ], "answer": [ { "evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank. Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. Note that we predict the same number of phrase ( INLINEFORM0 ) for each document while testing all three methods." ], "extractive_spans": [ " SingleRank and Topical PageRank" ], "free_form_answer": "", "highlighted_evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank. Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. Note that we predict the same number of phrase ( INLINEFORM0 ) for each document while testing all three methods." ], "extractive_spans": [ "SingleRank and Topical PageRank" ], "free_form_answer": "", "highlighted_evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank. Table shows the result of our reimplementation of SingleRank and Topical PageRank, as well as the result of our system. Note that we predict the same number of phrase ( INLINEFORM0 ) for each document while testing all three methods." ], "extractive_spans": [ "SingleRank", "Topical PageRank" ], "free_form_answer": "", "highlighted_evidence": [ "For comparing with our system, we reimplemented SingleRank and Topical PageRank. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "what dataset did they use?", "what was their model's f1 score?", "what are the state of the art models?" ], "question_id": [ "e90425ac05a15dc145bbf3034e78b56e7cec36ac", "b677952cabfec0150e028530d5d4d708d796eedc", "d7799d26fe39302c4aff5b530aa691e8653fffe8" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: Part of the Sample Document 2 Bold: Gold Keyphrase In parentheses: Keyphrase generated by TextRank algorithm Underlined: Keyphrase annotated to Wikipedia Entity by TagMe", "Figure 2: Part of the Semantic Graph of the Sample Document Circle: Concept Rectangle: Candidate Keyphrase", "Table 1: Part of the Concepts Annotated from the Sample Document", "Table 2: The Result of our System as well as the Reimplementation of SingleRank and Topical PageRank on four Corpora" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "3-Table1-1.png", "4-Table2-1.png" ] }
[ "what was their model's f1 score?" ]
[ [ "1803.09000-Result-0", "1803.09000-4-Table2-1.png" ] ]
[ "F1 score their system achieved is 27.53, 27.01, 4.30 and 9.10 on DUC, Inspec, ICSI and NUS dataset respectively." ]
72
1909.06708
Hint-Based Training for Non-Autoregressive Machine Translation
Due to the unparallelizable nature of the autoregressive factorization, AutoRegressive Translation (ART) models have to generate tokens sequentially during decoding and thus suffer from high inference latency. Non-AutoRegressive Translation (NART) models were proposed to reduce the inference time, but could only achieve inferior translation accuracy. In this paper, we propose a novel approach that leverages hints from hidden states and word alignments to help the training of NART models. The results show significant improvements over previous NART models on the WMT14 En-De and De-En datasets, and are even comparable to a strong LSTM-based ART baseline while being one order of magnitude faster in inference.
{ "paragraphs": [ [ "Neural machine translation has attracted much attention in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3. Given a sentence $x=(x_1, \\dots ,x_{T_x})$ from the source language, the straight-forward way for translation is to generate the target words $y=(y_1, \\dots , y_{T_y})$ one by one from left to right. This is also known as the AutoRegressive Translation (ART) models, in which the joint probability is decomposed into a chain of conditional probabilities:", "While the ART models have achieved great success in terms of translation quality, the time consumption during inference is still far away from satisfactory. During training, the predictions at different positions can be estimated in parallel since the ground truth pair $(x,y)$ is exposed to the model. However, during inference, the model has to generate tokens sequentially as $y_{<t}$ must be inferred on the fly. Such autoregressive behavior becomes the bottleneck of the computational time BIBREF4.", "In order to speed up the inference process, a line of works begin to develop non-autoregressive translation models. These models break the autoregressive dependency by decomposing the joint probability with", "The lost of autoregressive dependency largely hurt the consistency of the output sentences, increase the difficulty in the learning process and thus lead to a low quality translation. Previous works mainly focus on adding different components into the NART model to improve the expressiveness of the network structure to overcome the loss of autoregressive dependency BIBREF5, BIBREF6, BIBREF7. However, the computational overhead of new components will hurt the inference speed, contradicting with the goal of the NART models: to parallelize and speed up neural machine translation models.", "To tackle this, we proposed a novel hint-based method for NART model training. We first investigate the causes of the poor performance of the NART model. Comparing with the ART model, we find that: (1) the positions where the NART model outputs incoherent tokens will have very high hidden states similarity; (2) the attention distributions of the NART model are more ambiguous than those of ART model. Therefore, we design two kinds of hints from the hidden states and attention distributions of the ART model to help the training of the NART model. The experimental results show that our model achieves significant improvement over the NART baseline models and is even comparable to a strong ART baseline in BIBREF4." ], [ "In this section, we first describe the observations on the ART and NART models, and then discuss what kinds of information can be used as hints to help the training of the NART model. We follow the network structure in BIBREF8, use a copy of the source sentence as decoder input, remove the attention masks in decoder self-attention layers and add a positional attention layer as suggested in BIBREF5. We provide a visualization of ART and NART models we used in Figure FIGREF11 and a detailed description of the model structure in Appendix." ], [ "According to the case study in BIBREF5, the translations of the NART models contain incoherent phrases (e.g. repetitive words) and miss meaningful tokens on the source side, while these patterns do not commonly appear in ART models. After some empirical study, we find two non-obvious facts that lead to this phenomenon.", "First, we visualize the cosine similarities between decoder hidden states of a certain layer in both ART and NART models for sampled cases. 
Mathematically, for a set of hidden states $r_1, \\ldots , r_T$, the pairwise cosine similarity can be derived by $\\cos _{ij} = {\\left<r_i, r_j\\right>}/{(\\Vert r_i\\Vert \\cdot \\Vert r_j\\Vert )}.$ We then plot the heatmap of the resulting matrix $\\cos $. A typical example is shown in Figure FIGREF4, where the cosine similarities in the NART model are larger than those of the ART model, indicating that the hidden states across positions in the NART model are “similar”. Positions with highly-correlated hidden states tend to generate the same word and make the NART model output repetitive tokens, e.g., the yellow area on the top-left of Figure FIGREF4(b), while this does not happen in the ART model (Figure FIGREF4(a)). According to our statistics, 70% of the cosine similarities between hidden states in the ART model are less than 0.25, and 95% are less than 0.5.", "Second, we visualize the encoder-decoder attentions for sampled cases, shown in Figure FIGREF6. Good attentions between the source and target sentences are usually considered to lead to accurate translation while poor ones may cause wrong output tokens BIBREF0. In Figure FIGREF6(b), the attentions of the ART model almost covers all source tokens, while the attentions of the NART model do not cover “farm” but with two “morning”. This directly makes the translation result worse in the NART model. These phenomena inspire us to use the intermediate hidden information in the ART model to guide the learning process of the NART model." ], [ "Our study motivates us to leverage the intermediate hidden information from an ART model to improve the NART model. We focus on how to define hints from a well-trained ART teacher model and use it to guide the training process of a NART student model. We study layer-to-layer hints and assume both the teacher and student models have an $M$-layer encoder and an $N$-layer decoder, despite the difference in stacked components.", "Without the loss of generality, we discuss our method on a given paired sentence $(x,y)$. In real experiments, losses are averaged over all training data. For the teacher model, we use $a_{t,l,h}^\\mathit {tr}$ as the encoder-to-decoder attention distribution of $h$-th head in the $l$-th decoder layer at position $t$, and use $r_{t,l}^\\mathit {tr}$ as the output of the $l$-th decoder layer after feed forward network at position $t$. Correspondingly, $a_{t,l,h}^\\mathit {st}$ and $r_{t,l}^\\mathit {st}$ are used for the student model. We propose a hint-based training framework that contains two kinds of hints:" ], [ "The discrepancy of hidden states motivates us to use hidden states of the ART model as a hint for the learning process of the NART model. One straight-forward method is to regularize the $L_1$ or $L_2$ distance between each pair of hidden states in ART and NART models. However, since the network components are quite different in ART and NART models, applying the straight-forward regression on hidden states hurts the learning process and fails. Therefore, we design a more implicit loss to help the student refrain from the incoherent translation results by acting towards the teacher in the hidden-state level:", "where $d_\\mathit {st} = \\cos (r_{s, l}^\\mathit {st},r_{t, l}^\\mathit {st})$, $d_\\mathit {tr} = \\cos (r_{s, l}^\\mathit {tr},r_{t, l}^\\mathit {tr})$, and $\\phi $ is a penalty function. In particular, we let", "where $-1\\le \\gamma _\\mathit {st}, \\gamma _\\mathit {tr}\\le 1$ are two thresholds controlling whether to penalize or not. 
We design this loss since we only want to penalize hidden states that are highly similar in the NART model, but not similar in the ART model. We have tested several choices of $-\\log (1-d_\\mathit {st})$, e.g., $\\exp (d_\\mathit {st})$, from which we find similar experimental results." ], [ "We observe that meaningful words in the source sentence are sometimes untranslated by the NART model, and the corresponding positions often suffer from ambiguous attention distributions. Therefore, we use the word alignment information from the ART model to help the training of the NART model.", "In particular, we minimize KL-divergence between the per-head encoder-to-decoder attention distributions of the teacher and the student to encourage the student to have similar word alignments to the teacher model, i.e.", "Our final training loss $\\mathcal {L}$ is a weighted sum of two parts stated above and the negative log-likelihood loss $\\mathcal {L}_\\mathit {nll}$ defined on bilingual sentence pair $(x, y)$, i.e.", "where $\\lambda $ and $\\mu $ are hyperparameters controlling the weight of different loss terms." ], [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. To compare with previous works, we also reverse WMT14 English-to-German dataset and obtain WMT14 German-to-English dataset.", "We pretrain Transformer BIBREF8 as the teacher model on each dataset, which achieves 33.26/27.30/31.29 in terms of BLEU BIBREF11 in IWSLT14 De-En, WMT14 En-De and De-En test sets. The student model shares the same number of layers in encoder/decoder, size of hidden states/embeddings and number of heads as the teacher models (Figure FIGREF11). Following BIBREF5, BIBREF12, we replace the target sentences by the decoded output of the teacher models.", "Hyperparameters ($\\gamma _\\mathit {st}, \\gamma _\\mathit {tr}, \\lambda , \\mu $) for hint-based learning are determined to make the scales of three loss components similar after initialization. We also employ label smoothing of value $\\epsilon _\\mathit {ls}=0.1$ BIBREF13 in all experiments. We use Adam optimizer and follow the setting in BIBREF8. Models for WMT14/IWSLT14 tasks are trained on 8/1 NVIDIA M40 GPUs respectively. The model is based on the open-sourced tensor2tensor BIBREF14. More settings can be found in Appendix." ], [ "During training, $T_y$ does not need to be predicted as the target sentence is given. During testing, we have to predict the length of the target sentence for each source sentence. In many languages, the length of the target sentence can be roughly estimated from the length of the source sentence. We choose a simple method to avoid the computational overhead, which uses input length to determine target sentence length: $T_y = T_x + C$, where $C$ is a constant bias determined by the average length differences between the source and target training sentences. We can also predict the target length ranging from $[(T_x+C)-B, (T_x+C)+B]$, where $B$ is the halfwidth. By doing this, we can obtain multiple translation results with different lengths. Note that we choose this method only to show the effectiveness of our proposed method and a more advanced length estimation method can be used to further improve the performance.", "Once we have multiple translation results, we additionally use our ART teacher model to evaluate each result and select the one that achieves the highest probability. 
As the evaluation is fully parallelizable (since it is identical to the parallel training of the ART model), this rescoring operation will not hurt the non-autoregressive property of the NART model." ], [ "We compare our model with several baselines, including three ART models, the fertility based (FT) NART model BIBREF5, the deterministic iterative refinement based (IR) NART model BIBREF6, and the Latent Transformer BIBREF7 which is not fully non-autoregressive by incorporating an autoregressive sub-module in the NART model architecture.", "The results are shown in the Table TABREF15. Across different datasets, our method achieves significant improvements over previous non-autoregressive models. Specifically, our method outperforms fertility based NART model with 6.54/7.11 BLEU score improvements on WMT En-De and De-En tasks in similar settings and achieves comparable results with state-of-the-art LSTM-based model on WMT En-De task. Furthermore, our model achieves a speedup of 30.2 (output a single sentence) or 17.8 (teacher rescoring) times over the ART counterparts. Note that our speedups significantly outperform all previous works, because of our lighter design of the NART model: without any computationally expensive module trying to improve the expressiveness.", "We also visualize the hidden state cosine similarities and attention distributions for the NART model with hint-based training, as shown in Figure FIGREF4(c) and FIGREF6(c). With hints from hidden states, the hidden states similarities of the NART model decrease in general, and especially for the positions where the original NART model outputs incoherent phrases. The attention distribution of the NART model after hint-based training is more similar to the ART teacher model and less ambiguous comparing to the NART model without hints.", "According to our empirical analysis, the percentage of repetitive words drops from 8.3% to 6.5% by our proposed methods on the IWSLT14 De-En test set, which is a 20%+ reduction. This shows that our proposed method effectively improve the quality of the translation outputs. We also provide several case studies in Appendix.", "Finally, we conduct an ablation study on IWSLT14 De-En task. As shown in Table TABREF18, the hints from word alignments provide an improvement of about 1.6 BLEU points, and the hints from hidden states improve the results by about 0.8 BLEU points. We also test these models on a subsampled set whose source sentence lengths are at least 40. Our model outperforms the baseline model by more than 3 BLEU points (20.63 v.s. 17.48)." ], [ "In this paper, we proposed to use hints from a well-trained ART model to enhance the training of NART models. Our results on WMT14 En-De and De-En significantly outperform previous NART baselines, with one order of magnitude faster in inference than ART models. In the future, we will focus on designing new architectures and training methods for NART models to achieve comparable accuracy as ART models." ], [ "This work is supported by National Key R&D Program of China (2018YFB1402600), NSFC (61573026) and BJNSF (L172037) and a grant from Microsoft Research Asia. We would like to thank the anonymous reviewers for their valuable comments on our paper." 
] ], "section_name": [ "Introduction", "Approach", "Approach ::: Observation: Illed States and Attentions", "Approach ::: Hints from the ART teacher Model", "Approach ::: Hints from the ART teacher Model ::: Hints from hidden states", "Approach ::: Hints from the ART teacher Model ::: Hints from word alignments", "Experiments ::: Experimental Settings", "Experiments ::: Inference", "Experiments ::: Experimental Results", "Conclusion", "Acknowledgment" ] }
{ "answers": [ { "annotation_id": [ "1b4b98e0c5559a6db1334a6e37b8dd1214849e3d", "2ebb533776b993a4723f989522a5037c8c88b5c3", "faa6ecc535db10c99c57a9276dde81218babf75d" ], "answer": [ { "evidence": [ "We observe that meaningful words in the source sentence are sometimes untranslated by the NART model, and the corresponding positions often suffer from ambiguous attention distributions. Therefore, we use the word alignment information from the ART model to help the training of the NART model." ], "extractive_spans": [ "we use the word alignment information from the ART model" ], "free_form_answer": "", "highlighted_evidence": [ "Therefore, we use the word alignment information from the ART model to help the training of the NART model." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We observe that meaningful words in the source sentence are sometimes untranslated by the NART model, and the corresponding positions often suffer from ambiguous attention distributions. Therefore, we use the word alignment information from the ART model to help the training of the NART model.", "In particular, we minimize KL-divergence between the per-head encoder-to-decoder attention distributions of the teacher and the student to encourage the student to have similar word alignments to the teacher model, i.e." ], "extractive_spans": [ "In particular, we minimize KL-divergence between the per-head encoder-to-decoder attention distributions of the teacher and the student to encourage the student to have similar word alignments to the teacher model," ], "free_form_answer": "", "highlighted_evidence": [ "We observe that meaningful words in the source sentence are sometimes untranslated by the NART model, and the corresponding positions often suffer from ambiguous attention distributions. Therefore, we use the word alignment information from the ART model to help the training of the NART model.", "In particular, we minimize KL-divergence between the per-head encoder-to-decoder attention distributions of the teacher and the student to encourage the student to have similar word alignments to the teacher model, i.e." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Finally, we conduct an ablation study on IWSLT14 De-En task. As shown in Table TABREF18, the hints from word alignments provide an improvement of about 1.6 BLEU points, and the hints from hidden states improve the results by about 0.8 BLEU points. We also test these models on a subsampled set whose source sentence lengths are at least 40. Our model outperforms the baseline model by more than 3 BLEU points (20.63 v.s. 17.48)." ], "extractive_spans": [ "we conduct an ablation study on IWSLT14 De-En task. As shown in Table TABREF18, the hints from word alignments provide an improvement of about 1.6 BLEU points, and the hints from hidden states improve the results by about 0.8 BLEU points. We also test these models on a subsampled set whose source sentence lengths are at least 40. Our model outperforms the baseline model by more than 3 BLEU points (20.63 v.s. 17.48)." ], "free_form_answer": "", "highlighted_evidence": [ "we conduct an ablation study on IWSLT14 De-En task. As shown in Table TABREF18, the hints from word alignments provide an improvement of about 1.6 BLEU points, and the hints from hidden states improve the results by about 0.8 BLEU points. We also test these models on a subsampled set whose source sentence lengths are at least 40. Our model outperforms the baseline model by more than 3 BLEU points (20.63 v.s. 17.48)." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "annotation_id": [ "b7a2531758fee0a570d1da0a110a473f2e15a428", "bd7b8812e9e6ab573811ed5a03ba5af462ae6eeb", "fc3ff0062d877762c9c884e099b1a1f327161cf9" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Performance on WMT14 En-De, De-En and IWSLT14 De-En tasks. “/” means non-reportable." ], "extractive_spans": [], "free_form_answer": "784 miliseconds", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Performance on WMT14 En-De, De-En and IWSLT14 De-En tasks. “/” means non-reportable." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "While the ART models have achieved great success in terms of translation quality, the time consumption during inference is still far away from satisfactory. During training, the predictions at different positions can be estimated in parallel since the ground truth pair $(x,y)$ is exposed to the model. However, during inference, the model has to generate tokens sequentially as $y_{<t}$ must be inferred on the fly. Such autoregressive behavior becomes the bottleneck of the computational time BIBREF4.", "The results are shown in the Table TABREF15. Across different datasets, our method achieves significant improvements over previous non-autoregressive models. Specifically, our method outperforms fertility based NART model with 6.54/7.11 BLEU score improvements on WMT En-De and De-En tasks in similar settings and achieves comparable results with state-of-the-art LSTM-based model on WMT En-De task. Furthermore, our model achieves a speedup of 30.2 (output a single sentence) or 17.8 (teacher rescoring) times over the ART counterparts. Note that our speedups significantly outperform all previous works, because of our lighter design of the NART model: without any computationally expensive module trying to improve the expressiveness.", "FLOAT SELECTED: Table 1: Performance on WMT14 En-De, De-En and IWSLT14 De-En tasks. “/” means non-reportable." ], "extractive_spans": [ "While the ART models have achieved great success in terms of translation quality, the time consumption during inference is still far away from satisfactory" ], "free_form_answer": "", "highlighted_evidence": [ "While the ART models have achieved great success in terms of translation quality, the time consumption during inference is still far away from satisfactory. During training, the predictions at different positions can be estimated in parallel since the ground truth pair $(x,y)$ is exposed to the model. However, during inference, the model has to generate tokens sequentially as $y_{", "Furthermore, our model achieves a speedup of 30.2 (output a single sentence) or 17.8 (teacher rescoring) times over the ART counterparts", "FLOAT SELECTED: Table 1: Performance on WMT14 En-De, De-En and IWSLT14 De-En tasks. “/” means non-reportable." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "annotation_id": [ "85671354b0e3e3cb410ec10127f02f8730dac160", "a7ea236a48b65d1408a37ba2fc4897049e6fe6d9", "d93de16829a31b34b0330fcb17cafacc44af0e4b" ], "answer": [ { "evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. To compare with previous works, we also reverse WMT14 English-to-German dataset and obtain WMT14 German-to-English dataset.", "We pretrain Transformer BIBREF8 as the teacher model on each dataset, which achieves 33.26/27.30/31.29 in terms of BLEU BIBREF11 in IWSLT14 De-En, WMT14 En-De and De-En test sets. The student model shares the same number of layers in encoder/decoder, size of hidden states/embeddings and number of heads as the teacher models (Figure FIGREF11). Following BIBREF5, BIBREF12, we replace the target sentences by the decoded output of the teacher models." ], "extractive_spans": [ "BLEU " ], "free_form_answer": "", "highlighted_evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. To compare with previous works, we also reverse WMT14 English-to-German dataset and obtain WMT14 German-to-English dataset.", "We pretrain Transformer BIBREF8 as the teacher model on each dataset, which achieves 33.26/27.30/31.29 in terms of BLEU BIBREF11 in IWSLT14 De-En, WMT14 En-De and De-En test sets. The student model shares the same number of layers in encoder/decoder, size of hidden states/embeddings and number of heads as the teacher models (Figure FIGREF11). Following BIBREF5, BIBREF12, we replace the target sentences by the decoded output of the teacher models." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The results are shown in the Table TABREF15. Across different datasets, our method achieves significant improvements over previous non-autoregressive models. Specifically, our method outperforms fertility based NART model with 6.54/7.11 BLEU score improvements on WMT En-De and De-En tasks in similar settings and achieves comparable results with state-of-the-art LSTM-based model on WMT En-De task. Furthermore, our model achieves a speedup of 30.2 (output a single sentence) or 17.8 (teacher rescoring) times over the ART counterparts. Note that our speedups significantly outperform all previous works, because of our lighter design of the NART model: without any computationally expensive module trying to improve the expressiveness." ], "extractive_spans": [ "BLEU score" ], "free_form_answer": "", "highlighted_evidence": [ "Specifically, our method outperforms fertility based NART model with 6.54/7.11 BLEU score improvements on WMT En-De and De-En tasks in similar settings and achieves comparable results with state-of-the-art LSTM-based model on WMT En-De task." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The results are shown in the Table TABREF15. Across different datasets, our method achieves significant improvements over previous non-autoregressive models. 
Specifically, our method outperforms fertility based NART model with 6.54/7.11 BLEU score improvements on WMT En-De and De-En tasks in similar settings and achieves comparable results with state-of-the-art LSTM-based model on WMT En-De task. Furthermore, our model achieves a speedup of 30.2 (output a single sentence) or 17.8 (teacher rescoring) times over the ART counterparts. Note that our speedups significantly outperform all previous works, because of our lighter design of the NART model: without any computationally expensive module trying to improve the expressiveness.", "According to our empirical analysis, the percentage of repetitive words drops from 8.3% to 6.5% by our proposed methods on the IWSLT14 De-En test set, which is a 20%+ reduction. This shows that our proposed method effectively improve the quality of the translation outputs. We also provide several case studies in Appendix." ], "extractive_spans": [], "free_form_answer": "BLUE and the percentage of repetitive words", "highlighted_evidence": [ "Across different datasets, our method achieves significant improvements over previous non-autoregressive models. Specifically, our method outperforms fertility based NART model with 6.54/7.11 BLEU score improvements on WMT En-De and De-En tasks in similar settings and achieves comparable results with state-of-the-art LSTM-based model on WMT En-De task. ", "According to our empirical analysis, the percentage of repetitive words drops from 8.3% to 6.5% by our proposed methods on the IWSLT14 De-En test set, which is a 20%+ reduction. This shows that our proposed method effectively improve the quality of the translation outputs. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] }, { "annotation_id": [ "2e84a95d82e9d7c0b07b2a0c90f5bb8398a713f5", "3c986df836bbb7a854e6e967876c5b6d1a9557b0", "c1fc2f9b5805acf7bafa7150d1b4343a9afa1861" ], "answer": [ { "evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. To compare with previous works, we also reverse WMT14 English-to-German dataset and obtain WMT14 German-to-English dataset." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. To compare with previous works, we also reverse WMT14 English-to-German dataset and obtain WMT14 German-to-English dataset." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. " ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10. 
To compare with previous works, we also reverse WMT14 English-to-German dataset and obtain WMT14 German-to-English dataset." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "The evaluation is on two widely used public machine translation datasets: IWSLT14 German-to-English (De-En) BIBREF9, BIBREF1 and WMT14 English-to-German (En-De) dataset BIBREF4, BIBREF10." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "13cf1e8b9ebc4a1a938d7635c0b697ed72287505", "542e51531e73dbff79c25baf0ab8e8e8f2487731", "d1e5ffb8001ecb7dd7bbf289d7f9d29dd5b90102" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "258ee4069f740c400c0049a2580945a1cc7f044c", "a0b403873302db7cada39008f04d01155ef68f4f" ] } ], "nlp_background": [ "five", "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no", "no" ], "question": [ "How do you know the word alignments are correct?", "How slow is the unparallelizable ART model in the first place? ", "What metric is used to measure translation accuracy?", "Were any datasets other than WMT used to test the model?", "Are the results applicable to other language pairs than German-English?" ], "question_id": [ "2711ae6dd532d136295c95253dbf202e37ecd3e7", "96356c1affc56178b3099ce4b4aece995032e0ff", "92fc94a4999d1b25a0593904025eb7b8953bb28b", "e56c1f0e9eabda41f929d0dfd5cfa50edd69fa89", "a86758696926f2db71f982dc1a4fa4404988544e" ], "question_writer": [ "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255", "2a18a3656984d04249f100633e4c1003417a2255" ], "search_query": [ "", "", "", "", "" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Case study: the above three figures visualize the hidden state cosine similarities of different models. The axes correspond to the generated target tokens. Each pixel shows the cosine similarities cosij between the last layer hidden states of the i-th and j-th generated tokens, where the diagonal pixel will always be 1.0.", "Figure 2: Case study: the above three figures visualize the encoder-decoder attention weights of different models. The x-axis and y-axis correspond to the source and generated target tokens respectively. The attention distribution is from a single head of the third layer encoder-decoder attention, which is the most informative one according to our observation. Each pixel shows attention weights αij between the i-th source token and j-th target token.", "Figure 3: Hint-based training from ART model to NART model.", "Table 1: Performance on WMT14 En-De, De-En and IWSLT14 De-En tasks. “/” means non-reportable.", "Table 2: Ablation studies on IWSLT14 De-En. Results are BLEU scores without teacher rescoring.", "Table 3: Cases on IWSLT14 De-En." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Table1-1.png", "5-Table2-1.png", "9-Table3-1.png" ] }
[ "How slow is the unparallelizable ART model in the first place? ", "What metric is used to measure translation accuracy?" ]
[ [ "1909.06708-5-Table1-1.png", "1909.06708-Introduction-1", "1909.06708-Experiments ::: Experimental Results-1" ], [ "1909.06708-Experiments ::: Experimental Settings-1", "1909.06708-Experiments ::: Experimental Results-3", "1909.06708-Experiments ::: Experimental Results-1", "1909.06708-Experiments ::: Experimental Settings-0" ] ]
[ "784 miliseconds", "BLUE and the percentage of repetitive words" ]
73
1705.10754
A Low Dimensionality Representation for Language Variety Identification
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with its specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ~35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality --and increasing the big data suitability-- to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages.
{ "paragraphs": [ [ "Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). Although at first sight language variety identification may seem a classical text classification problem, cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words. Due to that, we can consider language variety identification as a double problem of text classification and author profiling, where information about how language is shared by people may help to discriminate among classes of authors depending on their language variety.", "This task is specially important in social media. Despite the vastness and accessibility of the Internet destroyed frontiers among regions or traits, companies are still very interested in author profiling segmentation. For example, when a new product is launched to the market, knowing the geographical distribution of opinions may help to improve marketing campaigns. Or given a security threat, knowing the possible cultural idiosyncrasies of the author may help to better understand who could have written the message.", "Language variety identification is a popular research topic of natural language processing. In the last years, several tasks and workshops have been organized: the Workshop on Language Technology for Closely Related Languages and Language Variants @ EMNLP 2014; the VarDial Workshop @ COLING 2014 - Applying NLP Tools to Similar Languages, Varieties and Dialects; and the LT4VarDial - Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialect @ RANLP BIBREF0 BIBREF1 . We can find also several works focused on the task. In BIBREF2 the authors addressed the problem of identifying Arabic varieties in blogs and social fora. They used character $n$ -gram features to discriminate between six different varieties and obtained accuracies between 70%-80%. Similarly, BIBREF3 collected 1,000 news articles of two varieties of Portuguese. They applied different features such as word and character $n$ -grams and reported accuracies over 90%. With respect to the Spanish language, BIBREF4 focused on varieties from Argentina, Chile, Colombia, Mexico and Spain in Twitter. They used meta-learning and combined four types of features: i) character $n$ -gram frequency profiles, ii) character $n$ -gram language models, iii) Lempel-Ziv-Welch compression and iv) syllable-based language models. They obtained an interesting 60%-70% accuracy of classification.", "We are interested in discovering which kind of features capture higher differences among varieties. Our hypothesis is that language varieties differ mainly in lexicographic clues. We show an example in Table 1 .", " In this work we focus on the Spanish language variety identification. We differentiate from the previous works as follows: i) instead of $n$ -gram based representations, we propose a low dimensionality representation that is helpful when dealing with big data in social media; ii) in order to reduce the possible over-fitting, our training and test partitions do not share any author of instance between them; and iii) in contrast to the Twitter dataset of BIBREF4 , we will make available our dataset to the research community." 
], [ "The key aspect of the low dimensionality representation (LDR) is the use of weights to represent the probability of each term to belong to each one of the different language varieties. We assume that the distribution of weights for a given document should be closer to the weights of its corresponding language variety. Formally, the LDR is estimated as follows:" ], [ "In this section, we describe the corpus and the alternative representations that we employ in this work." ], [ "We have created the HispaBlogs dataset by collecting posts from Spanish blogs from five different countries: Argentina, Chile, Mexico, Peru and Spain. For each country, there are 450 and 200 blogs respectively for training and test, ensuring that each author appears only in one set. Each blog contains at least 10 posts. The total number of blogs is 2,250 and 1,000 respectively. Statistics of the number of words are shown in Table 3 ." ], [ "We are interested in investigating the impact of the proposed representation and compare its performance with state-of-the-art representations based on $n$ -grams and with two approaches based on the recent and popular distributed representations of words by means of the continuous Skip-gram model BIBREF6 .", "State-of-the-art representations are mainly based on $n$ -grams models, hence we tested character and word based ones, besides word with tf-idf weights. For each of them, we iterated $n$ from 1 to 10 and selected 1,000, 5,000 and 10,000 most frequent grams. The best results were obtained with the 10,000 most frequent BOW, character 4-grams and tf-idf 2-grams. Therefore, we will use them in the evaluation.", "Due to the increasing popularity of the distributed representations BIBREF7 , we used the continuous Skip-gram model to generate distributed representations of words (e.g. $n$ -dimensional vectors), with further refinements in order to use them with documents. The continuous Skip-gram model BIBREF8 , BIBREF9 is an iterative algorithm which attempts to maximize the classification of the context surrounding a word. Formally, given a word $w(t)$ , and its surrounding words $w(t-c),~w(t-c+1),...,~w(t+c)$ inside a window of size $2c+1$ , the training objective is to maximize the average of the log probability shown in Equation 23 : ", "$$\\frac{1}{T} \\displaystyle \\sum _{t=1}^T \\displaystyle \\sum _{-c \\le j \\le c,j \\ne 0} \\log p(w_{t+j}|w_t)$$ (Eq. 23) ", "To estimate $p(w_{t+j}|w_t)$ we used negative sampling BIBREF9 that is a simplified version of the Noise Contrastive Estimation (NCE) BIBREF10 , BIBREF11 which is only concerned with preserving vector quality in the context of Skip-gram learning. The basic idea is to use logistic regression to distinguish the target word $W_O$ from draws from a noise distribution $P_n(w)$ , having $k$ negative samples for each word. Formally, the negative sampling estimates $p(w_O|w_I)$ following Equation 24 : ", "$$\\log \\sigma (v^{\\prime }_{w_O}{}^T v_{w_I}) + \\displaystyle \\sum _{i=1}^k \\mathbb {E}_{w_i}\\sim P_n(w) \\bigg [\\log \\sigma (-v^{\\prime }_{w_i}{}^T v_{w_I}) \\bigg ]$$ (Eq. 24) ", "where $\\sigma (x)=1/(1+\\exp (-x))$ . The experimental results in BIBREF9 show that this function obtains better results at the semantic level than hierarchical softmax BIBREF12 and NCE.", "In order to combine the word vectors to represent a complete sentence we used two approaches. 
First, given a list of word vectors $(w_1,w_2,...,w_n)$ belonging to a document, we generated a vector representation $v$ of its content by estimating the average of their dimensions: $v=n^{-1}\\sum _{i=1}^n w_i$ . We call this representation Skip-gram in the evaluation. In addition, we used Sentence vectors (SenVec) BIBREF13 , a variant that follows Skip-gram architecture to train a special vector $sv$ representing the sentence. Basically, before each context window movement, SenVec uses a special vector $sv$ in place of $w(t)$ with the objective of maximizing the classification of the surrounding words. In consequence, $sv$ will be a distributed vector of the complete sentence.", "Following state-of-the-art approach BIBREF13 , in the evaluation we used a logistic classifier for both SenVec and Skip-gram approaches." ], [ "In this section we show experimental results obtained with the machine learning algorithms that best solve the problem with the proposed representation, the impact of the preprocessing on the performance, the obtained results in comparison with the ones obtained with state-of-the-art and distributed representations, the error analysis that provides useful insights to better understand differences among languages, a depth analysis on the contribution of the different features and a cost analysis that highlights the suitability of LDR for a big data scenario." ], [ "We tested several machine learning algorithms with the aim at selecting the one that best solves the task. As can be seen in Table 4 , Multiclass Classifier obtains the best result (results in the rest of the paper refer to Multiclass Classifier). We carried out a statistical test of significance with respect to the next two systems with the highest performance: SVM ( $z_{0.05} 0, 880 < 1, 960$ ) and LogitBoost ( $z_{0.05} = 1, 983 > 1, 960$ )." ], [ "The proposed representation aims at using the whole vocabulary to obtain the weights of its terms. Social media texts may have noise and inadequately written words. Moreover, some of these words may be used only by few authors. With the aim at investigating their effect in the classification, we carried out a preprocessing step to remove words that appear less than $n$ times in the corpus, iterating $n$ between 1 and 100. In Figure 1 the corresponding accuracies are shown. In the left part of the figure (a), results for $n$ between 1 and 10 are shown in a continuous scale. In the right part (b), values from 10 to 100 are shown in a non-continuous scale. As can be seen, the best result was obtained with $n$ equal to 5, with an accuracy of 71.1%. As it was expected, the proposed representation takes advantage from the whole vocabulary, although it is recommendable to remove words with very few occurrences that may alter the results. We show examples of those infrequent words in Table 5 .", "In Figure 2 , when analysing the evolution of the number of remaining words in function of the value of $n$ , we can see a high number of words with very low frequency of occurrence. These words may introduce a high amount of noise in our LDR weight estimation. In addition, removing these words may be also beneficial in order to reduce the processing time needed to obtain the representation. This fact has special relevance for improving the performance in big data environments." ], [ "In Table 6 we show the results obtained by the described representations employing the Multiclass Classifier. 
As can be appreciated, the proposed low dimensionality representation improves more than 35% the results obtained with the state-of-the-art representations. BOW obtains slightly better results than character 4-grams, and both of them improve significantly the ones obtained with tf-idf 2-grams. Instead of selecting the most frequent $n$ -grams, our approach takes advantage from the whole vocabulary and assigns higher weights to the most discriminative words for the different language varieties as shown in Equation 10 .", " We highlight that our LDR obtains competitive results compared with the use of distributed representations. Concretely, there is no significant difference among them (Skip-gram $z_{0.05} = 0,5457 < 1,960$ and SenVec $z_{0.05} = 0,7095 < 1,960$ ). In addition, our proposal reduces considerably the dimensionality of one order of magnitude as shown in Table 6 ." ], [ "We aim at analysing the error of LDR to better understand which varieties are the most difficult to discriminate. As can be seen in Table 7 , the Spanish variety is the easiest to discriminate. However, one of the highest confusions occurs from Argentinian to Spanish. Mexican and Spanish were considerably confused with Argentinian too. Finally, the highest confusion occurs from Peruvian to Chilean, although the lowest average confusion occurs with Peruvian. In general, Latin American varieties are closer to each other and it is more difficult to differentiate among them. Language evolves over time. It is logical that language varieties of nearby countries — as the Latin American ones — evolved in a more similar manner that the Spanish variety. It is also logical that even more language variety similarities are shared across neighbour countries, e.g. Chilean compared with Peruvian and Argentinian.", "In Figure 3 we show the precision and recall values for the identification of each variety. As can be seen, Spain and Chile have the highest recall so that texts written in these varieties may have less probability to be misclassified as other varieties. Nevertheless, the highest precisions are obtained for Mexico and Peru, implying that texts written in such varieties may be easier to discriminate." ], [ "In Table 8 we show the most discriminant features. The features are sorted by their information gain (IG). As can be seen, the highest gain is obtained by average, maximum and minimum, and standard deviation. On the other hand, probability and proportionality features has low information gain.", "We experimented with different sets of features and show the results in Figure 4 . As may be expected, average-based features obtain high accuracies (67.0%). However, although features based on standard deviation have not the highest information gain, they obtained the highest results individually (69.2%), as well as their combination with average ones (70,8%). Features based on minimum and maximum obtain low results (48.3% and 54.7% respectively), but in combination they obtain a significant increase (61.1%). The combination of the previous features obtains almost the highest accuracy (71.0%), equivalent to the accuracy obtained with probability and proportionality features (71.1%)." ], [ "We analyse the cost from two perspectives: i) the complexity to the features; and ii) the number of features needed to represent a document. 
Defining $l$ as the number of different language varieties, and $n$ the number of terms of the document to be classified, the cost of obtaining the features of Table 2 (average, minimum, maximum, probability and proportionality) is $O(l\\cdot {n})$ . Defining $m$ as the number of terms in the document that coincides with some term in the vocabulary, the cost of obtaining the standard deviation is $O(l\\cdot {m})$ . As the average is needed previously to the standard deviation calculation, the total cost is $O(l\\cdot {n}) + O(l\\cdot {m})$ that is equal to $O(max(l\\cdot {n}, l\\cdot {m})) =\nO(l\\cdot {n})$ . Since the number of terms in the vocabulary will always be equal or greater than the number of coincident terms ( $n \\ge m$ ), and as the number of terms in the document will always be much higher than the number of language varieties ( $l<<n$ ), we can determine the cost as lineal with respect to the number of terms in the document $O(n)$ . With respect to the number of features needed to represent a document, we showed in Table 6 the considerable reduction of the proposed low dimensionality representation." ], [ "In order to analyse the robustness of the low dimensionality representation to different languages, we experimented with the development set of the DSLCC corpus from the Discriminating between Similar Languages task BIBREF1 . The corpus consists of 2,000 sentences per language or variety, with between 20 and 100 tokens per sentence, obtained from news headers. In Table 9 we show the results obtained with the proposed representation and the two distributed representations, Skip-gram and SenVec. It is important to notice that, in general, when a particular representation improves for one language is at cost of the other one. We can conclude that the three representations obtained comparative results and support the robustness of the low dimensionality representation." ], [ "In this work, we proposed the LDR low dimensionality representation for language variety identification. Experimental results outperformed traditional state-of-the-art representations and obtained competitive results compared with two distributed representation-based approaches that employed the popular continuous Skip-gram model. The dimensionality reduction obtained by means of LDR is from thousands to only 6 features per language variety. This allows to deal with large collections in big data environments such as social media. Recently, we have applied LDR to the age and gender identification task obtaining competitive results with the best performing teams in the author profiling task at the PAN Lab at CLEF. As a future work, we plan to apply LDR to other author profiling tasks such as personality recognition." ] ], "section_name": [ "Introduction", "Low Dimensionality Representation", "Evaluation Framework", "HispaBlogs Corpus", "Alternative representations", "Experimental Results", "Machine learning algorithms comparison", "Preprocessing impact", "Language variety identification results", "Error analysis", "Most discriminating features", "Cost analysis", "Robustness", "Conclusions" ] }
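The cost analysis above can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration of how the six LDR features per variety (average, "probability", minimum, maximum, standard deviation, and "proportionality") could be computed once term weights are available; the weight estimation itself (Equation 10) is not reproduced in the text, so the `weights` dictionary, the function name, and the interpretation of "probability" as the average over matched terms and "proportionality" as the share of in-vocabulary terms are assumptions, not the authors' code.

```python
import numpy as np

def ldr_features(tokens, weights, varieties):
    """Hypothetical sketch: six LDR features per language variety from term weights.

    tokens    -- tokens of the document to classify
    weights   -- dict mapping (term, variety) -> weight learned on the training corpus
    varieties -- labels, e.g. ["AR", "CL", "ES", "MX", "PE"]
    """
    n = max(len(tokens), 1)                       # number of terms in the document
    features = []
    for v in varieties:
        matched = [weights[(t, v)] for t in tokens if (t, v) in weights]
        m = len(matched)                          # terms found in the vocabulary
        w = np.array(matched) if m else np.zeros(1)
        features.extend([
            w.sum() / n,          # average over all terms of the document
            w.sum() / max(m, 1),  # "probability": average over matched terms (assumed reading)
            w.min(),              # minimum weight
            w.max(),              # maximum weight
            w.std(),              # standard deviation of the matched weights
            m / n,                # "proportionality": share of in-vocabulary terms
        ])
    return np.array(features)                     # 6 * len(varieties) values, e.g. 30 for Spanish
```

The single pass over the document's terms per variety matches the O(l·n) cost discussed above, and the resulting 30-dimensional vectors are what the multiclass classifier is trained on.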
{ "answers": [ { "annotation_id": [ "14eb3b30206cf2c656fc506ba6e75f9e58b4659e", "16037bea9dd4f1282a766d01a5b34c94df79b382", "c1d381f354cc9eb6d9bc9f331850d2965fddffe9" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 8. Features sorted by information gain." ], "extractive_spans": [], "free_form_answer": "Highest gain is obtained by average, maximum, minimum, and standard deviation. Probability and proportionality features have low information gain", "highlighted_evidence": [ "FLOAT SELECTED: Table 8. Features sorted by information gain." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In Table 8 we show the most discriminant features. The features are sorted by their information gain (IG). As can be seen, the highest gain is obtained by average, maximum and minimum, and standard deviation. On the other hand, probability and proportionality features has low information gain." ], "extractive_spans": [ "average", "maximum and minimum", "standard deviation" ], "free_form_answer": "", "highlighted_evidence": [ "In Table 8 we show the most discriminant features. The features are sorted by their information gain (IG). As can be seen, the highest gain is obtained by average, maximum and minimum, and standard deviation. On the other hand, probability and proportionality features has low information gain." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 2. Set of features for each category (language variety) used in Equation 4." ], "extractive_spans": [], "free_form_answer": "a document's terms' minimum, maximum, average (relative to all terms and to in-vocabulary terms), and standard deviation of weights; and proportion of terms that are in-vocabulary", "highlighted_evidence": [ "FLOAT SELECTED: Table 2. Set of features for each category (language variety) used in Equation 4." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "197290cb509b9a046b311719c6ce1ce408f3be8a", "043654eefd60242ac8da08ddc1d4b8d73f86f653" ] }, { "annotation_id": [ "258c98c6d21ee9f3d68211d46fccf521d36dc0a4", "b0e28b22fde4781d68c5b4f22838d42dfc3fbd73", "fd2cbf531c2a1bb32b0480cfec7460db62c5669b" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram)." ], "extractive_spans": [], "free_form_answer": "Accuracy results range from 74.4 to 100 ", "highlighted_evidence": [ "FLOAT SELECTED: Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In order to analyse the robustness of the low dimensionality representation to different languages, we experimented with the development set of the DSLCC corpus from the Discriminating between Similar Languages task BIBREF1 . The corpus consists of 2,000 sentences per language or variety, with between 20 and 100 tokens per sentence, obtained from news headers. 
In Table 9 we show the results obtained with the proposed representation and the two distributed representations, Skip-gram and SenVec. It is important to notice that, in general, when a particular representation improves for one language is at cost of the other one. We can conclude that the three representations obtained comparative results and support the robustness of the low dimensionality representation.", "FLOAT SELECTED: Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram)." ], "extractive_spans": [ " three representations obtained comparative results and support the robustness of the low dimensionality representation" ], "free_form_answer": "", "highlighted_evidence": [ "In order to analyse the robustness of the low dimensionality representation to different languages, we experimented with the development set of the DSLCC corpus from the Discriminating between Similar Languages task BIBREF1 .", "It is important to notice that, in general, when a particular representation improves for one language is at cost of the other one. We can conclude that the three representations obtained comparative results and support the robustness of the low dimensionality representation.", "FLOAT SELECTED: Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram).", "In order to analyse the robustness of the low dimensionality representation to different languages, we experimented with the development set of the DSLCC corpus from the Discriminating between Similar Languages task BIBREF1 . The corpus consists of 2,000 sentences per language or variety, with between 20 and 100 tokens per sentence, obtained from news headers. In Table 9 we show the results obtained with the proposed representation and the two distributed representations, Skip-gram and SenVec. It is important to notice that, in general, when a particular representation improves for one language is at cost of the other one. We can conclude that the three representations obtained comparative results and support the robustness of the low dimensionality representation." ], "extractive_spans": [], "free_form_answer": "Comparable to state-of-the-art", "highlighted_evidence": [ "FLOAT SELECTED: Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. 
results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram).", "We can conclude that the three representations obtained comparative results and support the robustness of the low dimensionality representation." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "043654eefd60242ac8da08ddc1d4b8d73f86f653", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "no", "no" ], "question": [ "What dicrimating features are discovered?", "What results are obtained on the alternate datasets?" ], "question_id": [ "9262292ca4cc78de515b5617f6a91e540eb2678c", "d796a251792eca01cea31ba5cf3e54ff9acf543f" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "representation", "representation" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Table 1. The same example in three varieties of Spanish (Argentina, Mexico and Spain).", "Table 2. Set of features for each category (language variety) used in Equation 4.", "Table 3. Number of posts, words and words per post (average and standard deviation) per language variety.", "Table 4. Accuracy results with different machine learning algorithms.", "Fig. 1. Accuracy obtained after removing words with frequency equal or lower than n. (a) Continuous scale. (b) Non-continuous scale.", "Table 5. Very infrequent words.", "Fig. 2. Number of words after removing those with frequency equal or lower than n.", "Table 6. Accuracy results in language variety identification and number of features for each representation.", "Table 7. Confusion matrix of the 5-class classification.", "Table 8. Features sorted by information gain.", "Fig. 4. Accuracy with different combinations of features.", "Table 9. Accuracy results in the development set of the DSLCC. The significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram)." ], "file": [ "2-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "7-Figure1-1.png", "7-Table5-1.png", "8-Figure2-1.png", "8-Table6-1.png", "9-Table7-1.png", "10-Table8-1.png", "10-Figure4-1.png", "11-Table9-1.png" ] }
[ "What dicrimating features are discovered?", "What results are obtained on the alternate datasets?" ]
[ [ "1705.10754-10-Table8-1.png", "1705.10754-4-Table2-1.png", "1705.10754-Most discriminating features-0" ], [ "1705.10754-11-Table9-1.png", "1705.10754-Robustness-0" ] ]
[ "a document's terms' minimum, maximum, average (relative to all terms and to in-vocabulary terms), and standard deviation of weights; and proportion of terms that are in-vocabulary", "Comparable to state-of-the-art" ]
74
1706.08568
Neural Question Answering at BioASQ 5B
This paper describes our submission to the 2017 BioASQ challenge. We participated in Task B, Phase B, which is concerned with biomedical question answering (QA). We focus on factoid and list questions, using an extractive QA model; that is, we restrict our system to output substrings of the provided text snippets. At the core of our system, we use FastQA, a state-of-the-art neural QA system. We extended it with biomedical word embeddings and changed its answer layer to be able to answer list questions in addition to factoid questions. We pre-trained the model on a large-scale open-domain QA dataset, SQuAD, and then fine-tuned the parameters on the BioASQ training set. With our approach, we achieve state-of-the-art results on factoid questions and competitive results on list questions.
{ "paragraphs": [ [ "BioASQ is a semantic indexing, question answering (QA) and information extraction challenge BIBREF0 . We participated in Task B of the challenge which is concerned with biomedical QA. More specifically, our system participated in Task B, Phase B: Given a question and gold-standard snippets (i.e., pieces of text that contain the answer(s) to the question), the system is asked to return a list of answer candidates.", "The fifth BioASQ challenge is taking place at the time of writing. Five batches of 100 questions each were released every two weeks. Participating systems have 24 hours to submit their results. At the time of writing, all batches had been released.", "The questions are categorized into different question types: factoid, list, summary and yes/no. Our work concentrates on answering factoid and list questions. For factoid questions, the system's responses are interpreted as a ranked list of answer candidates. They are evaluated using mean-reciprocal rank (MRR). For list questions, the system's responses are interpreted as a set of answers to the list question. Precision and recall are computed by comparing the given answers to the gold-standard answers. F1 score, i.e., the harmonic mean of precision and recall, is used as the official evaluation measure .", "Most existing biomedical QA systems employ a traditional QA pipeline, similar in structure to the baseline system by weissenborn2013answering. They consist of several discrete steps, e.g., named-entity recognition, question classification, and candidate answer scoring. These systems require a large amount of resources and feature engineering that is specific to the biomedical domain. For example, OAQA BIBREF1 , which has been very successful in last year's challenge, uses a biomedical parser, entity tagger and a thesaurus to retrieve synonyms.", "Our system, on the other hand, is based on a neural network QA architecture that is trained end-to-end on the target task. We build upon FastQA BIBREF2 , an extractive factoid QA system which achieves state-of-the-art results on QA benchmarks that provide large amounts of training data. For example, SQuAD BIBREF3 provides a dataset of $\\approx 100,000$ questions on Wikipedia articles. Our approach is to train FastQA (with some extensions) on the SQuAD dataset and then fine-tune the model parameters on the BioASQ training set.", "Note that by using an extractive QA network as our central component, we restrict our system's responses to substrings in the provided snippets. This also implies that the network will not be able to answer yes/no questions. We do, however, generalize the FastQA output layer in order to be able to answer list questions in addition to factoid questions." ], [ "Our system is a neural network which takes as input a question and a context (i.e., the snippets) and outputs start and end pointers to tokens in the context. At its core, we use FastQA BIBREF2 , a state-of-the-art neural QA system. In the following, we describe our changes to the architecture and how the network is trained." ], [ "In the input layer, the context and question tokens are mapped to high-dimensional word vectors. 
Our word vectors consists of three components, which are concatenated to form a single vector:", "GloVe embedding: We use 300-dimensional GloVe embeddings BIBREF4 which have been trained on a large collection of web documents.", "Character embedding: This embedding is computed by a 1-dimensional convolutional neural network from the characters of the words, as introduced by seo2016bidirectional.", "Biomedical Word2Vec embeddings: We use the biomedical word embeddings provided by biomedicalword2vec. These are 200-dimensional Word2Vec embeddings BIBREF5 which were trained on $\\approx 10$ million PubMed abstracts.", "To the embedding vectors, we concatenate a one-hot encoding of the question type (list or factoid). Note that these features are identical for all tokens.", "Following our embedding layer, we invoke FastQA in order to compute start and end scores for all context tokens. Because end scores are conditioned on the chosen start, there are $O(n^2)$ end scores where $n$ is the number of context tokens. We denote the start index by $i \\in [1, n]$ , the end index by $j \\in [i, n]$ , the start scores by $y_{start}^{i}$ , and end scores by $y_{end}^{i, j}$ .", "In our output layer, the start, end, and span probabilities are computed as: ", "$$p_{start}^i = \\sigma (y_{start}^i)$$ (Eq. 8) ", "$$p_{end}^{i, \\cdot } = softmax(y_{end}^{i, \\cdot })$$ (Eq. 9) ", "where $\\sigma $ denotes the sigmoid function. By computing the start probability via the sigmoid rather than softmax function (as used in FastQA), we enable the model to output multiple spans as likely answer spans. This generalizes the factoid QA network to list questions." ], [ "We define our loss as the cross-entropy of the correct start and end indices. In the case of multiple occurrences of the same answer, we only minimize the span of the lowest loss.", "We train the network in two steps: First, the network is trained on SQuAD, following the procedure by weissenborn2017fastqa (pre-training phase). Second, we fine-tune the network parameters on BioASQ (fine-tuning phase). For both phases, we use the Adam optimizer BIBREF6 with an exponentially decaying learning rate. We start with learning rates of $10^{-3}$ and $10^{-4}$ for the pre-training and fine-tuning phases, respectively.", "During fine-tuning, we extract answer spans from the BioASQ training data by looking for occurrences of the gold standard answer in the provided snippets. Note that this approach is not perfect as it can produce false positives (e.g., the answer is mentioned in a sentence which does not answer the question) and false negatives (e.g., a sentence answers the question, but the exact string used is not in the synonym list).", "Because BioASQ usually contains multiple snippets for a given question, we process all snippets independently and then aggregate the answer spans, sorting globally according to their probability $p_{span}^{i, j}$ .", "During the inference phase, we retrieve the top 20 answers span via beam search with beam size 20. From this sorted list of answer strings, we remove all duplicate strings. For factoid questions, we output the top five answer strings as our ranked list of answer candidates. For list questions, we use a probability cutoff threshold $t$ , such that $\\lbrace (i, j)|p_{span}^{i, j} \\ge t\\rbrace $ is the set of answers. We set $t$ to be the threshold for which the list F1 score on the development set is optimized.", "In order to further tweak the performance of our systems, we built a model ensemble. 
For this, we trained five single models using 5-fold cross-validation on the entire training set. These models are combined by averaging their start and end scores before computing the span probabilities (Equations 8 - 10 ). As a result, we submit two systems to the challenge: The best single model (according to its development set) and the model ensemble.", "We implemented our system using TensorFlow BIBREF7 . It was trained on an NVidia GForce Titan X GPU." ], [ "We report the results for all five test batches of BioASQ 5 (Task 5b, Phase B) in Table 1 . Note that the performance numbers are not final, as the provided synonyms in the gold-standard answers will be updated as a manual step, in order to reflect valid responses by the participating systems. This has not been done by the time of writing. Note also that – in contrast to previous BioASQ challenges – systems are no longer allowed to provide an own list of synonyms in this year's challenge.", "In general, the single and ensemble system are performing very similar relative to the rest of field: Their ranks are almost always right next to each other. Between the two, the ensemble model performed slightly better on average.", "On factoid questions, our system has been very successful, winning three out of five batches. On list questions, however, the relative performance varies significantly. We expect our system to perform better on factoid questions than list questions, because our pre-training dataset (SQuAD) does not contain any list questions.", "Starting with batch 3, we also submitted responses to yes/no questions by always answering yes. Because of a very skewed class distribution in the BioASQ dataset, this is a strong baseline. Because this is done merely to have baseline performance for this question type and because of the naivety of the method, we do not list or discuss the results here." ], [ "In this paper, we summarized the system design of our BioASQ 5B submission for factoid and list questions. We use a neural architecture which is trained end-to-end on the QA task. This approach has not been applied to BioASQ questions in previous challenges. Our results show that our approach achieves state-of-the art results on factoid questions and competitive results on list questions." ] ], "section_name": [ "Introduction", "Model", "Network architecture", "Training & decoding", "Results & discussion", "Conclusion" ] }
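To make the generalized answer layer more tangible, here is a small numpy sketch of how start and end scores could be turned into span probabilities and decoded differently for factoid and list questions. It is an illustration under stated assumptions, not the authors' implementation: the span probability is assumed to be the product of start and end probabilities (the text above references $p_{span}^{i,j}$ without showing its equation), exhaustive enumeration stands in for the beam search of size 20, the threshold default is a placeholder (the paper tunes $t$ on the development set), and duplicate-string removal is omitted because the sketch works on token indices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_spans(y_start, y_end, question_type="factoid", t=0.5, top_k=5):
    """Sketch of the generalized answer layer (indices only, no string de-duplication).

    y_start -- shape (n,), start scores for the n context tokens
    y_end   -- shape (n, n), y_end[i, j] is the end score of a span starting at token i
    """
    n = len(y_start)
    p_start = sigmoid(y_start)                       # sigmoid start probabilities (Eq. 8)
    spans = []
    for i in range(n):
        p_end = softmax(y_end[i, i:])                # end distribution conditioned on start i (Eq. 9)
        for j, p_e in enumerate(p_end, start=i):
            spans.append(((i, j), p_start[i] * p_e)) # assumed span probability p_start * p_end
    spans.sort(key=lambda s: -s[1])
    if question_type == "factoid":
        return spans[:top_k]                         # ranked answer candidates (top 5 in the paper)
    return [s for s in spans if s[1] >= t]           # list questions: probability cutoff threshold t
```

As described above, the ensemble variant would simply average the start and end scores of the five models before this decoding step.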
{ "answers": [ { "annotation_id": [ "2ca29ac18fdb31592cbd37a3c51357c4498713ec", "b82c3c516f21a6b8592f8adff34baf590b1282b6", "beca733a2ae3c9397cd187e03dc43d5b68aedbb9" ], "answer": [ { "evidence": [ "BioASQ is a semantic indexing, question answering (QA) and information extraction challenge BIBREF0 . We participated in Task B of the challenge which is concerned with biomedical QA. More specifically, our system participated in Task B, Phase B: Given a question and gold-standard snippets (i.e., pieces of text that contain the answer(s) to the question), the system is asked to return a list of answer candidates." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "More specifically, our system participated in Task B, Phase B: Given a question and gold-standard snippets (i.e., pieces of text that contain the answer(s) to the question), the system is asked to return a list of answer candidates." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "The questions are categorized into different question types: factoid, list, summary and yes/no. Our work concentrates on answering factoid and list questions. For factoid questions, the system's responses are interpreted as a ranked list of answer candidates. They are evaluated using mean-reciprocal rank (MRR). For list questions, the system's responses are interpreted as a set of answers to the list question. Precision and recall are computed by comparing the given answers to the gold-standard answers. F1 score, i.e., the harmonic mean of precision and recall, is used as the official evaluation measure ." ], "extractive_spans": [], "free_form_answer": "No, the answers can also be summaries or yes/no.", "highlighted_evidence": [ "The questions are categorized into different question types: factoid, list, summary and yes/no." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "ca2a4695129d0180768a955fb5910d639f79aa34", "4857c606a55a83454e8d81ffe17e05cf8bc4b75f", "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "15d8a1ed704096cbb23f49af0d5e0c1f7062ed48", "377d74c4903327c940e873224651716c78b98d67", "8c308f2c74b528317f6dd614cb5514c3cc04ac20" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b", "c7d4a630661cd719ea504dba56393f78278b296b", "ca2a4695129d0180768a955fb5910d639f79aa34" ] } ], "nlp_background": [ "five", "five" ], "paper_read": [ "no", "no" ], "question": [ "Are answers in this dataset guaranteed to be substrings of the text? If not, what is the coverage of answers being substrings?", "How much is the gap between pretraining on SQuAD and not pretraining on SQuAD?" ], "question_id": [ "a526c63fc8dc1b79702b481b77e3922d7002d973", "0f9678e11079ee9ea1a1ce693f017177dd495ee5" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "question", "question" ], "topic_background": [ "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Neural architecture of our system. Question and context (i.e., the snippets) are mapped directly to start and end probabilities for each context token. We use FastQA (Weissenborn et al., 2017) with modified input vectors and an output layer that supports list answers in addition to factoid answers.", "Table 1: Preliminary results for factoid and list questions for all five batches and for our single and ensemble systems. We report MRR and F1 scores for factoid and list questions, respectively. In parentheses, we report the rank of the respective systems relative to all other systems in the challenge. The last row averages the performance numbers of the respective system and question type across the five batches." ], "file": [ "2-Figure1-1.png", "3-Table1-1.png" ] }
[ "Are answers in this dataset guaranteed to be substrings of the text? If not, what is the coverage of answers being substrings?" ]
[ [ "1706.08568-Introduction-0", "1706.08568-Introduction-2" ] ]
[ "No, the answers can also be summaries or yes/no." ]
75
1909.05190
Event Representation Learning Enhanced with External Commonsense Knowledge
Prior work has proposed effective methods to learn event representations that can capture syntactic and semantic information from text corpora, demonstrating their effectiveness for downstream tasks such as script event prediction. On the other hand, events extracted from raw text lack commonsense knowledge, such as the intents and emotions of the event participants, which are useful for distinguishing event pairs when there are only subtle differences in their surface realizations. To address this issue, this paper proposes to leverage external commonsense knowledge about the intent and sentiment of the event. Experiments on three event-related tasks, i.e., event similarity, script event prediction and stock market prediction, show that our model obtains much better event embeddings for the tasks, achieving a 78% improvement on the hard similarity task, yielding more precise inferences on subsequent events under given contexts, and better accuracy in predicting the volatility of the stock market.
{ "paragraphs": [ [ "Events are a kind of important objective information of the world. Structuralizing and representing such information as machine-readable knowledge are crucial to artificial intelligence BIBREF0, BIBREF1. The main idea is to learn distributed representations for structured events (i.e. event embeddings) from text, and use them as the basis to induce textual features for downstream applications, such as script event prediction and stock market prediction.", "Parameterized additive models are among the most widely used for learning distributed event representations in prior work BIBREF2, BIBREF3, which passes the concatenation or addition of event arguments' word embeddings to a parameterized function. The function maps the summed vectors into an event embedding space. Furthermore, BIBREF4 ding2015deep and BIBREF5 weber2018event propose using neural tensor networks to perform semantic composition of event arguments, which can better capture the interactions between event arguments.", "This line of work only captures shallow event semantics, which is not capable of distinguishing events with subtle differences. On the one hand, the obtained event embeddings cannot capture the relationship between events that are syntactically or semantically similar, if they do not share similar word vectors. For example, as shown in Figure FIGREF2 (a), “PersonX threw bomb” and “PersonZ attacked embassy”. On the other hand, two events with similar word embeddings may have similar embeddings despite that they are quite unrelated, for example, as shown in Figure FIGREF2 (b), “PersonX broke record” and “PersonY broke vase”. Note that in this paper, similar events generally refer to events with strong semantic relationships rather than just the same events.", "One important reason for the problem is the lack of the external commonsense knowledge about the mental state of event participants when learning the objective event representations. In Figure FIGREF2 (a), two event participants “PersonY” and “PersonZ” may carry out a terrorist attack, and hence, they have the same intent: “to bloodshed”, which can help representation learning model maps two events into the neighbor vector space. In Figure FIGREF2 (b), a change to a single argument leads to a large semantic shift in the event representations, as the change of an argument can result in different emotions of event participants. Who “broke the record” is likely to be happy, while, who “broke a vase” may be sad. Hence, intent and sentiment can be used to learn more fine-grained semantic features for event embeddings.", "Such commonsense knowledge is not explicitly expressed but can be found in a knowledge base such as Event2Mind BIBREF6 and ATOMIC BIBREF7. Thus, we aim to incorporate the external commonsense knowledge, i.e., intent and sentiment, into the learning process to generate better event representations. Specifically, we propose a simple and effective model to jointly embed events, intents and emotions into the same vector space. A neural tensor network is used to learn baseline event embeddings, and we define a corresponding loss function to incorporate intent and sentiment information.", "Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on hard similarity small and big dataset, respectively. 
With better embeddings, we can achieve superior performances on script event prediction and stock market prediction compared to state-of-the-art baseline methods." ], [ "The joint embedding framework is shown in Figure FIGREF3. We begin by introducing the baseline event embedding learning model, which serves as the basis of the proposed framework. Then, we show how to model intent and sentiment information. Subsequently, we describe the proposed joint model by integrating intent and sentiment into the original objective function to help learn high-quality event representations, and introduce the training details." ], [ "The goal of event embedding is to learn low-dimension dense vector representations for event tuples $E=(A, P, O)$, where $P$ is the action or predicate, $A$ is the actor or subject and $O$ is the object on which the action is performed. Event embedding models compound vector representations over its predicate and arguments representations. The challenge is that the composition models should be effective for learning the interactions between the predicate and the argument. Simple additive transformations are incompetent.", "We follow BIBREF4 (BIBREF4) modelling such informative interactions through tensor composition. The architecture of neural tensor network (NTN) for learning event embeddings is shown in Figure FIGREF5, where the bilinear tensors are used to explicitly model the relationship between the actor and the action, and that between the object and the action.", "The inputs of NTN are the word embeddings of $A$, $P$ and $O$, and the outputs are event embeddings. We initialized our word representations using publicly available $d$-dimensional ($d=100$) GloVe vectors BIBREF8. As most event arguments consist of several words, we represent the actor, action and object as the average of their word embeddings, respectively.", "From Figure FIGREF5, $S_1 \\in \\mathbb {R}^d$ is computed by:", "where $T^{[1:k]}_1 \\in \\mathbb {R}^{d\\times d\\times k}$ is a tensor, which is a set of $k$ matrices, each with $d\\times d$ dimensions. The bilinear tensor product $A^TT_1^{[1:k]}P$ is a vector $r \\in \\mathbb {R}^k$, where each entry is computed by one slice of the tensor ($r_i=A^TT_1^{[i]}P, i = 1, \\cdots , k$). The other parameters are a standard feed-forward neural network, where $W \\in \\mathbb {R}^{k \\times \\it 2d}$ is the weight matrix, $b \\in \\mathbb {R}^k$ is the bias vector, $U \\in \\mathbb {R}^k$ is a hyper-parameter and $f=\\it tanh$ is a standard nonlinearity applied element-wise. $S_2$ and $C$ in Figure FIGREF5 are computed in the same way as $S_1$.", "One problem with tensors is curse of dimensionality, which limits the wide application of tensors in many areas. It is therefore essential to approximate tensors of higher order in a compressed scheme, for example, a low-rank tensor decomposition. To decrease the number of parameters in standard neural tensor network, we make low-rank approximation that represents each matrix by two low-rank matrices plus diagonal, as illustrated in Figure FIGREF7. Formally, the parameter of the $i$-th slice is $T_{appr}^{[i]}=T^{[i_1]}\\times T^{[i_2]}+diag(t^{[i]})$, where $T^{[i_1]}\\in \\mathbb {R}^{d\\times n}$, $T^{[i_2]}\\in \\mathbb {R}^{n\\times d}$, $t^{[i]}\\in \\mathbb {R}^d$, $n$ is a hyper-parameter, which is used for adjusting the degree of tensor decomposition. 
The output of neural tensor layer is formalized as follows.", "where $[T_{appr}]_1^{[1:k]}$ is the low-rank tensor that defines multiple low-rank bilinear layers. $k$ is the slice number of neural tensor network which is also equal to the output length of $S_1$.", "We assume that event tuples in the training data should be scored higher than corrupted tuples, in which one of the event arguments is replaced with a random argument. Formally, the corrupted event tuple is $E^r=(A^r, P, O)$, which is derived by replacing each word in $A$ with a random word $w^r$ in our dictionary $\\mathcal {D}$ (which contains all the words in the training data) to obtain a corrupted counterpart $A^r$. We calculate the margin loss of the two event tuples as:", "where $\\mathit {\\Phi }=(T_1, T_2, T_3, W, b)$ is the set of model parameters. The standard $L_2$ regularization is used, for which the weight $\\lambda $ is set as 0.0001. The algorithm goes over the training set for multiple iterations. For each training instance, if the loss $loss(E,E^r)=\\max (0,1-g(E)+g(E^r))$ is equal to zero, the online training algorithm continues to process the next event tuple. Otherwise, the parameters are updated to minimize the loss using back-propagation BIBREF9." ], [ "Intent embedding refers to encoding the event participants' intents into event vectors, which is mainly used to explain why the actor performed the action. For example, given two events “PersonX threw basketball” and “PersonX threw bomb”, there are only subtle differences in their surface realizations, however, the intents are totally different. “PersonX threw basketball” is just for fun, while “PersonX threw bomb” could be a terrorist attack. With the intents, we can easily distinguish these superficial similar events.", "One challenge for incorporating intents into event embeddings is that we should have a large-scale labeled dataset, which annotated the event and its actor's intents. Recently, BIBREF6 P18-1043 and BIBREF7 sap2018atomic released such valuable commonsense knowledge dataset (ATOMIC), which consists of 25,000 event phrases covering a diverse range of daily-life events and situations. For example, given an event “PersonX drinks coffee in the morning”, the dataset labels PersonX's likely intent is “PersonX wants to stay awake”.", "We notice that the intents labeled in ATOMIC is a sentence. Hence, intent embedding is actually a sentence representation learning task. Among various neural networks for encoding sentences, bi-directional LSTMs (BiLSTM) BIBREF10 have been a dominant method, giving state-of-the-art results in language modelling BIBREF11 and syntactic parsing BIBREF12.", "We use BiLSTM model to learn intent representations. BiLSTM consists of two LSTM components, which process the input in the forward left-to-right and the backward right-to-left directions, respectively. In each direction, the reading of input words is modelled as a recurrent process with a single hidden state. Given an initial value, the state changes its value recurrently, each time consuming an incoming word.", "Take the forward LSTM component for example. 
Denoting the initial state as $\\overrightarrow{\\mathbf {h}}^0$, which is a model parameter, it reads the input word representations $\\mathbf {x}_0,\\mathbf {x}_1,\\dots ,\\mathbf {x}_n$, and the recurrent state transition step for calculating $\\overrightarrow{\\mathbf {h}}^1,\\dots ,\\overrightarrow{\\mathbf {h}}^{n+1}$ is defined as BIBREF13 (BIBREF13).", "The backward LSTM component follows the same recurrent state transition process as the forward LSTM component. Starting from an initial state $\\overleftarrow{\\mathbf {h}}^{n+1}$, which is a model parameter, it reads the input $\\mathbf {x}_n,\\mathbf {x}_{n-1},\\dots ,\\mathbf {x}_0$, changing its value to $\\overleftarrow{\\mathbf {h}}^n,\\overleftarrow{\\mathbf {h}}^{n-1},\\dots ,\\overleftarrow{\\mathbf {h}}^0$, respectively. The BiLSTM model uses the concatenated value of $\\overrightarrow{\\mathbf {h}}^t$ and $\\overleftarrow{\\mathbf {h}}^t$ as the hidden vector for $w_t$:", "A single hidden vector representation $\\mathbf {v}_i$ of the input intent can be obtained by concatenating the last hidden states of the two LSTMs:", "In the training process, we calculate the similarity between a given event vector $\\mathbf {v}_e$ and its related intent vector $\\mathbf {v}_i$. For effectively training the model, we devise a ranking type loss function as follows:", "where $\\mathbf {v}^{\\prime }_i$ is the incorrect intent for $\\mathbf {v}_e$, which is randomly selected from the annotated dataset." ], [ "Sentiment embedding refers to encoding the event participants' emotions into event vectors, which is mainly used to explain how does the actor feel after the event. For example, given two events “PersonX broke record” and “PersonX broke vase”, there are only subtle differences in their surface realizations, however, the emotions of PersonX are totally different. After “PersonX broke record”, PersonX may be feel happy, while after “PersonX broke vase”, PersonX could be feel sad. With the emotions, we can also effectively distinguish these superficial similar events.", "We also use ATOMIC BIBREF7 as the event sentiment labeled dataset. In this dataset, the sentiment of the event is labeled as words. For example, the sentiment of “PersonX broke vase” is labeled as “(sad, be regretful, feel sorry, afraid)”. We use SenticNet BIBREF14 to normalize these emotion words ($W=\\lbrace w_1, w_2, \\dots , w_n\\rbrace $) as the positive (labeled as 1) or the negative (labeled as -1) sentiment. The sentiment polarity of the event $P_e$ is dependent on the polarity of the labeled emotion words $P_W$: $P_e=1$, if $\\sum _i P_{w_i}>0$, or $P_e=-1$, if $\\sum _i P_{w_i}<0$. We use the softmax binary classifier to learn sentiment enhanced event embeddings. The input of the classifier is event embeddings, and the output is its sentiment polarity (positive or negative). The model is trained in a supervised manner by minimizing the cross entropy error of the sentiment classification, whose loss function is given below.", "where $C$ means all training instances, $L$ is the collection of sentiment categories, $x_e$ means an event vector, $p_l(x_e)$ is the probability of predicting $x_e$ as class $l$, $p^g_l(x_e)$ indicates whether class $l$ is the correct sentiment category, whose value is 1 or -1." 
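Before moving on to the joint objective, the core event-embedding component described above can be sketched in a few lines. The snippet below is a minimal numpy illustration of one low-rank bilinear (neural tensor) layer, $S = f(x^{T} T_{appr}^{[1:k]} p + W[x;p] + b)$ with $T_{appr}^{[i]} = T^{[i_1]} T^{[i_2]} + diag(t^{[i]})$. It is a sketch under assumptions: the slice number $k$ and rank $n$ are placeholder values, a single parameter set is reused for both $S_1$ and $S_2$ for brevity (in the model each of $T_1$, $T_2$, $T_3$ has its own parameters), and weight initialization, the third layer producing the event vector $C$, and the margin-based training loop are omitted.

```python
import numpy as np

d, k, n_rank = 100, 50, 10      # embedding size d, tensor slices k, low-rank factor n (assumed values)
rng = np.random.default_rng(0)

# Low-rank approximation of one tensor slice: T_appr[i] = T_left[i] @ T_right[i] + diag(t_diag[i])
T_left  = rng.normal(scale=0.1, size=(k, d, n_rank))
T_right = rng.normal(scale=0.1, size=(k, n_rank, d))
t_diag  = rng.normal(scale=0.1, size=(k, d))
W = rng.normal(scale=0.1, size=(k, 2 * d))
b = np.zeros(k)

def low_rank_tensor_layer(x, p):
    """S = tanh(x^T T_appr^[1:k] p + W [x; p] + b) for one (argument, predicate) pair."""
    bilinear = np.array([
        x @ (T_left[i] @ (T_right[i] @ p)) + x @ (t_diag[i] * p)  # x^T (T_left T_right + diag(t)) p
        for i in range(k)
    ])
    return np.tanh(bilinear + W @ np.concatenate([x, p]) + b)

# Event arguments as averaged word vectors (placeholders standing in for averaged GloVe vectors)
actor, predicate, obj = (rng.normal(size=d) for _ in range(3))
S1 = low_rank_tensor_layer(actor, predicate)   # actor-action interaction
S2 = low_rank_tensor_layer(obj, predicate)     # object-action interaction (separate parameters in the model)
# A third tensor layer over (S1, S2) would produce the final event embedding C.
```

The intent and sentiment components described above then attach their losses to the event vector produced by this composition.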
], [ "Given a training event corpus with annotated intents and emotions, our model jointly minimizes a linear combination of the loss functions on events, intents and sentiment:", "where $\\alpha , \\beta , \\gamma \\in [0,1]$ are model parameters to weight the three loss functions.", "We use the New York Times Gigaword Corpus (LDC2007T07) for pre-training event embeddings. Event triples are extracted based on the Open Information Extraction technology BIBREF15. We initialize the word embedding layer with 100 dimensional pre-trained GloVe vectors BIBREF8, and fine-tune initialized word vectors during our model training. We use Adagrad BIBREF16 for optimizing the parameters with initial learning rate 0.001 and batch size 128." ], [ "We compare the performance of intent and sentiment powered event embedding model with state-of-the-art baselines on three tasks: event similarity, script event prediction and stock prediction." ], [ "We compare the performance of our approach against a variety of event embedding models developed in recent years. These models can be categorized into three groups:", "Averaging Baseline (Avg) This represents each event as the average of the constituent word vectors using pre-trained GloVe embeddings BIBREF8.", "Compositional Neural Network (Comp. NN) The event representation in this model is computed by feeding the concatenation of the subject, predicate, and object embedding into a two layer neural network BIBREF17, BIBREF3, BIBREF2.", "Element-wise Multiplicative Composition (EM Comp.) This method simply concatenates the element-wise multiplications between the verb and its subject/object.", "Neural Tensor Network This line of work use tensors to learn the interactions between the predicate and its subject/object BIBREF4, BIBREF5. According to the different usage of tensors, we have three baseline methods: Role Factor Tensor BIBREF5 which represents the predicate as a tensor, Predicate Tensor BIBREF5 which uses two tensors learning the interactions between the predicate and its subject, and the predicate and its object, respectively, NTN BIBREF4, which we used as the baseline event embedding model in this paper, and KGEB BIBREF18, which incorporates knowledge graph information in NTN." ], [ "We first follow BIBREF5 (BIBREF5) evaluating our proposed approach on the hard similarity task. The goal of this task is that similar events should be close to each other in the same vector space, while dissimilar events should be far away with each other. To this end, BIBREF5 (BIBREF5) created two types of event pairs, one with events that should be close to each other but have very little lexical overlap (e.g., police catch robber / authorities apprehend suspect), and another with events that should be farther apart but have high overlap (e.g., police catch robber / police catch disease).", "The labeled dataset contains 230 event pairs (115 pairs each of similar and dissimilar types). Three different annotators were asked to give the similarity/dissimilarity rankings, of which only those the annotators agreed upon completely were kept. For each event representation learning method, we obtain the cosine similarity score of the pairs, and report the fraction of cases where the similar pair receives a higher cosine value than the dissimilar pair (we use Accuracy $\\in [0,1]$ denoting it). To evaluate the robustness of our approach, we extend this dataset to 1,000 event pairs (similar and dissimilar events each account for 50%), and we will release this dataset to the public." 
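The hard-similarity protocol just described reduces to a simple pairwise comparison, sketched below. The `embed` argument is a stand-in for whichever representation model is being compared and is an assumption of this illustration rather than part of the evaluation data.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def hard_similarity_accuracy(items, embed):
    """Fraction of items where the similar pair scores above the dissimilar pair.

    items -- iterable of (similar_pair, dissimilar_pair), each pair being two event tuples
    embed -- function mapping an event tuple (actor, predicate, object) to a vector
    """
    hits, total = 0, 0
    for (e1, e2), (e3, e4) in items:
        sim = cosine(embed(e1), embed(e2))      # should be high for the similar pair
        dissim = cosine(embed(e3), embed(e4))   # should be low for the dissimilar pair
        hits += int(sim > dissim)
        total += 1
    return hits / total                         # Accuracy in [0, 1]
```

Because the score depends only on the relative ordering of the two cosine values, it is insensitive to the absolute scale of the embeddings, which makes it a fair comparison across the representation models listed above.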
], [ "Except for the hard similarity task, we also evaluate our approach on the transitive sentence similarity dataset BIBREF19, which contains 108 pairs of transitive sentences: short phrases containing a single subject, object and verb (e.g., agent sell property). It also has another dataset which consists of 200 sentence pairs. In this dataset, the sentences to be compared are constructed using the same subject and object and semantically correlated verbs, such as `spell’ and `write’; for example, `pupils write letters’ is compared with `pupils spell letters’. As this dataset is not suitable for our task, we only evaluate our approach and baselines on 108 sentence pairs.", "Every pair is annotated by a human with a similarity score from 1 to 7. For example, pairs such as (design, reduce, amount) and (company, cut, cost) are annotated with a high similarity score, while pairs such as (wife, pour, tea) and (worker, join, party) are given low similarity scores. Since each pair has several annotations, we use the average annotator score as the gold score. To evaluate the cosine similarity given by each model and the annotated similarity score, we use the Spearman’s correlation ($\\rho \\in [-1,1]$)." ], [ "Experimental results of hard similarity and transitive sentence similarity are shown in Table TABREF23. We find that:", "(1) Simple averaging achieved competitive performance in the task of transitive sentence similarity, while performed very badly in the task of hard similarity. This is mainly because hard similarity dataset is specially created for evaluating the event pairs that should be close to each other but have little lexical overlap and that should be farther apart but have high lexical overlap. Obviously, on such dataset, simply averaging word vectors which is incapable of capturing the semantic interactions between event arguments, cannot achieve a sound performance.", "(2) Tensor-based compositional methods (NTN, KGEB, Role Factor Tensor and Predicate Tensor) outperformed parameterized additive models (Comp. NN and EM Comp.), which shows that tensor is capable of learning the semantic composition of event arguments.", "(3) Our commonsense knowledge enhanced event representation learning approach outperformed all baseline methods across all datasets (achieving 78% and 200% improvements on hard similarity small and big dataset, respectively, compared to previous SOTA method), which indicates that commonsense knowledge is useful for distinguishing distinct events." ], [ "To further analyse the effects of intents and emotions on the event representation learning, we present case studies in Table TABREF29, which directly shows the changes of similarity scores before and after incorporating intent and sentiment. For example, the original similarity score of two events “chef cooked pasta” and “chef cooked books” is very high (0.89) as they have high lexical overlap. However, their intents differ greatly. The intent of “chef cooked pasta” is “to hope his customer enjoying the delicious food”, while the intent of “chef cooked books” is “to falsify their financial statements”. Enhanced with the intents, the similarity score of the above two events dramatically drops to 0.45. For another example, as the event pair “man clears test” and “he passed exam” share the same sentiment polarity, their similarity score is boosted from -0.08 to 0.40." ], [ "Event is a kind of important real-world knowledge. Learning effective event representations can be benefit for numerous applications. 
Script event prediction BIBREF20 is a challenging event-based commonsense reasoning task, which is defined as giving an existing event context, one needs to choose the most reasonable subsequent event from a candidate list.", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings.", "BIBREF22 (BIBREF22) and BIBREF21 (BIBREF21) showed that script event prediction is a challenging problem, and even 1% of accuracy improvement is very difficult. Experimental results shown in Table TABREF31 demonstrate that we can achieve more than 1.5% improvements in single model comparison and more than 1.4% improvements in multi-model integration comparison, just by replacing the input embeddings, which confirms that better event understanding can lead to better inference results. An interesting result is that the event embeddings only incorporated with intents achieved the best result against other baselines. This confirms that capturing people's intents is helpful to infer their next plan. In addition, we notice that the event embeddings only incorporated with sentiment also achieve better performance than SGNN. This is mainly because the emotional consistency does also contribute to predicate the subsequent event." ], [ "It has been shown that news events influence the trends of stock price movements BIBREF23. As news events affect human decisions and the volatility of stock prices is influenced by human trading, it is reasonable to say that events can influence the stock market.", "In this section, we compare with several event-driven stock market prediction baseline methods: (1) Word, BIBREF23 luss2012predicting use bag-of-words represent news events for stock prediction; (2) Event, BIBREF24 ding-EtAl:2014:EMNLP2014 represent events by subject-predicate-object triples for stock prediction; (3) NTN, BIBREF4 ding2015deep learn continues event vectors for stock prediction; (4) KGEB, BIBREF18 ding2016knowledge incorporate knowledge graph into event vectors for stock prediction.", "Experimental results are shown in Figure FIGREF33. We find that knowledge-driven event embedding is a competitive baseline method, which incorporates world knowledge to improve the performances of event embeddings on the stock prediction. Sentiment is often discussed in predicting stock market, as positive or negative news can affect people's trading decision, which in turn influences the movement of stock market. In this study, we empirically show that event emotions are effective for improving the performance of stock prediction (+2.4%)." ], [ "Recent advances in computing power and NLP technology enables more accurate models of events with structures. Using open information extraction to obtain structured events representations, we find that the actor and object of events can be better captured BIBREF24. For example, a structured representation of the event above can be (Actor = Microsoft, Action = sues, Object = Barnes & Noble). They report improvements on stock market prediction using their structured representation instead of words as features.", "One disadvantage of structured representations of events is that they lead to increased sparsity, which potentially limits the predictive power. 
BIBREF4 ding2015deep propose to address this issue by representing structured events using event embeddings, which are dense vectors. The goal of event representation learning is that similar events should be embedded close to each other in the same vector space, and distinct events should be farther from each other.", "Previous work investigated compositional models for event embeddings. BIBREF2 granroth2016happens concatenate predicate and argument embeddings and feed them to a neural network to generate an event embedding. Event embeddings are further concatenated and fed through another neural network to predict the coherence between the events. Modi modi2016event encodes a set of events in a similar way and use that to incrementally predict the next event – first the argument, then the predicate and then next argument. BIBREF25 pichotta2016learning treat event prediction as a sequence to sequence problem and use RNN based models conditioned on event sequences in order to predict the next event. These three works all model narrative chains, that is, event sequences in which a single entity (the protagonist) participates in every event. BIBREF26 hu2017happens also apply an RNN approach, applying a new hierarchical LSTM model in order to predict events by generating descriptive word sequences. This line of work combines the words in these phrases by the passing the concatenation or addition of their word embeddings to a parameterized function that maps the summed vector into event embedding space. The additive nature of these models makes it difficult to model subtle differences in an event’s surface form.", "To address this issue, BIBREF4 ding2015deep, and BIBREF5 weber2018event propose tensor-based composition models, which combine the subject, predicate and object to produce the final event representation. The models capture multiplicative interactions between these elements and are thus able to make large shifts in event semantics with only small changes to the arguments.", "However, previous work mainly focuses on the nature of the event and lose sight of external commonsense knowledge, such as the intent and sentiment of event participants. This paper proposes to encode intent and sentiment into event embeddings, such that we can obtain a kind of more powerful event representations." ], [ "Understanding events requires effective representations that contain commonsense knowledge. High-quality event representations are valuable for many NLP downstream applications. This paper proposed a simple and effective framework to incorporate commonsense knowledge into the learning process of event embeddings. Experimental results on event similarity, script event prediction and stock prediction showed that commonsense knowledge enhanced event embeddings can improve the quality of event representations and benefit the downstream applications." ], [ "We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (SQ2018AAA010010), the National Key Research and Development Program of China (2018YFB1005103), the National Natural Science Foundation of China (NSFC) via Grant 61702137." 
] ], "section_name": [ "Introduction", "Commonsense Knowledge Enhanced Event Representations", "Commonsense Knowledge Enhanced Event Representations ::: Low-Rank Tensors for Event Embedding", "Commonsense Knowledge Enhanced Event Representations ::: Intent Embedding", "Commonsense Knowledge Enhanced Event Representations ::: Sentiment Embedding", "Commonsense Knowledge Enhanced Event Representations ::: Joint Event, Intent and Sentiment Embedding", "Experiments", "Experiments ::: Baselines", "Experiments ::: Event Similarity Evaluation ::: Hard Similarity Task", "Experiments ::: Event Similarity Evaluation ::: Transitive Sentence Similarity", "Experiments ::: Event Similarity Evaluation ::: Results", "Experiments ::: Event Similarity Evaluation ::: Case Study", "Experiments ::: Script Event Prediction", "Experiments ::: Stock Market Prediction", "Related Work", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "8b67a98a5a22bc3eb73bc2e32cb09ede0f905c44", "a5da5d44af41425b6f503279cabf8b3b27eed003", "df0441d9f840e74987c925f28428509015be528e" ], "answer": [ { "evidence": [ "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "extractive_spans": [ "SGNN" ], "free_form_answer": "", "highlighted_evidence": [ "As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings.", "In this section, we compare with several event-driven stock market prediction baseline methods: (1) Word, BIBREF23 luss2012predicting use bag-of-words represent news events for stock prediction; (2) Event, BIBREF24 ding-EtAl:2014:EMNLP2014 represent events by subject-predicate-object triples for stock prediction; (3) NTN, BIBREF4 ding2015deep learn continues event vectors for stock prediction; (4) KGEB, BIBREF18 ding2016knowledge incorporate knowledge graph into event vectors for stock prediction." ], "extractive_spans": [ "SGNN", "Word, BIBREF23", "Event, BIBREF24", "NTN, BIBREF4", "KGEB, BIBREF18 " ], "free_form_answer": "", "highlighted_evidence": [ "As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings.", "In this section, we compare with several event-driven stock market prediction baseline methods: (1) Word, BIBREF23 luss2012predicting use bag-of-words represent news events for stock prediction; (2) Event, BIBREF24 ding-EtAl:2014:EMNLP2014 represent events by subject-predicate-object triples for stock prediction; (3) NTN, BIBREF4 ding2015deep learn continues event vectors for stock prediction; (4) KGEB, BIBREF18 ding2016knowledge incorporate knowledge graph into event vectors for stock prediction." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We compare the performance of our approach against a variety of event embedding models developed in recent years. These models can be categorized into three groups:", "Averaging Baseline (Avg) This represents each event as the average of the constituent word vectors using pre-trained GloVe embeddings BIBREF8.", "Compositional Neural Network (Comp. NN) The event representation in this model is computed by feeding the concatenation of the subject, predicate, and object embedding into a two layer neural network BIBREF17, BIBREF3, BIBREF2.", "Element-wise Multiplicative Composition (EM Comp.) 
This method simply concatenates the element-wise multiplications between the verb and its subject/object.", "Neural Tensor Network This line of work use tensors to learn the interactions between the predicate and its subject/object BIBREF4, BIBREF5. According to the different usage of tensors, we have three baseline methods: Role Factor Tensor BIBREF5 which represents the predicate as a tensor, Predicate Tensor BIBREF5 which uses two tensors learning the interactions between the predicate and its subject, and the predicate and its object, respectively, NTN BIBREF4, which we used as the baseline event embedding model in this paper, and KGEB BIBREF18, which incorporates knowledge graph information in NTN." ], "extractive_spans": [ "Compositional Neural Network", "Element-wise Multiplicative Composition", "Neural Tensor Network" ], "free_form_answer": "", "highlighted_evidence": [ "These models can be categorized into three groups:\n\nAveraging Baseline (Avg) This represents each event as the average of the constituent word vectors using pre-trained GloVe embeddings BIBREF8.\n\nCompositional Neural Network (Comp. NN) The event representation in this model is computed by feeding the concatenation of the subject, predicate, and object embedding into a two layer neural network BIBREF17, BIBREF3, BIBREF2.\n\nElement-wise Multiplicative Composition (EM Comp.) This method simply concatenates the element-wise multiplications between the verb and its subject/object.\n\nNeural Tensor Network This line of work use tensors to learn the interactions between the predicate and its subject/object BIBREF4, BIBREF5. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "1e32cb1b434ea6bc5892f2a60c7775b864e85381", "50a38bf2879562d9ae5ed550ac28a301e6a3aa26", "861962918e878b9b620e9755fca08f0ecc2ddc7d" ], "answer": [ { "evidence": [ "BIBREF22 (BIBREF22) and BIBREF21 (BIBREF21) showed that script event prediction is a challenging problem, and even 1% of accuracy improvement is very difficult. Experimental results shown in Table TABREF31 demonstrate that we can achieve more than 1.5% improvements in single model comparison and more than 1.4% improvements in multi-model integration comparison, just by replacing the input embeddings, which confirms that better event understanding can lead to better inference results. An interesting result is that the event embeddings only incorporated with intents achieved the best result against other baselines. This confirms that capturing people's intents is helpful to infer their next plan. In addition, we notice that the event embeddings only incorporated with sentiment also achieve better performance than SGNN. This is mainly because the emotional consistency does also contribute to predicate the subsequent event." ], "extractive_spans": [ "accuracy" ], "free_form_answer": "", "highlighted_evidence": [ "BIBREF22 (BIBREF22) and BIBREF21 (BIBREF21) showed that script event prediction is a challenging problem, and even 1% of accuracy improvement is very difficult." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. 
As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "extractive_spans": [], "free_form_answer": "replacing the event embeddings on SGNN and running it on the MCNC dataset", "highlighted_evidence": [ "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2.", "As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "extractive_spans": [ "we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings" ], "free_form_answer": "", "highlighted_evidence": [ " As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "1677b99c1764c4e696fc88495884b7ac5f0e0091", "69b3c99b87b2e67909bf6fbc975c7adad8925a6d", "af22da7e21bc89d21e9f5b2f99c812cb5a725d72" ], "answer": [ { "evidence": [ "We also use ATOMIC BIBREF7 as the event sentiment labeled dataset. In this dataset, the sentiment of the event is labeled as words. For example, the sentiment of “PersonX broke vase” is labeled as “(sad, be regretful, feel sorry, afraid)”. We use SenticNet BIBREF14 to normalize these emotion words ($W=\\lbrace w_1, w_2, \\dots , w_n\\rbrace $) as the positive (labeled as 1) or the negative (labeled as -1) sentiment. The sentiment polarity of the event $P_e$ is dependent on the polarity of the labeled emotion words $P_W$: $P_e=1$, if $\\sum _i P_{w_i}>0$, or $P_e=-1$, if $\\sum _i P_{w_i}<0$. We use the softmax binary classifier to learn sentiment enhanced event embeddings. The input of the classifier is event embeddings, and the output is its sentiment polarity (positive or negative). The model is trained in a supervised manner by minimizing the cross entropy error of the sentiment classification, whose loss function is given below.", "Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on hard similarity small and big dataset, respectively. With better embeddings, we can achieve superior performances on script event prediction and stock market prediction compared to state-of-the-art baseline methods.", "Except for the hard similarity task, we also evaluate our approach on the transitive sentence similarity dataset BIBREF19, which contains 108 pairs of transitive sentences: short phrases containing a single subject, object and verb (e.g., agent sell property). 
It also has another dataset which consists of 200 sentence pairs. In this dataset, the sentences to be compared are constructed using the same subject and object and semantically correlated verbs, such as `spell’ and `write’; for example, `pupils write letters’ is compared with `pupils spell letters’. As this dataset is not suitable for our task, we only evaluate our approach and baselines on 108 sentence pairs.", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "extractive_spans": [ "ATOMIC", "hard similarity small and big dataset", "the transitive sentence similarity dataset", "the standard multiple choice narrative cloze (MCNC) dataset" ], "free_form_answer": "", "highlighted_evidence": [ "We also use ATOMIC BIBREF7 as the event sentiment labeled dataset.", "Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on hard similarity small and big dataset, respectively.", "Except for the hard similarity task, we also evaluate our approach on the transitive sentence similarity dataset BIBREF19, which contains 108 pairs of transitive sentences: short phrases containing a single subject, object and verb (e.g., agent sell property).", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We also use ATOMIC BIBREF7 as the event sentiment labeled dataset. In this dataset, the sentiment of the event is labeled as words. For example, the sentiment of “PersonX broke vase” is labeled as “(sad, be regretful, feel sorry, afraid)”. We use SenticNet BIBREF14 to normalize these emotion words ($W=\\lbrace w_1, w_2, \\dots , w_n\\rbrace $) as the positive (labeled as 1) or the negative (labeled as -1) sentiment. The sentiment polarity of the event $P_e$ is dependent on the polarity of the labeled emotion words $P_W$: $P_e=1$, if $\\sum _i P_{w_i}>0$, or $P_e=-1$, if $\\sum _i P_{w_i}<0$. We use the softmax binary classifier to learn sentiment enhanced event embeddings. The input of the classifier is event embeddings, and the output is its sentiment polarity (positive or negative). The model is trained in a supervised manner by minimizing the cross entropy error of the sentiment classification, whose loss function is given below.", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "extractive_spans": [ "ATOMIC ", "MCNC" ], "free_form_answer": "", "highlighted_evidence": [ "We also use ATOMIC BIBREF7 as the event sentiment labeled dataset. In this dataset, the sentiment of the event is labeled as words", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "One challenge for incorporating intents into event embeddings is that we should have a large-scale labeled dataset, which annotated the event and its actor's intents. Recently, BIBREF6 P18-1043 and BIBREF7 sap2018atomic released such valuable commonsense knowledge dataset (ATOMIC), which consists of 25,000 event phrases covering a diverse range of daily-life events and situations. For example, given an event “PersonX drinks coffee in the morning”, the dataset labels PersonX's likely intent is “PersonX wants to stay awake”.", "We use the New York Times Gigaword Corpus (LDC2007T07) for pre-training event embeddings. Event triples are extracted based on the Open Information Extraction technology BIBREF15. We initialize the word embedding layer with 100 dimensional pre-trained GloVe vectors BIBREF8, and fine-tune initialized word vectors during our model training. We use Adagrad BIBREF16 for optimizing the parameters with initial learning rate 0.001 and batch size 128.", "We first follow BIBREF5 (BIBREF5) evaluating our proposed approach on the hard similarity task. The goal of this task is that similar events should be close to each other in the same vector space, while dissimilar events should be far away with each other. To this end, BIBREF5 (BIBREF5) created two types of event pairs, one with events that should be close to each other but have very little lexical overlap (e.g., police catch robber / authorities apprehend suspect), and another with events that should be farther apart but have high overlap (e.g., police catch robber / police catch disease).", "The labeled dataset contains 230 event pairs (115 pairs each of similar and dissimilar types). Three different annotators were asked to give the similarity/dissimilarity rankings, of which only those the annotators agreed upon completely were kept. For each event representation learning method, we obtain the cosine similarity score of the pairs, and report the fraction of cases where the similar pair receives a higher cosine value than the dissimilar pair (we use Accuracy $\\in [0,1]$ denoting it). To evaluate the robustness of our approach, we extend this dataset to 1,000 event pairs (similar and dissimilar events each account for 50%), and we will release this dataset to the public.", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2. As SGNN proposed by BIBREF21 (BIBREF21) achieved state-of-the-art performances for this task, we use the framework of SGNN, and only replace their input event embeddings with our intent and sentiment-enhanced event embeddings." ], "extractive_spans": [], "free_form_answer": "ATOMIC, New York Times Gigaword, an unreleased extension of the dataset by BIBREF5, MCNC", "highlighted_evidence": [ "One challenge for incorporating intents into event embeddings is that we should have a large-scale labeled dataset, which annotated the event and its actor's intents. 
Recently, BIBREF6 P18-1043 and BIBREF7 sap2018atomic released such valuable commonsense knowledge dataset (ATOMIC), which consists of 25,000 event phrases covering a diverse range of daily-life events and situations.", "We use the New York Times Gigaword Corpus (LDC2007T07) for pre-training event embeddings.", "To this end, BIBREF5 (BIBREF5) created two types of event pairs, one with events that should be close to each other but have very little lexical overlap (e.g., police catch robber / authorities apprehend suspect), and another with events that should be farther apart but have high overlap (e.g., police catch robber / police catch disease).", "To evaluate the robustness of our approach, we extend this dataset to 1,000 event pairs (similar and dissimilar events each account for 50%), and we will release this dataset to the public.", "Following BIBREF21 (BIBREF21), we evaluate on the standard multiple choice narrative cloze (MCNC) dataset BIBREF2." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ], "nlp_background": [ "two", "two", "two" ], "paper_read": [ "no", "no", "no" ], "question": [ "What is the machine learning method used to make the predictions?", "How is the event prediction task evaluated?", "What are the datasets used in the paper?" ], "question_id": [ "0f1f81b6d4aa0da38b4cc8b060926e7df61bb646", "ec62df859ad901bf0848f0a8b91eedc78dba5657", "ccec4f8deff651858f44553f8daa5a19e8ed8d3b" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "stock market", "stock market", "stock market" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 1: Intent and sentiment enhanced event embeddings can distinguish distinct events even with high lexical overlap, and find similar events even with low lexical overlap.", "Figure 2: Architecture of the joint embedding model. eventneg refers to the corrupted event tuple, which is derived by replacing each word of the event object with a random word in our dictionary. intentneg is the incorrect intent for the given event, which is randomly selected from the annotated dataset.", "Figure 4: An illustration of low-rank neural tensor network for learning event embeddings.", "Figure 3: Baseline event-embedding model.", "Table 1: Experimental results on hard similarity dataset and transitive sentence similarity dataset. The small dataset (230 event pairs) of hard similarity task from Weber et al. (2018), and the big dataset (2,000 event pairs) is annotated by us. The best results are in bold.", "Table 2: Case study of the cosine similarity score changes with incorporating the intent and sentiment. oScore is the original cosine similarity score without intent and sentiment, and mScore is the modified cosine similarity score with intent and sentiment.", "Table 3: Results of script event prediction on the test set. The improvement is significant at p < 0.05. Acc is short for Accuracy.", "Figure 5: Experimental results on S&P 500 index prediction. “+Int” means that we encode the intent information into the original event embeddings." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Figure4-1.png", "3-Figure3-1.png", "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "7-Figure5-1.png" ] }
[ "How is the event prediction task evaluated?", "What are the datasets used in the paper?" ]
[ [ "1909.05190-Experiments ::: Script Event Prediction-1", "1909.05190-Experiments ::: Script Event Prediction-2" ], [ "1909.05190-Experiments ::: Script Event Prediction-1", "1909.05190-Commonsense Knowledge Enhanced Event Representations ::: Intent Embedding-1", "1909.05190-Commonsense Knowledge Enhanced Event Representations ::: Joint Event, Intent and Sentiment Embedding-2", "1909.05190-Experiments ::: Event Similarity Evaluation ::: Hard Similarity Task-1", "1909.05190-Experiments ::: Event Similarity Evaluation ::: Transitive Sentence Similarity-0", "1909.05190-Introduction-5", "1909.05190-Commonsense Knowledge Enhanced Event Representations ::: Sentiment Embedding-1", "1909.05190-Experiments ::: Event Similarity Evaluation ::: Hard Similarity Task-0" ] ]
[ "replacing the event embeddings on SGNN and running it on the MCNC dataset", "ATOMIC, New York Times Gigaword, an unreleased extension of the dataset by BIBREF5, MCNC" ]
76
1606.02601
A Joint Model for Word Embedding and Word Morphology
This paper presents a joint model for performing unsupervised morphological analysis on words, and learning a character-level composition function from morphemes to word embeddings. Our model splits individual words into segments, and weights each segment according to its ability to predict context words. Our morphological analysis is comparable to dedicated morphological analyzers at the task of morpheme boundary recovery, and also performs better than word-based embedding models at the task of syntactic analogy answering. Finally, we show that incorporating morphology explicitly into character-level models helps them produce embeddings for unseen words which correlate better with human judgments.
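As a rough illustration of the mechanism the abstract describes — splitting a word into segments and weighting each segment by its context-predictive power — the sketch below scores candidate segment representations, softmax-normalizes the scores, and returns their weighted sum as the word vector. The scoring function and dimensions are invented for the example; in the model described below they are learned with bidirectional LSTMs and an attention layer.

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)           # shift for numerical stability
    e = np.exp(x)
    return e / e.sum()

def compose_word_vector(segment_vectors, score):
    """segment_vectors: (n_segments, d) array, one vector per candidate split;
    score: maps a segment vector to a scalar 'context-predictive' score."""
    weights = softmax(np.array([score(h) for h in segment_vectors]))
    return weights @ segment_vectors        # weighted sum, shape (d,)

# Toy usage: 4 candidate splits of a word, 8-d hidden states, and a random
# linear scorer standing in for the learned attention.
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 8))
v = rng.normal(size=8)
word_vec = compose_word_vector(H, score=lambda h: float(v @ np.tanh(h)))
print(word_vec.shape)   # (8,)
```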
{ "paragraphs": [ [ "Word embedding models associate each word in a corpus with a vector in a semantic space. These vectors can either be learnt to optimize performance in a downstream task BIBREF0 , BIBREF1 or learnt via the distributional hypothesis: words with similar contexts have similar meanings BIBREF2 , BIBREF3 . Current word embedding models treat words as atomic. However, words follow a power law distribution BIBREF4 , and word embedding models suffer from the problem of sparsity: a word like `unbelievableness' does not appear at all in the first 17 million words of Wikipedia, even though it is derived from common morphemes. This leads to three problems:", "One approach to smooth word distributions is to operate on the smallest meaningful semantic unit, the morpheme BIBREF6 , BIBREF7 . However, previous work on the morpheme level has all used external morphological analyzers. These require a separate pre-processing step, and cannot be adapted to suit the problem at hand.", "Another is to operate on the smallest orthographic unit, the character BIBREF8 , BIBREF9 . However, the link between shape and meaning is often complicated BIBREF10 , as alphabetic characters carry no inherent semantic meaning. To account for this, the model has to learn complicated dependencies between strings of characters to accurately capture word meaning. We hypothesize that explicitly introducing morphology into character-level models can help them learn morphological features, and hence word meaning.", "In this paper, we introduce a word embedding model that jointly learns word morphology and word embeddings. To the best of our knowledge, this is the first word embedding model that learns morphology as part of the model. Our guiding intuition is that the words with the same stem have similar contexts. Thus, when considering word segments in terms of context-predictive power, the segment corresponding to the stem will have the most weight.", "Our model `reads' the word and outputs a sequence of word segments. We weight each segment, and then combine the segments to obtain the final word representation. These representations are trained to predict context words, as this has been shown to give word representations which capture word semantics well BIBREF11 . As the root morpheme has the most context-predictive power, we expect our model to assign high weight to this segment, thereby learning to separate root+affix structures.", "One exciting feature of character-level models is their ability to represent open-vocabulary words. After training, they can predict a vector for any word, not just words that they have seen before. Our model has an advantage in that it can split unknown words into known and unknown components. Hence, it can potentially generalise better over seen morphemes and words and apply existing knowledge to new cases.", "To evaluate our model, we evaluate its use as a morphological analyzer (§ \"Morphological awareness\" ), test how well it learns word semantics, including for unseen words (§ \"Capturing semantic similarity\" ), and examine the structure of the embedding space (§ \"Capturing syntactic and semantic regularity\" )." ], [ "While words are often treated as the fundamental unit of language, they are in fact themselves compositional. The smallest unit of semantics is the morpheme, while the smallest unit of orthography is the grapheme, or character. Both have been used as a method to go beyond word-level models." 
], [ "As word semantics is compositional, one might ask whether it is possible to learn morpheme representations, and compose them to obtain good word representations. Lazaridou et al. lazaridou demonstrated precisely this: one can derive good representations of morphemes distributionally, and apply tools from compositional distributional semantics to obtain good word representations. Luong et al. luong also trained a morphological composition model based on recursive neural networks. Botha and Blunsom Botha2014 built a language model incorporating morphemes, and demonstrated improvements in language modelling and in machine translation. All of these approaches incorporated external morphological knowledge, either in the form of gold standard morphological analyses such as CELEX BIBREF12 or an external morphological analyzer such as Morfessor BIBREF13 .", "Unsupervised morphology induction aims to decide whether two words are morphologically related or to generate a morphological analysis for a word BIBREF14 , BIBREF15 . While they may use semantic insights to perform the morphological analysis BIBREF16 , they typically are not concerned with obtaining a semantic representation for morphemes, nor of the resulting word." ], [ "Another approach to go beyond words is based on on character-level neural network models. Both recurrent and convolutional architectures for deriving word representations from characters have been used, and results in downstream tasks such as language modelling and POS tagging have been promising, with reductions in word perplexity for language modelling and state-of-the-art English POS tagging accuracy BIBREF8 , BIBREF9 . Ballesteros et al. ballesteros train a character-level model for parsing. Zhang et al. zhang do away with words completely, and train a convolutional neural network to do text classification directly from characters.", "Excitingly, character-level models seem to capture morphological effects. Examining nearest neighbours of morphologically complex words in character-aware models often shows other words with the same morphology BIBREF8 , BIBREF9 . Furthermore, morphosyntactic features such as capitalization and suffix information have long been used in tasks such as POS tagging BIBREF17 , BIBREF18 . By explicitly modelling these features, one might expect good performance gains in many NLP tasks.", "What is less clear is how well these models learn word semantics. Classical word embedding models seem to capture word semantics, and the nearest neighbours of a given word are typically semantically related words BIBREF3 , BIBREF19 . In addition, the correlation between model word similarity scores and human similarity judgments is typically high BIBREF20 . However, no previous work (to our knowledge) evaluates the similarity judgments of character-level models against human annotators." ], [ "We hypothesize that by incorporating morphological knowledge directly into a character-level model, one can improve the ability of character-level models to learn compositional word semantics. In addition, we hypothesize that incorporating morphological knowledge helps structure the embedding space in such a way that affixation corresponds to a regular shift in the embedding space. We test both hypotheses directly in § \"Capturing semantic similarity\" and § \"Capturing syntactic and semantic regularity\" respectively.", "The starting point for our model is the skip-gram with negative sampling (SGNS) objective of Mikolov et al. word2vec2. 
For a vocabulary $V$ of size $|V|$ and embedding size $N$ , SGNS learns two embedding tables $W, C \\in \\mathbb {R}^{N \\times |V|}$ , the target and context vectors. Every time a word $w$ is seen in the corpus with a context word $c$ , the tables are updated to maximize ", "$$\\log \\sigma (w \\cdot c) + \\sum _{i = 1}^{k} \\mathbb {E}_{\\tilde{c}_i \\sim P(w)} [\\log \\sigma (-w \\cdot \\tilde{c}_i)]$$ (Eq. 7) ", "where $P(w)$ is a noise distribution from which we draw $k$ negative samples. In the end, the target vector for a word $w$ should have high inner product with context vectors for words with which it is typically seen, and low inner products with context vectors for words it is not typically seen with. Figure 1 illustrates this for a particular example. In Mikolov et al. word2vec2, the noise distribution $P(w)$ is proportional to the unigram probability of a word raised to the 3/4th power BIBREF11 .", "Our innovation is to replace $W$ with a trainable function $f$ that accepts a sequence of characters and returns a vector of length $N$ (i.e. $f: A^{<\\omega } \\rightarrow \\mathbb {R}^N$ , where $A$ is the alphabet we are considering and $A^{<\\omega }$ denotes the finite length strings over the alphabet $A$ ). We still keep the table of context embeddings $C$ , and our model objective is still to minimize ", "$$\\log \\sigma (f(w) \\cdot c) + \\sum _{i = 1}^{k} \\mathbb {E}_{\\tilde{c}_i \\sim P(w)} [\\log \\sigma (-f(w) \\cdot \\tilde{c}_i)]$$ (Eq. 8) ", "where we now treat $w$ as a sequence of characters. After training, $f$ can be used to produce an embedding for any sequence of characters, even if it was not previously seen in training.", "The process of calculating $f$ on a word is illustrated in Figure 2 . We first pad the word with beginning and end of word tokens, and then pass the characters of the word into a character lookup table. As the link between characters and morphemes is non-compositional and requires essentially memorizing a sequence of characters, we use LSTMs BIBREF21 to encode the letters in the word, as they have been shown to capture non-local and non-linear dependencies. We run a forward and a backward LSTM over the character embeddings. The forward LSTM reads the beginning of word symbol, but not the end of word symbol, and the backward LSTM reads the end of word symbol but not the beginning of word symbol. This is necessary to align the resulting embeddings, so that the LSTM hidden states taken together correspond to a partition of the word into two without overlap.", "The LSTMs output two sequences of vectors $h_0^{f}, \\dots , h_n^f$ and $h_n^{b}, \\dots , h_0^b$ . We then concatenate the resulting vectors, and pass them through a shared feed-forward layer to obtain a final sequence of vectors $h_i$ . Each vector corresponds to two half-words: one half read by the forward LSTM, and the other by the backward LSTM.", "We then learn an attention model over these hidden states: given a hidden state $h_i$ , we calculate a weight $\\alpha _i = a(h_i)$ such that $\\sum \\alpha _i = 1$ , and then calculate the resulting vector for the word $w$ as $f(w) = \\sum \\alpha _i h_i$ . Following Bahdanau et al. bahdanau, we calculate $a$ as ", "$$a(h_i) = \\frac{\\exp (v^{T} \\tanh (Wh_i))}{\\sum _j \\exp (v^{T} \\tanh (Wh_j))}$$ (Eq. 10) ", "i.e. a softmax over the hidden states." ], [ "Previous work on bidirectional LSTM character-level models used both LSTMs to read the entire word BIBREF8 , BIBREF22 . 
This can lead to redundancy, as both LSTMs are used to capture the full word. In contrast, our model is capable of splitting the words and optimizing the two LSTMs for modelling different halves. This means one of the LSTMs can specialize on word prefixes and roots, while the other memorizes possible suffixes. In addition, when dealing with an unknown word, it can be split into known and unknown components. The model can then use the semantic knowledge it has learnt for a known component to predict a representation for the unknown word as a whole.", "We hypothesize that the natural place to split words is on morpheme boundaries, as morphemes are the smallest unit of language which carry semantic meaning. We test the splitting capabilities of our model in § \"Morphological awareness\" ." ], [ "We evaluate our model on three tasks: morphological analysis (§ \"Morphological awareness\" ), semantic similarity (§ \"Capturing semantic similarity\" ), and analogy retrieval (§ \"Capturing syntactic and semantic regularity\" ). We trained all of the models once, and then use the same trained model for all three tasks – we do not perform hyperparameter tuning to optimize performance on each task.", "We trained our Char2Vec model on the Text8 corpus, consisting of the first 100MB of a 2006 cleaned-up dump of Wikipedia. We only trained on words which appeared more than 5 times in our corpus. We used a context window size of 3 words either side of the target word, and took 11 negative samples per positive sample, using the same smoothed unigram distribution as word2vec. The model was trained for 3 epochs using the Adam optimizer BIBREF23 . All experiments were carried out using Keras BIBREF24 and Theano BIBREF25 , BIBREF26 . We initialized the context lookup table using word2vec, and kept it fixed during training. In all character-level models, the character embeddings have dimension $d_C = 64$ , while the forward and backward LSTMs have dimension $d_{LSTM} = 256$ . The concatenation of both therefore has dimensionality $d = 512$ . The concatenated LSTM hidden states are then compressed down to $d_{word} = 256$ by a feed-forward layer.", "As baselines, we trained a SGNS model on the same dataset with the same parameters. To test how much the attention model helps the character-level model to generalize, we also trained the Char2Vec model without the attention layer, but with the same parameters. In this model, the word embeddings are just the concatenation of the final forward and backward states, passed through a feedforward layer. We refer to this model as C2V-NO-ATT. We also constructed count-based vectors using SVD on PPMI-weighted co-occurence counts, with a window size of 3. We kept the top 256 principal components in the SVD decomposition, to obtain embeddings with the same size as our other models." ], [ "The main innovation of our Char2Vec model compared to existing recurrent character-level models is the capability to split words and model each half independently. Here we test whether our model segmentations correspond to gold-standard morphological analyses.", "We obtained morphological analyses for all the words in our training vocabulary which were in the English Lexicon Project BIBREF27 . We then converted these into surface-level segmentations using heuristic affix-matching, and used this as a gold-standard morphemic analysis. 
We ended up with 14682 words, of which 7867 have at least two morphemes and 1138 have at least three.", "Evaluating morphological segmentation is a long-debated issue BIBREF28 . Traditional hard morphological analyzers are normally evaluated on border $F_1$ – that is, how many morpheme borders are recovered. However, our model does not actually posit any hard morpheme borders. Instead, it just associates each character boundary with a weight. Therefore, we treat the problem of recovering intra-word morpheme boundaries as a ranking problem. We rank each inter-character boundary of a word according to our model weights, and then evaluate whether our model ranks morpheme boundaries above non-morpheme boundaries.", "We use mean average precision (MAP) as our evaluation metric. We first calculate precision at $N$ for each word, until all the gold standard morpheme boundaries have been recovered. Then, we average over $N$ to obtain the average precision (AP) for that word. We then calculate the mean of the APs across all words to obtain the MAP for the model.", "We report results of a random baseline as a point of comparison, which randomly places morpheme boundaries inside the word. We also report the results of the Porter stemmer, where we place a morpheme boundary at the end of the stem, then randomly thereafter.", "Finally, we trained Morfessor 2.0 BIBREF13 on our corpus, using an initial random split value of 0.9, and stopping training when the difference in loss between successive epochs is less than 0.1% of the total loss. We then used our trained Morfessor model to predict morpheme boundaries, and randomly permuted the morpheme boundaries and ranked them ahead of randomly permuted non-morpheme boundaries to calculate MAP.", "As the test set is dominated by words with simple morphology, we also extracted all the morphologically rich words with 3 or more morphemes, and created a separate evaluation on this subsection. We report the results in Table 1 .", "As the results show, our model performs the best out of all the methods at analysing morphologically rich words with multiple morphemes. On these words, our model even outperforms Morfessor, which is explicitly designed as a morphological analyzer. This shows that our model learns splits which correspond well to human morphological analysis, even though we build no morphological knowledge into our model. However, when evaluating on all words, the Porter stemmer has a great advantage, as it is rule-based and able to give just the stem of words with great precision, which is effectively giving a canonical segmentation for words with just 2 morphemes.", "We show some model analyses against the gold standard in Table 2 ." ], [ "Next, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 .", "We use the WordSim353 dataset BIBREF29 , the test split of the MEN dataset BIBREF30 , and the Rare Word (RW) dataset BIBREF31 . The word pairs in the WordSim353 and MEN datasets are typically simple, commonly occurring words denoting basic concepts, whereas the RW dataset contains many morphologically derived words which have low corpus frequencies. 
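A small sketch of the word-similarity evaluation just described: score each word pair by model cosine similarity and correlate the scores with the human judgments using Spearman's ρ (here via scipy, with made-up vectors and gold scores standing in for a trained model and a real benchmark).

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_correlation(pairs, gold_scores, embed):
    """pairs: list of (word1, word2); gold_scores: averaged human judgments;
    embed: maps a word to its vector."""
    model_scores = [cosine(embed(w1), embed(w2)) for w1, w2 in pairs]
    rho, _pvalue = spearmanr(model_scores, gold_scores)
    return rho

# Toy usage with random 50-d vectors in place of trained embeddings.
rng = np.random.default_rng(2)
_cache = {}
embed = lambda w: _cache.setdefault(w, rng.normal(size=50))
pairs = [("car", "automobile"), ("car", "banana"), ("king", "queen")]
print(similarity_correlation(pairs, gold_scores=[9.0, 1.5, 8.0], embed=embed))
```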
This is reflected by how many of the test pairs in each dataset contain out of vocabulary (OOV) items: 3/353 and 6/1000 of the word pairs in WordSim353 and MEN, compared with 1083/2034 for the RW dataset.", "We report results for in-corpus word pairs in Table 3 , and for all word pairs for those models able to predict vectors for unseen words in Table 4 .", "Overall, word-based embedding models learn vectors that correlate better with human judgments, particularly for morphologically simple words. However, character-based models are competitive with word-based models on the RW dataset. While the words in this dataset appear rarely in our corpus (of the in-corpus words, over half appear fewer than 100 times), each morpheme may be common, and the character-level models can use this information. We note that on the entire RW dataset (of which over half contain an OOV word), the character-based models still perform reasonably. We also note that on word pairs in the RW test containing at least one OOV word, the full Char2Vec model outperforms the C2V model without morphology. This suggests that character-based embedding models are learning to morphologically analyse complex word forms, even on unseen words, and that giving the model the capability to learn word segments independently helps this process.", "We also present some word nearest neighbours for our Char2Vec model in Table 5 , both on the whole vocabulary and then filtering the nearest neighbours to only include words which appear 100 times or more in our corpus. This corresponds to keeping the top 10k words, which is common among language models BIBREF8 , BIBREF9 . We note that nearest neighbour predictions include words that are orthographically distant but semantically similar, showing that our model has the capability to learn to compose characters into word meanings.", "We also note that word nearest neighbours seem to be more semantically coherent when rarely-observed words are filtered out of the vocabulary, and more based on orthographic overlap when the entire vocabulary is included. This suggests that for rarely-observed words, the model is basing its predictions on orthographic analysis, whereas for more commonly observed words it can `memorize' the mapping between the orthography and word semantics." ], [ "Finally, we evaluate the structure of the embedding space of our various models. In particular, we test whether affixation corresponds to regular linear shifts in the embedding space.", "To do this, we use the Google analogy dataset BIBREF3 . This consists of 19544 questions of the form “A is to B as C is to X”. We split this collection into semantic and syntactic sections, based on whether the analogies between the words are driven by morphological changes or deeper semantic shifts. Example semantic questions are on capital-country relationships (“Paris is to France as Berlin is to X) and currency-country relationships. Example syntactic questions are adjective-adverb relationships (“amazing is to amazingly as apparent is to X”) and opposites formed by prefixing a negation particle (“acceptable is to unacceptable as aware is to X”). This results in 5537 semantic analogies and 10411 syntactic analogies.", "We use the method of Mikolov et al. word2vec1 to answer these questions. We first $\\ell _2$ -normalize all of our word vectors. 
Then, to answer a question of the form “A is to B as C is to X”, we find the word $w$ which satisfies ", "$$w = \\operatornamewithlimits{argmax}_{w \\in V - \\lbrace a, b, c\\rbrace } \\cos (w, b - a + c)$$ (Eq. 28) ", "where $a,\\, b,\\, c$ are the word vectors for the words A, B and C respectively.", "We report the results in Table 6 . The most intriguing result is that character-level models are competitive with word-level models for syntactic analogy, with our Char2Vec model holding the best result for syntactic analogy answering. This suggests that incorporating morphological knowledge explicitly rather than latently helps the model learn morphological features. However, on the semantic analogies, the character-based models do much worse than the word-based models. This is perhaps unsurprising in light of the previous section, where we demonstrate that character-based models do worse at the semantic similarity task than word-level models." ], [ "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes.", "This is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology. In addition, it seems to learn morphosyntactic features to help solve the syntactic analogy task. Most of all, it is language-agnostic, and easy to port across different languages. We thus expect our model to perform even better for languages with a richer morphology than English, such as Turkish and German." ], [ "In this paper, we present a model which learns morphology and word embeddings jointly. Given a word, it splits the word in to segments and ranks the segments based on their context-predictive power. Our model can segment words into morphemes, and also embed the word into a representation space.", "We show that our model is competitive at the task of morpheme boundary recovery compared to a dedicated morphological analyzer, beating dedicated analyzers on words with a rich morphology. We also show that in the representation space word affixation corresponds to linear shifts, demonstrating that our model can learn morphological features.", "Finally, we show that character-level models, while outperformed by word-level models generally at the task of semantic similarity, are competitive at representing rare morphologically rich words. In addition, the character-level models can predict good quality representations for unseen words, with the morphologically aware character-level model doing slightly better." ] ], "section_name": [ "Introduction", "Related Work", "Morphemic analysis and semantics", "Character-level models", "The Char2Vec model", "Capturing morphology via attention", "Experiments", "Morphological awareness", "Capturing semantic similarity", "Capturing syntactic and semantic regularity", "Discussion", "Conclusion" ] }
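Looking back at the analogy method in the Experiments section above, the prediction rule is an argmax over cosine similarity to b − a + c, excluding the three query words. The sketch below assumes ℓ2-normalized vectors stored in a word-to-vector dict; all names and the toy vocabulary are illustrative, not the authors' implementation.

```python
import numpy as np

def answer_analogy(a, b, c, vectors):
    """Return the word X completing 'A is to B as C is to X'.
    vectors: dict mapping words to l2-normalized numpy arrays."""
    target = vectors[b] - vectors[a] + vectors[c]
    best_word, best_score = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):                  # exclude the query words
            continue
        # Proportional to cosine, since every word vector has unit length.
        score = float(np.dot(vec, target))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy usage: random unit vectors stand in for trained, l2-normalized embeddings.
rng = np.random.default_rng(3)
vectors = {}
for w in ["paris", "france", "berlin", "germany", "tokyo"]:
    v = rng.normal(size=64)
    vectors[w] = v / np.linalg.norm(v)
print(answer_analogy("paris", "france", "berlin", vectors))
```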
{ "answers": [ { "annotation_id": [ "2f79025a7c78ad2161d5f2208d237bfe02fbd46a", "5356318478338d8a153a4b6594da38054e9a7b23" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Results at retrieving intra-word morpheme boundaries." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results at retrieving intra-word morpheme boundaries." ], "unanswerable": false, "yes_no": false }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no": false } ], "worker_id": [ "0bef3d73d87da2395d3e86b22aef3d3bc5a05057", "5c4ed7751ffe32c903ca9e8c7eee49ba66e4336b" ] }, { "annotation_id": [ "17d6e12f71e32614d5e4857d2e5e302a114b362b", "661dc949661c6d352bce6540b8eca10a5270ac76", "db425b513e03139fbf9fa4ec6ecfc49497e024d2" ], "answer": [ { "evidence": [ "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes.", "This is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology. In addition, it seems to learn morphosyntactic features to help solve the syntactic analogy task. Most of all, it is language-agnostic, and easy to port across different languages. We thus expect our model to perform even better for languages with a richer morphology than English, such as Turkish and German." ], "extractive_spans": [], "free_form_answer": "They did not report results for English but expect that morphologically complex languages will perform better.", "highlighted_evidence": [ "We only report results for English.", "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation.", "This is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology.", "We thus expect our model to perform even better for languages with a richer morphology than English, such as Turkish and German." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "This is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology. In addition, it seems to learn morphosyntactic features to help solve the syntactic analogy task. Most of all, it is language-agnostic, and easy to port across different languages. We thus expect our model to perform even better for languages with a richer morphology than English, such as Turkish and German." 
], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "This is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology." ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "0bef3d73d87da2395d3e86b22aef3d3bc5a05057", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "8bbf73085923ab9804349bd5ec1c51469b3928bd" ] }, { "annotation_id": [ "3b4ed4c52f56cc64b9627614abee058e07188bba", "d2dacf31ba045f35ca309853cd3e30f975fce358", "d9bf406879dc22f02706e4d6dc167a949495d06a", "bc0dc0009e68ca2433e7da5b17c048dc0c9f9d5e" ], "answer": [ { "evidence": [ "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes." ], "extractive_spans": [ "English" ], "free_form_answer": "", "highlighted_evidence": [ "We only report results for English." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Another approach to go beyond words is based on on character-level neural network models. Both recurrent and convolutional architectures for deriving word representations from characters have been used, and results in downstream tasks such as language modelling and POS tagging have been promising, with reductions in word perplexity for language modelling and state-of-the-art English POS tagging accuracy BIBREF8 , BIBREF9 . Ballesteros et al. ballesteros train a character-level model for parsing. Zhang et al. zhang do away with words completely, and train a convolutional neural network to do text classification directly from characters.", "Excitingly, character-level models seem to capture morphological effects. Examining nearest neighbours of morphologically complex words in character-aware models often shows other words with the same morphology BIBREF8 , BIBREF9 . Furthermore, morphosyntactic features such as capitalization and suffix information have long been used in tasks such as POS tagging BIBREF17 , BIBREF18 . By explicitly modelling these features, one might expect good performance gains in many NLP tasks.", "What is less clear is how well these models learn word semantics. Classical word embedding models seem to capture word semantics, and the nearest neighbours of a given word are typically semantically related words BIBREF3 , BIBREF19 . In addition, the correlation between model word similarity scores and human similarity judgments is typically high BIBREF20 . However, no previous work (to our knowledge) evaluates the similarity judgments of character-level models against human annotators.", "The Char2Vec model", "We hypothesize that by incorporating morphological knowledge directly into a character-level model, one can improve the ability of character-level models to learn compositional word semantics. In addition, we hypothesize that incorporating morphological knowledge helps structure the embedding space in such a way that affixation corresponds to a regular shift in the embedding space. 
We test both hypotheses directly in § \"Capturing semantic similarity\" and § \"Capturing syntactic and semantic regularity\" respectively.", "The starting point for our model is the skip-gram with negative sampling (SGNS) objective of Mikolov et al. word2vec2. For a vocabulary $V$ of size $|V|$ and embedding size $N$ , SGNS learns two embedding tables $W, C \\in \\mathbb {R}^{N \\times |V|}$ , the target and context vectors. Every time a word $w$ is seen in the corpus with a context word $c$ , the tables are updated to maximize", "$$\\log \\sigma (w \\cdot c) + \\sum _{i = 1}^{k} \\mathbb {E}_{\\tilde{c}_i \\sim P(w)} [\\log \\sigma (-w \\cdot \\tilde{c}_i)]$$ (Eq. 7)", "where $P(w)$ is a noise distribution from which we draw $k$ negative samples. In the end, the target vector for a word $w$ should have high inner product with context vectors for words with which it is typically seen, and low inner products with context vectors for words it is not typically seen with. Figure 1 illustrates this for a particular example. In Mikolov et al. word2vec2, the noise distribution $P(w)$ is proportional to the unigram probability of a word raised to the 3/4th power BIBREF11 .", "Our innovation is to replace $W$ with a trainable function $f$ that accepts a sequence of characters and returns a vector of length $N$ (i.e. $f: A^{<\\omega } \\rightarrow \\mathbb {R}^N$ , where $A$ is the alphabet we are considering and $A^{<\\omega }$ denotes the finite length strings over the alphabet $A$ ). We still keep the table of context embeddings $C$ , and our model objective is still to minimize", "$$\\log \\sigma (f(w) \\cdot c) + \\sum _{i = 1}^{k} \\mathbb {E}_{\\tilde{c}_i \\sim P(w)} [\\log \\sigma (-f(w) \\cdot \\tilde{c}_i)]$$ (Eq. 8)", "where we now treat $w$ as a sequence of characters. After training, $f$ can be used to produce an embedding for any sequence of characters, even if it was not previously seen in training.", "The process of calculating $f$ on a word is illustrated in Figure 2 . We first pad the word with beginning and end of word tokens, and then pass the characters of the word into a character lookup table. As the link between characters and morphemes is non-compositional and requires essentially memorizing a sequence of characters, we use LSTMs BIBREF21 to encode the letters in the word, as they have been shown to capture non-local and non-linear dependencies. We run a forward and a backward LSTM over the character embeddings. The forward LSTM reads the beginning of word symbol, but not the end of word symbol, and the backward LSTM reads the end of word symbol but not the beginning of word symbol. This is necessary to align the resulting embeddings, so that the LSTM hidden states taken together correspond to a partition of the word into two without overlap.", "The LSTMs output two sequences of vectors $h_0^{f}, \\dots , h_n^f$ and $h_n^{b}, \\dots , h_0^b$ . We then concatenate the resulting vectors, and pass them through a shared feed-forward layer to obtain a final sequence of vectors $h_i$ . Each vector corresponds to two half-words: one half read by the forward LSTM, and the other by the backward LSTM.", "We then learn an attention model over these hidden states: given a hidden state $h_i$ , we calculate a weight $\\alpha _i = a(h_i)$ such that $\\sum \\alpha _i = 1$ , and then calculate the resulting vector for the word $w$ as $f(w) = \\sum \\alpha _i h_i$ . Following Bahdanau et al. 
bahdanau, we calculate $a$ as", "$$a(h_i) = \\frac{\\exp (v^{T} \\tanh (Wh_i))}{\\sum _j \\exp (v^{T} \\tanh (Wh_j))}$$ (Eq. 10)", "i.e. a softmax over the hidden states.", "Capturing morphology via attention", "Previous work on bidirectional LSTM character-level models used both LSTMs to read the entire word BIBREF8 , BIBREF22 . This can lead to redundancy, as both LSTMs are used to capture the full word. In contrast, our model is capable of splitting the words and optimizing the two LSTMs for modelling different halves. This means one of the LSTMs can specialize on word prefixes and roots, while the other memorizes possible suffixes. In addition, when dealing with an unknown word, it can be split into known and unknown components. The model can then use the semantic knowledge it has learnt for a known component to predict a representation for the unknown word as a whole.", "We hypothesize that the natural place to split words is on morpheme boundaries, as morphemes are the smallest unit of language which carry semantic meaning. We test the splitting capabilities of our model in § \"Morphological awareness\" .", "Experiments", "We evaluate our model on three tasks: morphological analysis (§ \"Morphological awareness\" ), semantic similarity (§ \"Capturing semantic similarity\" ), and analogy retrieval (§ \"Capturing syntactic and semantic regularity\" ). We trained all of the models once, and then use the same trained model for all three tasks – we do not perform hyperparameter tuning to optimize performance on each task.", "We trained our Char2Vec model on the Text8 corpus, consisting of the first 100MB of a 2006 cleaned-up dump of Wikipedia. We only trained on words which appeared more than 5 times in our corpus. We used a context window size of 3 words either side of the target word, and took 11 negative samples per positive sample, using the same smoothed unigram distribution as word2vec. The model was trained for 3 epochs using the Adam optimizer BIBREF23 . All experiments were carried out using Keras BIBREF24 and Theano BIBREF25 , BIBREF26 . We initialized the context lookup table using word2vec, and kept it fixed during training. In all character-level models, the character embeddings have dimension $d_C = 64$ , while the forward and backward LSTMs have dimension $d_{LSTM} = 256$ . The concatenation of both therefore has dimensionality $d = 512$ . The concatenated LSTM hidden states are then compressed down to $d_{word} = 256$ by a feed-forward layer.", "As baselines, we trained a SGNS model on the same dataset with the same parameters. To test how much the attention model helps the character-level model to generalize, we also trained the Char2Vec model without the attention layer, but with the same parameters. In this model, the word embeddings are just the concatenation of the final forward and backward states, passed through a feedforward layer. We refer to this model as C2V-NO-ATT. We also constructed count-based vectors using SVD on PPMI-weighted co-occurence counts, with a window size of 3. 
We kept the top 256 principal components in the SVD decomposition, to obtain embeddings with the same size as our other models.", "To evaluate our model, we evaluate its use as a morphological analyzer (§ \"Morphological awareness\" ), test how well it learns word semantics, including for unseen words (§ \"Capturing semantic similarity\" ), and examine the structure of the embedding space (§ \"Capturing syntactic and semantic regularity\" ).", "The main innovation of our Char2Vec model compared to existing recurrent character-level models is the capability to split words and model each half independently. Here we test whether our model segmentations correspond to gold-standard morphological analyses.", "We obtained morphological analyses for all the words in our training vocabulary which were in the English Lexicon Project BIBREF27 . We then converted these into surface-level segmentations using heuristic affix-matching, and used this as a gold-standard morphemic analysis. We ended up with 14682 words, of which 7867 have at least two morphemes and 1138 have at least three.", "Evaluating morphological segmentation is a long-debated issue BIBREF28 . Traditional hard morphological analyzers are normally evaluated on border $F_1$ – that is, how many morpheme borders are recovered. However, our model does not actually posit any hard morpheme borders. Instead, it just associates each character boundary with a weight. Therefore, we treat the problem of recovering intra-word morpheme boundaries as a ranking problem. We rank each inter-character boundary of a word according to our model weights, and then evaluate whether our model ranks morpheme boundaries above non-morpheme boundaries.", "We use mean average precision (MAP) as our evaluation metric. We first calculate precision at $N$ for each word, until all the gold standard morpheme boundaries have been recovered. Then, we average over $N$ to obtain the average precision (AP) for that word. We then calculate the mean of the APs across all words to obtain the MAP for the model.", "We report results of a random baseline as a point of comparison, which randomly places morpheme boundaries inside the word. We also report the results of the Porter stemmer, where we place a morpheme boundary at the end of the stem, then randomly thereafter.", "Finally, we trained Morfessor 2.0 BIBREF13 on our corpus, using an initial random split value of 0.9, and stopping training when the difference in loss between successive epochs is less than 0.1% of the total loss. We then used our trained Morfessor model to predict morpheme boundaries, and randomly permuted the morpheme boundaries and ranked them ahead of randomly permuted non-morpheme boundaries to calculate MAP.", "As the test set is dominated by words with simple morphology, we also extracted all the morphologically rich words with 3 or more morphemes, and created a separate evaluation on this subsection. We report the results in Table 1 .", "As the results show, our model performs the best out of all the methods at analysing morphologically rich words with multiple morphemes. On these words, our model even outperforms Morfessor, which is explicitly designed as a morphological analyzer. This shows that our model learns splits which correspond well to human morphological analysis, even though we build no morphological knowledge into our model. 
However, when evaluating on all words, the Porter stemmer has a great advantage, as it is rule-based and able to give just the stem of words with great precision, which is effectively giving a canonical segmentation for words with just 2 morphemes.", "We show some model analyses against the gold standard in Table 2 .", "Next, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 .", "We use the WordSim353 dataset BIBREF29 , the test split of the MEN dataset BIBREF30 , and the Rare Word (RW) dataset BIBREF31 . The word pairs in the WordSim353 and MEN datasets are typically simple, commonly occurring words denoting basic concepts, whereas the RW dataset contains many morphologically derived words which have low corpus frequencies. This is reflected by how many of the test pairs in each dataset contain out of vocabulary (OOV) items: 3/353 and 6/1000 of the word pairs in WordSim353 and MEN, compared with 1083/2034 for the RW dataset.", "We report results for in-corpus word pairs in Table 3 , and for all word pairs for those models able to predict vectors for unseen words in Table 4 .", "Overall, word-based embedding models learn vectors that correlate better with human judgments, particularly for morphologically simple words. However, character-based models are competitive with word-based models on the RW dataset. While the words in this dataset appear rarely in our corpus (of the in-corpus words, over half appear fewer than 100 times), each morpheme may be common, and the character-level models can use this information. We note that on the entire RW dataset (of which over half contain an OOV word), the character-based models still perform reasonably. We also note that on word pairs in the RW test containing at least one OOV word, the full Char2Vec model outperforms the C2V model without morphology. This suggests that character-based embedding models are learning to morphologically analyse complex word forms, even on unseen words, and that giving the model the capability to learn word segments independently helps this process.", "We also present some word nearest neighbours for our Char2Vec model in Table 5 , both on the whole vocabulary and then filtering the nearest neighbours to only include words which appear 100 times or more in our corpus. This corresponds to keeping the top 10k words, which is common among language models BIBREF8 , BIBREF9 . We note that nearest neighbour predictions include words that are orthographically distant but semantically similar, showing that our model has the capability to learn to compose characters into word meanings.", "We also note that word nearest neighbours seem to be more semantically coherent when rarely-observed words are filtered out of the vocabulary, and more based on orthographic overlap when the entire vocabulary is included. This suggests that for rarely-observed words, the model is basing its predictions on orthographic analysis, whereas for more commonly observed words it can `memorize' the mapping between the orthography and word semantics.", "Finally, we evaluate the structure of the embedding space of our various models. 
In particular, we test whether affixation corresponds to regular linear shifts in the embedding space.", "To do this, we use the Google analogy dataset BIBREF3 . This consists of 19544 questions of the form “A is to B as C is to X”. We split this collection into semantic and syntactic sections, based on whether the analogies between the words are driven by morphological changes or deeper semantic shifts. Example semantic questions are on capital-country relationships (“Paris is to France as Berlin is to X) and currency-country relationships. Example syntactic questions are adjective-adverb relationships (“amazing is to amazingly as apparent is to X”) and opposites formed by prefixing a negation particle (“acceptable is to unacceptable as aware is to X”). This results in 5537 semantic analogies and 10411 syntactic analogies.", "We use the method of Mikolov et al. word2vec1 to answer these questions. We first $\\ell _2$ -normalize all of our word vectors. Then, to answer a question of the form “A is to B as C is to X”, we find the word $w$ which satisfies", "$$w = \\operatornamewithlimits{argmax}_{w \\in V - \\lbrace a, b, c\\rbrace } \\cos (w, b - a + c)$$ (Eq. 28)", "where $a,\\, b,\\, c$ are the word vectors for the words A, B and C respectively.", "We report the results in Table 6 . The most intriguing result is that character-level models are competitive with word-level models for syntactic analogy, with our Char2Vec model holding the best result for syntactic analogy answering. This suggests that incorporating morphological knowledge explicitly rather than latently helps the model learn morphological features. However, on the semantic analogies, the character-based models do much worse than the word-based models. This is perhaps unsurprising in light of the previous section, where we demonstrate that character-based models do worse at the semantic similarity task than word-level models.", "Discussion", "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes.", "This is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology. In addition, it seems to learn morphosyntactic features to help solve the syntactic analogy task. Most of all, it is language-agnostic, and easy to port across different languages. We thus expect our model to perform even better for languages with a richer morphology than English, such as Turkish and German.", "Conclusion", "In this paper, we present a model which learns morphology and word embeddings jointly. Given a word, it splits the word in to segments and ranks the segments based on their context-predictive power. Our model can segment words into morphemes, and also embed the word into a representation space.", "We show that our model is competitive at the task of morpheme boundary recovery compared to a dedicated morphological analyzer, beating dedicated analyzers on words with a rich morphology. 
We also show that in the representation space word affixation corresponds to linear shifts, demonstrating that our model can learn morphological features.", "Finally, we show that character-level models, while outperformed by word-level models generally at the task of semantic similarity, are competitive at representing rare morphologically rich words. In addition, the character-level models can predict good quality representations for unseen words, with the morphologically aware character-level model doing slightly better." ], "extractive_spans": [ "English" ], "free_form_answer": "", "highlighted_evidence": [ "perplexity for language modelling and state-of-the-art English POS tagging accuracy BIBREF8 , BIBREF9 . Ballesteros et al. ballesteros train a character-level model for parsing. Zhang et al. zhang do away with words completely, and train a convolutional neural network to do text classification directly from characters.\n\nExcitingly, character-level models seem to capture morphological effects. Examining nearest neighbours of morphologically complex words in character-aware models often shows other words with the same morphology BIBREF8 , BIBREF9 . Furthermore, morphosyntactic features such as capitalization and suffix information have long been used in tasks such as POS tagging BIBREF17 , BIBREF18 . By explicitly modelling these features, one might expect good performance gains in many NLP tasks.\n\nWhat is less clear is how well these models learn word semantics. Classical word embedding models seem to capture word semantics, and the nearest neighbours of a given word are typically semantically related words BIBREF3 , BIBREF19 . In addition, the correlation between model word similarity scores and human similarity judgments is typically high BIBREF20 . However, no previous work (to our knowledge) evaluates the similarity judgments of character-level models against human annotators.\nThe Char2Vec model\n\nWe hypothesize that by incorporating morphological knowledge directly into a character-level model, one can improve the ability of character-level models to learn compositional word semantics. In addition, we hypothesize that incorporating morphological knowledge helps structure the embedding space in such a way that affixation corresponds to a regular shift in the embedding space. We test both hypotheses directly in § \"Capturing semantic similarity\" and § \"Capturing syntactic and semantic regularity\" respectively.\n\nThe starting point for our model is the skip-gram with negative sampling (SGNS) objective of Mikolov et al. word2vec2. For a vocabulary $V$ of size $|V|$ and embedding size $N$ , SGNS learns two embedding tables $W, C \\in \\mathbb {R}^{N \\times |V|}$ , the target and context vectors. Every time a word $w$ is seen in the corpus with a context word $c$ , the tables are updated to maximize\n\n$$\\log \\sigma (w \\cdot c) + \\sum _{i = 1}^{k} \\mathbb {E}_{\\tilde{c}_i \\sim P(w)} [\\log \\sigma (-w \\cdot \\tilde{c}_i)]$$ (Eq. 7)\n\nwhere $P(w)$ is a noise distribution from which we draw $k$ negative samples. In the end, the target vector for a word $w$ should have high inner product with context vectors for words with which it is typically seen, and low inner products with context vectors for words it is not typically seen with. Figure 1 illustrates this for a particular example. In Mikolov et al. 
word2vec2, the noise distribution $P(w)$ is proportional to the unigram probability of a word raised to the 3/4th power BIBREF11 .\n\nOur innovation is to replace $W$ with a trainable function $f$ that accepts a sequence of characters and returns a vector of length $N$ (i.e. $f: A^{<\\omega } \\rightarrow \\mathbb {R}^N$ , where $A$ is the alphabet we are considering and $A^{<\\omega }$ denotes the finite length strings over the alphabet $A$ ). We still keep the table of context embeddings $C$ , and our model objective is still to minimize\n\n$$\\log \\sigma (f(w) \\cdot c) + \\sum _{i = 1}^{k} \\mathbb {E}_{\\tilde{c}_i \\sim P(w)} [\\log \\sigma (-f(w) \\cdot \\tilde{c}_i)]$$ (Eq. 8)\n\nwhere we now treat $w$ as a sequence of characters. After training, $f$ can be used to produce an embedding for any sequence of characters, even if it was not previously seen in training.\n\nThe process of calculating $f$ on a word is illustrated in Figure 2 . We first pad the word with beginning and end of word tokens, and then pass the characters of the word into a character lookup table. As the link between characters and morphemes is non-compositional and requires essentially memorizing a sequence of characters, we use LSTMs BIBREF21 to encode the letters in the word, as they have been shown to capture non-local and non-linear dependencies. We run a forward and a backward LSTM over the character embeddings. The forward LSTM reads the beginning of word symbol, but not the end of word symbol, and the backward LSTM reads the end of word symbol but not the beginning of word symbol. This is necessary to align the resulting embeddings, so that the LSTM hidden states taken together correspond to a partition of the word into two without overlap.\n\nThe LSTMs output two sequences of vectors $h_0^{f}, \\dots , h_n^f$ and $h_n^{b}, \\dots , h_0^b$ . We then concatenate the resulting vectors, and pass them through a shared feed-forward layer to obtain a final sequence of vectors $h_i$ . Each vector corresponds to two half-words: one half read by the forward LSTM, and the other by the backward LSTM.\n\nWe then learn an attention model over these hidden states: given a hidden state $h_i$ , we calculate a weight $\\alpha _i = a(h_i)$ such that $\\sum \\alpha _i = 1$ , and then calculate the resulting vector for the word $w$ as $f(w) = \\sum \\alpha _i h_i$ . Following Bahdanau et al. bahdanau, we calculate $a$ as\n\n$$a(h_i) = \\frac{\\exp (v^{T} \\tanh (Wh_i))}{\\sum _j \\exp (v^{T} \\tanh (Wh_j))}$$ (Eq. 10)\n\ni.e. a softmax over the hidden states.\nCapturing morphology via attention\n\nPrevious work on bidirectional LSTM character-level models used both LSTMs to read the entire word BIBREF8 , BIBREF22 . This can lead to redundancy, as both LSTMs are used to capture the full word. In contrast, our model is capable of splitting the words and optimizing the two LSTMs for modelling different halves. This means one of the LSTMs can specialize on word prefixes and roots, while the other memorizes possible suffixes. In addition, when dealing with an unknown word, it can be split into known and unknown components. The model can then use the semantic knowledge it has learnt for a known component to predict a representation for the unknown word as a whole.\n\nWe hypothesize that the natural place to split words is on morpheme boundaries, as morphemes are the smallest unit of language which carry semantic meaning. 
We test the splitting capabilities of our model in § \"Morphological awareness\" .\nExperiments\n\nWe evaluate our model on three tasks: morphological analysis (§ \"Morphological awareness\" ), semantic similarity (§ \"Capturing semantic similarity\" ), and analogy retrieval (§ \"Capturing syntactic and semantic regularity\" ). We trained all of the models once, and then use the same trained model for all three tasks – we do not perform hyperparameter tuning to optimize performance on each task.\n\nWe trained our Char2Vec model on the Text8 corpus, consisting of the first 100MB of a 2006 cleaned-up dump of Wikipedia. We only trained on words which appeared more than 5 times in our corpus. We used a context window size of 3 words either side of the target word, and took 11 negative samples per positive sample, using the same smoothed unigram distribution as word2vec. The model was trained for 3 epochs using the Adam optimizer BIBREF23 . All experiments were carried out using Keras BIBREF24 and Theano BIBREF25 , BIBREF26 . We initialized the context lookup table using word2vec, and kept it fixed during training. In all character-level models, the character embeddings have dimension $d_C = 64$ , while the forward and backward LSTMs have dimension $d_{LSTM} = 256$ . The concatenation of both therefore has dimensionality $d = 512$ . The concatenated LSTM hidden states are then compressed down to $d_{word} = 256$ by a feed-forward layer.\n\nAs baselines, we trained a SGNS model on the same dataset with the same parameters. To test how much the attention model helps the character-level model to generalize, we also trained the Char2Vec model without the attention layer, but with the same parameters. In this model, the word embeddings are just the concatenation of the final forward and backward states, passed through a feedforward layer. We refer to this model as C2V-NO-ATT. We also constructed count-based vectors using SVD on PPMI-weighted co-occurence counts, with a window size of 3. We kept the top 256 principal components in the SVD decomposition, to obtain embeddings with the same size as our other models.\nMorphological awareness\n\nThe main innovation of our Char2Vec model compared to existing recurrent character-level models is the capability to split words and model each half independently. Here we test whether our model segmentations correspond to gold-standard morphological analyses.\n\nWe obtained morphological analyses for all the words in our training vocabulary which were in the English Lexicon Project BIBREF27 . We then converted these into surface-level segmentations using heuristic affix-matching, and used this as a gold-standard morphemic analysis. We ended up with 14682 words, of which 7867 have at least two morphemes and 1138 have at least three.\n\nEvaluating morphological segmentation is a long-debated issue BIBREF28 . Traditional hard morphological analyzers are normally evaluated on border $F_1$ – that is, how many morpheme borders are recovered. However, our model does not actually posit any hard morpheme borders. Instead, it just associates each character boundary with a weight. Therefore, we treat the problem of recovering intra-word morpheme boundaries as a ranking problem. We rank each inter-character boundary of a word according to our model weights, and then evaluate whether our model ranks morpheme boundaries above non-morpheme boundaries.\n\nWe use mean average precision (MAP) as our evaluation metric. 
We first calculate precision at $N$ for each word, until all the gold standard morpheme boundaries have been recovered. Then, we average over $N$ to obtain the average precision (AP) for that word. We then calculate the mean of the APs across all words to obtain the MAP for the model.\n\nWe report results of a random baseline as a point of comparison, which randomly places morpheme boundaries inside the word. We also report the results of the Porter stemmer, where we place a morpheme boundary at the end of the stem, then randomly thereafter.\n\nFinally, we trained Morfessor 2.0 BIBREF13 on our corpus, using an initial random split value of 0.9, and stopping training when the difference in loss between successive epochs is less than 0.1% of the total loss. We then used our trained Morfessor model to predict morpheme boundaries, and randomly permuted the morpheme boundaries and ranked them ahead of randomly permuted non-morpheme boundaries to calculate MAP.\n\nAs the test set is dominated by words with simple morphology, we also extracted all the morphologically rich words with 3 or more morphemes, and created a separate evaluation on this subsection. We report the results in Table 1 .\n\nAs the results show, our model performs the best out of all the methods at analysing morphologically rich words with multiple morphemes. On these words, our model even outperforms Morfessor, which is explicitly designed as a morphological analyzer. This shows that our model learns splits which correspond well to human morphological analysis, even though we build no morphological knowledge into our model. However, when evaluating on all words, the Porter stemmer has a great advantage, as it is rule-based and able to give just the stem of words with great precision, which is effectively giving a canonical segmentation for words with just 2 morphemes.\n\nWe show some model analyses against the gold standard in Table 2 .\nCapturing semantic similarity\n\nNext, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 .\n\nWe use the WordSim353 dataset BIBREF29 , the test split of the MEN dataset BIBREF30 , and the Rare Word (RW) dataset BIBREF31 . The word pairs in the WordSim353 and MEN datasets are typically simple, commonly occurring words denoting basic concepts, whereas the RW dataset contains many morphologically derived words which have low corpus frequencies. This is reflected by how many of the test pairs in each dataset contain out of vocabulary (OOV) items: 3/353 and 6/1000 of the word pairs in WordSim353 and MEN, compared with 1083/2034 for the RW dataset.\n\nWe report results for in-corpus word pairs in Table 3 , and for all word pairs for those models able to predict vectors for unseen words in Table 4 .\n\nOverall, word-based embedding models learn vectors that correlate better with human judgments, particularly for morphologically simple words. However, character-based models are competitive with word-based models on the RW dataset. While the words in this dataset appear rarely in our corpus (of the in-corpus words, over half appear fewer than 100 times), each morpheme may be common, and the character-level models can use this information. 
We note that on the entire RW dataset (of which over half contain an OOV word), the character-based models still perform reasonably. We also note that on word pairs in the RW test containing at least one OOV word, the full Char2Vec model outperforms the C2V model without morphology. This suggests that character-based embedding models are learning to morphologically analyse complex word forms, even on unseen words, and that giving the model the capability to learn word segments independently helps this process.\n\nWe also present some word nearest neighbours for our Char2Vec model in Table 5 , both on the whole vocabulary and then filtering the nearest neighbours to only include words which appear 100 times or more in our corpus. This corresponds to keeping the top 10k words, which is common among language models BIBREF8 , BIBREF9 . We note that nearest neighbour predictions include words that are orthographically distant but semantically similar, showing that our model has the capability to learn to compose characters into word meanings.\n\nWe also note that word nearest neighbours seem to be more semantically coherent when rarely-observed words are filtered out of the vocabulary, and more based on orthographic overlap when the entire vocabulary is included. This suggests that for rarely-observed words, the model is basing its predictions on orthographic analysis, whereas for more commonly observed words it can `memorize' the mapping between the orthography and word semantics.\nCapturing syntactic and semantic regularity\n\nFinally, we evaluate the structure of the embedding space of our various models. In particular, we test whether affixation corresponds to regular linear shifts in the embedding space.\n\nTo do this, we use the Google analogy dataset BIBREF3 . This consists of 19544 questions of the form “A is to B as C is to X”. We split this collection into semantic and syntactic sections, based on whether the analogies between the words are driven by morphological changes or deeper semantic shifts. Example semantic questions are on capital-country relationships (“Paris is to France as Berlin is to X) and currency-country relationships. Example syntactic questions are adjective-adverb relationships (“amazing is to amazingly as apparent is to X”) and opposites formed by prefixing a negation particle (“acceptable is to unacceptable as aware is to X”). This results in 5537 semantic analogies and 10411 syntactic analogies.\n\nWe use the method of Mikolov et al. word2vec1 to answer these questions. We first $\\ell _2$ -normalize all of our word vectors. Then, to answer a question of the form “A is to B as C is to X”, we find the word $w$ which satisfies\n\n$$w = \\operatornamewithlimits{argmax}_{w \\in V - \\lbrace a, b, c\\rbrace } \\cos (w, b - a + c)$$ (Eq. 28)\n\nwhere $a,\\, b,\\, c$ are the word vectors for the words A, B and C respectively.\n\nWe report the results in Table 6 . The most intriguing result is that character-level models are competitive with word-level models for syntactic analogy, with our Char2Vec model holding the best result for syntactic analogy answering. This suggests that incorporating morphological knowledge explicitly rather than latently helps the model learn morphological features. However, on the semantic analogies, the character-based models do much worse than the word-based models. 
This is perhaps unsurprising in light of the previous section, where we demonstrate that character-based models do worse at the semantic similarity task than word-level models.\nDiscussion\n\nWe only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes.\n\nThis is unfortunate for our model, as it performs better on words with richer morphology. It gives consistently more accurate morphological analyses for these words compared to standard baselines, and matches word-level models for semantic similarity on rare words with rich morphology. In addition, it seems to learn morphosyntactic features to help solve the syntactic analogy task. Most of all, it is language-agnostic, and easy to port across different languages. We thus expect our model to perform even better for languages with a richer morphology than English, such as Turkish and German.\nConclusion\n\nIn this paper, we present a model which learns morphology and word embeddings jointly. Given a word, it splits the word in to segments and ranks the segments based on their context-predictive power. Our model can segment words into morphemes, and also embed the word into a representation space.\n\nWe show that our model is competitive at the task of morpheme boundary recovery compared to a dedicated morphological analyzer, beating dedicated analyzers on words with a rich morphology. We also show that in the representation space word affixation corresponds to linear shifts, demonstrating that our model can learn morphological features.\n\nFinally, we show that character-level models, while outperformed by word-level models generally at the task of semantic similarity, are competitive at representing rare morphologically rich words. In addition, the character-level models can predict good quality representations for unseen words, with the morphologically aware character-level model doing slightly better.\nFigure 1: A graphical illustration of SGNS. The target vector for ‘dog’ is learned to have high inner product with the context vectors for words seen in the context of ‘dog’ (no shading), while having low inner product with random negatively sampled words (shaded)\nFigure 1: A graphical illustration of SGNS. The target vector for ‘dog’ is learned to have high inner product with the context vectors for words seen in the context of ‘dog’ (no shading), while having low inner product with random negatively sampled words (shaded)\nFigure 2: An illustration of Char2Vec. A bidirectional LSTM reads the word (start and end of word symbols represented by ˆ and $ respectively), outputting a sequence of hidden states. These are then passed through a feed-forward layer (not shown), weighted by an attention model (the square box in the diagram) and summed to obtain the final word representation.\nFigure 2: An illustration of Char2Vec. A bidirectional LSTM reads the word (start and end of word symbols represented by ˆ and $ respectively), outputting a sequence of hidden states. These are then passed through a feed-forward layer (not shown), weighted by an attention model (the square box in the diagram) and summed to obtain the final word representation.\nFigure 3: An illustration of the attention model (start and end of word symbols omitted). 
The root morpheme contributes the most to predicting the context, and is upweighted. In contrast, another potential split is inaccurate, and predicts the wrong context words. This is downweighted.\nTable 1: Results at retrieving intra-word morpheme boundaries.", "We only report results for English" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes." ], "extractive_spans": [ "English" ], "free_form_answer": "", "highlighted_evidence": [ "We only report results for English" ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We only report results for English. However, English is a morphologically impoverished language, with little inflection and relatively few productive patterns of derivation. Our morphology test set reflects this, with over half the words consisting of a simple morpheme, and over 90% having at most 2 morphemes." ], "extractive_spans": [ "English" ], "free_form_answer": "", "highlighted_evidence": [ "We only report results for English." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "0bef3d73d87da2395d3e86b22aef3d3bc5a05057", "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "5c4ed7751ffe32c903ca9e8c7eee49ba66e4336b" ] }, { "annotation_id": [ "4d72f7e46df49a5210f7105875751b9a2041d275", "ae3bb02a165ceceec686aba062308407195cdfea", "b0004eb711407f25be7a71afef7946f6d06ab1cf" ], "answer": [ { "evidence": [ "Next, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 ." ], "extractive_spans": [ "human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments" ], "free_form_answer": "", "highlighted_evidence": [ "For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 ."
], "unanswerable": false, "yes_no": null }, { "evidence": [ "Next, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 ." ], "extractive_spans": [ "human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments" ], "free_form_answer": "", "highlighted_evidence": [ "Next, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Next, we tested our model similarity scores against human similarity judgments. For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 ." ], "extractive_spans": [], "free_form_answer": "Using cosine similarity between the embeddings which is then correlated with human judgement", "highlighted_evidence": [ "For these datasets, human annotators are asked to judge how similar two words are on a fixed scale. Model word vectors are evaluated based on ranking the word pairs according to their cosine similarity, and then measuring the correlation (using Spearman's $\\rho $ ) between model judgments and human judgments BIBREF20 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "470eb649822d8b3324b9feb4233eef4b38672454", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "5c4ed7751ffe32c903ca9e8c7eee49ba66e4336b" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat" ], "question": [ "Do they compare to other models that include subword information such as fastText?", "Is there a difference between the model's performance for morphologically impoverished and morphologically complex languages?", "What languages do they apply the model to?", "How are the embeddings evaluated in the human judgement comparison?" ], "question_id": [ "d38745a3910c380e6df97c7056a5dd9643fd365b", "2b75df325c98b761faf2fecf6e71ac7366eb15ea", "649e77ac2ecce42ab2efa821882675b5a0c993cb", "0bc305d6b90f77f835bc4c904b22a4be07f963b2" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "morphology", "morphology", "morphology", "morphology" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: A graphical illustration of SGNS. The target vector for ‘dog’ is learned to have high inner product with the context vectors for words seen in the context of ‘dog’ (no shading), while having low inner product with random negatively sampled words (shaded)", "Figure 2: An illustration of Char2Vec. A bidirectional LSTM reads the word (start and end of word symbols represented by ˆ and $ respectively), outputting a sequence of hidden states. These are then passed through a feed-forward layer (not shown), weighted by an attention model (the square box in the diagram) and summed to obtain the final word representation.", "Figure 3: An illustration of the attention model (start and end of word symbols omitted). The root morpheme contributes the most to predicting the context, and is upweighted. In contrast, another potential split is inaccurate, and predicts the wrong context words. This is downweighted.", "Table 1: Results at retrieving intra-word morpheme boundaries.", "Table 3: Similarity correlations of in-vocabulary word pairs between the models and human annotators.", "Table 2: Morphological analyses for sample words from the corpus. We take the top N model predictions as the split points, where N is the number of gold-standard morphemes in the word.", "Table 4: Similarity correlations of all word pairs between the character-level models and human annotators. RW OOV indicates results specifically on pairs in the RW dataset with at least one word not seen in the training corpus.", "Table 5: Filtered and unfiltered model nearest neighbours for some in-vocabulary and out-of-vocabulary words", "Table 6: Results on the Google analogy task" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "4-Figure3-1.png", "6-Table1-1.png", "6-Table3-1.png", "6-Table2-1.png", "6-Table4-1.png", "7-Table5-1.png", "7-Table6-1.png" ] }
[ "Is there a difference between the model's performance for morphologically impoverished and morphologically complex languages?", "How are the embeddings evaluated in the human judgement comparison?" ]
[ [ "1606.02601-Discussion-0", "1606.02601-Discussion-1" ], [ "1606.02601-Capturing semantic similarity-0" ] ]
[ "They did not report results for English but expect that morphologically complex languages will perform better.", "Using cosine similarity between the embeddings which is then correlated with human judgement" ]
77
1602.04341
Attention-Based Convolutional Neural Network for Machine Comprehension
Understanding open-domain text is one of the primary challenges in natural language processing (NLP). Machine comprehension benchmarks evaluate the system's ability to understand text based on the text content only. In this work, we investigate machine comprehension on MCTest, a question answering (QA) benchmark. Prior work is mainly based on feature engineering approaches. We propose a neural network framework, named hierarchical attention-based convolutional neural network (HABCNN), to address this task without any manually designed features. Specifically, we explore HABCNN for this task by two routes: one through traditional joint modeling of passage, question and answer, and one through textual entailment. HABCNN employs an attention mechanism to detect key phrases, key sentences and key snippets that are relevant to answering the question. Experiments show that HABCNN outperforms prior deep learning approaches by a big margin.
{ "paragraphs": [ [ "Endowing machines with the ability to understand natural language is a long-standing goal in NLP and holds the promise of revolutionizing the way in which people interact with machines and retrieve information. Richardson et al. richardson2013mctest proposed the task of machine comprehension, along with MCTest, a question answering dataset for evaluation. The ability of the machine to understand text is evaluated by posing a series of questions, where the answer to each question can be found only in the associated text. Solutions typically focus on some semantic interpretation of the text, possibly with some form of probabilistic or logic inference, to answer the question. Despite intensive recent work BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , the problem is far from solved.", "Machine comprehension is an open-domain question-answering problem which contains factoid questions, but the answers can be derived by extraction or induction of key clues. Figure FIGREF1 shows one example in MCTest. Each example consists of one document, four associated questions; each question is followed by four answer candidates in which only one is correct. Questions in MCTest have two categories: “one” and “multiple”. The label means one or multiple sentences from the document are required to answer this question. To correctly answer the first question in the example, the two blue sentences are required; for the second question instead, only the red sentence can help. The following observations hold for the whole MCTest. (i) Most of the sentences in the document are irrelavent for a given question. It hints that we need to pay attention to just some key regions. (ii) The answer candidates can be flexible text in length and abstraction level, and probably do not appear in the document. For example, candidate B for the second question is “outside”, which is one word and does not exist in the document, while the answer candidates for the first question are longer texts with some auxiliary words like “Because” in the text. This requires our system to handle flexible texts via extraction as well as abstraction. (iii) Some questions require multiple sentences to infer the answer, and those vital sentences mostly appear close to each other (we call them snippet). Hence, our system should be able to make a choice or compromise between potential single-sentence clue and snippet clue.", "Prior work on this task is mostly based on feature engineering. This work, instead, takes the lead in presenting a deep neural network based approach without any linguistic features involved.", "Concretely, we propose HABCNN, a hierarchical attention-based convolutional neural network, to address this task in two roadmaps. In the first one, we project the document in two different ways, one based on question-attention, one based on answer-attention and then compare the two projected document representations to determine whether the answer matches the question. In the second one, every question-answer pair is reformatted into a statement, then the whole task is treated through textual entailment.", "In both roadmaps, convolutional neural network (CNN) is explored to model all types of text. As human beings usually do for such a QA task, our model is expected to be able to detect the key snippets, key sentences, and key words or phrases in the document. 
In order to detect those informative parts required by questions, we explore an attention mechanism to model the document so that its representation concentrates on the information the question requires. In practice, instead of imitating human beings' top-down reading in the QA task, our system models the document bottom-up, through accumulating the most relevant information from word level to snippet level.", "Our approach is novel in three aspects. (i) A document is modeled by a hierarchical CNN at different granularities, from word to sentence level, then from sentence to snippet level. The reason for choosing a CNN rather than other sequence models like recurrent neural network BIBREF4 , long short-term memory unit (LSTM BIBREF5 ), gated recurrent unit (GRU BIBREF6 ) etc., is that we argue CNNs are more suitable to detect the key sentences within documents and key phrases within sentences. Considering again the second question in Figure FIGREF1 , the original sentence “They sat by the fire and talked about the insects” has more information than required, i.e., we do not need to know “they talked about the insects”. Sequence modeling neural networks usually model the sentence meaning by accumulating the whole sequence. CNNs, with convolution-pooling steps, are supposed to detect some prominent features no matter where the features come from. (ii) In the example in Figure FIGREF1 , apparently not all sentences are required given a question, and usually different snippets are required by different questions. Hence, the same document should have different representations based on what the question is. To this end, attentions are incorporated into the hierarchical CNN to guide the learning of dynamic document representations that closely match the information required by the questions. (iii) Document representations at both the sentence and snippet levels are informative for the question; a highway network is developed to combine them, enabling our system to make a flexible tradeoff.", "Overall, we make three contributions. (i) We present a hierarchical attention-based CNN system “HABCNN”. It is, to our knowledge, the first deep learning based system for this MCTest task. (ii) Prior document modeling systems based on deep neural networks mostly generate a generic representation; this work is the first to incorporate attention so that the document representation is biased towards the question requirement. (iii) Our HABCNN systems outperform other deep learning competitors by big margins." ], [ "Existing systems for the MCTest task are mostly based on manually engineered features. Representative work includes BIBREF7 , BIBREF3 , BIBREF8 , BIBREF9 . In these works, a common route is first to define a regularized loss function based on assumed feature vectors, then the effort focuses on designing effective features based on various rules. Even though these studies are groundbreaking for this task, their flexibility and their capacity for generalization are limited.", "Deep learning based approaches attract increasing interest in analogous tasks. Weston et al., weston2014memory introduce memory networks for factoid QA. The memory network framework is extended in BIBREF1 , BIBREF10 for the Facebook bAbI dataset. Peng et al. PengLLW15's Neural Reasoner infers over multiple supporting facts to generate an entity answer for a given question, and it is also tested on bAbI. 
All of these works deal with some short texts with simple-grammar, aiming to generate an answer which is restricted to be one word denoting a location, a person etc.", "Some works also tried over other kinds of QA tasks. For example, Iyyer et al., iyyer2014neural present QANTA, a recursive neural network, to infer an entity based on its description text. This task is basically a matching between description and entity, no explicit question exist. Another difference with us lies in that all the sentences in the entity description actually contain partial information about the entity, hence a description is supposed to have only one representation. However in our task, the modeling of document should be dynamically changed according to the question analysis. Hermann et al., hermann2015teaching incorporate attention mechanism into LSTM for a QA task over news text. Still, their work does not handle some complex question types like “Why...”, they merely aim to find the entity from the document to fill the slot in the query so that the completed query is true based on the document. Nevertheless, it inspires us to treat our task as a textual entailment problem by first reformatting question-answer pairs into statements.", "Some other deep learning systems are developed for answer selection task BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . Differently, this kind of question answering task does not involve document comprehension. They only try to match the question and answer candidate without any background information. Instead, we treat machine comprehension in this work as a question-answer matching problem under background guidance.", "Overall, for open-domain MCTest machine comprehension task, this work is the first to resort to deep neural networks." ], [ "We investigate this task by three approaches, illustrated in Figure FIGREF2 . (i) We can compute two different document (D) representations in a common space, one based on question (Q) attention, one based on answer (A) attention, and compare them. This architecture we name HABCNN-QAP. (ii) We compute a representation of D based on Q attention (as before), but now we compare it directly with a representation of A. We name this architecture HABCNN-QP. (iii) We treat this QA task as textual entailment (TE), first reformatting Q-A pair into a statement (S), then matching S and D directly. This architecture we name HABCNN-TE. All three approaches are implemented in the common framework HABCNN." ], [ "Recall that we use the abbreviations A (answer), Q (question), S (statement), D (document). HABCNN performs representation learning for triple (Q, A, D) in HABCNN-QP and HABCNN-QAP, for tuple (S, D) in HABCNN-TE. For convenience, we use “query” to refer to Q, A, or S uniformly. HABCNN, depicted in Figure FIGREF3 , has the following phases.", "Input Layer. The input is (query,D). Query is two individual sentences (for Q, A) or one single sentence (for S), D is a sequence of sentences. Words are initialized by INLINEFORM0 -dimensional pre-trained word embeddings. As a result, each sentence is represented as a feature map with dimensionality of INLINEFORM1 ( INLINEFORM2 is sentence length). In Figure FIGREF3 , each sentence in the input layer is depicted by a rectangle with multiple columns.", "Sentence-CNN. Sentence-CNN is used for sentence representation learning from word level. 
Given a sentence of length INLINEFORM0 with a word sequence: INLINEFORM1 , let vector INLINEFORM2 be the concatenated embeddings of INLINEFORM3 words INLINEFORM4 where INLINEFORM5 is the filter width, INLINEFORM6 is the dimensionality of word representations and INLINEFORM7 . Embeddings for words INLINEFORM8 , INLINEFORM9 and INLINEFORM10 , are zero padding. We then generate the representation INLINEFORM11 for the phrase INLINEFORM12 using the convolution weights INLINEFORM13 : DISPLAYFORM0 ", "where bias INLINEFORM0 . INLINEFORM1 is called “kernel size” in CNN.", "Note that the sentence-CNNs for query and all document sentences share the same weights, so that the resulting sentence representations are comparable.", "Sentence-Level Representation. The sentence-CNN generates a new feature map (omitted in Figure FIGREF3 ) for each input sentence, one column in the feature map denotes a phrase representation (i.e., INLINEFORM0 in Equation (1)).", "For the query and each sentence of D, we do element-wise 1-max-pooling (“max-pooling” for short) BIBREF16 over phrase representations to form their representations at this level.", "We now treat D as a set of “vital” sentences and “noise” sentences. We propose attention-pooling to learn the sentence-level representation of D as follows: first identify vital sentences by computing attention for each D's sentence as the cosine similarity between the its representation and the query representation, then select the INLINEFORM0 highest-attention sentences to do max-pooling over them. Taking Figure FIGREF3 as an example, based on the output of sentence-CNN layer, INLINEFORM1 important sentences with blue color are combined by max-pooling as the sentence-level representation INLINEFORM2 of D; the other – white-color – sentence representations are neglected as they have low attentions. (If INLINEFORM3 , attention-pooling returns to the common max-pooling in BIBREF16 .) When the query is (Q,A), this step will be repeated, once for Q, once for A, to compute representations of D at the sentence level that are biased with respect to Q and A, respectively.", "Snippet-CNN. As the example in Figure FIGREF1 shows, to answer the first question “why did Grandpa answer the door?”, it does not suffice to compare this question only to the sentence “Grandpa answered the door with a smile and welcomed Jimmy inside”; instead, the snippet “Finally, Jimmy arrived at Grandpa's house and knocked. Grandpa answered the door with a smile and welcomed Jimmy inside” should be used to compare. To this end, it is necessary to stack another CNN layer, snippet-CNN, to learn representations of snippets, i.e., units of one or more sentences. Thus, the basic units input to snippet-CNN (resp. sentence-CNN) are sentences (resp. words) and the output is representations of snippets (resp. sentences).", "Concretely, snippet-CNN puts all sentence representations in column sequence as a feature map and conducts another convolution operation over it. With filter width INLINEFORM0 , this step generates representation of snippet with INLINEFORM1 consecutive sentences. Similarly, we use the same CNN to learn higher-abstraction query representations (just treating query as a document with only one sentence, so that the higher-abstraction query representation is in the same space with corresponding snippet representations).", "Snippet-Level Representation. For the output of snippet-CNN, each representation is more abstract and denotes bigger granularity. 
We apply the same attention-pooling process to snippet level representations: attention values are computed as cosine similarities between query and snippets and the snippets with the INLINEFORM0 largest attentions are retained. Max-pooling over the INLINEFORM1 selected snippet representations then creates the snippet-level representation INLINEFORM2 of D. Two selected snippets are shown as red in Figure FIGREF3 .", "Overall Representation. Based on convolution layers at two different granularity, we have derived query-biased representations of D at sentence level (i.e., INLINEFORM0 ) as well as snippet level (i.e., INLINEFORM1 ). In order to create a flexible choice for open Q/A, we develop a highway network BIBREF17 to combine the two levels of representations as an overall representation INLINEFORM2 of D: DISPLAYFORM0 ", "where highway network weights INLINEFORM0 are learned by DISPLAYFORM0 ", "where INLINEFORM0 . With the same highway network, we can generate the overall query representation, INLINEFORM1 in Figure FIGREF3 , by combining the two representations of the query at sentence and snippet levels." ], [ "HABCNN-QP/QAP computes the representation of D as a projection of D, either based on attention from Q or based on attention from A. We hope that these two projections of the document are close for a correct A and less close for an incorrect A. As we said in related work, machine comprehension can be viewed as an answer selection task using the document D as critical background information. Here, HABCNN-QP/QAP do not compare Q and A directly, but they use Q and A to filter the document differently, extracting what is critical for the Q/A match by attention-pooling. Then they match the two document representations in the new space.", "For ease of exposition, we have used the symbol INLINEFORM0 so far, but in HABCNN-QP/QAP we compute two different document representations: INLINEFORM1 , for which attention is computed with respect to Q; and INLINEFORM2 for which attention is computed with respect to A. INLINEFORM3 also has two versions, one for Q: INLINEFORM4 , one for A: INLINEFORM5 .", "HABCNN-QP and HABCNN-QAP make different use of INLINEFORM0 . HABCNN-QP compares INLINEFORM1 with answer representation INLINEFORM2 . HABCNN-QAP compares INLINEFORM3 with INLINEFORM4 . HABCNN-QAP projects D twice, once based on attention from Q, once based on attention from A and compares the two projected representations, shown in Figure FIGREF2 (top). HABCNN-QP only utilizes the Q-based projection of D and then compares the projected document with the answer representation, shown in Figure FIGREF2 (middle)." ], [ "HABCNN-TE treats machine comprehension as textual entailment. We use the statements that are provided as part of MCTest. Each statement corresponds to a question-answer pair; e.g., the Q/A pair “Why did Grandpa answer the door?” / “Because he saw the insects” (Figure FIGREF1 ) is reformatted into the statement “Grandpa answered the door because he saw the insects”. The question answering task is then cast as: “does the document entail the statement?”", "For HABCNN-TE, shown in Figure FIGREF2 (bottom), the input for Figure FIGREF3 is the pair (S,D). HABCNN-TE tries to match the S's representation INLINEFORM0 with the D's representation INLINEFORM1 ." ], [ "MCTest has two subsets. 
MCTest-160 is a set of 160 items, each consisting of a document, four questions followed by one correct anwer and three incorrect answers (split into 70 train, 30 dev and 60 test) and MCTest-500 a set of 500 items (split into 300 train, 50 dev and 150 test)." ], [ "Our training objective is to minimize the following ranking loss function: DISPLAYFORM0 ", "where INLINEFORM0 is a matching score between two representation vectors. Cosine similarity is used throughout. INLINEFORM1 is a constant.", "For this common ranking loss, we also have two styles to utilize the data in view of each positive answer is accompanied with three negative answers. One is treating ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 ) as a training example, then our loss function can have three “max()” terms, each for a positive-negative pair; the other one is treating ( INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ) as an individual training example. In practice, we find the second way works better. We conjecture that the second way has more training examples, and positive answers are repeatedly used to balance the amounts of positive and negative answers.", "Multitask learning: Question typing is commonly used and proved to be very helpful in QA tasks BIBREF3 . Inspired, we stack a logistic regression layer over question representation INLINEFORM0 , with the purpose that this subtask can favor the parameter tuning of the whole system, and finally the question is better recognized and is able to find the answer more accurately.", "To be specific, we classify questions into 12 classes: “how”, “how much”, “how many”, “what”, “who”, “where”, “which”, “when”, “whose”, “why”, “will” and “other”. The question label is created by querying for the label keyword in the question. If more than one keyword appears in a question, we adopt the one appearing earlier and the more specific one (e.g., “how much”, not “how”). In case there is no match, the class “other” is assigned.", "We train with AdaGrad BIBREF18 and use 50-dimensional GloVe BIBREF19 to initialize word representations, kept fixed during training. Table TABREF15 gives hyperparameter values, tuned on dev.", "We consider two evaluation metrics: accuracy (proportion of questions correctly answered) and NDCG INLINEFORM0 BIBREF20 . Unlike accuracy which evaluates if the question is correctly answered or not, NDCG INLINEFORM1 , being a measure of ranking quality, evaluates the position of the correct answer in our predicted ranking." ], [ "This work focuses on the comparison with systems about distributed representation learning and deep learning:", "Addition. Directly compare question and answers without considering the D. Sentence representations are computed by element-wise addition over word representations.", "Addition-proj. First compute sentence representations for Q, A and all D sentences as the same way as Addition, then match the two sentences in D which have highest similarity with Q and A respectively.", "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "AR. 
The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level.", "Overall, baselines Addition and Addition-proj do not involve complex composition and inference. NR and AR represent the top-performing deep neural networks in QA tasks." ], [ "In addition to the main architectures described above, we also explore two variants of ABCHNN, inspired by BIBREF21 and BIBREF2 , respectively.", "Variant-I: As RNNs are widely recognized as a competitor of CNNs in sentence modeling, similar with BIBREF21 , we replace the sentence-CNN in Figure FIGREF3 by a GRU while keeping other parts unchanged.", "Variant-II: How to model attention at the granularity of words was shown in BIBREF2 ; see their paper for details. We develop their attention idea and model attention at the granularity of sentence and snippet. Our attention gives different weights to sentences/snippets (not words), then computes the document representation as a weighted average of all sentence/snippet representations." ], [ "Table TABREF16 lists the performance of baselines, HABCNN-TE variants, HABCNN systems in the first, second and last block, respectively (we only report variants for top-performing HABCNN-TE). Consistently, our HABCNN systems outperform all baselines, especially surpass the two competitive deep learning based systems AR and NR. The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500. This demonstrates the promise of our architecture in this task.", "As said before, both AR and NR systems aim to generate answers in entity form. Their designs might not suit this machine comprehension task, in which the answers are openly-formed based on summarizing or abstracting the clues. To be more specific, AR models D always at word level, attentions are also paid to corresponding word representations, which is applicable for entity-style answers, but is less suitable for comprehension at sentence level or even snippet level. NR contrarily models D in sentence level always, neglecting the discovering of key phrases which however compose most of answers. In addition, the attention of AR system and the question-fact interaction in NR system both bring large numbers of parameters, this potentially constrains their power in a dataset of limited size.", "For Variant-I and Variant-II (second block of Table TABREF16 ), we can see that both modifications do harm to the original HABCNN-TE performance. The first variant, i.e, replacing the sentence-CNN in Figure FIGREF3 as GRU module is not helpful for this task. We suspect that this lies in the fundamental function of CNN and GRU. The CNN models a sentence without caring about the global word order information, and max-pooling is supposed to extract the features of key phrases in the sentence no matter where the phrases are located. This property should be useful for answer detection, as answers are usually formed by discovering some key phrases, not all words in a sentence should be considered. However, a GRU models a sentence by reading the words sequentially, the importance of phrases is less determined by the question requirement. The second variant, using a more complicated attention scheme to model biased D representations than simple cosine similarity based attention used in our model, is less effective to detect truly informative sentences or snippet. 
We doubt such kind of attention scheme when used in sentence sequences of large size. In training, the attention weights after softmax normalization have actually small difference across sentences, this means the system can not distinguish key sentences from noise sentences effectively. Our cosine similarity based attention-pooling, though pretty simple, is able to filter noise sentences more effectively, as we only pick top- INLINEFORM0 pivotal sentences to form D representation finally. This trick makes the system simple while effective." ], [ "In Figure FIGREF17 , we visualize the attention distribution at sentence level as well as snippet level for the statement “ Grandpa answered the door because Jimmy knocked” whose corresponding question requires multiple sentences to answer. From its left part, we can see that “Grandpa answered the door with a smile and welcomed Jimmy inside” has the highest attention weight. This meets the intuition that this sentence has semantic overlap with the statement. And yet this sentence does not contain the answer. Look further the right part, in which the CNN layer over sentence-level representations is supposed to extract high-level features of snippets. In this level, the highest attention weight is cast to the best snippet “Finally, Jimmy arrived...knocked. Grandpa answered the door...”. And the neighboring snippets also get relatively higher attentions than other regions. Recall that our system chooses the one sentence with top attention at left part and choose top-3 snippets at right part (referring to INLINEFORM0 value in Table TABREF15 ) to form D representations at different granularity, then uses a highway network to combine both representations as an overall D representation. This visualization hints that our architecture provides a good way for a question to compromise key information from different granularity.", "We also do some preliminary error analysis. One big obstacle for our systems is the “how many” questions. For example, for question “how many rooms did I say I checked?” and the answer candidates are four digits “5,4,3,2” which never appear in the D, but require the counting of some locations. However, these digital answers can not be modeled well by distributed representations so far. In addition, digital answers also appear for “what” questions, like “what time did...”. Another big limitation lies in “why” questions. This question type requires complex inference and long-distance dependencies. We observed that all deep lerning systems, including the two baselines, suffered somewhat from it." ], [ "This work takes the lead in presenting a CNN based neural network system for open-domain machine comprehension task. Our systems tried to solve this task in a document projection way as well as a textual entailment way. The latter one demonstrates slightly better performance. Overall, our architecture, modeling dynamic document representation by attention scheme from sentence level to snippet level, shows promising results in this task. In the future, more fine-grained representation learning approaches are expected to model complex answer types and question types." ] ], "section_name": [ "Introduction", "Related Work", "Model", "HABCNN", "HABCNN-QP & HABCNN-QAP", "HABCNN-TE", "Dataset", "Training Setup and Tricks", "Baseline Systems", "HABCNN Variants", "Results", "Case Study and Error Analysis", "Conclusion" ] }
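The HABCNN record above describes attention-pooling: each sentence (or snippet) representation of the document is scored by cosine similarity against the query representation, the top-k units are kept, and max-pooling over the survivors yields the document representation at that level. Below is a minimal PyTorch sketch of that step; the tensor shapes, the default value of k, and the function name are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def attention_pooling(query_rep, unit_reps, k=3):
    """Cosine-attention pooling over sentence/snippet representations.

    query_rep : (d,)   representation of Q, A, or the statement S
    unit_reps : (n, d) one row per sentence or snippet of the document
    Returns a single (d,) document representation built from the k
    highest-attention units via element-wise max-pooling, as described
    in the record above.
    """
    # attention = cosine similarity between each unit and the query
    attn = F.cosine_similarity(unit_reps, query_rep.unsqueeze(0), dim=1)  # (n,)
    k = min(k, unit_reps.size(0))
    top_idx = attn.topk(k).indices        # keep the k most relevant units
    selected = unit_reps[top_idx]         # (k, d)
    return selected.max(dim=0).values     # element-wise max-pooling -> (d,)

# toy usage: 5 sentence vectors of dimension 8
doc_sentences = torch.randn(5, 8)
query = torch.randn(8)
d_rep = attention_pooling(query, doc_sentences, k=3)
```

With k equal to 1 this reduces to picking the single most query-similar unit, and with k equal to n it reduces to plain max-pooling, which matches the behaviour noted in the record.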
{ "answers": [ { "annotation_id": [ "37c28c13e3a2bc003122c3d361e15967bc16f1c8", "6f01f33691bbe009bca62f6a5aab6d6db896e3a5", "e0297e2c15df9c2da46828d6ff7149734a4844bc" ], "answer": [ { "evidence": [ "Concretely, we propose HABCNN, a hierarchical attention-based convolutional neural network, to address this task in two roadmaps. In the first one, we project the document in two different ways, one based on question-attention, one based on answer-attention and then compare the two projected document representations to determine whether the answer matches the question. In the second one, every question-answer pair is reformatted into a statement, then the whole task is treated through textual entailment.", "We consider two evaluation metrics: accuracy (proportion of questions correctly answered) and NDCG INLINEFORM0 BIBREF20 . Unlike accuracy which evaluates if the question is correctly answered or not, NDCG INLINEFORM1 , being a measure of ranking quality, evaluates the position of the correct answer in our predicted ranking.", "Baseline Systems", "This work focuses on the comparison with systems about distributed representation learning and deep learning:", "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "AR. The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level.", "Overall, baselines Addition and Addition-proj do not involve complex composition and inference. NR and AR represent the top-performing deep neural networks in QA tasks.", "Table TABREF16 lists the performance of baselines, HABCNN-TE variants, HABCNN systems in the first, second and last block, respectively (we only report variants for top-performing HABCNN-TE). Consistently, our HABCNN systems outperform all baselines, especially surpass the two competitive deep learning based systems AR and NR. The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500. This demonstrates the promise of our architecture in this task.", "Dataset", "MCTest has two subsets. MCTest-160 is a set of 160 items, each consisting of a document, four questions followed by one correct anwer and three incorrect answers (split into 70 train, 30 dev and 60 test) and MCTest-500 a set of 500 items (split into 300 train, 50 dev and 150 test)." ], "extractive_spans": [ "15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500" ], "free_form_answer": "", "highlighted_evidence": [ "Concretely, we propose HABCNN, a hierarchical attention-based convolutional neural network, to address this task in two roadmaps. ", "We consider two evaluation metrics: accuracy (proportion of questions correctly answered) and NDCG INLINEFORM0 BIBREF20 . ", "Baseline Systems\nThis work focuses on the comparison with systems about distributed representation learning and deep learning:", "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer.", "AR. 
The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. ", "NR and AR represent the top-performing deep neural networks in QA tasks.", "Table TABREF16 lists the performance of baselines, HABCNN-TE variants, HABCNN systems in the first, second and last block, respectively (we only report variants for top-performing HABCNN-TE). Consistently, our HABCNN systems outperform all baselines, especially surpass the two competitive deep learning based systems AR and NR. The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500. This demonstrates the promise of our architecture in this task.", "Dataset\nMCTest has two subsets. MCTest-160 is a set of 160 items, each consisting of a document, four questions followed by one correct anwer and three incorrect answers (split into 70 train, 30 dev and 60 test) and MCTest-500 a set of 500 items (split into 300 train, 50 dev and 150 test)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF16 lists the performance of baselines, HABCNN-TE variants, HABCNN systems in the first, second and last block, respectively (we only report variants for top-performing HABCNN-TE). Consistently, our HABCNN systems outperform all baselines, especially surpass the two competitive deep learning based systems AR and NR. The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500. This demonstrates the promise of our architecture in this task." ], "extractive_spans": [], "free_form_answer": "15.6 and 16.5 for accuracy and NDCG on MCTest-150, 7.3 and 4.6 on MCTest-500.", "highlighted_evidence": [ "The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF16 lists the performance of baselines, HABCNN-TE variants, HABCNN systems in the first, second and last block, respectively (we only report variants for top-performing HABCNN-TE). Consistently, our HABCNN systems outperform all baselines, especially surpass the two competitive deep learning based systems AR and NR. The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500. This demonstrates the promise of our architecture in this task." ], "extractive_spans": [ "15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500" ], "free_form_answer": "", "highlighted_evidence": [ "Consistently, our HABCNN systems outperform all baselines, especially surpass the two competitive deep learning based systems AR and NR. The margin between our best-performing ABHCNN-TE and NR is 15.6/16.5 (accuracy/NDCG) on MCTest-150 and 7.3/4.6 on MCTest-500." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "17f0c1ccb2eb9e323c9eec1f013cd4496254a850", "2511ebaa0bf2a39a391e69f725e3b897204b4ad5", "e8ccdc10e8b877c54aab311badebedc2c54d0efe" ], "answer": [ { "evidence": [ "This work focuses on the comparison with systems about distributed representation learning and deep learning:", "Addition. Directly compare question and answers without considering the D. 
Sentence representations are computed by element-wise addition over word representations.", "Addition-proj. First compute sentence representations for Q, A and all D sentences as the same way as Addition, then match the two sentences in D which have highest similarity with Q and A respectively.", "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "AR. The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level." ], "extractive_spans": [ "Addition", "Addition-proj", "Neural Reasoner", "Attentive Reader" ], "free_form_answer": "", "highlighted_evidence": [ "This work focuses on the comparison with systems about distributed representation learning and deep learning:\n\nAddition. Directly compare question and answers without considering the D. ", "Addition-proj. First compute sentence representations for Q, A and all D sentences as the same way as Addition, then match the two sentences in D which have highest similarity with Q and A respectively.", "The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Baseline Systems", "This work focuses on the comparison with systems about distributed representation learning and deep learning:", "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "AR. The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level.", "Overall, baselines Addition and Addition-proj do not involve complex composition and inference. NR and AR represent the top-performing deep neural networks in QA tasks." ], "extractive_spans": [ "Neural Reasoner", "Attentive Reader" ], "free_form_answer": "", "highlighted_evidence": [ "Baseline Systems\nThis work focuses on the comparison with systems about distributed representation learning and deep learning:", "NR. 
The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "AR. The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level.", "NR and AR represent the top-performing deep neural networks in QA tasks." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. The input for the encoding layer is a question and the sentences of the document (called facts); each sentence is encoded by a GRU into a vector. In each reasoning layer, NR lets the question representation interact with each fact representation as reasoning process. Finally, all temporary reasoning clues are pooled as answer representation.", "AR. The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. Attention mechanism is implemented at word representation level." ], "extractive_spans": [ "The Neural Reasoner", "The Attentive Reader" ], "free_form_answer": "", "highlighted_evidence": [ "NR. The Neural Reasoner BIBREF21 has an encoding layer, multiple reasoning layers and a final answer layer. ", "AR. The Attentive Reader BIBREF2 is implemented by modeling the whole D as a word sequence – without specific sentence / snippet representations – using an LSTM. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ], "nlp_background": [ "", "" ], "paper_read": [ "", "" ], "question": [ "what was the margin their system outperformed previous ones?", "what prior approaches did they compare to?" ], "question_id": [ "041529e15b70b21986adb781fd9b94b595e451ed", "da2350395867b5fd4dbf968b5a1cd6921ab6dd37" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "" ], "topic_background": [ "", "" ] }
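The training-setup paragraphs of the record above state that the systems minimise a ranking loss over cosine matching scores with a constant margin, but the display equation itself is not reproduced in the text. The sketch below assumes the standard max-margin form max(0, margin - s(D, A+) + s(D, A-)); the margin value and function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ranking_loss(doc_rep, pos_rep, neg_rep, margin=0.5):
    """Max-margin ranking loss with cosine matching scores.

    Assumes the usual hinge form over (document, positive answer,
    negative answer) triples; the exact equation and margin constant
    are not given in the record above, so this is only a sketch.
    """
    s_pos = F.cosine_similarity(doc_rep, pos_rep, dim=-1)
    s_neg = F.cosine_similarity(doc_rep, neg_rep, dim=-1)
    return torch.clamp(margin - s_pos + s_neg, min=0).mean()
```

Treating each (document, positive, negative) triple as an individual training example, as the record reports works better in practice, simply means calling this loss once per negative candidate.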
{ "caption": [ "Figure 2: Illustrations of HABCNN-QAP (top), HABCHHQP (middle) and HABCNN-TE (bottom). Q, A, S: question, answer, statement; D: document", "Figure 3: HABCNN. Feature maps for phrase representations pi and the max pooling steps that create sentence representations out of phrase representations are omitted for simplification. Each snippet covers three sentences in snippet-CNN. Symbols ◦ mean cosine similarity calculation.", "Table 2: Experimental results for one-sentence (one), multiple-sentence (mul) and all cases." ], "file": [ "2-Figure2-1.png", "3-Figure3-1.png", "6-Table2-1.png" ] }
[ "what was the margin their system outperformed previous ones?" ]
[ [ "1602.04341-Baseline Systems-0", "1602.04341-Baseline Systems-3", "1602.04341-Dataset-0", "1602.04341-Baseline Systems-4", "1602.04341-Results-0", "1602.04341-Training Setup and Tricks-6", "1602.04341-Introduction-3", "1602.04341-Baseline Systems-5" ] ]
[ "15.6 and 16.5 for accuracy and NDCG on MCTest-150, 7.3 and 4.6 on MCTest-500." ]
78
1908.02284
Two-stage Training for Chinese Dialect Recognition
In this paper, we present a two-stage language identification (LID) system based on a shallow ResNet14 followed by a simple 2-layer recurrent neural network (RNN) architecture, which was used for the Xunfei (iFlyTek) Chinese Dialect Recognition Challenge and won first place among 110 teams. The system first trains an acoustic model (AM) with connectionist temporal classification (CTC) to recognize the given phonetic sequence annotation, and then trains another RNN to classify the dialect category using the intermediate features from the AM as inputs. Compared with a three-stage system we further explore, our results show that the two-stage system can achieve high accuracy for Chinese dialect recognition under both short-utterance and long-utterance conditions with less training time.
{ "paragraphs": [ [ "The aim of language identification (LID) is to determine the language of an utterance and can be defined as a variable-length sequence classification task on the utterance-level. The task introduced in this paper is more challenging than general LID tasks cause we use a dialect database which contains 10 dialects in China. The dialects' regions are close to each other and they all belong to Chinese, so they have the same characters and similar pronunciations.", "Recently, the use of deep neural network (DNN) has been explored in LID tasks. The DNN is trained to discriminate individual physical states of a tied-state triphone and then extract the bottleneck features to a back-end system for classification BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . End-to-end frameworks based on DNN later are trained for LID BIBREF4 . Other network architectures are successfully applied to LID task, example for convolutional neural network (CNN) BIBREF5 , BIBREF6 , time delay neural network (TDNN) BIBREF7 , RNN BIBREF8 , BIBREF9 , BIBREF10 , and BIBREF11 has a CNN followed by an RNN structure, which is similar to ours. They predict the final category of an utterance directly by the last fully connected layer, or derive the results by averaging the the frame-level posteriors. These frameworks just trained end-to-end to recognize languages, but they do not consider the phonetic information concretely.", "On the other hand, in many utterance analyzing tasks such as acoustic speech recognition (ASR), speaker verification (SV) and our LID, only a simple task or a specific aim is focused on. However, an utterance always has multi-dimensional information such as content, emotion, speaker and language and there are some certain correlations between them. Although the LID task is text-independent, which means the content of each utterance is totally different, different languages may have its own pronunciations or tones. Thus acoustic and language are two components in the LID task, BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 use the bottleneck features from an ASR system and feed to another neural network for recognition. Nevertheless, these ASR DNNs constituted by fully connected layers adds significant computational complexity and also require labels of physical states of a tied-state triphone.", "Inspired by all this, we assume that the high-dim features extracted from the network will contain information of pronunciation and language category, so we combine the ASR method with our LID task and here are our main contributions:", "The remainder of the paper is organized as follows. Section 2 introduces some related works about ASR and section 3 introduces the ResNet14 structure and gives processing of the two multi-stage systems. We present details of the database and initialization methods of the network in Section 4. The results of experiments and analysis are shown in Section 5. Lastly we give the conclusion and some future work in Section 6." ], [ "ASR BIBREF17 task enables the recognition and translation of spoken language into text. Traditionally, we can train an AM based on frame-wise cross-entropy loss to recognize phoneme, which requires tedious label alignment procedure such as Hidden Markov Model and Gaussian Mixture Model (HMM-GMM) paradigm. 
Then we can use a pronunciation model and a language model to transfer into text.", "In the latter case, CTC is used in training end-to-end ASR network BIBREF18 , BIBREF19 , BIBREF20 , which means that we do not have to align the phoneme label before training. CNN followed by RNN architectures have shown strong ability in dealing sequence-related problems such as sense text recognition BIBREF21 and ASR BIBREF22 . These make the ASR network easy to train and perform better with fewer parameters. BIBREF23 , BIBREF24 further add residual links to the CNN and RNN respectively and both make significant progress." ], [ "The major network structure we use in the two-stage system can be divided to the CNN part and the RNN part, as described in Table TABREF7 . Given the input data of shape INLINEFORM0 , where INLINEFORM1 is the frame length of an utterance, we finally get 512-dimensional frame-level representation and INLINEFORM2 is the number of phonemes or dialect categories.", "Compared with other DNN based systems, we design the CNN part based on ResNet-18 BIBREF25 structure, named ResNet14, as the main part, which decreases the parameters a lot. The first conv layer is with kernel size INLINEFORM0 and stride size INLINEFORM1 , followed by a maxpool layer with stride size INLINEFORM2 for downsampling. Then the residual blocks extract high-dim features from the input sequences and keep the low-rank information. There are 6 res-blocks in all for decreasing parameters, the kernel size of each block is INLINEFORM3 and the features are downsampled while adding channels.", "We use 2-layer bidirectional long-short term memory (BLSTM) BIBREF26 as the RNN part following ResNet14. BLSTM extends original LSTM by introducing a backward direction layer so it considers the future context. The output of the network will be linked to different loss functions and different labels in different stages." ], [ "CTC is an objective function that allows an RNN to be trained for sequence transcription tasks without requiring any prior alignment between the input and target sequences. The label sequence INLINEFORM0 can be mapped to its corresponding CTC paths. We denote the set of CTC paths for INLINEFORM1 as INLINEFORM2 . Thus the likelihood of INLINEFORM3 can be evaluated as a sum of the probabilities of its CTC paths: DISPLAYFORM0 ", "where INLINEFORM0 is the utterance and INLINEFORM1 is a CTC path. Then the network can be trained to optimize the CTC function INLINEFORM2 by the given sequence labeling. For the LID task, we use the multi-class cross-entropy loss for classification: DISPLAYFORM0 ", "where INLINEFORM0 is the ground truth label and INLINEFORM1 is the output probability distribution." ], [ "Figure FIGREF12 shows the architecture of the two-stage system. The input is the sound feature of each utterance. We firstly train the AM with the ResNet14 followed by an RNN architecture, then the intermediate results computed by res-blocks are feed to the second stage as the input. The framework does not need to compress the feature sequence so it keeps all the information from the ResNet14 part. The network of second stage is 2-layer BLSTM. The final pooling strategy is average pooling on time-dimension so we can get the utterance-level category results from frame-level, and the output is the prediction of dialect category. 
We use CTC loss to train the AM so the network outputs can align with the phoneme sequences automatically and use cross-entropy loss to discriminate between dialects.", "Compared with multi-task training BIBREF27 , BIBREF28 in SV tasks, it should be emphasized that these stages should be trained step by step instead of multi-task learning with shared layers, that is to say we backpropagate the whole network while training AM, and only backpropagate the RNN part in the second stage, or the network will be degenerated and lost the information of acoustic knowledge." ], [ "The three-stage system, as shown in Figure FIGREF14 , has a more complex architecture. Firstly, we still train an AM whose architecture is the same as the first-stage in the two-stage system. This AM is used to generate temporal locations of each phoneme through CTC loss, so that we can train an another AM by using cross-entropy loss as the second-stage to predict the corresponding phonetic labels of the input frames, in which we only use ResNet14 without an RNN because we have the precise locations of each phoneme from the first stage. The third stage is similar, we use the intermediate features from the second stage to train an RNN network for LID task, also the loss in this stage is cross-entropy loss." ], [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian. Each dialect has 6-hour audio data. For the training set, there will be 6000 audio files in each dialect with variable length (Figure FIGREF16 ), we can see that most files are longer than 3 seconds. The test set has 500 audio files in each dialect and the set is divided into two categories according to the duration of the audio file ( INLINEFORM0 3s for the first task and INLINEFORM1 3s for the second task).", "The phonetic sequence annotation of the corresponding text to each speech is also provided in the training set. There are 27 initials and 39 finals with 148 tones in the whole database." ], [ "We convert the raw audio to 40-dimensional log Mel-filterbank coefficients with a frame-length of 25 ms, mean-normalized over the whole utterance. Then we stack all the log-mel filterbank features and feed it into the neural network, which is implemented in PyTorch. No voice activity detection (VAD) or other data augmentation approaches are applied. During the training process, we use Adam as the optimization method and set different learning rates and weight decay in different stages. We do not set dropout while training AM but set the dropout value=0.5 while training the LID network (the last stage).", "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category. In the process of evaluation, we compute the accuracy of the two sub-tasks and the whole test set to evaluate the performance of each system." ], [ "First of all, we compare the two-stage system and the three-stage system trained with phonetic sequence annotation and dialect category label with the baseline trained only with dialect category label. The two multi-stage system have the same ResNet14 architecture and use 2-layer BLSTM as the RNN part with 256 nodes. 
From the results in the Table TABREF20 , we can see that the relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task relative to the baseline and the two-stage system performs best. We also observe that both two multi-stage systems perform excellently in long duration ( INLINEFORM0 3s) task and the two-stage system illustrates its advantageous and robustness in short duration ( INLINEFORM1 3s) task.", "By analyzing the confusing matrices (Figure FIGREF19 ) of predicted results, we can find that the accuracy is high in several dialects' recognition, such as Shanghai (98.8%) and Hefei (99.8%), but the systems have some trouble while recognizing Minnan and Kekka, Hebei and Shanxi. The results accord with regional distribution of the dialects. For example, Minnan and Kekka are both in Fujian Province and have lots of cognate words, so it is hard to recognize them in reality." ], [ "We further explore the impact of different RNN structures with bidirectional gated recurrent unit (BGRU) and BLSTM. For the two-stage system (Table TABREF22 ), adding the nodes of BLSTM does not work, but adding another layer makes sense in short-duration task. Moreover, with the same layers and nodes, BLSTM outperforms BGRU in the two sub-tasks. We believe that sound related tasks do not need a very deep network as image related tasks, that is also the reason why we use a shallow ResNet14 as the CNN part.", "We evaluate the three-stage system with the same experiments, and the results (Table TABREF23 ) demonstrate that the three-stage system can achieve high accuracy in long duration task by larger BLSTM layers and the BGRU structure outperforms BLSTM on the whole. But adding the third RNN layer also does not work in these experiments.", "As Table TABREF24 shows, training networks in the first stage (with CTC loss) needs more time for convergence than training networks in the second or third stage (with cross-entropy loss). We can observe that the two-stage system spends less time while having a slightly higher accuracy compared to the three-stage system.", "These two multi-stage systems both much outperform the baseline system. They learn acoustic and language knowledge successively, indicating that language and phoneme are features of different levels, so we have to train step by step to avoid the networks “forget\" some knowledge. Through the process, we can find the rules of multi-task and multi-stage training, if the labels are in different levels then multi-stage training should be used such as the situation in our paper, otherwise multi-task training should be used for parallel learning a wide range of knowledge." ], [ "In this work, we propose an acoustic model based on ResNet14 followed by an RNN to recognize phoneme sequence directly with CTC loss and train a simple RNN lastly to get posteriors for recognizing dialect category, forming a two-stage LID system. The system links the different stages by using intermediate features extracted by a shallow ResNet14 architecture. Compared with a simple network or the three-stage system, the two-stage system achieves the state-of-the-art in the Chinese dialect recognition task. We believe this idea of two-stage training can provide inspirations for learning different classes knowledge and can extend to other fields." 
] ], "section_name": [ "Introduction", "Related works", "Network structure", "Loss function", "Two-stage system", "Three-stage system", "Data description", "Experimental setup", "Comparison of different stage systems", "Comparison of different RNN structures", "Conclusions" ] }
{ "answers": [ { "annotation_id": [ "1c751eea2e248956c8be45d687dc14176948bf20", "f3cbe10b07421b055d22d96c483a8f78c6446a79", "f9883b10bd7efdb51cb337f2ac605b5e627adeea" ], "answer": [ { "evidence": [ "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category. In the process of evaluation, we compute the accuracy of the two sub-tasks and the whole test set to evaluate the performance of each system." ], "extractive_spans": [], "free_form_answer": "one-stage RNN system containing 2-layer BLSTM", "highlighted_evidence": [ "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category. In the process of evaluation, we compute the accuracy of the two sub-tasks and the whole test set to evaluate the performance of each system." ], "extractive_spans": [ "one-stage RNN system" ], "free_form_answer": "", "highlighted_evidence": [ "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category. In the process of evaluation, we compute the accuracy of the two sub-tasks and the whole test set to evaluate the performance of each system." ], "extractive_spans": [ "a one-stage RNN system" ], "free_form_answer": "", "highlighted_evidence": [ "The baseline we use for comparison is a one-stage RNN system, the RNN structure is the same as the last stage containing 2-layer BLSTM and directly trained to recognize dialect category. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "34f23feca37ebc01d2d2483cc284a9fece4f501c", "aac717c18d366d20da3fd77e819a154f71b41c59", "c7776942ca17f09e827ac05327ebfa327ce6aeb2" ], "answer": [ { "evidence": [ "First of all, we compare the two-stage system and the three-stage system trained with phonetic sequence annotation and dialect category label with the baseline trained only with dialect category label. The two multi-stage system have the same ResNet14 architecture and use 2-layer BLSTM as the RNN part with 256 nodes. From the results in the Table TABREF20 , we can see that the relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task relative to the baseline and the two-stage system performs best. 
We also observe that both two multi-stage systems perform excellently in long duration ( INLINEFORM0 3s) task and the two-stage system illustrates its advantageous and robustness in short duration ( INLINEFORM1 3s) task.", "By analyzing the confusing matrices (Figure FIGREF19 ) of predicted results, we can find that the accuracy is high in several dialects' recognition, such as Shanghai (98.8%) and Hefei (99.8%), but the systems have some trouble while recognizing Minnan and Kekka, Hebei and Shanxi. The results accord with regional distribution of the dialects. For example, Minnan and Kekka are both in Fujian Province and have lots of cognate words, so it is hard to recognize them in reality." ], "extractive_spans": [ " relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task relative to the baseline", "accuracy is high in several dialects' recognition, such as Shanghai (98.8%) and Hefei (99.8%), but the systems have some trouble while recognizing Minnan and Kekka, Hebei and Shanxi" ], "free_form_answer": "", "highlighted_evidence": [ "From the results in the Table TABREF20 , we can see that the relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task relative to the baseline and the two-stage system performs best. We also observe that both two multi-stage systems perform excellently in long duration ( INLINEFORM0 3s) task and the two-stage system illustrates its advantageous and robustness in short duration ( INLINEFORM1 3s) task.", "By analyzing the confusing matrices (Figure FIGREF19 ) of predicted results, we can find that the accuracy is high in several dialects' recognition, such as Shanghai (98.8%) and Hefei (99.8%), but the systems have some trouble while recognizing Minnan and Kekka, Hebei and Shanxi. The results accord with regional distribution of the dialects." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "In this work, we propose an acoustic model based on ResNet14 followed by an RNN to recognize phoneme sequence directly with CTC loss and train a simple RNN lastly to get posteriors for recognizing dialect category, forming a two-stage LID system. The system links the different stages by using intermediate features extracted by a shallow ResNet14 architecture. Compared with a simple network or the three-stage system, the two-stage system achieves the state-of-the-art in the Chinese dialect recognition task. We believe this idea of two-stage training can provide inspirations for learning different classes knowledge and can extend to other fields." ], "extractive_spans": [ "state-of-the-art in the Chinese dialect recognition task" ], "free_form_answer": "", "highlighted_evidence": [ "Compared with a simple network or the three-stage system, the two-stage system achieves the state-of-the-art in the Chinese dialect recognition task. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "First of all, we compare the two-stage system and the three-stage system trained with phonetic sequence annotation and dialect category label with the baseline trained only with dialect category label. The two multi-stage system have the same ResNet14 architecture and use 2-layer BLSTM as the RNN part with 256 nodes. From the results in the Table TABREF20 , we can see that the relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task relative to the baseline and the two-stage system performs best. 
We also observe that both two multi-stage systems perform excellently in long duration ( INLINEFORM0 3s) task and the two-stage system illustrates its advantageous and robustness in short duration ( INLINEFORM1 3s) task.", "FLOAT SELECTED: Table 2: System Performance of different multi-stage systems" ], "extractive_spans": [], "free_form_answer": " The relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task (88.88 and 87.24) relative to the baseline 78.85.", "highlighted_evidence": [ "The two multi-stage system have the same ResNet14 architecture and use 2-layer BLSTM as the RNN part with 256 nodes. From the results in the Table TABREF20 , we can see that the relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task relative to the baseline and the two-stage system performs best. ", "FLOAT SELECTED: Table 2: System Performance of different multi-stage systems" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "1acccdc937ef334b73a4562d6b2134ff1ae995a1", "305bad903ab1b5630a062083fc4ab9cc83575840", "aa7e32ad6fea277dbdf23388762c721345670faf" ], "answer": [ { "evidence": [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian. Each dialect has 6-hour audio data. For the training set, there will be 6000 audio files in each dialect with variable length (Figure FIGREF16 ), we can see that most files are longer than 3 seconds. The test set has 500 audio files in each dialect and the set is divided into two categories according to the duration of the audio file ( INLINEFORM0 3s for the first task and INLINEFORM1 3s for the second task)." ], "extractive_spans": [ "Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian" ], "free_form_answer": "", "highlighted_evidence": [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian. Each dialect has 6-hour audio data. For the training set, there will be 6000 audio files in each dialect with variable length (Figure FIGREF16 ), we can see that most files are longer than 3 seconds. The test set has 500 audio files in each dialect and the set is divided into two categories according to the duration of the audio file ( INLINEFORM0 3s for the first task and INLINEFORM1 3s for the second task)." ], "extractive_spans": [ "Ningxia", "Hefei", "Sichuan", "Shanxi", "Changsha", "Hebei", "Nanchang", "Shanghai", "Kekka", "Fujian" ], "free_form_answer": "", "highlighted_evidence": [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian. Each dialect has 6-hour audio data. 
For the training set, there will be 6000 audio files in each dialect with variable length (Figure FIGREF16 ), we can see that most files are longer than 3 seconds. The test set has 500 audio files in each dialect and the set is divided into two categories according to the duration of the audio file ( INLINEFORM0 3s for the first task and INLINEFORM1 3s for the second task)." ], "extractive_spans": [ "Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian" ], "free_form_answer": "", "highlighted_evidence": [ "We use a database covering 10 most widespread Chinese dialects, the dialects are Ningxia, Hefei, Sichuan, Shanxi, Changsha, Hebei, Nanchang, Shanghai, Kekka and Fujian." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "question": [ "what are the baselines?", "what results do they achieve?", "what chinese dialects are explored?" ], "question_id": [ "eb653a5c59851eda313ece0bcd8c589b6155d73e", "0caa3162abe588f576a568d63ab9fd0e9c46ceda", "cbe42bf7c99ee248cdb2c5d6cf86b41106e66863" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "search_query": [ "", "", "" ], "topic_background": [ "", "", "" ] }
{ "caption": [ "Figure 1: Training architecture of two-stage system", "Table 1: Our ResNet14 structure, which has only 5.36 million parameters compared to the standard ResNet-18 (11.26 million)", "Figure 3: Data distribution of time", "Figure 2: Training architecture of three-stage system", "Table 2: System Performance of different multi-stage systems", "Figure 4: Comparison of confusion matrices produced by the two-stage system (left) and the three-stage system (right)", "Table 5: Epochs to converge of different multi-stage systems", "Table 3: Comparison of two-stage system with different RNN structures", "Table 4: Comparison of three-stage system with different RNN structures" ], "file": [ "2-Figure1-1.png", "2-Table1-1.png", "3-Figure3-1.png", "3-Figure2-1.png", "3-Table2-1.png", "4-Figure4-1.png", "4-Table5-1.png", "4-Table3-1.png", "4-Table4-1.png" ] }
[ "what are the baselines?", "what results do they achieve?" ]
[ [ "1908.02284-Experimental setup-1" ], [ "1908.02284-Comparison of different stage systems-1", "1908.02284-Comparison of different stage systems-0", "1908.02284-3-Table2-1.png", "1908.02284-Conclusions-0" ] ]
[ "one-stage RNN system containing 2-layer BLSTM", " The relative accuracy (ACC) of the two multi-stage systems increases by 10% on every task (88.88 and 87.24) relative to the baseline 78.85." ]
79
1907.00168
The CUED's Grammatical Error Correction Systems for BEA-2019
We describe two entries from the Cambridge University Engineering Department to the BEA 2019 Shared Task on grammatical error correction. Our submission to the low-resource track is based on prior work on using finite state transducers together with strong neural language models. Our system for the restricted track is a purely neural system consisting of neural language models and neural machine translation models trained with back-translation and a combination of checkpoint averaging and fine-tuning -- without the help of any additional tools like spell checkers. The latter system has been used inside a separate system combination entry in cooperation with the Cambridge University Computer Lab.
{ "paragraphs": [ [ "The automatic correction of errors in text [In a such situaction INLINEFORM0 In such a situation] is receiving more and more attention from the natural language processing community. A series of competitions has been devoted to grammatical error correction (GEC): the CoNLL-2013 shared task BIBREF0 , the CoNLL-2014 shared task BIBREF1 , and finally the BEA 2019 shared task BIBREF2 . This paper presents the contributions from the Cambridge University Engineering Department to the latest GEC competition at the BEA 2019 workshop.", "We submitted systems to two different tracks. The low-resource track did not permit the use of parallel training data except a small development set with around 4K sentence pairs. For our low-resource system we extended our prior work on finite state transducer based GEC BIBREF3 to handle new error types such as punctuation errors as well as insertions and deletions of a small number of frequent words. For the restricted track, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences. Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction. We confirm the results of BIBREF4 and report substantial gains by applying back-translation BIBREF5 to GEC – a data augmentation technique common in machine translation. Furthermore, we noticed that large parts of the training data do not match the target domain. We mitigated the domain gap by over-sampling the in-domain training corpus, and by fine-tuning through continued training. Our final model is an ensemble of four neural machine translation (NMT) models and two neural language models (LMs) with Transformer architecture BIBREF6 . Our purely neural system was also part of the joint submission with the Cambridge University Computer Lab described by BIBREF7 ." ], [ " BIBREF3 investigated the use of finite state transducers (FSTs) for neural grammatical error correction. They proposed a cascade of FST compositions to construct a hypothesis space which is then rescored with a neural language model. We will outline this approach and explain our modifications in this section. For more details we refer to BIBREF3 .", "In a first step, the source sentence is converted to an FST INLINEFORM0 (Fig. FIGREF3 ). This initial FST is augmented by composition (denoted with the INLINEFORM1 -operator) with various other FSTs to cover different error types. Composition is a widely used standard operation on FSTs and supported efficiently by FST toolkits such as OpenFST BIBREF8 . We construct the hypothesis space as follows:", "We compose the input INLINEFORM0 with the deletion transducer INLINEFORM1 in Fig. FIGREF5 . INLINEFORM2 allows to delete tokens on the short list shown in Tab. TABREF6 at a cost INLINEFORM3 . We selected INLINEFORM4 by looking up all tokens which have been deleted in the dev set more than five times and manually filtered that list slightly. We did not use the full list of dev set deletions to avoid under-estimating INLINEFORM5 in tuning.", "In a next step, we compose the transducer from step 1 with the edit transducer INLINEFORM0 in Fig. FIGREF7 . This step addresses substitution errors such as spelling or morphology errors. Like BIBREF3 , we use the confusion sets of BIBREF9 based on CyHunspell for spell checking BIBREF10 , the AGID morphology database for morphology errors BIBREF11 , and manually defined corrections for determiner and preposition errors to construct INLINEFORM1 . 
Additionally, we extracted all substitution errors from the BEA-2019 dev set which occurred more than five times, and added a small number of manually defined rules that fix tokenization around punctuation symbols.", "We found it challenging to allow insertions in LM-based GEC because the LM often prefers inserting words with high unigram probability such as articles and prepositions before less predictable words like proper names. We therefore restrict insertions to the three tokens “,”, “-”, and “'s” and allow only one insertion per sentence. We achieve this by adding the transducer INLINEFORM0 in Fig. FIGREF8 to our composition cascade.", "Finally, we map the word-level FSTs to the subword-level by composition with a mapping transducer INLINEFORM0 that applies byte pair encoding BIBREF12 to the full words. Word-to-BPE mapping transducers have been used in prior work to combine word-level models with subword-level neural sequence models BIBREF3 , BIBREF13 , BIBREF14 , BIBREF15 .", "In a more condensed form, we can describe the final transducer as: DISPLAYFORM0 ", "with INLINEFORM0 for deletions, INLINEFORM1 for substitutions, INLINEFORM2 for insertions, and INLINEFORM3 for converting words to BPE tokens. Path scores in the FST in Eq. EQREF14 are the accumulated penalties INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . The INLINEFORM7 -parameters are tuned on the dev set using a variant of Powell search BIBREF16 . We apply standard FST operations like output projection, INLINEFORM8 -removal, determinization, minimization, and weight pushing BIBREF17 , BIBREF18 to help downstream decoding. Following BIBREF3 we then use the resulting transducer to constrain a neural LM beam decoder." ], [ "Our LMs are Transformer BIBREF6 decoders (transformer_big) trained using the Tensor2Tensor library BIBREF19 . We delay SGD updates BIBREF20 , BIBREF21 with factor 2 to simulate 500K training steps with 8 GPUs on 4 physical GPUs. Training batches contain about 4K source and target tokens. Our LM training set comprises the monolingual news2015-news2018 English training sets from the WMT evaluation campaigns BIBREF22 after language detection BIBREF23 (138M sentences) and subword segmentation using byte pair encoding BIBREF12 with 32K merge operations. For decoding, we use our SGNMT tool BIBREF13 , BIBREF14 with OpenFST backend BIBREF8 .", "We use neural LMs and neural machine translation (NMT) models in our restricted track entry. Our neural LM is as described in Sec. SECREF15 . Our LMs and NMT models share the same subword segmentation. We perform exploratory NMT experiments with the Base setup, but switch to the Big setup for our final models. Tab. TABREF21 shows the differences between both setups. Tab. TABREF22 lists some corpus statistics for the BEA-2019 training sets. In our experiments without fine-tuning we decode with the average of the 20 most recent checkpoints BIBREF26 . We use the SGNMT decoder BIBREF13 , BIBREF14 in all our experiments.", "The BEA-2019 training corpora (Tab. TABREF22 ) differ significantly not only in size but also their closeness to the target domain. The W&I+LOCNESS corpus is most similar to the BEA-2019 dev and test sets in terms of domains and the distribution over English language proficiency, but only consists of 34K sentence pairs. To increase the importance of in-domain training samples we over-sampled the W&I+LOCNESS corpus with different rates. Tab. TABREF24 shows that over-sampling by factor 4 (i.e. 
adding the W&I+LOCNESS corpus four times to the training set) improves the ERRAMT INLINEFORM0 -score by 2.2 points on the BEA-2019 dev set and does not lead to substantial losses on the CoNLL-2014 test set. We will over-sample the W&I+LOCNESS corpus by four in all subsequent experiments.", "Previous works often suggested to remove unchanged sentences (i.e. source and target sentences are equal) from the training corpora BIBREF3 , BIBREF27 , BIBREF28 . We note that removing these identity mappings can be seen as measure to control the balance between precision and recall. As shown in Tab. TABREF26 , removing identities encourages the model to make more corrections and thus leads to higher recall but lower precision. It depends on the test set whether this results in an improvement in INLINEFORM0 score. For the subsequent experiments we found that removing identities in the parallel training corpora but not in the back-translated synthetic data works well in practice.", "Back-translation BIBREF5 has become the most widely used technique to use monolingual data in neural machine translation. Back-translation extends the existing parallel training set by additional training samples with real English target sentences but synthetic source sentences. Different methods have been proposed to synthesize the source sentence such as using dummy tokens BIBREF5 , copying the target sentence BIBREF29 , or sampling from or decoding with a reverse sequence-to-sequence model BIBREF5 , BIBREF30 , BIBREF4 . The most popular approach is to generate the synthetic source sentences with a reverse model that is trained to transform target to source sentences using beam search. In GEC, this means that the reverse model learns to introduce errors into a correct English sentence. Back-translation has been applied successfully to GEC by BIBREF4 . We confirm the effectiveness of back-translation in GEC and discuss some of the differences between applying this technique to grammatical error correction and machine translation.", "Our experiments with back-translation are summarized in Tab. TABREF28 . Adding 1M synthetic sentences to the training data already yields very substantial gains on both test sets. We achieve our best results with 5M synthetic sentences (+8.44 on BEA-2019 Dev). In machine translation, it is important to maintain a balance between authentic and synthetic data BIBREF5 , BIBREF31 , BIBREF32 . Over-sampling the real data is a common practice to rectify that ratio if large amounts of synthetic data are available. Interestingly, over-sampling real data in GEC hurts performance (row 3 vs. 5 in Tab. TABREF28 ), and it is possible to mix real and synthetic sentences at a ratio of 1:7.9 (last three rows in Tab. TABREF28 ). We will proceed with the 5M setup for the remainder of this paper.", "As explained previously, we over-sample the W&I+LOCNESS corpus by factor 4 to mitigate the domain gap between the training set and the BEA-2019 dev and test sets. To further adapt our system to the target domain, we fine-tune the NMT models on W&I+LOCNESS after convergence on the full training set. We do that by continuing training on W&I+LOCNESS from the last checkpoint of the first training pass. Fig. FIGREF30 plots the INLINEFORM0 score on the BEA-2019 dev set for two different setups. For the red curve, we average all checkpoints BIBREF26 (including the last unadapted checkpoint) up to a certain training iteration. Checkpoints are dumped every 500 steps. The green curve does not use any checkpoint averaging. 
Checkpoint averaging helps to smooth out fluctuations in INLINEFORM1 score, and also generalizes better to CoNLL-2014 (Tab. TABREF31 ).", "Tab. TABREF33 contains our experiments with the Big configuration. In addition to W&I+LOCNESS over-sampling, back-translation with 5M sentences, and fine-tuning with checkpoint averaging, we report further gains by adding the language models from our low-resource system (Sec. SECREF15 ) and ensembling. Our best system (4 NMT models, 2 language models) achieves 58.9 M2 on CoNLL-2014, which is slightly (2.25 points) worse than the best published result on that test set BIBREF27 . However, we note that we have tailored our system towards the BEA-2019 dev set and not the CoNLL-2013 or CoNLL-2014 test sets. As we argued in Sec. SECREF18 , our results throughout this work suggest strongly that the optimal system parameters for these test sets are very different from each other, and that our final system settings are not optimal for CoNLL-2014. We also note that unlike the system of BIBREF27 , our system for the restricted track does not use spell checkers or other NLP tools but relies solely on neural sequence models." ], [ "We report M2 BIBREF24 scores on the CoNLL-2014 test set BIBREF1 and span-based ERRANT scores BIBREF25 on the BEA-2019 dev set BIBREF2 . On CoNLL-2014 we compare with the best published results with comparable amount of parallel training data. We refer to BIBREF2 for a full comparison of BEA-2019 systems. We tune our systems on BEA-2019 and only report the performance on CoNLL-2014 for comparison to prior work.", "Tab. TABREF9 summarizes our low-resource experiments. Our substitution-only system already outperforms the prior work of BIBREF3 . Allowing for deletions and insertions improves the ERRANT score on BEA-2019 Dev by 2.57 points. We report further gains on both test sets by ensembling two language models and increasing the beam size." ], [ "Our results in Tab. TABREF9 differ significantly between the CoNLL-2014 test set and the BEA-2019 dev set. Allowing insertions is beneficial on BEA-2019 Dev but decreases the M2 score on CoNLL-2014. Increasing the beam size improves our system by 3.28 points on CoNLL-2014 while the impact on BEA-2019 Dev is smaller (+0.85 points). These differences can be partially explained by comparing the error type frequencies in the reference annotations in both test sets (Tab. TABREF19 ). Samples in CoNLL-2014 generally need more corrections per sentence than in BEA-2019 Dev. More importantly, the CoNLL-2014 test set contains fewer missing words, but much more unnecessary words than BEA-2019 Dev. This mismatch tempers with tuning as we explicitly tune insertion and deletion penalties." ], [ "In contrast to our low-resource submission, our restricted system entirely relies on neural models and does not use any external NLP tools, spell checkers, or hand-crafted confusion sets. For simplicity, we also chose to use standard implementations BIBREF19 of standard Transformer BIBREF6 models with standard hyper-parameters. This makes our final system easy to deploy as it is a simple ensemble of standard neural models with minimal preprocessing (subword segmentation). Our contributions on this track focus on NMT training techniques such as over-sampling, back-translation, and fine-tuning. We show that over-sampling effectively reduces domain mismatch. We found back-translation BIBREF5 to be a very effective technique to utilize unannotated training data. 
However, while over-sampling is commonly used in machine translation to balance the number of real and back-translated training sentences, we report that using over-sampling this way for GEC hurts performance. Finally, we propose a combination of checkpoint averaging BIBREF26 and continued training to adapt our NMT models to the target domain." ], [ "We participated in the BEA 2019 Shared Task on grammatical error correction with submissions to the low-resource and the restricted track. Our low-resource system is an extension of prior work on FST-based GEC BIBREF3 to allow insertions and deletions. Our restricted track submission is a purely neural system based on standard NMT and LM architectures. We pointed out the similarity between GEC and machine translation, and demonstrated that several techniques which originate from MT research such as over-sampling, back-translation, and fine-tuning, are also useful for GEC. Our models have been used in a joint submission with the Cambridge University Computer Lab BIBREF7 ." ], [ "This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) grant EP/L027623/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service funded by EPSRC Tier-2 capital grant EP/P020259/1." ] ], "section_name": [ "Introduction", "FST-based Grammatical Error Correction", "Experimental Setup", "Results", "Differences Between CoNLL-2014 and BEA-2019 Dev", "Restricted Track Submission", "Conclusion", "Acknowledgments" ] }
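The checkpoint-averaging step used throughout the restricted-track experiments above (decoding with the average of the 20 most recent checkpoints, and averaging again over the fine-tuning run) is simple to reproduce. The sketch below is a minimal illustration in Python operating on PyTorch-style state dicts; the authors trained with Tensor2Tensor, so the file naming and loading code here are assumptions for illustration rather than the paper's actual tooling.

import torch

def average_checkpoints(paths):
    # Average the parameters of several saved checkpoints (for example the
    # 20 most recent ones) into a single state dict used for decoding.
    # Assumption: each file stores a plain state_dict with identical keys.
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Usage sketch (hypothetical file layout): average the last 20 checkpoints,
# optionally including the last unadapted checkpoint during fine-tuning.
# averaged = average_checkpoints(sorted(glob.glob("ckpt-*.pt"))[-20:])

Averaging in float32 keeps the arithmetic stable if checkpoints are stored in reduced precision and, as reported above, smooths the fluctuations in the fine-tuning F0.5 curve at no extra decoding cost.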
{ "answers": [ { "annotation_id": [ "1af80fde5dd33e327dceb6602c0dae07dc58f269", "3840533eaee030fee8755ce0ed216041332ce55a", "64a3cc01aed4489002d40533030ecb3221865c4b" ], "answer": [ { "evidence": [ "Our LMs are Transformer BIBREF6 decoders (transformer_big) trained using the Tensor2Tensor library BIBREF19 . We delay SGD updates BIBREF20 , BIBREF21 with factor 2 to simulate 500K training steps with 8 GPUs on 4 physical GPUs. Training batches contain about 4K source and target tokens. Our LM training set comprises the monolingual news2015-news2018 English training sets from the WMT evaluation campaigns BIBREF22 after language detection BIBREF23 (138M sentences) and subword segmentation using byte pair encoding BIBREF12 with 32K merge operations. For decoding, we use our SGNMT tool BIBREF13 , BIBREF14 with OpenFST backend BIBREF8 ." ], "extractive_spans": [ "SGNMT tool BIBREF13 , BIBREF14 with OpenFST backend BIBREF8" ], "free_form_answer": "", "highlighted_evidence": [ "For decoding, we use our SGNMT tool BIBREF13 , BIBREF14 with OpenFST backend BIBREF8 ." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "We use neural LMs and neural machine translation (NMT) models in our restricted track entry. Our neural LM is as described in Sec. SECREF15 . Our LMs and NMT models share the same subword segmentation. We perform exploratory NMT experiments with the Base setup, but switch to the Big setup for our final models. Tab. TABREF21 shows the differences between both setups. Tab. TABREF22 lists some corpus statistics for the BEA-2019 training sets. In our experiments without fine-tuning we decode with the average of the 20 most recent checkpoints BIBREF26 . We use the SGNMT decoder BIBREF13 , BIBREF14 in all our experiments." ], "extractive_spans": [ "SGNMT" ], "free_form_answer": "", "highlighted_evidence": [ "We use the SGNMT decoder BIBREF13 , BIBREF14 in all our experiments." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "46d547db72fae32682615f11ebc593d77535b4c6", "4ee33d0e422b52ca9c42fb0d56f2eba5f4b9bb87", "f2afb1d27fc77a99b0d3e5c484d2420da8cfb5fa" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "annotation_id": [ "7fd42e8f9d7797158263109b648c4ada2640c7be", "9985ed3d270c7b79342a3b3269cb788feb08407f", "cc724f339f453aa77fd3da554f37635bf20a2231" ], "answer": [ { "evidence": [ "We submitted systems to two different tracks. The low-resource track did not permit the use of parallel training data except a small development set with around 4K sentence pairs. 
For our low-resource system we extended our prior work on finite state transducer based GEC BIBREF3 to handle new error types such as punctuation errors as well as insertions and deletions of a small number of frequent words. For the restricted track, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences. Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction. We confirm the results of BIBREF4 and report substantial gains by applying back-translation BIBREF5 to GEC – a data augmentation technique common in machine translation. Furthermore, we noticed that large parts of the training data do not match the target domain. We mitigated the domain gap by over-sampling the in-domain training corpus, and by fine-tuning through continued training. Our final model is an ensemble of four neural machine translation (NMT) models and two neural language models (LMs) with Transformer architecture BIBREF6 . Our purely neural system was also part of the joint submission with the Cambridge University Computer Lab described by BIBREF7 ." ], "extractive_spans": [ "explore the potential of purely neural models for grammatical error correction" ], "free_form_answer": "", "highlighted_evidence": [ "Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We submitted systems to two different tracks. The low-resource track did not permit the use of parallel training data except a small development set with around 4K sentence pairs. For our low-resource system we extended our prior work on finite state transducer based GEC BIBREF3 to handle new error types such as punctuation errors as well as insertions and deletions of a small number of frequent words. For the restricted track, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences. Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction. We confirm the results of BIBREF4 and report substantial gains by applying back-translation BIBREF5 to GEC – a data augmentation technique common in machine translation. Furthermore, we noticed that large parts of the training data do not match the target domain. We mitigated the domain gap by over-sampling the in-domain training corpus, and by fine-tuning through continued training. Our final model is an ensemble of four neural machine translation (NMT) models and two neural language models (LMs) with Transformer architecture BIBREF6 . Our purely neural system was also part of the joint submission with the Cambridge University Computer Lab described by BIBREF7 ." ], "extractive_spans": [], "free_form_answer": "The organizers provided a dataset allowed to use for training", "highlighted_evidence": [ "For the restricted track, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We submitted systems to two different tracks. The low-resource track did not permit the use of parallel training data except a small development set with around 4K sentence pairs. 
For our low-resource system we extended our prior work on finite state transducer based GEC BIBREF3 to handle new error types such as punctuation errors as well as insertions and deletions of a small number of frequent words. For the restricted track, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences. Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction. We confirm the results of BIBREF4 and report substantial gains by applying back-translation BIBREF5 to GEC – a data augmentation technique common in machine translation. Furthermore, we noticed that large parts of the training data do not match the target domain. We mitigated the domain gap by over-sampling the in-domain training corpus, and by fine-tuning through continued training. Our final model is an ensemble of four neural machine translation (NMT) models and two neural language models (LMs) with Transformer architecture BIBREF6 . Our purely neural system was also part of the joint submission with the Cambridge University Computer Lab described by BIBREF7 ." ], "extractive_spans": [ "goal on the restricted track was to explore the potential of purely neural models for grammatical error correction" ], "free_form_answer": "", "highlighted_evidence": [ "For the restricted track, the organizers provided 1.2M pairs (560K without identity mappings) of corrected and uncorrected sentences. Our goal on the restricted track was to explore the potential of purely neural models for grammatical error correction." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "2a46641499fde3f0a66c6ac8fb22e19a0a75d145", "644b3c7e1bd8e75dc29a52690766e106b0ba083d", "fd849cda0a5ed4e82fde558cc23ffed202f7e74f" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "QUESTION (4 / 4): WHAT DOES BEA STAND FOR?" ], "unanswerable": true, "yes_no": null } ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five", "five", "five" ], "paper_read": [ "no", "no", "no", "no" ], "question": [ "Which neural machine translation model was used?", "What position did this entry finish in, in the overall shared task?", "What are the restrictions of the restricted track?", "What does BEA stand for?" ], "question_id": [ "94d794df4a3109522c2ea09dad5d40e55d35df51", "044c66c6b7ff7378682f24887b05e1af79dcd04f", "903ac8686ed7e6e3269a5d863f06ff11c50e49e8", "ab95ca983240ad5289c123a2774f8e0db424f4a1" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ] }
{ "caption": [ "Figure 3: Edit FSTE which allows substitutions with a cost of λsub. The σ-label matches any symbol and maps it to itself at no cost.", "Table 1: List of tokens R that can be deleted by the deletion transducer D in Fig. 2.", "Table 2: Results on the low-resource track. The λ-parameters are tuned on the BEA-2019 dev set.", "Table 3: Number of correction types in CoNLL-2014 and BEA-2019 Dev references.", "Table 4: NMT setups BASE and BIG used in our experiments for the restricted track.", "Table 5: BEA-2019 parallel training data with and without removing pairs where source and target sentences are the same.", "Table 6: Over-sampling the BEA-2019 in-domain corpus W&I+LOCNESS under BASE models. The second column contains the ratio of W&I+LOCNESS samples to training samples from the other corpora.", "Table 7: Impact of identity removal on BASE models.", "Table 8: Using back-translation for GEC (BASE models). The third column contains the ratio between real and synthetic sentence pairs.", "Table 9: Fine-tuning through continued training on W&I+LOCNESS and checkpoint averaging with a BASE model with 5M back-translated sentences.", "Figure 5: Span-based ERRANT F0.5 scores on the BEA-2019 dev set over the number of fine-tuning training iterations (single GPU, SGD delay factor (Saunders et al., 2018) of 16).", "Table 10: Final results on the restricted track with BIG models and back-translation." ], "file": [ "2-Figure3-1.png", "2-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png", "5-Table6-1.png", "5-Table7-1.png", "5-Table8-1.png", "6-Table9-1.png", "6-Figure5-1.png", "6-Table10-1.png" ] }
[ "What are the restrictions of the restricted track?" ]
[ [ "1907.00168-Introduction-1" ] ]
[ "The organizers provided a dataset allowed to use for training" ]
80
1911.09709
Automatically Neutralizing Subjective Bias in Text
Texts like news, encyclopedias, and some social media strive for objectivity. Yet bias in the form of inappropriate subjectivity - introducing attitudes via framing, presupposing truth, and casting doubt - remains ubiquitous. This kind of bias erodes our collective trust and fuels social conflict. To address this issue, we introduce a novel testbed for natural language generation: automatically bringing inappropriately subjective text into a neutral point of view ("neutralizing" biased text). We also offer the first parallel corpus of biased language. The corpus contains 180,000 sentence pairs and originates from Wikipedia edits that removed various framings, presuppositions, and attitudes from biased sentences. Last, we propose two strong encoder-decoder baselines for the task. A straightforward yet opaque CONCURRENT system uses a BERT encoder to identify subjective words as part of the generation process. An interpretable and controllable MODULAR algorithm separates these steps, using (1) a BERT-based classifier to identify problematic words and (2) a novel join embedding through which the classifier can edit the hidden states of the encoder. Large-scale human evaluation across four domains (encyclopedias, news headlines, books, and political speeches) suggests that these algorithms are a first step towards the automatic identification and reduction of bias.
{ "paragraphs": [ [ "Writers and editors of texts like encyclopedias, news, and textbooks strive to avoid biased language. Yet bias remains ubiquitous. 62% of Americans believe their news is biased BIBREF0 and bias is the single largest source of distrust in the media BIBREF1.", "This work presents data and algorithms for automatically reducing bias in text. We focus on a particular kind of bias: inappropriate subjectivity (“subjective bias”). Subjective bias occurs when language that should be neutral and fair is skewed by feeling, opinion, or taste (whether consciously or unconsciously). In practice, we identify subjective bias via the method of BIBREF2: using Wikipedia's neutral point of view (NPOV) policy. This policy is a set of principles which includes “avoiding stating opinions as facts” and “preferring nonjudgemental language”.", "For example a news headline like “John McCain exposed as an unprincipled politician\" (Figure FIGREF1) is biased because the verb expose is a factive verb that presupposes the truth of its complement; a non-biased sentence would use a verb like describe so as not to presuppose something that is the subjective opinion of the writer. “Pilfered” in “the gameplay is pilfered from DDR” (Table TABREF3) subjectively frames the shared gameplay as a kind of theft. “His” in “a lead programmer usually spends his career” again introduces a biased and subjective viewpoint (that all programmers are men) through presupposition.", "We aim to debias text by suggesting edits that would make it more neutral. This contrasts with prior research which has debiased representations of text by removing dimensions of prejudice from word embeddings BIBREF3, BIBREF4 and the hidden states of predictive models BIBREF5, BIBREF6. To avoid overloading the definition of “debias,” we refer to our kind of text debiasing as neutralizing that text. Figure FIGREF1 gives an example.", "We introduce the Wiki Neutrality Corpus (WNC). This is a new parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The corpus was harvested from Wikipedia edits that were designed to ensure texts had a neutral point of view. WNC is the first parallel corpus targeting biased and neutralized language. We also define the task of neutralizing subjectively biased text. This task shares many properties with tasks like detecting framing or epistemological bias BIBREF2, or veridicality assessment/factuality prediction BIBREF7, BIBREF8, BIBREF9, BIBREF10. Our new task extends these detection/classification problems into a generation task: generating more neutral text with otherwise similar meaning.", "Finally, we propose a pair of novel sequence-to-sequence algorithms for this neutralization task. Both methods leverage denoising autoencoders and a token-weighted loss function. An interpretable and controllable modular algorithm breaks the problem into (1) detection and (2) editing, using (1) a BERT-based detector to explicitly identify problematic words, and (2) a novel join embedding through which the detector can modify an editors' hidden states. This paradigm advances an important human-in-the-loop approach to bias understanding and generative language modeling. 
Second, an easy to train and use but more opaque concurrent system uses a BERT encoder to identify subjectivity as part of the generation process.", "Large-scale human evaluation suggests that while not without flaws, our algorithms can identify and reduce bias in encyclopedias, news, books, and political speeches, and do so better than state-of-the-art style transfer and machine translation systems. This work represents an important first step towards automatically managing bias in the real world. We release data and code to the public." ], [ "The Wiki Neutrality Corpus consists of aligned sentences pre and post-neutralization by English Wikipedia editors (Table TABREF3). We used regular expressions to crawl 423,823 Wikipedia revisions between 2004 and 2019 where editors provided NPOV-related justification BIBREF11, BIBREF2, BIBREF12. To maximize the precision of bias-related changes, we ignored revisions where", "[noitemsep]", "More than a single sentence was changed.", "Minimal edits (character Levenshtein distance $<$ 4).", "Maximal edits (more than half of the words changed).", "Edits where more than half of the words were proper nouns.", "Edits that fixed spelling or grammatical errors.", "Edits that added references or hyperlinks.", "Edits that changed non-literary elements like tables or punctuation.", "We align sentences in the pre and post text by computing a sliding window (of size $k = 5$) of pairwise BLEU BIBREF13 between sentences and matching sentences with the biggest score BIBREF14, BIBREF15. Last, we discarded pairs whose length ratios were beyond the 95th percentile BIBREF16.", "Corpus statistics are given in Table TABREF12. The final data are (1) a parallel corpus of 180k biased sentences and their neutral counterparts, and (2) 385k neutral sentences that were adjacent to a revised sentence at the time of editing but were not changed by the editor. Note that following BIBREF2, the neutralizing experiments in Section SECREF4 focus on the subset of WNC where the editor modified or deleted a single word in the source text (“Biased-word” in Table TABREF12).", "Table TABREF12 also gives a categorization of these sample pairs using a slight extension of the typology of BIBREF2. They defined framing bias as using subjective words or phrases linked with a particular point of view (like using words like best or deepest or using pilfered from instead of based on, and epistemological bias as linguistic features that subtly (often via presupposition) focus on the believability of a proposition. We add to their two a third kind of subjectivity bias that also occurs in our data, which we call demographic bias, text with presuppositions about particular genders or races or other demographic categories (like presupposing that all programmers are male).", "The dataset does not include labels for these categories, but we hand-labeled a random sample of 500 examples to estimate the distribution of the 3 types. Table TABREF13 shows that while framing bias is most common, all types of bias are represented in the data, including instances of demographic bias." ], [ "We take a closer look at WNC to identify characteristics of subjective bias on Wikipedia.", "Topic. We use the Wikimedia Foundation's categorization models BIBREF17 to bucket articles from WNC and the aforementioned random sample into a 44-category ontology, then compare the proportions of NPOV-driven edits across categories. 
Subjectively biased edits are most prevalent in history, politics, philosophy, sports, and language categories. They are least prevalent in the meteorology, science, landforms, broadcasting, and arts categories. This suggests that there is a relationship between a text's topic and the realization of bias. We use this observation to guide our model design in Section SECREF19.", "Tenure. We group editors into “newcomers” (less than a month of experience) and “experienced” (more than a month). We find that newcomers are less likely to perform neutralizing edits (15% in WNC) compared to other edits (34% in a random sample of 685k edits). This difference is significant ($\\tilde{\\chi }^2$ p $=$ 0.001), suggesting the complexity of neutralizing text is typically reserved for more senior editors, which helps explain the performance of human evaluators in Section SECREF53." ], [ "We propose the task of neutralizing text, in which the algorithm is given an input sentence and must produce an output sentence whose meaning is as similar as possible to the input but with the subjective bias removed.", "We propose two algorithms for this task, each with its own benefits. A modular algorithm enables human control and interpretability. A concurrent algorithm is simple to train and operate.", "We adopt the following notation:", "", "$\\mathbf {s} = [w^s_1, ..., w^s_n]$ is a source sequence of subjectively biased text.", "", "$\\mathbf {t} = [w^t_1, ..., w^t_m]$ is a target sequence and the neutralized version of $\\mathbf {s}$." ], [ "The first algorithm we are proposing has two stages: BERT-based detection and LSTM-based editing. We pretrain a model for each stage and then combine them into a joint system for end-to-end fine tuning on the overall neutralizing task. We proceed to describe each module." ], [ "The detection module is a neural sequence tagger that estimates $p_i$, the probability that each input word $w^s_i$ is subjectively biased (Figure FIGREF26).", "Module description. Each $p_i$ is calculated according to", "$\\mathbf {b}_i \\in \\mathcal {R}^{b}$ represents $w^s_i$'s semantic meaning. It is a contextualized word vector produced by BERT, a transformer encoder that has been pre-trained as a masked language model BIBREF18. To leverage the bias-topic relationship uncovered in Section SECREF14, we prepend a token indicating an article's topic category (<arts>, <sports>, etc) to $\\mathbf {s}$. The word vectors for these tokens are learned from scratch.", "$\\mathbf {e}_i$ represents expert features of bias proposed by BIBREF2:", "", "$\\mathbf {W}^{in} \\in \\mathcal {R}^{f \\times h}$ is a matrix of learned parameters, and $\\mathbf {f}_i$ is a vector of discrete features.", "$\\mathbf {W}^{b} \\in \\mathcal {R}^{b}$, $\\mathbf {W}^{e} \\in \\mathcal {R}^{h}$, and $b \\in \\mathcal {R}$ are learnable parameters.", "Module pre-training. We train this module using diffs between the source and target text. A label $p^*_i$ is 1 if $w^s_i$ was deleted or modified as part of the neutralizing process. A label is 0 if it occurs in both the source and target text. The loss is calculated as the average negative log likelihood of the labels:" ], [ "The editing module takes a subjective source sentence $\\mathbf {s}$ and is trained to edit it into a more neutral compliment $\\mathbf {t}$.", "Module description. This module is based on a sequence-to-sequence neural machine translation model BIBREF19. 
A bi-LSTM BIBREF20 encoder turns $\\mathbf {s}$ into a sequence of hidden states $\\mathbf {H} = (\\mathbf {h}_1, ..., \\mathbf {h}_n)$. Next, an LSTM decoder generates text one token at a time by repeatedly attending to $\\mathbf {H}$ and producing probability distributions over the vocabulary. We also add two mechanisms from the summarization literature BIBREF21. The first is a copy mechanism, where the model's final output for timestep $i$ becomes a weighted combination of the predicted vocabulary distribution and attentional distribution from that timestep. The second is a coverage mechanism which incorporates the sum of previous attention distributions into the final loss function to discourage the model from re-attending to a word and repeating itself.", "Module pre-training. We pre-train the decoder as a language model of neutral text using the neutral portion of WNC (Section SECREF2). Doing so expresses a data-driven prior about how target sentences should read. We accomplish this with a denoising autoencoder objective BIBREF22 and maximizing the conditional log probability $\\log p(\\mathbf {x} \\vert \\widetilde{\\mathbf {x}})$ of reconstructing a sequence $\\mathbf {x}$ from a corrupted version of itself $\\widetilde{\\mathbf {x}} = C(\\mathbf {x})$ using noise model $C$.", "Our $C$ is similar to BIBREF23. We slightly shuffle $\\mathbf {x}$ such that $x_i$'s index in $\\widetilde{\\mathbf {x}}$ is randomly selected from $[i - k, i + k]$. We then drop words with probability $p$. For our experiments, we set $k = 3$ and $p = 0.25$." ], [ "Once the detection and editing modules have been pre-trained, we join them and fine-tune together as an end to end system for translating $\\mathbf {s}$ into $\\mathbf {t}$.", "This is done with a novel join embedding mechanism that lets the detector control the editor (Figure FIGREF29). The join embedding is a vector $\\mathbf {v} \\in \\mathcal {R}^h$ that we add to each encoder hidden state in the editing module. This operation is gated by the detector's output probabilities $\\mathbf {p} = (p_1, ..., p_n)$. Note that the same $\\mathbf {v}$ is applied across all timesteps.", "", "We proceed to condition the decoder on the new hidden states $\\mathbf {H}^{\\prime } = (\\mathbf {h^{\\prime }}_1, ..., \\mathbf {h}^{\\prime }_n)$. Intuitively, $\\mathbf {v}$ is enriching the hidden states of words that the detector identified as subjective. This tells the decoder what language should be changed and what is safe to be be copied during the neutralization process. Error signals are allowed to flow backwards into both the encoder and detector, creating an end-to-end system from the two modules.", "To fine-tune the parameters of the joint system, we use a token-weighted loss function that scales the loss on neutralized words (i.e. words unique to $\\mathbf {t}$) by a factor of $\\alpha $:", "", "Note that $c$ is a term from the coverage mechanism (Section SECREF28). We use $\\alpha = 1.3$ in our experiments. Intuitively, this loss function incorporates an inductive bias of the neutralizing process: the source and target have a high degree of lexical similarity but the goal is to learn the structure of their differences, not simply copying words into the output (something a pre-trained autoencoder should already have knowledge of). This loss function is related to previous work on grammar correction BIBREF24, and cost-sensitive learning BIBREF25." 
], [ "Our second algorithm takes the problematic source $\\textbf {s}$ and directly generates a neutralized $\\mathbf {\\hat{t}}$. While this renders the system easier to train and operate, it limits interpretability and controllability.", "Model description. The concurrent system is an encoder-decoder neural network. The encoder is BERT. The decoder is the same as that of Section SECREF28: an attentional LSTM with copy and coverage mechanisms. The decoder's inputs are set to:", "Hidden states $\\mathbf {H} = \\mathbf {W}^H\\ \\mathbf {B}$, where $\\mathbf {B} = (\\mathbf {b}_1, ..., \\mathbf {b}_{n}) \\in \\mathcal {R}^{b \\times n}$ is the BERT-embedded source and $\\mathbf {W}^H \\in \\mathcal {R}^{h \\times b}$ is a matrix of learned parameters.", "Initial states $\\mathbf {c}_0 = \\mathbf {W}^{c0}\\ \\sum \\mathbf {b}_i / n$ and $\\mathbf {h_0} = \\mathbf {W}^{h0}\\ \\sum \\mathbf {b}_i / n$. $\\mathbf {W}^{c0} \\in \\mathcal {R}^{h \\times b}$ and $\\mathbf {W}^{h0} \\in \\mathcal {R}^{h \\times b}$ are learned matrices.", "Model training. The concurrent model is pre-trained with the same autoencoding procedure described in Section SECREF28. It is then fine-tuned as a subjective-to-neutral translation system with the same loss function described in Section SECREF30." ], [ "Implementation. We implemented nonlinear models with Pytorch BIBREF29 and optimized using Adam BIBREF30 as configured in BIBREF18 with a learning rate of 5e-5. We used a batch size of 16. All vectors were of length $h = 512$ unless otherwise specified. We use gradient clipping with a maximum gradient norm of 3 and a dropout probability of 0.2 on the inputs of each LSTM cell BIBREF31. We initialize the BERT component of the tagging module with the publicly-released bert-base-uncased parameters. All other parameters were uniformly initialized in the range $[-0.1,\\ 0.1]$.", "Procedure. Following BIBREF2, we train and evaluate our system on the subset of WNC where the editor changed or deleted a single word in the source text. This yielded 53,803 training pairs (about a quarter of the WNC), from which we sampled 700 development and 1,000 test pairs. For fair comparison, we gave our baselines additional access to the 385,639 neutral examples when possible. We pretrained the tagging module for 4 epochs. We pretrained the editing module on the neutral portion of our WNC for 4 epochs. The joint system was trained on the same data as the tagger for 25,000 steps (about 7 epochs). We perform interference using beam search and a beam width of 4. All computations were performed on a single NVIDIA TITAN X GPU; training the full system took approximately 10 hours. We report statistical significance with bootstrap resampling and a 95% confidence level BIBREF32, BIBREF33.", "Evaluation. We evaluate our models according to five metrics. BLEU BIBREF13 and accuracy (the proportion of decodings that exactly matched the editors changes) are quantitative. We also hired fluent English-speaking crowdworkers on Amazon Mechanical Turk. Workers were shown the BIBREF2 and Wikipedia definition of a “biased statement” and six example sentences, then subjected to a five-question qualification test where they had to identify subjectivity bias. Approximately half of the 30,000 workers who took the qualification test passed. Those who passed were asked to compare pairs of original and edited sentences (not knowing which was the original) along three criteria: fluency, meaning preservation, and bias. 
Fluency and bias were evaluated on a Semantic Differential scale from -2 to 2. Here, a semantic differential scale can better evaluate attitude oriented questions with two polarized options (e.g., “is text A or B more fluent?”). Meaning was evaluated on a Likert scale from 0 to 4, ranging from “totally different” to “identical”. Inter-rater agreement was fair to substantial (Krippendorff's alpha of 0.65 for fluency, 0.33 for meaning, and 0.51 for bias). We report statistical significance with a t-test and 95% confidence interval." ], [ "Results on WNC are presented in Table TABREF35. In addition to methods from the literature we include (1) a BERT-based system which simply predicts and deletes subjective words, and (2) a system which predicts replacements (including deletion) for subjective words directly from their BERT embeddings. All methods appear to successfully reduce bias according to the human evaluators. However, many methods appear to lack fluency. Adding a token-weighted loss function and pretraining the decoder help the model's coherence according to BLEU and accuracy. Adding the detector (modular) or a BERT encoder (concurrent) provide additional benefits. The proposed models retain the strong effects of systems from the literature while also producing target-level fluency on average. Our results suggest there is no clear winner between our two proposed systems. modular is better at reducing bias and has higher accuracy, while concurrent produces more fluent responses, preserves meaning better, and has higher BLEU.", "Table TABREF39 indicates that BLEU is more correlated with fluency but accuracy is more correlated with subjective bias reduction. The weak association between BLEU and human evaluation scores is corroborated by other research BIBREF35, BIBREF36. We conclude that neither automatic metric is a true substitute for human judgment." ], [ "To demonstrate the efficacy of the proposed methods on subjective bias in the wild, we perform inference on three out-of-domain datasets (Table TABREF45). We prepared each dataset according to the same procedure as WNC (Section SECREF2). After inference, we enlisted 1800 raters to assess the quality of 200 randomly sampled datapoints. Note that for partisan datasets we sample an equal number of examples from “conservative” and “liberal” sources. These data are:", "", "The Ideological Books Corpus (IBC) consisting of partisan books and magazine articles BIBREF37, BIBREF38.", "", "Headlines of partisan news articles identified as biased according to mediabiasfactcheck.com.", "", "Sentences from the campaign speeches of a prominent politician (United States President Donald Trump). We filtered out dialog-specific artifacts (interjections, phatics, etc) by removing all sentences with less than 4 tokens before sampling a test set.", "Overall, while modular does a better job at reducing bias, concurrent appears to better preserve the meaning and fluency of the original text. We conclude that the proposed methods, while imperfect, are capable of providing useful suggestions for how subjective bias in real-world news or political text can be reduced." ], [ "To better understand the limits of our models and the proposed task of bias neutralization, we randomly sample 50 errors produced by our models on the Wikipedia test set and bin them into the following categories:", "No change. The model failed to remove or change the source sentence.", "Bad change. 
The model modified the source but introduced an edit which failed to match the ground-truth target (i.e. the Wikipedia editor's change).", "Disfluency. Errors in language modeling and text generation.", "Noise. The datapoint is noisy and the target text is not a neutralized version of the source.", "The distribution of errors is given in Table TABREF50. Most errors are due to the subtlety and complexity of language understanding required for bias neutralization, rather than the generation of fluent text. These challenges are particularly pronounced for neutralizing edits that involve the replacement of factive and assertive verbs. As column 2 shows, a large proportion of the errors, though disagreeing with the edit written by the Wikipedia editors, nonetheless successfully neutralize bias in the source.", "Examples of each error type are given in Table TABREF52 (two pages away). As the examples show, our models have have a tendency to simply remove words instead of finding a good replacement." ], [ "We proceed to analyze our algorithm's ability to detect and categorize bias as well as the efficacy of the proposed join embedding." ], [ "Identifying subjectivity in a sentence (explicitly or implicitly) is prerequisite to neutralizing it. We accordingly evaluate our model's (and 3,000 crowdworker's) ability to detect subjectivity using the procedure of BIBREF2 and the same 50k training examples as Section SECREF4 (Table TABREF51). For each sentence, we select the word with the highest predicted probability and test whether that word was in fact changed by the editor. The proportion of correctly selected words is the system's “accuracy”. Results are given in Table TABREF51.", "Note that concurrent lacks an interpretive window into its detection behavior, so we estimate an upper bound on the model's detection abilities by (1) feeding the encoder's hidden states into a fully connected + softmax layer that predicts the probability of a token being subjectively biased, and (2) training this layer as a sequence tagger according to the procedure of Section SECREF19.", "The low human performance can be attributed to the difficulty of identifying bias. Issues of bias are typically reserved for senior Wikipedia editors (Section SECREF14) and untrained workers performed worse (37.39%) on the same task in BIBREF2 (and can struggle on other tasks requiring linguistic knowledge BIBREF39). concurrent's encoder, which is architecturally identical to BERT, had similar performance to a stand-alone BERT system. The linguistic and category-related features in the modular detector gave it slight leverage over the plain BERT-based models." ], [ "We continue by analyzing the abilities of the proposed join embedding mechanism." ], [ "The join embedding combines two separately pretrained models through a gated embedding instead of the more traditional practice of stripping off any final classification layers and concatenating the exposed hidden states BIBREF40. We accordingly ablated the join embedding mechanism by training a new model where the pre-trained detector is frozen and its pre-output hidden states $\\mathbf {b}_i$ are concatenated to the encoder's hidden states before decoding. Doing so reduced performance to 90.78 BLEU and 37.57 Accuracy (from the 93.52/46.8 with the join embedding). This suggests learned embeddings can be a high-performance and end-to-end conduit between sub-modules of machine learning systems." 
], [ "We proceed to demonstrate how the join embedding creates controllability in the neutralization process. Recall that modular relies on a probability distribution $\\mathbf {p}$ to determine which words require editing (Equation DISPLAY_FORM31). Typically, this distribution comes from the detection module (Section SECREF19), but we can also feed in user-specified distributions that force the model to target particular words. This can let human advisors correct errors or push the model's behavior towards some desired outcome. We find that the model is indeed capable of being controlled, letting users target specific words for rewording in case they disagree with the model's output or seek recommendations on specific language. However, doing so can also introduce errors into downstream language generation (Table TABREF52)." ], [ "Subjectivity Bias. The study of subjectivity in NLP was pioneered by the late Janyce Wiebe and colleagues BIBREF41, BIBREF42. Several studies develop methods for highlighting subjective or persuasive frames in a text BIBREF43, BIBREF44, or detecting biased sentences BIBREF45, BIBREF46, BIBREF12, BIBREF47 of which the most similar to ours is BIBREF2, whose early, smaller version of WNC and logistic regression-based bias detector inspired our study.", "Debiasing. Many scholars have worked on removing demographic prejudice from meaning representations BIBREF48, BIBREF49, BIBREF5, BIBREF50, BIBREF51. Such studies begin with identifying a direction or subspace that capture the bias and then removing such bias component to make these representations fair across attributes like gender and age BIBREF3, BIBREF48. For instance, BIBREF50 introduced a regularization term for the language model to penalize the projection of the word embeddings onto that gender subspace, while BIBREF51 used adversarial training to remove directions of bias from hidden states.", "Neural Language Generation. Several studies propose stepwise procedures for text generation, including sampling from a corpus BIBREF52 and identifying language ripe for modification BIBREF53. Most similar to us is BIBREF26 who localize a text's style to a fraction of its words. Our modular detection module performs a similar localization in a soft manner, and our steps are joined by a smooth conduit (the join embedding) instead of discrete logic. There is also work related to our concurrent model. The closest is BIBREF54, where a decoder was attached to BERT for question answering, or BIBREF23, where machine translation systems are initialized to LSTM and Transformer-based language models of the source text." ], [ "The growing presence of bias has marred the credibility of our news, educational systems, and social media platforms. Automatically reducing bias is thus an important new challenge for the Natural Language Processing and Artificial Intelligence community. By learning models to automatically detect and correct subjective bias in text, this work is a first step in this important direction. Nonetheless our scope was limited to single-word edits, which only constitute a quarter of the edits in our data, and are probably among the simplest instances of bias. We therefore encourage future work to tackle broader instances of multi-word, multi-lingual, and cross-sentence bias. Another important direction is integrating aspects of fact-checking BIBREF55, since a more sophisticated system would be able to know when a presupposition is in fact true and hence not subjective. 
Finally, our new join embedding mechanism can be applied to other modular neural network architectures." ], [ "We thank the Japan-United States Educational Commission (Fulbright Japan) for their generous support. We thank Chris Potts, Hirokazu Kiyomaru, Abigail See, Kevin Clark, the Stanford NLP Group, and our anonymous reviewers for their thoughtful comments and suggestions. We gratefully acknowledge support of the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462 and the NSF via grant IIS-1514268. Diyi Yang is supported by a grant from Google." ] ], "section_name": [ "Introduction", "Wiki Neutrality Corpus (WNC)", "Wiki Neutrality Corpus (WNC) ::: Dataset Properties", "Methods for Neutralizing Text", "Methods for Neutralizing Text ::: MODULAR", "Methods for Neutralizing Text ::: MODULAR ::: Detection Module", "Methods for Neutralizing Text ::: MODULAR ::: Editing Module", "Methods for Neutralizing Text ::: MODULAR ::: Final System", "Methods for Neutralizing Text ::: CONCURRENT", "Experiments ::: Experimental Protocol", "Experiments ::: Wikipedia (WNC)", "Experiments ::: Real-world Media", "Error Analysis", "Algorithmic Analysis", "Algorithmic Analysis ::: Detecting Subjectivity", "Algorithmic Analysis ::: Join Embedding", "Algorithmic Analysis ::: Join Embedding ::: Join Embedding Ablation", "Algorithmic Analysis ::: Join Embedding ::: Join Embedding Control", "Related Work", "Conclusion and Future Work", "Acknowledgements" ] }
{ "answers": [ { "annotation_id": [ "1c4605380752fd9151c538db77e5ed45a64957fa", "3c070f4ec4055bc7eb8684df18ee64067094b81d", "4999a683d3376d9bf30c1dc96d583134022bc2c7" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 8: Performance of various bias detectors. Rows with asterisks are statistically different than the preceding row." ], "extractive_spans": [], "free_form_answer": "Modular", "highlighted_evidence": [ "FLOAT SELECTED: Table 8: Performance of various bias detectors. Rows with asterisks are statistically different than the preceding row." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Overall, while modular does a better job at reducing bias, concurrent appears to better preserve the meaning and fluency of the original text. We conclude that the proposed methods, while imperfect, are capable of providing useful suggestions for how subjective bias in real-world news or political text can be reduced." ], "extractive_spans": [ "Overall, while modular does a better job at reducing bias, concurrent appears to better preserve the meaning and fluency of the original text." ], "free_form_answer": "", "highlighted_evidence": [ "Overall, while modular does a better job at reducing bias, concurrent appears to better preserve the meaning and fluency of the original text." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Results on WNC are presented in Table TABREF35. In addition to methods from the literature we include (1) a BERT-based system which simply predicts and deletes subjective words, and (2) a system which predicts replacements (including deletion) for subjective words directly from their BERT embeddings. All methods appear to successfully reduce bias according to the human evaluators. However, many methods appear to lack fluency. Adding a token-weighted loss function and pretraining the decoder help the model's coherence according to BLEU and accuracy. Adding the detector (modular) or a BERT encoder (concurrent) provide additional benefits. The proposed models retain the strong effects of systems from the literature while also producing target-level fluency on average. Our results suggest there is no clear winner between our two proposed systems. modular is better at reducing bias and has higher accuracy, while concurrent produces more fluent responses, preserves meaning better, and has higher BLEU." ], "extractive_spans": [], "free_form_answer": "They are equal", "highlighted_evidence": [ "Our results suggest there is no clear winner between our two proposed systems. modular is better at reducing bias and has higher accuracy, while concurrent produces more fluent responses, preserves meaning better, and has higher BLEU." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "34f24f19a759cf951159814142bd4a269577f126", "4d006969cec6e8ccf5adb162c79b6f17ddd14895", "82d1e21bc64d87ad82ee0678d1b4bb9b9cf953df" ], "answer": [ { "evidence": [ "The Wiki Neutrality Corpus consists of aligned sentences pre and post-neutralization by English Wikipedia editors (Table TABREF3). We used regular expressions to crawl 423,823 Wikipedia revisions between 2004 and 2019 where editors provided NPOV-related justification BIBREF11, BIBREF2, BIBREF12. 
To maximize the precision of bias-related changes, we ignored revisions where" ], "extractive_spans": [], "free_form_answer": "Wiki community effort", "highlighted_evidence": [ "The Wiki Neutrality Corpus consists of aligned sentences pre and post-neutralization by English Wikipedia editors (Table TABREF3)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "The Wiki Neutrality Corpus consists of aligned sentences pre and post-neutralization by English Wikipedia editors (Table TABREF3). We used regular expressions to crawl 423,823 Wikipedia revisions between 2004 and 2019 where editors provided NPOV-related justification BIBREF11, BIBREF2, BIBREF12. To maximize the precision of bias-related changes, we ignored revisions where" ], "extractive_spans": [ "Wikipedia editors" ], "free_form_answer": "", "highlighted_evidence": [ "The Wiki Neutrality Corpus consists of aligned sentences pre and post-neutralization by English Wikipedia editors (Table TABREF3)." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We introduce the Wiki Neutrality Corpus (WNC). This is a new parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The corpus was harvested from Wikipedia edits that were designed to ensure texts had a neutral point of view. WNC is the first parallel corpus targeting biased and neutralized language. We also define the task of neutralizing subjectively biased text. This task shares many properties with tasks like detecting framing or epistemological bias BIBREF2, or veridicality assessment/factuality prediction BIBREF7, BIBREF8, BIBREF9, BIBREF10. Our new task extends these detection/classification problems into a generation task: generating more neutral text with otherwise similar meaning." ], "extractive_spans": [ " Wikipedia edits" ], "free_form_answer": "", "highlighted_evidence": [ "The corpus was harvested from Wikipedia edits that were designed to ensure texts had a neutral point of view. " ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "annotation_id": [ "aa8671add8eae183107bd052c39bafeef69e69d1", "ccf69d756bc2966124f111a6e94cb16cfa630385", "e9b20a85ffbeaa22b6ccdf6cfda038037e3580e8" ], "answer": [ { "evidence": [ "This work presents data and algorithms for automatically reducing bias in text. We focus on a particular kind of bias: inappropriate subjectivity (“subjective bias”). Subjective bias occurs when language that should be neutral and fair is skewed by feeling, opinion, or taste (whether consciously or unconsciously). In practice, we identify subjective bias via the method of BIBREF2: using Wikipedia's neutral point of view (NPOV) policy. This policy is a set of principles which includes “avoiding stating opinions as facts” and “preferring nonjudgemental language”.", "We aim to debias text by suggesting edits that would make it more neutral. This contrasts with prior research which has debiased representations of text by removing dimensions of prejudice from word embeddings BIBREF3, BIBREF4 and the hidden states of predictive models BIBREF5, BIBREF6. To avoid overloading the definition of “debias,” we refer to our kind of text debiasing as neutralizing that text. Figure FIGREF1 gives an example." 
], "extractive_spans": [], "free_form_answer": " Identify subjective bias via the method of BIBREF2: using Wikipedia's neutral point of view (NPOV) policy and suggest edits that would make it more neutral.", "highlighted_evidence": [ "This work presents data and algorithms for automatically reducing bias in text. We focus on a particular kind of bias: inappropriate subjectivity (“subjective bias”). Subjective bias occurs when language that should be neutral and fair is skewed by feeling, opinion, or taste (whether consciously or unconsciously). In practice, we identify subjective bias via the method of BIBREF2: using Wikipedia's neutral point of view (NPOV) policy. This policy is a set of principles which includes “avoiding stating opinions as facts” and “preferring nonjudgemental language”.", "We aim to debias text by suggesting edits that would make it more neutral." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We propose the task of neutralizing text, in which the algorithm is given an input sentence and must produce an output sentence whose meaning is as similar as possible to the input but with the subjective bias removed." ], "extractive_spans": [], "free_form_answer": "The text is modified to remove the subjective bias while preserve the meaning as much as possible", "highlighted_evidence": [ "We propose the task of neutralizing text, in which the algorithm is given an input sentence and must produce an output sentence whose meaning is as similar as possible to the input but with the subjective bias removed." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We propose the task of neutralizing text, in which the algorithm is given an input sentence and must produce an output sentence whose meaning is as similar as possible to the input but with the subjective bias removed." ], "extractive_spans": [ "algorithm is given an input sentence and must produce an output sentence whose meaning is as similar as possible to the input but with the subjective bias removed" ], "free_form_answer": "", "highlighted_evidence": [ "We propose the task of neutralizing text, in which the algorithm is given an input sentence and must produce an output sentence whose meaning is as similar as possible to the input but with the subjective bias removed." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8", "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "no", "no", "no" ], "question": [ "Which works better according to human evaluation, the concurrent or the modular system?", "Were the Wikipedia edits that removed framings, presuppositions and attitudes from biased sentences a Wiki community effort, or were annotators trained to do it?", "How is subjective text automatically neutralized?" ], "question_id": [ "fcf9377fc3fce529d4bab1258db3f46b15ae5872", "5422a3f2a083395416d6f99c57d28335eb2e44e1", "7c2d6bc913523d77e8fdc82c60598ee95b445d84" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "search_query": [ "bias", "bias", "bias" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Figure 1: Example output from our MODULAR algorithm. “Exposed” is a factive verb that presupposes the truth of its complement (that McCain is unprincipled). Replacing “exposed” with “described” neutralizes the headline because it conveys a similar main clause proposition (someone is asserting McCain is unprincipled), but no longer introduces the authors subjective bias via presupposition.", "Table 1: Samples from our new corpus. 500 sentence pairs are annotated with “subcategory” information (Column 3).", "Table 2: Corpus statistics.", "Figure 2: The detection module uses discrete features fi and BERT embedding bi to calculate logit yi.", "Table 3: Proportion of bias subcategories in Biased-full.", "Figure 3: The MODULAR system uses join embedding v to reconcile the detector’s predictions with an encoder-decoder architecture. The greater a word’s probability, the more of v is mixed into that word’s hidden state.", "Table 4: Bias neutralization performance. ST indicates a style transfer system. MT indicates a machine translation system. For quantitative metrics, rows with asterisks are significantly different than the preceding row. For qualitative metrics, rows with asterisks are significantly different from zero. Higher is preferable for fluency, while lower is preferable for bias and meaning.", "Table 5: Spearman correlation (R2) between quantitative and qualitative metrics.", "Table 6: Performance on out-of-domain datasets. Higher is preferable for fluency, while lower is preferable for bias and meaning. Rows with asterisks are significantly different from zero", "Table 7: Distribution of model errors on the Wikipedia test set. We also give the percent of errors that were valid neutralizations of the source despite failing to match the target sentence.", "Table 8: Performance of various bias detectors. Rows with asterisks are statistically different than the preceding row.", "Table 9: Top: examples of model errors from each error category. Middle: the model treats words differently based on their context; in this case, “dominant” is ignored when it accurately describes an individual’s winning performance, but deleted when it describes a group of people in arbitrary comparison. Bottom: the MODULAR model can sometimes be controlled, for example by selecting words to change, to correct errors or otherwise change the model’s behavior." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "2-Table2-1.png", "3-Figure2-1.png", "3-Table3-1.png", "4-Figure3-1.png", "5-Table4-1.png", "6-Table5-1.png", "6-Table6-1.png", "6-Table7-1.png", "7-Table8-1.png", "8-Table9-1.png" ] }
[ "Which works better according to human evaluation, the concurrent or the modular system?", "Were the Wikipedia edits that removed framings, presuppositions and attitudes from biased sentences a Wiki community effort, or were annotators trained to do it?", "How is subjective text automatically neutralized?" ]
[ [ "1911.09709-7-Table8-1.png", "1911.09709-Experiments ::: Real-world Media-7", "1911.09709-Experiments ::: Wikipedia (WNC)-0" ], [ "1911.09709-Introduction-4", "1911.09709-Wiki Neutrality Corpus (WNC)-0" ], [ "1911.09709-Introduction-3", "1911.09709-Methods for Neutralizing Text-0", "1911.09709-Introduction-1" ] ]
[ "They are equal", "Wiki community effort", "The text is modified to remove the subjective bias while preserve the meaning as much as possible" ]
81
1909.11232
Sign Language Recognition Analysis using Multimodal Data
Voice-controlled personal and home assistants (such as the Amazon Echo and Apple Siri) are becoming increasingly popular for a variety of applications. However, the benefits of these technologies are not readily accessible to Deaf or Hard-of-Hearing (DHH) users. The objective of this study is to develop and evaluate a sign recognition system using multiple modalities that can be used by DHH signers to interact with voice-controlled devices. With the advancement of depth sensors, skeletal data is used for applications like video analysis and activity recognition. Despite its similarity to the well-studied human activity recognition problem, the use of 3D skeleton data in sign language recognition is rare. This is because, unlike activity recognition, sign language is mostly dependent on hand shape patterns. In this work, we investigate the feasibility of using skeletal and RGB video data for sign language recognition using a combination of different deep learning architectures. We validate our results on a large-scale American Sign Language (ASL) dataset of 12 users and 13107 samples across 51 signs, named GMU-ASL51. We collected the dataset over 6 months and it will be publicly released in the hope of spurring further machine learning research towards providing improved accessibility for digital assistants.
{ "paragraphs": [ [ "According to The National Institute on Deafness, one in thousand infants is born deaf. An additional one to six per thousand are born with hearing loss at different levels BIBREF0. Sign language is commonly used by Deaf and Hard-of-Hearing (DHH) persons to communicate via hand gestures. An automatic sign language recognizer enables an ASL user to translate the sign language to written text or speech, allowing them to communicate with people who are not familiar with ASL. There is a tremendous rise in the popularity of personal digital assistants; available on user's personal and wearable devices (Google Now, Amazon Alexa and Apple Siri, etc.) and also in the form of standalone devices (Amazon Echo and Google Home smart speakers). These devices are primarily controlled through voice, and hence, their functionality is not readily available to DHH users. An automatic sign recognizer can also enable the interaction between a DHH user and a digital assistant.", "Most current systems have capability of ASL recognition with RGB video data BIBREF1, BIBREF2, BIBREF3. An ASL sign is performed by a combination of hand gestures, facial expressions and postures of the body. Sequential motion of specific body locations (such as hand-tip, neck and arm) provide informative cues about a sign. Using video data, it is difficult to extract different body locations and associated motion sequences from a series of RGB frames. Microsoft Kinect is a 3D camera sensor which can use the depth information of a person to capture 3D coordinates of his/her body location across a video. This sequence of 3D body location is referred by skeletal data BIBREF4. To the best of our knowledge, there is no publicly available skeletal dataset in literature for ASL recognition.", "With skeletal data, an ASL sign can be seen as a sequence of 3D coordinates or a 3D time series BIBREF5. Recurrent neural networks (RNN) have shown strong performance for sequential modeling BIBREF6. In this work, we investigate the impact of RGB video data in recognition accuracy when combined with skeletal data. We also propose a combined RNN network with a simple spatial data augmentation technique. In summary, the contributions of this work are:", "We propose an RNN architecture with a novel spatial data augmentation technique.", "We propose an architecture which uses both RGB and skeletal data to improve recognition accuracy.", "We introduce and publicly release a multi–modal dataset for ASL called GMU-ASL51." ], [ "Most sign language recognition systems use RGB video data as input. These approaches model sequential dependencies using Hidden Markov Models (HMM). Zafrullah et al. BIBREF7 used colored gloves (worn on hands) during data collection and developed an HMM based framework for ASL phrase verification. They also used hand crafted features from Kinect skeletal data and accelerometers worn on hand BIBREF8. Huang et al. BIBREF1 demonstrated the effectiveness of using Convolutional neural network (CNN) with RGB video data for sign language recognition. Three dimensional CNN have been used to extract spatio-temporal features from video BIBREF2. Similar architecture was implemented for Italian gestures BIBREF9. Sun et al. BIBREF3 hypothesized that not all RGB frames in a video are equally important and assigned a binary latent variable to each frame in training videos for indicating the importance of a frame within a latent support vector machine model. Zaki et al. 
BIBREF10 proposed two new features with existing hand crafted features and developed the system using HMM based approach. Some researchers have used appearance-based features and divided the approach into sub units of RGB and tracking data, with a HMM model for recognition BIBREF11.", "Compared to RGB methods, skeletal data has received little attention in ASL recognition. However, in a closely similar human action recognition task, a significant amount of work has been done using body joint location related data. Shahroudy et al. BIBREF12 published the largest dataset for human activity recognition. They proposed an extension of long short term memory (LSTM) model which leverages group motion of several body joints to recognize human activity from skeletal data. A different adaptation of the LSTM model was proposed by Liu et al. BIBREF13 where spatial interaction among joints was considered in addition to the temporal dynamics. Veeriah et al. BIBREF14 proposed a LSTM network to capture the salient motion pattern of body joints. This method takes into account the derivative of motion states associated with different body joints. Some have treated the whole body as a hierarchical configuration of different body parts and proposed a hierarchical RNN to recognize human activities BIBREF5. Several attention based models were proposed for human activity analysis BIBREF15, BIBREF16. Some prior works converted skeleton sequences of body joints or RGB videos into an image representation and then applied state-of-the-art image recognition models BIBREF17, BIBREF18. Motivated by the success of skeletal data in human activity recognition, we investigate its suitability for recognizing ASL signs." ], [ "ASL recognition with skeletal data has received little attention, resulting in a scarcity of public datasets. There exists one dataset for ASL recognition with skeletal data BIBREF19. This dataset has 9800 samples from 6 subjects and more than 3300 sign classes. The number of samples per class was small for use in deep learning based models. Adding to this, the samples were collected in controlled settings with uncluttered background. In contrast, GMU-ASL51 has 13107 samples for 51 word level classes from 12 distinct subjects of different height, build and signing (using sign language) experience.", "Figure FIGREF6 shows the T-SNE representation of a subset of samples from GMU-ASL51. It was performed on output vectors from a trained RNN model for each sign example in the subset. The used model, AI-LSTM, is described in section SECREF19." ], [ "The data was collected with a Microsoft Kinect version 2.0 depth camera positioned in front of the signer. For each sign (a single class like Air Condition or AC) we collected 24 samples continuously; and the process was repeated for every sign (51 classes in total). Due to time and availability constraints, for some subjects we could not collect the samples for all the classes resulting in a total of 13107 samples. The distance between the subject and the sensor was varied in the range from 10 to 15 feet to simulate practical scenarios. No constraints were imposed on performers' posture and lighting condition of the room.", "To gather individual samples from the continuous data, segmentation marks were interleaved through a user interface. This was later used to segment individual samples. These samples were further segmented using motion calculation of the wrist joint co-ordinates from skeletal data. 
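The exact segmentation rule is not spelled out here, so the following is only a rough numpy sketch of one way such wrist-motion-based segmentation of the continuous recording could work; the joint indices (the Kinect v2 WristLeft/WristRight positions) and the stillness threshold are illustrative assumptions rather than the values used for GMU-ASL51.

```python
import numpy as np

def wrist_motion(skeleton, wrist_idx):
    # Frame-to-frame displacement magnitude of one wrist joint.
    # skeleton: (num_frames, num_joints, 3) array of Kinect 3D coordinates.
    wrist = skeleton[:, wrist_idx, :]
    return np.linalg.norm(np.diff(wrist, axis=0), axis=1)

def low_motion_frames(skeleton, wrist_indices=(6, 10), thresh=0.01):
    # Frames where both wrists are nearly still are candidate boundaries
    # between consecutive sign samples. Indices 6/10 correspond to the
    # Kinect v2 WristLeft/WristRight joints; the threshold is illustrative.
    motion = sum(wrist_motion(skeleton, i) for i in wrist_indices)
    return np.where(motion < thresh)[0]

recording = np.random.randn(500, 25, 3)       # a continuous recording (toy data)
boundaries = low_motion_frames(recording)     # candidate cut points
```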
Figure FIGREF7 (a) illustrates the distribution of number of samples per gesture class in GMU-ASL51 dataset. Figure FIGREF7 (b) shows the distribution of duration of videos in our dataset." ], [ "All of our experiments on ASL recognition were done with RGB video data and/or skeletal data. Skeletal data is a multivariate, multidimensional time series input where each body part acts as a variable and each of them have 3D coordinate data at each time step. The skeletal data provides motion trajectory of different body parts such as wrist, elbow and shoulder (total 25 such body parts) over whole video frames. This process is called skeletal tracking. Skeletal data provides high level motion of different body parts. These are useful for capturing discriminant features associated with different types of gestures. However, for better modeling of sign language, hand shape is crucial, as different signs may have similar motion but different hand shapes and orientation. Figure FIGREF10 presents one such example where the sign pair Alarm and Doorbell have exact same motion pattern according to skeletal data but have different hand shapes. We observe similar situation for sign pairs such as Kitchen/Room, Time/Movie, Quote/Camera, Lock/Stop and many more.", "We hypothesize that hand shape is useful in situations where skeletal data has similar dynamic motion pattern for different sign classes. Due to this fact, we extract and use hand shape patterns from RGB video data." ], [ "Inspired by the success of deep learning approaches in computer vision BIBREF20, we applied different deep learning architectures to model sign languages from both input modes (RGB and skeletal). Unlike traditional image classification or object detection models where neural networks learn hierarchical spatial features from data, sign recognition requires capture of temporal body motion." ], [ "RNN has shown success in modeling sequential pattern in dataBIBREF6. It can capture temporal dynamics in data by maintaining an internal state. However, the basic RNN has problems dealing with long term dependencies in data due to the vanishing gradient problem BIBREF21. Some solutions to the vanishing gradient problem involve careful initialization of network parameters or early stopping BIBREF22. But the most effective solution is to modify the RNN architecture in such a way that there exists a memory state (cell state) at every time step that can identify what to remember and what to forget. This architecture is referred to as long short term memory (LSTM) network BIBREF23. While the basic RNN is a direct transformation of the previous state and the current input, the LSTM maintains an internal memory and has mechanisms to update and use that memory. This is achieved by deploying four separate neural networks also called gates. Figure FIGREF12 depicts a cell of an LSTM network which shows input at the current time step ${x_t}$ and the previous state ${h_{t-1}}$ enter into the cell; and get concatenated. The forget gate processes it to remove unnecessary information, and outputs ${f_t}$ which gets multiplied with the previously stored memory ${C_{t-1}}$ and produces a refined memory for the current time.", "Meanwhile, the input and update gate process the concatenated input and convert it into a candidate memory for the current time step by element–wise multiplication. The refined memory and proposed candidate memory of the current step are added to produce the final memory for the current step. 
This addition could render the output out of scale. To avoid that, a squashing function (hyperbolic tan) is used, which scales the elements of the output vector into a fixed range. Finally, ${o_t}$, the output from the output gate, is multiplied with the output of the squashing function to produce the current time step output. Figure FIGREF12 shows an LSTM cell. The forget, input, update and output gates are represented by four circles and symbolized as $f_t$, $i_t$, $\\tilde{C_t}$ and $o_t$, respectively. Equation DISPLAY_FORM13 shows the LSTM functions, where $\\oplus $ and $\\otimes $ represent element-wise addition and multiplication respectively, $\\times $ represents matrix multiplication, and $concat$ denotes concatenation of its inputs." ], [ "The traditional convolutional neural network (CNN) is two-dimensional: each layer has a stack of 2D feature maps generated from the previous layer, or from the inputs in the case of the first layer. A layer also has a certain number of filters, which are rectangular patches of parameters. Each filter convolves over the stack of 2D feature maps at the previous layer and produces feature maps (equal in number to the filters in the current layer) at the current layer. The operation is given by Equation DISPLAY_FORM17, where $F_{i,j}^{l}$ denotes the value of the feature map at the $l^{th}$ layer at location $(i,j)$, and $\\odot $ represents the dot product of the filter $W$ and the associated feature map patch in the previous layer.", "Standard CNNs fail to capture the temporal information associated with data, which is important in video or any type of sequential data representation. To solve this problem, 3D convolution was introduced in BIBREF2. The key difference is that the kernels are 3D and the sub-sampling (pooling) layers work across three dimensions.", "Equation DISPLAY_FORM18 shows the 3D convolution function. In this case, each filter yields a 3D feature map, and $F_{i,j,k}$ denotes the value at location $(i, j, k)$ after the convolution operation. The dot product is between two three-dimensional matrices (also called tensors)." ], [ "A sample of skeletal data lies in $R^{T \\times J \\times 3}$, where $T$ denotes the time axis, $J$ is the number of body joints and the last dimension holds the 3D coordinates of each joint. We flatten every dimension except time, so at each time step we can feed a vector of size $R^{3 \\times J}$ as input. However, we have empirically verified that learning a sequential pattern for each coordinate axis independently and combining them later shows stronger classification performance. Based on this, we trained three different 2-layer LSTMs on the data from the x, y, and z coordinates separately, and concatenate their final embeddings to produce the softmax output. In this setting, each separate LSTM receives data as $R^{T \\times J}$ and the final embedding size is $R^{3\\times S}$, where $S$ is the state size of the LSTM cell. Figure FIGREF15 (a) shows the architecture: as a sample arrives, just before entering the main network, the data along each axis is split and fed into three different LSTM networks. The model concatenates the final state from each of the separate LSTM networks and feeds this into the softmax layer for classification. This approach is referred to as the Axis Independent Architecture (AI-LSTM). Implementation details such as the values of T and J are provided in the `Experiments' section." ], [ "AI-LSTM, described in the last section, works by modeling the temporal dynamics of body joints' data over time. However, there can be spatial interactions among joints at a specific time step.
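For concreteness, the axis-independent design just described might be sketched as follows. This is a simplified tf.keras sketch, not the authors' code: the original models were built with TensorFlow 1.10, and the dropout, L2 regularization and learning-rate settings listed in the Experiments section are omitted here; the constants reflect the values reported there (20 frames, 6 joints, state size 50, 51 classes).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

T, J, STATE, NUM_CLASSES = 20, 6, 50, 51     # values reported in the Experiments section

inputs = layers.Input(shape=(T, J, 3))        # (time, joints, x/y/z)
axis_embeddings = []
for axis in range(3):                         # one 2-layer LSTM per coordinate axis
    stream = layers.Lambda(lambda t, a=axis: t[..., a])(inputs)   # -> (batch, T, J)
    stream = layers.LSTM(STATE, return_sequences=True)(stream)
    stream = layers.LSTM(STATE)(stream)       # keep only the final state
    axis_embeddings.append(stream)

embedding = layers.Concatenate()(axis_embeddings)                  # size 3 * STATE
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(embedding)
ai_lstm = Model(inputs, outputs)
ai_lstm.compile(optimizer="adam", loss="categorical_crossentropy")
```

Splitting the axes this way keeps each LSTM's per-step input small (J features) while the concatenated final states still summarize all three coordinates.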
It fails to capture any such interaction among joints at a given time. To incorporate spatial relationships among joints, we propose a simple novel data augmentation technique for skeletal data. We do this by origin transfer. For each frame in a gesture sample, we use each wrist joint as an origin and transform all other joints' data by subtracting that origin from them. In this way spatial information is added to the input. We refer to this model with spatial data augmentation as Spatial AI-LSTM. This augmentation technique is depicted in Figure FIGREF21. A sample of form $R^{T \\times 6 \\times 3}$ yields a representation of $R^{T \\times 5 \\times 3}$ after subtracting the left wrist joint (origin transfer). After this augmentation process, each sample is a $R^{20 \\times 16 \\times 3}$ matrix. Hence, each separate LSTM network in our Spatial AI-LSTM receives an input of $R^{20 \\times 16}$." ], [ "We hypothesize that some signs that have mostly similar skeletal motion patterns could be distinguishable using hand shape information. We propose a combination of LSTM and 3D CNN networks. We call this the Max CNN-LSTM network. Figure FIGREF15 (b) represents the Max CNN-LSTM. The details of the 3D CNN module are shown in Figure FIGREF14. This architecture has two parts: one for left hand patches and the other for right hand patches. Each part has four 3D convolutional layers (the second and fourth layers are followed by max pooling layers) followed by 2 fully connected layers. Final embeddings from these two parts are concatenated, and a classification score is produced using a softmax layer. The other network, the AI-LSTM, is fed with skeletal time series data. At the final time step, the LSTM state vector is taken and another probability score is produced using a softmax layer. The final classification score is created by taking the element-wise maximum of the output scores from the two networks. During back-propagation, both networks are trained on their own score. The combined network acts like a model ensemble, and some sign classes which are confused by the RNN network alone might have an improved recognition accuracy with this approach." ], [ "Naturally, each sign has a different frame length after segmentation because each subject performs a sign at a different speed. It is possible that the same subject may do the same sign at different speeds at different times, which makes the recognition challenging. Further, neighboring frames contain redundant information, and not all joints have an equal amount of motion or pattern in the case of skeletal data." ], [ "Most of the signs do not involve all of the 25 joints' information provided by the Kinect sensor; specifically, joints involved with the two hands convey most information. Based on this, we consider only 6 joints (wrist, elbow, shoulder) from both arms as input to the LSTM network. Figure FIGREF22 shows an example where 7 frames were sampled from a sign video of class Air Condition and the bottom panel shows the skeletal configuration across those 7 frames. From each sign video we sampled a number of frames uniformly and took the joints' data associated with those frames. We verified empirically that picking 20 frames uniformly works best for skeletal data. For samples with fewer than 20 frames, we convert them to 20-frame signs by interleaving existing frames uniformly. Thus the skeletal data for each sample is a vector in $R^{20 \\times 6 \\times 3}$." ], [ "Since ASL involves specific hand shape patterns, we crop both hand regions at each frame.
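Before turning to the RGB hand-patch pipeline, the skeletal preprocessing and origin-transfer augmentation just described can be sketched roughly as follows. This is a minimal numpy sketch under two stated assumptions: that the two wrist-relative copies of the remaining joints are appended to the original six joints (6 + 5 + 5 = 16, one plausible reading of the 20 x 16 x 3 dimensions given above), and that the joint ordering and the handling of short clips are illustrative rather than the exact GMU-ASL51 preprocessing.

```python
import numpy as np

def sample_frames(skeleton, num_frames=20):
    # Uniformly pick (or, for short clips, repeat) frames from a sign clip.
    # This is a simplification of the interleaving described above.
    idx = np.linspace(0, len(skeleton) - 1, num_frames).round().astype(int)
    return skeleton[idx]

def origin_transfer(sample, left_wrist=0, right_wrist=3):
    # Spatial augmentation: express the remaining joints relative to each wrist
    # and append them to the original joints (6 -> 16 joints per frame).
    # The joint indices are illustrative placeholders.
    keep_l = [j for j in range(sample.shape[1]) if j != left_wrist]
    keep_r = [j for j in range(sample.shape[1]) if j != right_wrist]
    rel_left = sample[:, keep_l, :] - sample[:, [left_wrist], :]     # (T, 5, 3)
    rel_right = sample[:, keep_r, :] - sample[:, [right_wrist], :]   # (T, 5, 3)
    return np.concatenate([sample, rel_left, rel_right], axis=1)     # (T, 16, 3)

clip = np.random.randn(37, 6, 3)              # 37 raw frames, 6 upper-body joints
augmented = origin_transfer(sample_frames(clip))
assert augmented.shape == (20, 16, 3)
```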
Using 2D coordinates of hand joints on a video frame as center, we do a $100 \\times 100$ crop to generate hand patches. To reduce motion blur, we calculate velocity of joints at each video frame using skeletal coordinates and then sample from frames which have less motion. We sampled 15 frames from each sign video resulting in a vector of $R^{15 \\times 100 \\times 100 \\times 3}$ for each hand patch." ], [ "To deal with over-fitting, dropout was used for all networks except convolutional layers with probability of 0.5. In addition to dropout, L2 regularization was used for LSTM networks and for dense layers; $\\beta $ was set to 0.008 which controls the impact of regularization on the network. State size and number of layers of LSTM networks were 50 and 2, respectively. Learning rate for Max CNN-LSTM and LSTM networks were set to $0.00001$ and $0.00005$, respectively. We used Adam Optimizer for training our networks BIBREF24. All networks were run for a certain number of epochs (200-300) with a batch size of 64. We developed all of our models with Tensorflow 1.10 (python). Average time taken to train an AI-LSTM and an Spatial AI-LSTM are 25 and 30 minutes on an Intel(R) Core(TM) i5-7600 (3.50GHz) processor respectively. We trained 3D CNN and Max 3D CNN models on GPU (Tesla K80) and each model took around 20 hours to train." ], [ "We use support vector machines and random forest for baseline comparison. The baseline models utilize skeletal data in each axis for every joint in building the following features per sample: Mean, Area, Skew, Kurtosis, Motion Energy, Range and Variance over the frames BIBREF25. We have 6 upper body joints and 3 axes per joint and 7 features for each giving a total of 126 $(7 \\times 6 \\times 3)$ features per sample." ], [ "Table TABREF28 shows the comparative results among our proposed architectures and baselines. Overall, we use data from 12 subjects for our experiments which sum up to 13107 sign gesture samples in total. To evaluate model performance on a specific subject (test subject), we adopt cross subject evaluation criteria. Suppose, X is the test subject. We train our networks with all sign samples except those are from subject X. We use subject X's data as test split to evaluate the performance of the networks. Table TABREF28 shows the average test accuracy for all 12 subjects. We can see that 3D CNN network alone performs worse than simpler baselines. But when coupled with AI-LSTM as Max CNN-LSTM, it shows an increase in recognition accuracy by 2% from AI-LSTM alone. This is because some of the signs are confused by the AI-LSTM network because of similar skeletal motion pattern. Incorporating spatial relationship among joints leads to a significant accuracy gain. The Spatial AI-LSTM is trained only on skeletal data but outperforms the combined network by 6%.", "Figure FIGREF30 shows three confusion matrices for a subset of twelve sign classes for a subject. The top matrix is for AI-LSTM network, middle one is for Max CNN-LSTM and bottom one is for Spatial AI-LSTM. As seen in Figure FIGREF10 the sign pairs Alarm/Doorbell are similar in skeletal motion but have different hand shapes. Since Max CNN-LSTM includes hand shapes, it can successfully recognize it while other two models struggles. Same is true for some other signs like Email, Event, List, Order and Weather . Some other signs are better recognized by Spatial AI-LSTM network. 
It should be mentioned here that the accuracy listed in Table TABREF28 is the average accuracy across all test subjects, while Figure FIGREF30 presents confusion matrices for a single test subject. For this particular subject, the overall test accuracy is 58%, 70% and 69% for the AI-LSTM, Max CNN-LSTM and Spatial AI-LSTM networks, respectively." ], [ "In addition to the cross subject accuracy described in Section SECREF29, we also want to know the impact of adding a test subject's data to the training process. It is obvious that adding a test subject's data to the training must increase the accuracy of the network for that subject. However, we want to know how much or what fraction of data is necessary for a significant improvement in performance. This is important for assessing the practical usability of a recognition system. In other words, we want to know how quickly, or with what amount of data, the current system can be adapted to a subject completely unknown to the system. To do that, we first pick a test subject and train a model for the test subject with data from all other subjects in our dataset. Then we retrain the model with some fraction of data from the test subject. We keep increasing the fraction of data being used from the test subject in the retraining process up to $50\\%$. The other half of the test subject's data is used for testing the model.", "Figure FIGREF32 shows the effect of adding training data from test subjects in the retraining process for six subjects from our dataset, in the case of the Spatial AI-LSTM model. We see that adding data from a test subject increases recognition accuracy for all of the subjects shown. It is interesting to observe that adding even $10\\%$ of data from a test subject gives a significant improvement in recognition accuracy (close to $95\\%$) for almost all of the subjects shown." ], [ "We present a deep learning based approach for ASL recognition that leverages skeletal and video data. The proposed model captures the underlying temporal dynamics associated with sign language and also identifies specific hand shape patterns from video data to improve recognition performance. A new data augmentation technique was introduced that allowed the LSTM networks to capture spatial dynamics among joints. Finally, a large public dataset for ASL recognition will be released to the community to spur research in this direction and bring the benefits of digital assistants to the deaf and hard of hearing community. As a future research direction, we are looking into the problem of sentence-level ASL recognition. We also plan to use other data modalities such as WiFi signals, which can be complementary to video in sign language recognition." ], [ "This work was supported by a Google Research Award. Some of the experiments were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA.
(URL:http://orc.gmu.edu)" ] ], "section_name": [ "Introduction", "Literature Review", "Dataset", "Dataset ::: Collection Protocol", "Dataset ::: Data Modality", "Our Approach", "Our Approach ::: Recurrent Neural Networks (RNN)", "Our Approach ::: 3D Convolutional Neural Network", "Our Approach ::: Axis Independent LSTM", "Our Approach ::: Spatial AI-LSTM", "Our Approach ::: Combined Network", "Experiments", "Experiments ::: Skeletal Data", "Experiments ::: Video Data", "Experiments ::: Training Details", "Experiments ::: Baseline Methods", "Experiments ::: Experimental Results", "Experiments ::: Effect of Same Subject Data in Training", "Conclusion", "Acknowledgments" ] }
{ "answers": [ { "annotation_id": [ "2cf29132862c6610d5e96ae5dc76efcc8a9a221f", "d5ccb955e73b5a85b219a991a9415bd09ffa5acb", "f5030084930728181d31960fb72d3cdaa8e15962" ], "answer": [ { "evidence": [ "According to The National Institute on Deafness, one in thousand infants is born deaf. An additional one to six per thousand are born with hearing loss at different levels BIBREF0. Sign language is commonly used by Deaf and Hard-of-Hearing (DHH) persons to communicate via hand gestures. An automatic sign language recognizer enables an ASL user to translate the sign language to written text or speech, allowing them to communicate with people who are not familiar with ASL. There is a tremendous rise in the popularity of personal digital assistants; available on user's personal and wearable devices (Google Now, Amazon Alexa and Apple Siri, etc.) and also in the form of standalone devices (Amazon Echo and Google Home smart speakers). These devices are primarily controlled through voice, and hence, their functionality is not readily available to DHH users. An automatic sign recognizer can also enable the interaction between a DHH user and a digital assistant.", "Most current systems have capability of ASL recognition with RGB video data BIBREF1, BIBREF2, BIBREF3. An ASL sign is performed by a combination of hand gestures, facial expressions and postures of the body. Sequential motion of specific body locations (such as hand-tip, neck and arm) provide informative cues about a sign. Using video data, it is difficult to extract different body locations and associated motion sequences from a series of RGB frames. Microsoft Kinect is a 3D camera sensor which can use the depth information of a person to capture 3D coordinates of his/her body location across a video. This sequence of 3D body location is referred by skeletal data BIBREF4. To the best of our knowledge, there is no publicly available skeletal dataset in literature for ASL recognition.", "We present a deep learning based approach for ASL recognition that leverages skeletal and video data. The proposed model captures the underlying temporal dynamics associated with sign language and also identifies specific hand shape patterns from video data to improve recognition performance. A new data augmentation technique was introduced that allowed the LSTM networks to capture spatial dynamics among joints. Finally, a large public dataset for ASL recognition will be released to the community to spur research in this direction; and bring benefits of digital assistants to the deaf and hard of hearing community. For future research direction, we are looking into the problem of sentence level ASL recognition. We also plan to use other data modality such as wifi signals which can be complimentary to video in sign language recognition." ], "extractive_spans": [ "We present a deep learning based approach for ASL recognition that leverages skeletal and video data. The proposed model captures the underlying temporal dynamics associated with sign language and also identifies specific hand shape patterns from video data to improve recognition performance. " ], "free_form_answer": "", "highlighted_evidence": [ "An automatic sign language recognizer enables an ASL user to translate the sign language to written text or speech, allowing them to communicate with people who are not familiar with ASL.", "Most current systems have capability of ASL recognition with RGB video data BIBREF1, BIBREF2, BIBREF3. 
An ASL sign is performed by a combination of hand gestures, facial expressions and postures of the body. Sequential motion of specific body locations (such as hand-tip, neck and arm) provide informative cues about a sign. Using video data, it is difficult to extract different body locations and associated motion sequences from a series of RGB frames. ", "We present a deep learning based approach for ASL recognition that leverages skeletal and video data. The proposed model captures the underlying temporal dynamics associated with sign language and also identifies specific hand shape patterns from video data to improve recognition performance." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We present a deep learning based approach for ASL recognition that leverages skeletal and video data. The proposed model captures the underlying temporal dynamics associated with sign language and also identifies specific hand shape patterns from video data to improve recognition performance. A new data augmentation technique was introduced that allowed the LSTM networks to capture spatial dynamics among joints. Finally, a large public dataset for ASL recognition will be released to the community to spur research in this direction; and bring benefits of digital assistants to the deaf and hard of hearing community. For future research direction, we are looking into the problem of sentence level ASL recognition. We also plan to use other data modality such as wifi signals which can be complimentary to video in sign language recognition." ], "extractive_spans": [], "free_form_answer": " American Sign Language recognition ", "highlighted_evidence": [ "We present a deep learning based approach for ASL recognition that leverages skeletal and video data." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "fa716cd87ce6fd6905e2f23f09b262e90413167f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "1ccddcb828d50e932f84127a0296a5bb976f2352", "6c10de1e87fa979950766c8039af7be8ecb4d04e", "fea88f37c8fddff228b251ef1d0ccddf7c0a7a79" ], "answer": [ { "evidence": [ "FLOAT SELECTED: TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN.", "Table TABREF28 shows the comparative results among our proposed architectures and baselines. Overall, we use data from 12 subjects for our experiments which sum up to 13107 sign gesture samples in total. To evaluate model performance on a specific subject (test subject), we adopt cross subject evaluation criteria. Suppose, X is the test subject. We train our networks with all sign samples except those are from subject X. We use subject X's data as test split to evaluate the performance of the networks. Table TABREF28 shows the average test accuracy for all 12 subjects. We can see that 3D CNN network alone performs worse than simpler baselines. But when coupled with AI-LSTM as Max CNN-LSTM, it shows an increase in recognition accuracy by 2% from AI-LSTM alone. This is because some of the signs are confused by the AI-LSTM network because of similar skeletal motion pattern. Incorporating spatial relationship among joints leads to a significant accuracy gain. 
The Spatial AI-LSTM is trained only on skeletal data but outperforms the combined network by 6%." ], "extractive_spans": [], "free_form_answer": "Spatial AI-LSTM", "highlighted_evidence": [ "FLOAT SELECTED: TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN.", "The Spatial AI-LSTM is trained only on skeletal data but outperforms the combined network by 6%." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN.", "Table TABREF28 shows the comparative results among our proposed architectures and baselines. Overall, we use data from 12 subjects for our experiments which sum up to 13107 sign gesture samples in total. To evaluate model performance on a specific subject (test subject), we adopt cross subject evaluation criteria. Suppose, X is the test subject. We train our networks with all sign samples except those are from subject X. We use subject X's data as test split to evaluate the performance of the networks. Table TABREF28 shows the average test accuracy for all 12 subjects. We can see that 3D CNN network alone performs worse than simpler baselines. But when coupled with AI-LSTM as Max CNN-LSTM, it shows an increase in recognition accuracy by 2% from AI-LSTM alone. This is because some of the signs are confused by the AI-LSTM network because of similar skeletal motion pattern. Incorporating spatial relationship among joints leads to a significant accuracy gain. The Spatial AI-LSTM is trained only on skeletal data but outperforms the combined network by 6%." ], "extractive_spans": [], "free_form_answer": "Accuracy 81%", "highlighted_evidence": [ "FLOAT SELECTED: TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN.", "The Spatial AI-LSTM is trained only on skeletal data but outperforms the combined network by 6%.", "Table TABREF28 shows the comparative results among our proposed architectures and baselines. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Table TABREF28 shows the comparative results among our proposed architectures and baselines. Overall, we use data from 12 subjects for our experiments which sum up to 13107 sign gesture samples in total. To evaluate model performance on a specific subject (test subject), we adopt cross subject evaluation criteria. Suppose, X is the test subject. We train our networks with all sign samples except those are from subject X. We use subject X's data as test split to evaluate the performance of the networks. Table TABREF28 shows the average test accuracy for all 12 subjects. We can see that 3D CNN network alone performs worse than simpler baselines. But when coupled with AI-LSTM as Max CNN-LSTM, it shows an increase in recognition accuracy by 2% from AI-LSTM alone. This is because some of the signs are confused by the AI-LSTM network because of similar skeletal motion pattern. Incorporating spatial relationship among joints leads to a significant accuracy gain. 
The Spatial AI-LSTM is trained only on skeletal data but outperforms the combined network by 6%.", "FLOAT SELECTED: TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN." ], "extractive_spans": [], "free_form_answer": "Best performing model is Spatial AI-LSTM with accuracy 81% and Std. Deviation 6%", "highlighted_evidence": [ "Table TABREF28 shows the comparative results among our proposed architectures and baselines.", "FLOAT SELECTED: TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "fa716cd87ce6fd6905e2f23f09b262e90413167f", "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "annotation_id": [ "1e0af788cd643a87f7739a660bcc54b48d108c02", "8bb99ae366df63c59958ee5ff16332663a8b6139", "d3cb51e0daefb68e9dbbe9f01da00ae1c86b13ec" ], "answer": [ { "evidence": [ "Inspired by the success of deep learning approaches in computer vision BIBREF20, we applied different deep learning architectures to model sign languages from both input modes (RGB and skeletal). Unlike traditional image classification or object detection models where neural networks learn hierarchical spatial features from data, sign recognition requires capture of temporal body motion.", "RNN has shown success in modeling sequential pattern in dataBIBREF6. It can capture temporal dynamics in data by maintaining an internal state. However, the basic RNN has problems dealing with long term dependencies in data due to the vanishing gradient problem BIBREF21. Some solutions to the vanishing gradient problem involve careful initialization of network parameters or early stopping BIBREF22. But the most effective solution is to modify the RNN architecture in such a way that there exists a memory state (cell state) at every time step that can identify what to remember and what to forget. This architecture is referred to as long short term memory (LSTM) network BIBREF23. While the basic RNN is a direct transformation of the previous state and the current input, the LSTM maintains an internal memory and has mechanisms to update and use that memory. This is achieved by deploying four separate neural networks also called gates. Figure FIGREF12 depicts a cell of an LSTM network which shows input at the current time step ${x_t}$ and the previous state ${h_{t-1}}$ enter into the cell; and get concatenated. The forget gate processes it to remove unnecessary information, and outputs ${f_t}$ which gets multiplied with the previously stored memory ${C_{t-1}}$ and produces a refined memory for the current time.", "Given a sample skeletal data of $R^{T \\times J \\times 3}$, where $T$ denotes time axis, $J$ is the number of body joints and the last dimension is the 3D coordinates of each joint. We flatten every dimension except time and at each time step we can feed a vector of size $R^{3 \\times J}$ as input. However, we have empirically verified that learning a sequential pattern for each coordinate axis independently and combining them later shows stronger classification performance. Based on this, we trained three different 2 layer LSTMs for data from x, y, and z coordinates separately; and concatenate their final embedding to produce softmax output. 
In this setting, each separate LSTM receives data as $R^{T \\times J}$ and final embedding size is $R^{3\\times S}$ where $S$ is the state size of LSTM cell. Figure FIGREF15 (a) shows the architecture where as a sample arrives, just before entering into main network, data along separate axis is split and entered into three different LSTM networks. The model concatenates the final state from each of the separate LSTM networks; followed by feeding this into the softmax layer for classification. This approach is referred by Axis Independent Architecture (AI-LSTM). Implementation details such as values of T and J are provided in the `Experiments' section.", "AI-LSTM, described in last section, works by modeling temporal dynamics of body joints' data over time. However, there can be spatial interactions with joints at a specific time step. It fails to capture any such interaction among joints in a given time. To incorporate spatial relationship among joints, we propose a simple novel data augmentation technique for skeletal data. We do this by origin transfer. For each frame in a gesture sample, we use each wrist joints as origin and transform all other joints' data by subtracting that origin from them. In this way spatial information is added to the input. We refer this model with spatial data augmentation as Spatial AI-LSTM. This augmentation technique is depicted in Figure FIGREF21. A sample data of form $R^{T \\times 6 \\times 3}$ results in a representation of $R^{T \\times 5 \\times 3}$ after subtracting left wrist joint (origin transfer). After this augmentation process, each sample is a $R^{20 \\times 16 \\times 3}$ matrix. Hence, each separate LSTM networks in our Spatial AI-LSTM network receives an input of $R^{20 \\times 16}$.", "We hypothesize that, some signs that have mostly similar skeletal motion pattern could be distinguishable using hand shape information. We propose a combination of LSTM and 3D CNN networks. We call this Max CNN-LSTM network. Figure FIGREF15 (b) represents the the Max CNN-LSTM. The details of 3D CNN module is shown in Figure FIGREF14. This architecture has two parts: one for left hand patches and other for right hand patches. Each part has four 3D convolutional layers (second and fourth layers have following maximum pooling layers) followed by 2 fully connected layers. Final embeddings from these two parts are concatenated and by using a softmax layer, from which a classification score is produced. The other AI-LSTM network is fed with skeletal time series data. At the final time step, the LSTM state vector is taken and using a softmax layer another probability score is produced. The final classification score is created by taking element wise maximum of the output scores from the two networks. During back–propagation, both networks are trained on their own score. The combined network acts like a model ensemble and some sign classes which are confused by RNN network alone might have an improved recognition accuracy with this approach.", "To deal with over-fitting, dropout was used for all networks except convolutional layers with probability of 0.5. In addition to dropout, L2 regularization was used for LSTM networks and for dense layers; $\\beta $ was set to 0.008 which controls the impact of regularization on the network. State size and number of layers of LSTM networks were 50 and 2, respectively. Learning rate for Max CNN-LSTM and LSTM networks were set to $0.00001$ and $0.00005$, respectively. We used Adam Optimizer for training our networks BIBREF24. 
All networks were run for a certain number of epochs (200-300) with a batch size of 64. We developed all of our models with Tensorflow 1.10 (python). Average time taken to train an AI-LSTM and an Spatial AI-LSTM are 25 and 30 minutes on an Intel(R) Core(TM) i5-7600 (3.50GHz) processor respectively. We trained 3D CNN and Max 3D CNN models on GPU (Tesla K80) and each model took around 20 hours to train." ], "extractive_spans": [ "Axis Independent Architecture (AI-LSTM)", "Spatial AI-LSTM", " Max CNN-LSTM", "3D CNN" ], "free_form_answer": "", "highlighted_evidence": [ "Inspired by the success of deep learning approaches in computer vision BIBREF20, we applied different deep learning architectures to model sign languages from both input modes (RGB and skeletal). ", " While the basic RNN is a direct transformation of the previous state and the current input, the LSTM maintains an internal memory and has mechanisms to update and use that memory. ", "Figure FIGREF15 (a) shows the architecture where as a sample arrives, just before entering into main network, data along separate axis is split and entered into three different LSTM networks. The model concatenates the final state from each of the separate LSTM networks; followed by feeding this into the softmax layer for classification. This approach is referred by Axis Independent Architecture (AI-LSTM). Implementation details such as values of T and J are provided in the `Experiments' section.", " To incorporate spatial relationship among joints, we propose a simple novel data augmentation technique for skeletal data. We do this by origin transfer. For each frame in a gesture sample, we use each wrist joints as origin and transform all other joints' data by subtracting that origin from them. In this way spatial information is added to the input. We refer this model with spatial data augmentation as Spatial AI-LSTM. This augmentation technique is depicted in Figure FIGREF21.", "We hypothesize that, some signs that have mostly similar skeletal motion pattern could be distinguishable using hand shape information. We propose a combination of LSTM and 3D CNN networks. We call this Max CNN-LSTM network. Figure FIGREF15 (b) represents the the Max CNN-LSTM. ", " The details of 3D CNN module is shown in Figure FIGREF14. This architecture has two parts: one for left hand patches and other for right hand patches. ", "We trained 3D CNN and Max 3D CNN models on GPU (Tesla K80) and each model took around 20 hours to train." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our Approach ::: Recurrent Neural Networks (RNN)", "RNN has shown success in modeling sequential pattern in dataBIBREF6. It can capture temporal dynamics in data by maintaining an internal state. However, the basic RNN has problems dealing with long term dependencies in data due to the vanishing gradient problem BIBREF21. Some solutions to the vanishing gradient problem involve careful initialization of network parameters or early stopping BIBREF22. But the most effective solution is to modify the RNN architecture in such a way that there exists a memory state (cell state) at every time step that can identify what to remember and what to forget. This architecture is referred to as long short term memory (LSTM) network BIBREF23. While the basic RNN is a direct transformation of the previous state and the current input, the LSTM maintains an internal memory and has mechanisms to update and use that memory. This is achieved by deploying four separate neural networks also called gates. 
Figure FIGREF12 depicts a cell of an LSTM network which shows input at the current time step ${x_t}$ and the previous state ${h_{t-1}}$ enter into the cell; and get concatenated. The forget gate processes it to remove unnecessary information, and outputs ${f_t}$ which gets multiplied with the previously stored memory ${C_{t-1}}$ and produces a refined memory for the current time.", "Our Approach ::: 3D Convolutional Neural Network", "Traditional convolutional neural network (CNN) is two dimensional in which each layer has a stack of 2D feature maps generated from previous layer or from inputs in case of first layer. A layer also has a certain numbers of filters which are rectangular patches of parameters. Each filter convolves over the stack of 2D feature maps at previous layer and produces feature maps (equal to the number of filters in the current layer) at current layer. The operation is given by Equation DISPLAY_FORM17 where $F_{i,j}^{l}$ denotes the value of feature map at $l^{th}$ layer at location $(i,j)$. $\\odot $ represents dot product of filter $W$ and associated feature map patch in previous layer.", "Standard CNN fails to capture the temporal information associated with data, which is important in video or any type of sequential data representation. To solve this problem, 3D convolution was introduced in BIBREF2. The key difference is that kernels are 3D and sub sampling (pooling) layers work across three dimensions.", "Our Approach ::: Axis Independent LSTM", "Given a sample skeletal data of $R^{T \\times J \\times 3}$, where $T$ denotes time axis, $J$ is the number of body joints and the last dimension is the 3D coordinates of each joint. We flatten every dimension except time and at each time step we can feed a vector of size $R^{3 \\times J}$ as input. However, we have empirically verified that learning a sequential pattern for each coordinate axis independently and combining them later shows stronger classification performance. Based on this, we trained three different 2 layer LSTMs for data from x, y, and z coordinates separately; and concatenate their final embedding to produce softmax output. In this setting, each separate LSTM receives data as $R^{T \\times J}$ and final embedding size is $R^{3\\times S}$ where $S$ is the state size of LSTM cell. Figure FIGREF15 (a) shows the architecture where as a sample arrives, just before entering into main network, data along separate axis is split and entered into three different LSTM networks. The model concatenates the final state from each of the separate LSTM networks; followed by feeding this into the softmax layer for classification. This approach is referred by Axis Independent Architecture (AI-LSTM). Implementation details such as values of T and J are provided in the `Experiments' section.", "Our Approach ::: Spatial AI-LSTM", "AI-LSTM, described in last section, works by modeling temporal dynamics of body joints' data over time. However, there can be spatial interactions with joints at a specific time step. It fails to capture any such interaction among joints in a given time. To incorporate spatial relationship among joints, we propose a simple novel data augmentation technique for skeletal data. We do this by origin transfer. For each frame in a gesture sample, we use each wrist joints as origin and transform all other joints' data by subtracting that origin from them. In this way spatial information is added to the input. We refer this model with spatial data augmentation as Spatial AI-LSTM. 
This augmentation technique is depicted in Figure FIGREF21. A sample data of form $R^{T \\times 6 \\times 3}$ results in a representation of $R^{T \\times 5 \\times 3}$ after subtracting left wrist joint (origin transfer). After this augmentation process, each sample is a $R^{20 \\times 16 \\times 3}$ matrix. Hence, each separate LSTM networks in our Spatial AI-LSTM network receives an input of $R^{20 \\times 16}$.", "Our Approach ::: Combined Network", "We hypothesize that, some signs that have mostly similar skeletal motion pattern could be distinguishable using hand shape information. We propose a combination of LSTM and 3D CNN networks. We call this Max CNN-LSTM network. Figure FIGREF15 (b) represents the the Max CNN-LSTM. The details of 3D CNN module is shown in Figure FIGREF14. This architecture has two parts: one for left hand patches and other for right hand patches. Each part has four 3D convolutional layers (second and fourth layers have following maximum pooling layers) followed by 2 fully connected layers. Final embeddings from these two parts are concatenated and by using a softmax layer, from which a classification score is produced. The other AI-LSTM network is fed with skeletal time series data. At the final time step, the LSTM state vector is taken and using a softmax layer another probability score is produced. The final classification score is created by taking element wise maximum of the output scores from the two networks. During back–propagation, both networks are trained on their own score. The combined network acts like a model ensemble and some sign classes which are confused by RNN network alone might have an improved recognition accuracy with this approach." ], "extractive_spans": [ "Recurrent Neural Networks (RNN)", "3D Convolutional Neural Network", "Axis Independent LSTM", "Spatial AI-LSTM", "Max CNN-LSTM network" ], "free_form_answer": "", "highlighted_evidence": [ "Recurrent Neural Networks (RNN)\nRNN has shown success in modeling sequential pattern in dataBIBREF6.", "3D Convolutional Neural Network\nTraditional convolutional neural network (CNN) is two dimensional in which each layer has a stack of 2D feature maps generated from previous layer or from inputs in case of first layer.", "Standard CNN fails to capture the temporal information associated with data, which is important in video or any type of sequential data representation. To solve this problem, 3D convolution was introduced in BIBREF2.", "Axis Independent LSTM\nGiven a sample skeletal data of $R^{T \\times J \\times 3}$, where $T$ denotes time axis, $J$ is the number of body joints and the last dimension is the 3D coordinates of each joint. We flatten every dimension except time and at each time step we can feed a vector of size $R^{3 \\times J}$ as input. However, we have empirically verified that learning a sequential pattern for each coordinate axis independently and combining them later shows stronger classification performance. Based on this, we trained three different 2 layer LSTMs for data from x, y, and z coordinates separately; and concatenate their final embedding to produce softmax output.", "Spatial AI-LSTM\nAI-LSTM, described in last section, works by modeling temporal dynamics of body joints' data over time. However, there can be spatial interactions with joints at a specific time step.", "Combined Network\nWe hypothesize that, some signs that have mostly similar skeletal motion pattern could be distinguishable using hand shape information. 
We propose a combination of LSTM and 3D CNN networks. We call this Max CNN-LSTM network." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We propose an RNN architecture with a novel spatial data augmentation technique.", "We propose an architecture which uses both RGB and skeletal data to improve recognition accuracy.", "Our Approach ::: 3D Convolutional Neural Network", "Our Approach ::: Axis Independent LSTM", "Our Approach ::: Spatial AI-LSTM" ], "extractive_spans": [], "free_form_answer": "3D CNN, Axis independent LSTM, spatial axis independent LSTM, and combined network ", "highlighted_evidence": [ "We propose an RNN architecture with a novel spatial data augmentation technique.\n\nWe propose an architecture which uses both RGB and skeletal data to improve recognition accuracy.", "Our Approach ::: 3D Convolutional Neural Network", "Our Approach ::: Axis Independent LSTM", "Our Approach ::: Spatial AI-LSTM" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f", "258ee4069f740c400c0049a2580945a1cc7f044c", "fa716cd87ce6fd6905e2f23f09b262e90413167f" ] } ], "nlp_background": [ "two", "two", "two" ], "paper_read": [ "no", "no", "no" ], "question": [ "What is the sign language recognition task investigated?", "What is the performance of the best model in the sign language recognition task?", "What are the deep learning architectures used?" ], "question_id": [ "1a0794ebbc9ee61bbb7ef2422d576a10576d9d96", "256dfa501a71d7784520a527f43aec0549b1afea", "f85520bbc594918968d7d9f33d11639055458344" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "search_query": [ "language recognition", "language recognition", "language recognition" ], "topic_background": [ "familiar", "familiar", "familiar" ] }
{ "caption": [ "Fig. 1. T-SNE representation of 11824 data samples from 51 different sign classes. Best viewed in color.", "Fig. 2. a) Distribution of number of samples per gesture class. b) Distribution of gesture sample duration; X-axis represents gesture length in terms of number of frames in a video.", "Fig. 3. Visualization of hand shapes and skeletal joints of two sign classes. Top panel shows the sign Alarm and middle panel shows the sign Doorbell. For each sign, first two rows are the left and right hand image patches and third row is the skeletal configuration. We can see for Alarm and Doorbell, the skeletal motion is almost similar but has different hand shapes. Bottom panel shows another sign Weather which has quite distinguishable skeletal motion from top two.", "Fig. 4. An LSTM cell. Four circles represent four different neural network which act as gates.", "Fig. 5. Used 3D CNN architecture for this work. It consists of four 3D convolutional layers and two fully connected layers at the end. There are two separate networks for left and right hands. Final embedding of these two networks are concatenated before producing softmax score. Feature map dimensions after each layer are shown in the middle.", "Fig. 6. Proposed architectures. Fig (a): Axis independent LSTM network where data from each axis enters into different LSTM networks and at the end we take the concatenation of individual states. Fig (b): Combined architecture. Here 3D CNN symbolizes the architecture we presented in Figure 5. Here both CNN and LSTM network model data separately. At the end we take the maximum of probability scores produced by both network.", "Fig. 7. Spatial data augmentation.", "Fig. 8. Seven sampled frames from a sign of class Air Condition. Top two panels show cropped hand patches while bottom panel shows skeletal configuration of corresponding frames.", "TABLE I AVERAGE CROSS SUBJECT (CS) ACCURACY ACROSS ALL TEST SUBJECTS FOR DIFFERENT PROPOSED ARCHITECTURES AND BASELINES. STANDARD DEVIATION ACROSS TEST SUBJECTS’ ACCURACY IS ALSO SHOWN.", "Fig. 9. Confusion matrix for a subset of sign classes from a subject for AI-LSTM, Max CNN-LSTM and Spatial AI-LSTM from top to bottom respectively. Mentioned signs are a subset of 51 sign classes.", "Fig. 10. Effect of adding data to the training from test subject in Spatial AI-LSTM model. X axis is the fraction of test subject’s data used in training. Y axis is the test accuracy." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "4-Figure5-1.png", "5-Figure6-1.png", "5-Figure7-1.png", "6-Figure8-1.png", "6-TableI-1.png", "7-Figure9-1.png", "7-Figure10-1.png" ] }
[ "What is the sign language recognition task investigated?", "What is the performance of the best model in the sign language recognition task?", "What are the deep learning architectures used?" ]
[ [ "1909.11232-Introduction-1", "1909.11232-Introduction-0", "1909.11232-Conclusion-0" ], [ "1909.11232-6-TableI-1.png", "1909.11232-Experiments ::: Experimental Results-0" ], [ "1909.11232-Introduction-3", "1909.11232-Our Approach ::: Axis Independent LSTM-0", "1909.11232-Introduction-4", "1909.11232-Experiments ::: Training Details-0", "1909.11232-Our Approach-0", "1909.11232-Our Approach ::: 3D Convolutional Neural Network-0", "1909.11232-Our Approach ::: Recurrent Neural Networks (RNN)-0", "1909.11232-Our Approach ::: 3D Convolutional Neural Network-1", "1909.11232-Our Approach ::: Spatial AI-LSTM-0", "1909.11232-Our Approach ::: Combined Network-0" ] ]
[ " American Sign Language recognition ", "Best performing model is Spatial AI-LSTM with accuracy 81% and Std. Deviation 6%", "3D CNN, Axis independent LSTM, spatial axis independent LSTM, and combined network " ]
82
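The record above describes a spatial data augmentation for the Spatial AI-LSTM: each wrist joint is used as an origin and subtracted from the remaining joints per frame, so a sample of shape 20 x 6 x 3 becomes 20 x 16 x 3. Below is a minimal numpy sketch of that origin transfer under the stated shapes; the wrist joint indices are placeholders, since the record does not give the joint ordering.

```python
import numpy as np

def origin_transfer(sample, left_wrist=4, right_wrist=5):
    """sample: (T, J, 3) array of 3D joint coordinates for one gesture.
    Returns (T, J + 2*(J-1), 3): the original joints plus the joints
    re-expressed relative to each wrist (the wrist itself is dropped
    after subtraction, as in the record's R^{T x 6 x 3} -> R^{T x 5 x 3} step)."""
    augmented = [sample]
    for wrist in (left_wrist, right_wrist):
        origin = sample[:, wrist:wrist + 1, :]        # (T, 1, 3) wrist position
        others = np.delete(sample, wrist, axis=1)     # (T, J-1, 3) remaining joints
        augmented.append(others - origin)             # wrist-relative coordinates
    return np.concatenate(augmented, axis=1)

# a 20-frame sample with 6 upper-body joints becomes 20 x 16 x 3,
# matching the augmented shape quoted in the evidence
sample = np.random.randn(20, 6, 3)
print(origin_transfer(sample).shape)                  # (20, 16, 3)
```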
1808.09180
What do character-level models learn about morphology? The case of dependency parsing
When parsing morphologically-rich languages with neural models, it is beneficial to model input at the character level, and it has been claimed that this is because character-level models learn morphology. We test these claims by comparing character-level models to an oracle with access to explicit morphological analysis on twelve languages with varying morphological typologies. Our results highlight many strengths of character-level models, but also show that they are poor at disambiguating some words, particularly in the face of case syncretism. We then demonstrate that explicitly modeling morphological case improves our best model, showing that character-level models can benefit from targeted forms of explicit morphological modeling.
{ "paragraphs": [ [ "Modeling language input at the character level BIBREF0 , BIBREF1 is effective for many NLP tasks, and often produces better results than modeling at the word level. For parsing, ballesteros-dyer-smith:2015:EMNLP have shown that character-level input modeling is highly effective on morphologically-rich languages, and the three best systems on the 45 languages of the CoNLL 2017 shared task on universal dependency parsing all use character-level models BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , showing that they are effective across many typologies.", "The effectiveness of character-level models in morphologically-rich languages has raised a question and indeed debate about explicit modeling of morphology in NLP. BIBREF0 propose that “prior information regarding morphology ... among others, should be incorporated” into character-level models, while BIBREF6 counter that it is “unnecessary to consider these prior information” when modeling characters. Whether we need to explicitly model morphology is a question whose answer has a real cost: as ballesteros-dyer-smith:2015:EMNLP note, morphological annotation is expensive, and this expense could be reinvested elsewhere if the predictive aspects of morphology are learnable from strings.", "Do character-level models learn morphology? We view this as an empirical claim requiring empirical evidence. The claim has been tested implicitly by comparing character-level models to word lookup models BIBREF7 , BIBREF8 . In this paper, we test it explicitly, asking how character-level models compare with an oracle model with access to morphological annotations. This extends experiments showing that character-aware language models in Czech and Russian benefit substantially from oracle morphology BIBREF9 , but here we focus on dependency parsing (§ \"Dependency parsing model\" )—a task that benefits substantially from morphological knowledge—and we experiment with twelve languages using a variety of techniques to probe our models.", "Our summary finding is that character-level models lag the oracle in nearly all languages (§ \"Experiments\" ). The difference is small, but suggests that there is value in modeling morphology. When we tease apart the results by part of speech and dependency type, we trace the difference back to the character-level model's inability to disambiguate words even when encoded with arbitrary context (§ \"Analysis\" ). Specifically, it struggles with case syncretism, in which noun case—and thus syntactic function—is ambiguous. We show that the oracle relies on morphological case, and that a character-level model provided only with morphological case rivals the oracle, even when case is provided by another predictive model (§ \"Characters and case syncretism\" ). Finally, we show that the crucial morphological features vary by language (§ \"Understanding head selection\" )." ], [ "We use a neural graph-based dependency parser combining elements of two recent models BIBREF10 , BIBREF11 . Let $w = w_1, \\dots , w_{|w|}$ be an input sentence of length $|w|$ and let $w_0$ denote an artificial Root token. We represent the $i$ th input token $w_i$ by concatenating its word representation (§ \"Computing word representations\" ), $\\textbf {e}(w_i)$ and part-of-speech (POS) representation, $\\textbf {p}_i$ . Using a semicolon $(;)$ to denote vector concatenation, we have: ", "$$\\textbf {x}_i = [\\textbf {e}(w_i);\\textbf {p}_i]$$ (Eq. 
2) ", " We call $\\textbf {x}_i$ the embedding of $w_i$ since it depends on context-independent word and POS representations. We obtain a context-sensitive encoding $\\textbf {h}_i$ with a bidirectional LSTM (bi-LSTM), which concatenates the hidden states of a forward and backward LSTM at position $i$ . Using $\\textbf {h}_i^f$ and $\\textbf {h}_i^b$ respectively to denote these hidden states, we have: ", "$$\\textbf {h}_i = [\\textbf {h}_i^f;\\textbf {h}_i^b]$$ (Eq. 3) ", " We use $\\textbf {h}_i$ as the final input representation of $w_i$ ." ], [ "For each word $w_i$ , we compute a distribution over all other word positions $j \\in \\lbrace 0,...,|w|\\rbrace /i$ denoting the probability that $w_j$ is the headword of $w_i$ . ", "$$P_{head}(w_j \\mid w_i,w) = \\frac{\\exp (a(\\textbf {h}_i, \\textbf {h}_j))}{\\sum _{j^{\\prime }=0}^{|w|} \\exp (a(\\textbf {h}_i, \\textbf {h}_{j^{\\prime }}))}$$ (Eq. 5) ", " Here, $a$ is a neural network that computes an association between $w_i$ and $w_j$ using model parameters $\\textbf {U}_a, \\textbf {W}_a,$ and $\\textbf {v}_a$ . ", "$$a(\\textbf {h}_i, \\textbf {h}_j) = \\textbf {v}_a \\tanh (\\textbf {U}_a \\textbf {h}_i + \\textbf {W}_a \\textbf {h}_j)$$ (Eq. 6) " ], [ "Given a head prediction for word $w_i$ , we predict its syntactic label $\\ell _k \\in L$ using a similar network. ", "$$P_{label}(\\ell _k \\mid w_i, w_j, w) = \\frac{\\exp (f(\\textbf {h}_i, \\textbf {h}_j)[k])}{\\sum _{k^{\\prime }=1}^{|L|} \\exp (f(\\textbf {h}_i, \\textbf {h}_{j})[k^{\\prime }])}$$ (Eq. 8) ", " where $L$ is the set of output labels and $f$ is a function that computes label score using model parameters $\\textbf {U}_\\ell , \\textbf {W}_\\ell ,$ and $\\textbf {V}_\\ell $ : ", "$$f(\\textbf {h}_i, \\textbf {h}_j) = \\textbf {V}_\\ell \\tanh (\\textbf {U}_\\ell \\textbf {h}_i + \\textbf {W}_\\ell \\textbf {h}_j)$$ (Eq. 9) ", " The model is trained to minimize the summed cross-entropy losses of both head and label prediction. At test time, we use the Chu-Liu-Edmonds BIBREF12 , BIBREF13 algorithm to ensure well-formed, possibly non-projective trees." ], [ "We consider several ways to compute the word representation $\\textbf {e}({w_i})$ in Eq. 2 :", "Every word type has its own learned vector representation.", "Characters are composed using a bi-LSTM BIBREF0 , and the final states of the forward and backward LSTMs are concatenated to yield the word representation.", "Characters are composed using a convolutional neural network BIBREF1 .", "Character trigrams are composed using a bi-LSTM, an approach that we previously found to be effective across typologies BIBREF9 .", "We treat the morphemes of a morphological annotation as a sequence and compose them using a bi-LSTM. We only use universal inflectional features defined in the UD annotation guidelines. For example, the morphological annotation of “chases” is $\\langle $ chase, person=3rd, num-SG, tense=Pres $\\rangle $ .", "For the remainder of the paper, we use the name of model as shorthand for the dependency parser that uses that model as input (Eq. 2 ).", "We experiment on twelve languages with varying morphological typologies (Table 1 ) in the Universal Dependencies (UD) treebanks version 2.0 BIBREF14 . Note that while Arabic and Hebrew follow a root & pattern typology, their datasets are unvocalized, which might reduce the observed effects of this typology. Following common practice, we remove language-specific dependency relations and multiword token annotations. 
We use gold sentence segmentation, tokenization, universal POS (UPOS), and morphological (XFEATS) annotations provided in UD.", "Our Chainer BIBREF15 implementation encodes words (Eq. 3 ) in two-layer bi-LSTMs with 200 hidden units, and uses 100 hidden units for head and label predictions (output of Eqs. 4 and 6). We set batch size to 16 for char-cnn and 32 for other models following a grid search. We apply dropout to the embeddings (Eq. 2 ) and the input of the head prediction. We use Adam optimizer with initial learning rate 0.001 and clip gradients to 5, and train all models for 50 epochs with early stopping. For the word model, we limit our vocabulary to the 20K most frequent words, replacing less frequent words with an unknown word token. The char-lstm, trigram-lstm, and oracle models use a one-layer bi-LSTM with 200 hidden units to compose subwords. For char-cnn, we use the small model setup of kim2015.", "Table 2 presents test results for every model on every language, establishing three results. First, they support previous findings that character-level models outperform word-based models—indeed, the char-lstm model outperforms the word model on LAS for all languages except Hindi and Urdu for which the results are identical. Second, they establish strong baselines for the character-level models: the char-lstm generally obtains the best parsing accuracy, closely followed by char-cnn. Third, they demonstrate that character-level models rarely match the accuracy of an oracle model with access to explicit morphology. This reinforces a finding of BIBREF9 : character-level models are effective tools, but they do not learn everything about morphology, and they seem to be closer to oracle accuracy in agglutinative rather than in fusional languages.", "In character-level models, orthographically similar words share many parameters, so we would expect these models to produce good representations of OOV words that are morphological variants of training words. Does this effect explain why they are better than word-level models?", "Table 3 shows how the character model improves over the word model for both non-OOV and OOV words. On the agglutinative languages Finnish and Turkish, where the OOV rates are 23% and 24% respectively, we see the highest LAS improvements, and we see especially large improvements in accuracy of OOV words. However, the effects are more mixed in other languages, even with relatively high OOV rates. In particular, languages with rich morphology like Czech, Russian, and (unvocalised) Arabic see more improvement than languages with moderately rich morphology and high OOV rates like Portuguese or Spanish. This pattern suggests that parameter sharing between pairs of observed training words can also improve parsing performance. For example, if “dog” and “dogs” are observed in the training data, they will share activations in their context and on their common prefix.", "Let's turn to our main question: what do character-level models learn about morphology? To answer it, we compare the oracle model to char-lstm, our best character-level model.", "In the oracle, morphological annotations disambiguate some words that the char-lstm must disambiguate from context. Consider these Russian sentences from baerman-brown-corbett-2005:", " Maša čitaet pisˊmo", "Masha reads letter", "`Masha reads a letter.'", " Na stole ležit pisˊmo", "on table lies letter", "`There's a letter on the table.' Pisˊmo (“letter”) acts as the subject in ( UID28 ), and as object in ( UID28 ). 
This knowledge is available to the oracle via morphological case: in ( UID28 ), the case of pisˊmo is nominative and in ( UID28 ) it is accusative. Could this explain why the oracle outperforms the character model?", "To test this, we look at accuracy for word types that are empirically ambiguous—those that have more than one morphological analysis in the training data. Note that by this definition, some ambiguous words will be seen as unambiguous, since they were seen with only one analysis. To make the comparison as fair as possible, we consider only words that were observed in the training data. Figure 1 compares the improvement of the oracle on ambiguous and seen unambiguous words, and as expected we find that handling of ambiguous words improves with the oracle in almost all languages. The only exception is Turkish, which has the least training data.", "Now we turn to a more fine-grained analysis conditioned on the annotated part-of-speech (POS) of the dependent. We focus on four languages where the oracle strongly outperforms the best character-level model on the development set: Finnish, Czech, German, and Russian. We consider five POS categories that are frequent in all languages and consistently annotated for morphology in our data: adjective (ADJ), noun (NOUN), pronoun (PRON), proper noun (PROPN), and verb (VERB).", "Table 4 shows that the three noun categories—ADJ, PRON, and PROPN—benefit substantially from oracle morphology, especially for the three fusional languages: Czech, German, and Russian.", "We analyze results by the dependency type of the dependent, focusing on types that interact with morphology: root, nominal subjects (nsubj), objects (obj), indirect objects (iobj), nominal modifiers (nmod), adjectival modifier (amod), obliques (obl), and (syntactic) case markings (case).", "Figure 2 shows the differences in the confusion matrices of the char-lstm and oracle for those words on which both models correctly predict the head. The differences on Finnish are small, which we expect from the similar overall LAS of both models. But for the fusional languages, a pattern emerges: the char-lstm consistently underperforms the oracle on nominal subject, object, and indirect object dependencies—labels closely associated with noun categories. From inspection, it appears to frequently mislabel objects as nominal subjects when the dependent noun is morphologically ambiguous. For example, in the sentence of Figure 3 , Gelände (“terrain”) is an object, but the char-lstm incorrectly predicts that it is a nominal subject. In the training data, Gelände is ambiguous: it can be accusative, nominative, or dative.", "In German, the char-lstm frequently confuses objects and indirect objects. By inspection, we found 21 mislabeled cases, where 20 of them would likely be correct if the model had access to morphological case (usually dative). In Czech and Russian, the results are more varied: indirect objects are frequently mislabeled as objects, obliques, nominal modifiers, and nominal subjects. We note that indirect objects are relatively rare in these data, which may partly explain their frequent mislabeling.", "So far, we've seen that for our three fusional languages—German, Czech, and Russian—the oracle strongly outperforms a character model on nouns with ambiguous morphological analyses, particularly on core dependencies: nominal subjects, objects and indirect objects. 
Since the nominative, accusative, and dative morphological cases are strongly (though not perfectly) correlated with these dependencies, it is easy to see why the morphologically-aware oracle is able to predict them so well. We hypothesized that these cases are more challenging for the character model because these languages feature a high degree of syncretism—functionally distinct words that have the same form—and in particular case syncretism. For example, referring back to examples ( UID28 ) and ( UID28 ), the character model must disambiguate pisˊmo from its context, whereas the oracle can directly disambiguate it from a feature of the word itself.", "To understand this, we first designed an experiment to see whether the char-lstm could successfully disambiguate noun case, using a method similar to BIBREF8 . We train a neural classifier that takes as input a word representation from the trained parser and predicts a morphological feature of that word—for example that its case is nominative (Case=Nom). The classifier is a feedforward neural network with one hidden layer, followed by a ReLU non-linearity. We consider two representations of each word: its embedding ( $\\textbf {x}_i$ ; Eq. 2 ) and its encoding ( $\\textbf {h}_i$ ; Eq. 3 ). To understand the importance of case, we consider it alongside number and gender features as well as whole feature bundles.", "Table 5 shows the results of morphological feature classification on Czech; we found very similar results in German and Russian (Appendix \"Results on morphological tagging\" ). The oracle embeddings have almost perfect accuracy—and this is just what we expect, since the representation only needs to preserve information from its input. The char-lstm embeddings perform well on number and gender, but less well on case. This results suggest that the character-level models still struggle to learn case when given only the input text. Comparing the char-lstm with a baseline model which predicts the most frequent feature for each type in the training data, we observe that both of them show similar trends even though character models slightly outperforms the baseline model.", "The classification results from the encoding are particularly interesting: the oracle still performs very well on morphological case, but less well on other features, even though they appear in the input. In the character model, the accuracy in morphological prediction also degrades in the encoding—except for case, where accuracy on case improves by 12%.", "These results make intuitive sense: representations learn to preserve information from their input that is useful for subsequent predictions. In our parsing model, morphological case is very useful for predicting dependency labels, and since it is present in the oracle's input, it is passed almost completely intact through each representation layer. The character model, which must disambiguate case from context, draws as much additional information as it can from surrounding words through the LSTM encoder. But other features, and particularly whole feature bundles, are presumably less useful for parsing, so neither model preserves them with the same fidelity.", "Our analysis indicates that case is important for parsing, so it is natural to ask: Can we improve the neural model by explicitly modeling case? 
To answer this question, we ran a set of experiments, considering two ways to augment the char-lstm with case information: multitask learning BIBREF16 and a pipeline model in which we augment the char-lstm model with either predicted or gold case. For example, we use $\\langle $ p, i, z, z, a, Nom $\\rangle $ to represent pizza with nominative case. For MTL, we follow the setup of BIBREF17 and BIBREF18 . We increase the biLSTMs layers from two to four and use the first two layers to predict morphological case, leaving out the other two layers specific only for parser. For the pipeline model, we train a morphological tagger to predict morphological case (Appendix \"Morphological tagger\" ). This tagger does not share parameters with the parser.", "Table 6 summarizes the results on Czech, German, and Russian. We find augmenting the char-lstm model with either oracle or predicted case improve its accuracy, although the effect is different across languages. The improvements from predicted case results are interesting, since in non-neural parsers, predicted case usually harms accuracy BIBREF19 . However, we note that our taggers use gold POS, which might help. The MTL models achieve similar or slightly better performance than the character-only models, suggesting that supplying case in this way is beneficial. Curiously, the MTL parser is worse than the the pipeline parser, but the MTL case tagger is better than the pipeline case tagger (Table 7 ). This indicates that the MTL model must learn to encode case in the model's representation, but must not learn to effectively use it for parsing. Finally, we observe that augmenting the char-lstm with either gold or predicted case improves the parsing performance for all languages, and indeed closes the performance gap with the full oracle, which has access to all morphological features. This is especially interesting, because it shows using carefully targeted linguistic analyses can improve accuracy as much as wholesale linguistic analysis.", "The previous experiments condition their analysis on the dependent, but dependency is a relationship between dependents and heads. We also want to understand the importance of morphological features to the head. Which morphological features of the head are important to the oracle?", "To see which morphological features the oracle depends on when making predictions, we augmented our model with a gated attention mechanism following kuncoro-EtAl:2017:EACLlong. Our new model attends to the morphological features of candidate head $w_j$ when computing its association with dependent $w_i$ (Eq. 5 ), and morpheme representations are then scaled by their attention weights to produce a final representation.", "Let $f_{i1}, \\cdots , f_{ik}$ be the $k$ morphological features of $w_i$ , and denote by $\\textbf {f}_{i1}, \\cdots , \\textbf {f}_{ik}$ their corresponding feature embeddings. As in § \"Dependency parsing model\" , $\\textbf {h}_i$ and $\\textbf {h}_j$ are the encodings of $w_i$ and $w_j$ , respectively. The morphological representation $\\textbf {m}_j$ of $w_j$ is: ", "$$\\textbf {m}_j = [\\textbf {f}_{j1}, \\cdots , \\textbf {f}_{jk}]^\\top \\textbf {k}$$ (Eq. 43) ", " where $\\textbf {k}$ is a vector of attention weights: ", "$$\\textbf {k} = \\textrm {softmax}([\\textbf {f}_{j1}, \\cdots , \\textbf {f}_{jk}]^\\top \\textbf {V} \\textbf {h}_i )$$ (Eq. 44) ", " The intuition is that dependent $w_i$ can choose which morphological features of $w_j$ are most important when deciding whether $w_j$ is its head. 
Note that this model is asymmetric: a word only attends to the morphological features of its (single) parent, and not its (many) children, which may have different functions. ", "We combine the morphological representation with the word's encoding via a sigmoid gating mechanism. ", "$$\\textbf {z}_j &= \\textbf {g} \\odot \\textbf {h}_j + (1 - \\textbf {g}) \\odot \\textbf {m}_j\\\\\n\\textbf {g} & = \\sigma (\\textbf {W}_1 \\textbf {h}_j + \\textbf {W}_2 \\textbf {m}_j)$$ (Eq. 46) ", " where $\\odot $ denotes element-wise multiplication. The gating mechanism allows the model to choose between the computed word representation and the weighted morphological representations, since for some dependencies, morphological features of the head might not be important. In the final model, we replace Eq. 5 and Eq. 6 with the following: ", "$$P_{head}(w_j|w_i, w) = \\frac{\\exp (a(\\textbf {h}_i, \\textbf {z}_j))}{\\sum _{j^{\\prime }=0}^N \\exp a(\\textbf {h}_i, \\textbf {z}_{j^{\\prime }})} \\\\\na(\\textbf {h}_i, \\textbf {z}_j) = \\textbf {v}_a \\tanh (\\textbf {U}_a \\textbf {h}_i + \\textbf {W}_a \\textbf {z}_j)$$ (Eq. 47) ", " The modified label prediction is: ", "$$P_{label}(\\ell _k|w_i, w_j, w) = \\frac{\\exp (f(\\textbf {h}_i, \\textbf {z}_j)[k])}{\\sum _{k^{\\prime }=0}^{|L|} \\exp (f(\\textbf {h}_i, \\textbf {z}_{j})[k^{\\prime }])}$$ (Eq. 48) ", " where $f$ is again a function to compute label score: ", "$$f(\\textbf {h}_i, \\textbf {z}_j) = \\textbf {V}_\\ell \\tanh (\\textbf {U}_\\ell \\textbf {h}_i + \\textbf {W}_\\ell \\textbf {z}_j)$$ (Eq. 49) ", "We trained our augmented model (oracle-attn) on Finnish, German, Czech, and Russian. Its accuracy is very similar to the oracle model (Table 8 ), so we obtain a more interpretable model with no change to our main results.", "Next, we look at the learned attention vectors to understand which morphological features are important, focusing on the core arguments: nominal subjects, objects, and indirect objects. Since our model knows the case of each dependent, this enables us to understand what features it seeks in potential heads for each case. For simplicity, we only report results for words where both head and label predictions are correct.", "Figure 4 shows how attention is distributed across multiple features of the head word. In Czech and Russian, we observe that the model attends to Gender and Number when the noun is in nominative case. This makes intuitive sense since these features often signal subject-verb agreement. As we saw in earlier experiments, these are features for which a character model can learn reliably good representations. For most other dependencies (and all dependencies in German), Lemma is the most important feature, suggesting a strong reliance on lexical semantics of nouns and verbs. However, we also notice that the model sometimes attends to features like Aspect, Polarity, and VerbForm—since these features are present only on verbs, we suspect that the model may simply use them as convenient signals that a word is verb, and thus a likely head for a given noun.", "Character-level models are effective because they can represent OOV words and orthographic regularities of words that are consistent with morphology. But they depend on context to disambiguate words, and for some words this context is insufficient. 
Case syncretism is a specific example that our analysis identified, but the main results in Table 2 hint at the possibility that different phenomena are at play in different languages.", "While our results show that prior knowledge of morphology is important, they also show that it can be used in a targeted way: our character-level models improved markedly when we augmented them only with case. This suggests a pragmatic reality in the middle of the wide spectrum between pure machine learning from raw text input and linguistically-intensive modeling: our new models don't need all prior linguistic knowledge, but they clearly benefit from some knowledge in addition to raw input. While we used a data-driven analysis to identify case syncretism as a problem for neural parsers, this result is consistent with previous linguistically-informed analyses BIBREF20 , BIBREF19 . We conclude that neural models can still benefit from linguistic analyses that target specific phenomena where annotation is likely to be useful.", "Clara Vania is supported by the Indonesian Endowment Fund for Education (LPDP), the Centre for Doctoral Training in Data Science, funded by the UK EPSRC (grant EP/L016427/1), and the University of Edinburgh. We would like to thank Yonatan Belinkov for the helpful discussion regarding morphological tagging experiments. We thank Sameer Bansal, Marco Damonte, Denis Emelin, Federico Fancellu, Sorcha Gilroy, Jonathan Mallinson, Joana Ribeiro, Naomi Saphra, Ida Szubert, Sabine Weber, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of the paper.", "We adapt the parser's encoder architecture for our morphological tagger. Following notation in Section \"Dependency parsing model\" , each word $w_i$ is represented by its context-sensitive encoding, $\\textbf {h}_i$ (Eq. 3 ). The encodings are then fed into a feed-forward neural network with two hidden layers—each has a ReLU non-linearity—and an output layer mapping the to the morphological tags, followed by a softmax. We set the size of the hidden layer to 100 and use dropout probability 0.2. We use Adam optimizer with initial learning rate 0.001 and clip gradients to 5. We train each model for 20 epochs with early stopping. The model is trained to minimized the cross-entropy loss.", "Since we do not have additional data with the same annotations, we use the same UD dataset to train our tagger. To prevent overfitting, we only use the first 75% of training data for training. After training the taggers, we predict the case for the training, development, and test sets and use them for dependency parsing.", "Table 9 and 10 present morphological tagging results for German and Russian. We found that German and Russian have similar pattern to Czech (Table 5 ), where morphological case seems to be preserved in the encoder because they are useful for dependency parsing. In these three fusional languages, contextual information helps character-level model to predict the correct case. However, its performance still behind the oracle.", "We observe a slightly different pattern on Finnish results (Table 11 ). The character embeddings achieves almost similar performance as the oracle embeddings. This results highlights the differences in morphological process between Finnish and the other fusional languages. We observe that performance of the encoder representations are slightly worse than the embeddings." 
] ], "section_name": [ "Introduction", "Dependency parsing model", "Head prediction", "Label prediction", "Computing word representations" ] }
{ "answers": [ { "annotation_id": [ "5618d56de6538b95cb0dcab3c8d5a3875b106d6f", "77c04dd02f8b4375846bbe2c25e4b489cfaaa51a", "d9d726a4568de4b7407bef4865eb78cdfd0b377e" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "The effectiveness of character-level models in morphologically-rich languages has raised a question and indeed debate about explicit modeling of morphology in NLP. BIBREF0 propose that “prior information regarding morphology ... among others, should be incorporated” into character-level models, while BIBREF6 counter that it is “unnecessary to consider these prior information” when modeling characters. Whether we need to explicitly model morphology is a question whose answer has a real cost: as ballesteros-dyer-smith:2015:EMNLP note, morphological annotation is expensive, and this expense could be reinvested elsewhere if the predictive aspects of morphology are learnable from strings." ], "extractive_spans": [], "free_form_answer": "Chung et al. (2016)", "highlighted_evidence": [ "BIBREF0 propose that “prior information regarding morphology ... among others, should be incorporated” into character-level models, while BIBREF6 counter that it is “unnecessary to consider these prior information” when modeling characters." ], "unanswerable": false, "yes_no": null }, { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null } ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "c7d4a630661cd719ea504dba56393f78278b296b", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "annotation_id": [ "1d192dc84688254a55b97739f3671b0d7e70d0c0", "7bde8ebe90e4d5c0416681ce6b1439c7a7914480", "a1f6767bac4e43d50e21d20fdba4ad9e6ab2de9c" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern." ], "extractive_spans": [], "free_form_answer": "Finnish, Turkish, Czech, English, German, Hindi, Portuguese, Russian, Spanish, Urdu, Arabic, and Hebrew", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "We experiment on twelve languages with varying morphological typologies (Table 1 ) in the Universal Dependencies (UD) treebanks version 2.0 BIBREF14 . Note that while Arabic and Hebrew follow a root & pattern typology, their datasets are unvocalized, which might reduce the observed effects of this typology. Following common practice, we remove language-specific dependency relations and multiword token annotations. We use gold sentence segmentation, tokenization, universal POS (UPOS), and morphological (XFEATS) annotations provided in UD.", "FLOAT SELECTED: Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern." 
], "extractive_spans": [], "free_form_answer": "Finnish, Turkish, Czech, English, German, Hindi, Portuguese, Russian, Spanish, Urdu, Arabic, and Hebrew", "highlighted_evidence": [ "We experiment on twelve languages with varying morphological typologies (Table 1 ) in the Universal Dependencies (UD) treebanks version 2.0 BIBREF14 .", "FLOAT SELECTED: Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "FLOAT SELECTED: Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern.", "We experiment on twelve languages with varying morphological typologies (Table 1 ) in the Universal Dependencies (UD) treebanks version 2.0 BIBREF14 . Note that while Arabic and Hebrew follow a root & pattern typology, their datasets are unvocalized, which might reduce the observed effects of this typology. Following common practice, we remove language-specific dependency relations and multiword token annotations. We use gold sentence segmentation, tokenization, universal POS (UPOS), and morphological (XFEATS) annotations provided in UD." ], "extractive_spans": [], "free_form_answer": "Finnish, Turkish, Czech, English, German, Hindi, Portuguese, Russian, Spanish, Urdu, Arabic, Hebrew", "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern.", "We experiment on twelve languages with varying morphological typologies (Table 1 ) in the Universal Dependencies (UD) treebanks version 2.0 BIBREF14 ." ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "annotation_id": [ "52b9beab01e5cce3c6df5be7c8899ee54458a475", "a6b8255c2b6726a6b875700e169ac00c1181bb3b", "f39a344b090eb3812550c1ddf536e877caba2331" ], "answer": [ { "evidence": [ "Table 2 presents test results for every model on every language, establishing three results. First, they support previous findings that character-level models outperform word-based models—indeed, the char-lstm model outperforms the word model on LAS for all languages except Hindi and Urdu for which the results are identical. Second, they establish strong baselines for the character-level models: the char-lstm generally obtains the best parsing accuracy, closely followed by char-cnn. Third, they demonstrate that character-level models rarely match the accuracy of an oracle model with access to explicit morphology. This reinforces a finding of BIBREF9 : character-level models are effective tools, but they do not learn everything about morphology, and they seem to be closer to oracle accuracy in agglutinative rather than in fusional languages." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ " Third, they demonstrate that character-level models rarely match the accuracy of an oracle model with access to explicit morphology." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Our summary finding is that character-level models lag the oracle in nearly all languages (§ \"Experiments\" ). The difference is small, but suggests that there is value in modeling morphology. 
When we tease apart the results by part of speech and dependency type, we trace the difference back to the character-level model's inability to disambiguate words even when encoded with arbitrary context (§ \"Analysis\" ). Specifically, it struggles with case syncretism, in which noun case—and thus syntactic function—is ambiguous. We show that the oracle relies on morphological case, and that a character-level model provided only with morphological case rivals the oracle, even when case is provided by another predictive model (§ \"Characters and case syncretism\" ). Finally, we show that the crucial morphological features vary by language (§ \"Understanding head selection\" )." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Our summary finding is that character-level models lag the oracle in nearly all languages (§ \"Experiments\" )." ], "unanswerable": false, "yes_no": false }, { "evidence": [ "Table 2 presents test results for every model on every language, establishing three results. First, they support previous findings that character-level models outperform word-based models—indeed, the char-lstm model outperforms the word model on LAS for all languages except Hindi and Urdu for which the results are identical. Second, they establish strong baselines for the character-level models: the char-lstm generally obtains the best parsing accuracy, closely followed by char-cnn. Third, they demonstrate that character-level models rarely match the accuracy of an oracle model with access to explicit morphology. This reinforces a finding of BIBREF9 : character-level models are effective tools, but they do not learn everything about morphology, and they seem to be closer to oracle accuracy in agglutinative rather than in fusional languages." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "Third, they demonstrate that character-level models rarely match the accuracy of an oracle model with access to explicit morphology. " ], "unanswerable": false, "yes_no": false } ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe", "057bf5a20e4406f1f05cf82ecd49cf4f227dd287", "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "annotation_id": [ "5e5ebbe56f9ff3215377ff34a91cdff0d40bd0b9", "f9cd1629d594ccd8e291b27573fd55cf9ccf257d", "fdf8e662d824ccbb2d665a54f1fd87e2a40e9d83" ], "answer": [ { "evidence": [ "Our summary finding is that character-level models lag the oracle in nearly all languages (§ \"Experiments\" ). The difference is small, but suggests that there is value in modeling morphology. When we tease apart the results by part of speech and dependency type, we trace the difference back to the character-level model's inability to disambiguate words even when encoded with arbitrary context (§ \"Analysis\" ). Specifically, it struggles with case syncretism, in which noun case—and thus syntactic function—is ambiguous. We show that the oracle relies on morphological case, and that a character-level model provided only with morphological case rivals the oracle, even when case is provided by another predictive model (§ \"Characters and case syncretism\" ). Finally, we show that the crucial morphological features vary by language (§ \"Understanding head selection\" )." ], "extractive_spans": [], "free_form_answer": "A situation in which a noun's syntactic function is ambiguous without context.", "highlighted_evidence": [ "Specifically, it struggles with case syncretism, in which noun case—and thus syntactic function—is ambiguous." 
], "unanswerable": false, "yes_no": null }, { "evidence": [ "So far, we've seen that for our three fusional languages—German, Czech, and Russian—the oracle strongly outperforms a character model on nouns with ambiguous morphological analyses, particularly on core dependencies: nominal subjects, objects and indirect objects. Since the nominative, accusative, and dative morphological cases are strongly (though not perfectly) correlated with these dependencies, it is easy to see why the morphologically-aware oracle is able to predict them so well. We hypothesized that these cases are more challenging for the character model because these languages feature a high degree of syncretism—functionally distinct words that have the same form—and in particular case syncretism. For example, referring back to examples ( UID28 ) and ( UID28 ), the character model must disambiguate pisˊmo from its context, whereas the oracle can directly disambiguate it from a feature of the word itself." ], "extractive_spans": [], "free_form_answer": "The phenomena where words that have the same form express different morphological cases", "highlighted_evidence": [ "So far, we've seen that for our three fusional languages—German, Czech, and Russian—the oracle strongly outperforms a character model on nouns with ambiguous morphological analyses, particularly on core dependencies: nominal subjects, objects and indirect objects. Since the nominative, accusative, and dative morphological cases are strongly (though not perfectly) correlated with these dependencies, it is easy to see why the morphologically-aware oracle is able to predict them so well. We hypothesized that these cases are more challenging for the character model because these languages feature a high degree of syncretism—functionally distinct words that have the same form—and in particular case syncretism. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "Our summary finding is that character-level models lag the oracle in nearly all languages (§ \"Experiments\" ). The difference is small, but suggests that there is value in modeling morphology. When we tease apart the results by part of speech and dependency type, we trace the difference back to the character-level model's inability to disambiguate words even when encoded with arbitrary context (§ \"Analysis\" ). Specifically, it struggles with case syncretism, in which noun case—and thus syntactic function—is ambiguous. We show that the oracle relies on morphological case, and that a character-level model provided only with morphological case rivals the oracle, even when case is provided by another predictive model (§ \"Characters and case syncretism\" ). Finally, we show that the crucial morphological features vary by language (§ \"Understanding head selection\" )." ], "extractive_spans": [], "free_form_answer": "when noun case is ambiguous", "highlighted_evidence": [ "Specifically, it struggles with case syncretism, in which noun case—and thus syntactic function—is ambiguous." 
], "unanswerable": false, "yes_no": null } ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287", "c7d4a630661cd719ea504dba56393f78278b296b", "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] } ], "nlp_background": [ "infinity", "five", "five", "five" ], "paper_read": [ "", "somewhat", "somewhat", "somewhat" ], "question": [ "Who made the stated claim (that \"this is because character-level models learn morphology\")?", "Which languages do they use?", "Do the character-level models perform better than models with access to morphological analyses only?", "What is case syncretism?" ], "question_id": [ "e4f2d59030b17867449cf5456118ab722296bebd", "e664b58ea034a638e7142f8a393a88aadd1e215e", "c4b621f573bbb411bdaa84a7562c9c4795a7eb3a", "3ccc4ccebc3b0de5546b1208e8094a839fd4a4ab" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "search_query": [ "", "morphology", "morphology", "morphology" ], "topic_background": [ "", "familiar", "familiar", "familiar" ] }
{ "caption": [ "Table 1: Training data statistics. Languages are grouped by their dominant morphological processes, from top to bottom: agglutinative, fusional, and root & pattern.", "Table 2: Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS) on test set. The best accuracy for each language is highlighted in bold for all models, and for all non-oracle models. o/c: LAS improvement from char-lstm to oracle.", "Table 3: LAS improvements (char-lstm − word) for non-OOV and OOV words on development set.", "Figure 1: LAS improvements (oracle − char-lstm) for ambiguous and unambiguous words on development set.", "Table 4: Labeled accuracy for different parts of speech on development set.", "Figure 2: Heatmaps of the difference between oracle vs. char-lstm confusion matrices for label prediction when both head predictions are correct (x-axis: predicted labels; y-axis: gold labels). Blue cells have higher oracle values, red cells have higher char-lstm values.", "Figure 3: A sentence which the oracle parses perfectly (shown in white) and the char-lstm predicts an incorrect label (shown in black).", "Table 5: Morphological tagging accuracy from representations using the char-lstm and oracle embedding and encoder representations in Czech. Baseline simply chooses the most frequent tag. All means we concatenate all annotated features in UD as one tag.", "Table 6: LAS results when case information is added. We use bold to highlight the best results for models without explicit access to gold annotations.", "Table 7: Case accuracy for case-annotated tokens, for pipeline (PL) vs. multitask (MT) setup. %case shows percentage of training tokens annotated with case.", "Table 8: Our attention experiment results on development set.", "Figure 4: The importance of morphological features of the head for subject and object predictions.", "Table 9: Morphological tagging results for German.", "Table 10: Morphological tagging results for Russian.", "Table 11: Morphological tagging results for Finnish." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "5-Figure1-1.png", "6-Table4-1.png", "7-Figure2-1.png", "7-Figure3-1.png", "8-Table5-1.png", "8-Table6-1.png", "9-Table7-1.png", "10-Table8-1.png", "10-Figure4-1.png", "13-Table9-1.png", "13-Table10-1.png", "13-Table11-1.png" ] }
[ "Who made the stated claim (that \"this is because character-level models learn morphology\")?", "Which languages do they use?", "What is case syncretism?" ]
[ [ "1808.09180-Introduction-1" ], [ "1808.09180-Computing word representations-7", "1808.09180-4-Table1-1.png" ], [ "1808.09180-Introduction-3", "1808.09180-Computing word representations-26" ] ]
[ "Chung et al. (2016)", "Finnish, Turkish, Czech, English, German, Hindi, Portuguese, Russian, Spanish, Urdu, Arabic, Hebrew", "when noun case is ambiguous" ]
83
1909.04251
A Benchmark Dataset for Learning to Intervene in Online Hate Speech
Countering online hate speech is a critical yet challenging task, but one that can be aided by the use of Natural Language Processing (NLP) techniques. Previous research has primarily focused on developing NLP methods to automatically and effectively detect online hate speech, while disregarding the further action needed to calm and discourage individuals from using hate speech in the future. In addition, most existing hate speech datasets treat each post as an isolated instance, ignoring the conversational context. In this paper, we propose a novel task of generative hate speech intervention, where the goal is to automatically generate responses to intervene during online conversations that contain hate speech. As part of this work, we introduce two fully-labeled, large-scale hate speech intervention datasets collected from Gab and Reddit. These datasets provide conversation segments, hate speech labels, and intervention responses written by Mechanical Turk workers. We also analyze the datasets to understand common intervention strategies and evaluate the performance of common automatic response generation methods on them, providing a benchmark for future research.
{ "paragraphs": [ [ "The growing popularity of online interactions through social media has been shown to have both positive and negative impacts. While social media improves information sharing, it also facilitates the propagation of online harassment, including hate speech. These negative experiences can have a measurable negative impact on users. Recently, the Pew Research Center BIBREF0 reported that “roughly four-in-ten Americans have personally experienced online harassment, and 63% consider it a major problem.”", "To address the growing problem of online hate, an extensive body of work has focused on developing automatic hate speech detection models and datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, simply detecting and blocking hate speech or suspicious users often has limited ability to prevent these users from simply turning to other social media platforms to continue to engage in hate speech as can be seen in the large move of individuals blocked from Twitter to Gab BIBREF9. What's more, such a strategy is often at odds with the concept of free speech. As reported by the Pew Research Center BIBREF0, “Despite this broad concern over online harassment, 45% of Americans say it is more important to let people speak their minds freely online; a slightly larger share (53%) feels that it is more important for people to feel welcome and safe online.” The special rapporteurs representing the Office of the United Nations High Commissioner for Human Rights (OHCHR) have recommended that “The strategic response to hate speech is more speech.” BIBREF10 They encourage to change what people think instead of merely changing what they do, so they advocate more speech that educates about cultural differences, diversity, and minorities as a better strategy to counter hate speech.", "Therefore, in order to encourage strategies of countering online hate speech, we propose a novel task of generative hate speech intervention and introduce two new datasets for this task. Figure FIGREF5 illustrates the task. Our datasets consist of 5K conversations retrieved from Reddit and 12k conversations retrieved from Gab. Distinct from existing hate speech datasets, our datasets retain their conversational context and introduce human-written intervention responses. The conversational context and intervention responses are critical in order to build generative models to automatically mitigate the spread of these types of conversations.", "To summarize, our contributions are three-fold:", "We introduce the generative hate speech intervention task and provide two fully-labeled hate speech datasets with human-written intervention responses.", "Our data is collected in the form of conversations, providing better context.", "The two data sources, Gab and Reddit, are not well studied for hate speech. Our datasets fill this gap.", "Due to our data collecting strategy, all the posts in our datasets are manually labeled as hate or non-hate speech by Mechanical Turk workers, so they can also be used for the hate speech detection task. The performance of commonly-used classifiers on our datasets is shown in Section SECREF6." ], [ "In recent years, a few datasets for hate speech detection have been built and released by researchers. Most are collected from Twitter and are labeled using a combination of expert and non-expert hand labeling, or through machine learning assistance using a list of common negative words. 
It is widely accepted that labels can vary in their accuracy overall, though this can be mitigated by relying on a consensus rule to rectify disagreements in labels. A synopsis of these datasets can be found in Table TABREF10.", "BIBREF2 collect 17k tweets based on hate-related slurs and users. The tweets are manually annotated with three categories: sexist (20.0%), racist (11.7%), and normal (68.3%). Because the authors identified a number of prolific users during the initial manual search, the resulting dataset has a small number of users (1,236 users) involved, causing a potential selection bias. This problem is most prevalent on the 1,972 racist tweets, which are sent by only 9 Twitter users. To avoid this problem, we did not identify suspicious user accounts or utilize user information when collecting our data.", "BIBREF3 use a similar strategy, which combines the utilization of hate keywords and suspicious user accounts to build a dataset from Twitter. But different from BIBREF2, this dataset consists of 25k tweets randomly sampled from the 85.4 million posts of a large number of users (33,458 users). This dataset is proposed mainly to distinguish hateful and offensive language, which tend to be conflated by many studies.", "BIBREF11 focus on online harassment on Twitter and propose a fine-grained labeled dataset with 6 categories. BIBREF14 introduce a large Twitter dataset with 100k tweets. Despite the large size of this dataset, the ratio of the hateful tweets are relatively low (5%). Thus the size of the hateful tweets is around 5k in this dataset, which is not significantly larger than that of the previous datasets.", "The dataset introduced by BIBREF12 is different from the other datasets as it investigates the behavior of hate-related users on Twitter, instead of evaluating hate-related tweets. The large majority of the 1.5k users are labeled as spammers (31.8%) or normal (60.3%). Only a small fraction of the users are labeled as bullies (4.5%) or aggressors (3.4%). While most datasets are from single sources, BIBREF13 introduce a dataset with a combination of Twitter (58.9%), Reddit, and The Guardian. In total 20,432 unique comments were obtained with 4,136 labeled as harassment (20.2%) and 16,296 as non-harassment (79.8%).", "Since most of the publicly available hate speech datasets are collected from Twitter, previous research of hate speech mainly focus on Twitter posts or users BIBREF2, BIBREF17, BIBREF18, BIBREF19, BIBREF3. While there are several studies on the other sources, such as Instagram BIBREF20, Yahoo! BIBREF1, BIBREF15, and Ask.fm BIBREF16, the hate speech on Reddit and Gab is not widely studied. What's more, all the previous hate speech datasets are built for the classification or detection of hate speech from a single post or user on social media, ignoring the context of the post and intervention methods needed to effectively calm down the users and diffuse negative online conversations." ], [ "Our study got approval from our Internal Review Board. Workers were warned about the offensive content before they read the data and they were informed by our instructions to feel free to quit the task at any time if they are uncomfortable with the content. Additionally, all personally identifiable information such as user names is masked in the datasets." ], [ "Reddit: To retrieve high-quality conversational data that would likely include hate speech, we referenced the list of the whiniest most low-key toxic subreddits. 
Skipping the three subreddits that have been removed, we collect data from ten subreddits: r/DankMemes, r/Imgoingtohellforthis, r/KotakuInAction, r/MensRights, r/MetaCanada, r/MGTOW, r/PussyPass, r/PussyPassDenied, r/The_Donald, and r/TumblrInAction. For each of these subreddits, we retrieve the top 200 hottest submissions using Reddit's API. To further focus on conversations with hate speech in each submission, we use hate keywords BIBREF6 to identify potentially hateful comments and then reconstructed the conversational context of each comment. This context consists of all comments preceding and following a potentially hateful comment. Thus for each potentially hateful comment, we rebuild the conversation where the comment appears. Figure FIGREF14 shows an example of the collected conversation, where the second comment contains a hate keyword and is considered as potentially hateful. Because a conversation may contain more than one comments with hate keywords, we removed any duplicated conversations.", "Gab: We collect data from all the Gab posts in October 2018. Similar to Reddit, we use hate keywords BIBREF6 to identify potentially hateful posts, rebuild the conversation context and clean duplicate conversations." ], [ "After we collected the conversations from both Reddit and Gab, we presented this data to Mechanical Turk workers to label and create intervention suggestions. In order not to over-burden the workers, we filtered out conversations consisting of more than 20 comments. Each assignment consists of 5 conversations. For Reddit, we also present the title and content of the corresponding submission in order to give workers more information about the topic and context. For each conversation, a worker is asked to answer two questions:", "", "Q1: Which posts or comments in this conversation are hate speech?", "Q2: If there exists hate speech in the conversation, how would you respond to intervene? Write down a response that can probably hold it back (word limit: 140 characters).", "If the worker thinks no hate speech exists in the conversation, then the answers to both questions are “n/a”. To provide context, the definition of hate speech from Facebook: “We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” is presented to the workers. Also, to prevent workers from using hate speech in the response or writing responses that are too general, such as “Please do not say that”, we provide additional instructions and rejected examples." ], [ "Each conversation is assigned to three different workers. To ensure data quality, we restrict the workers to be in an English speaking country including Australia, Canada, Ireland, New Zealand, the United Kingdom, and the United States, with a HIT approval rate higher than 95%. Excluding the rejected answers, the collected data involves 926 different workers. The final hate speech labels (answers to Q1) are aggregated according to the majority of the workers' answers. A comment is considered hate speech only when at least two out of the three workers label it as hate speech. The responses (answers to Q2) are aggregated according to the aggregated result of Q1. If the worker's label to Q1 agrees with the aggregated result, then their answer to Q2 is included as a candidate response to the corresponding conversation but is otherwise disregarded. 
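The aggregation rule described above is simple enough to state as code. Below is a minimal sketch of the majority-vote aggregation; the function name, field layout, and the reading of "agrees with the aggregated result" as exact set equality are assumptions for illustration, not details taken from the released data or code.

```python
from collections import Counter

def aggregate_annotations(worker_labels, worker_responses):
    """Aggregate three workers' annotations for one conversation.

    worker_labels: list of three sets, each holding the comment indexes a
        worker marked as hate speech (answer to Q1).
    worker_responses: list of three intervention responses (answer to Q2),
        aligned with worker_labels.
    Returns the majority-vote hate labels and the responses from workers
    whose Q1 answer matches the aggregated result.
    """
    # A comment is hate speech only if at least 2 of the 3 workers marked it.
    votes = Counter(idx for labels in worker_labels for idx in labels)
    majority = {idx for idx, count in votes.items() if count >= 2}

    # Keep a worker's response only if their Q1 labels agree with the majority
    # (assumed here to mean exact set equality); "n/a" covers conversations
    # the worker judged to contain no hate speech.
    kept_responses = [
        resp for labels, resp in zip(worker_labels, worker_responses)
        if labels == majority and resp.lower() != "n/a"
    ]
    return majority, kept_responses

# Example: workers 1 and 2 marked comment 2 as hateful; worker 3 marked nothing.
labels = [{2}, {2}, set()]
responses = ["Please avoid slurs.", "This term is offensive.", "n/a"]
print(aggregate_annotations(labels, responses))
# -> ({2}, ['Please avoid slurs.', 'This term is offensive.'])
```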
See Figure FIGREF14 for an example of the aggregated data." ], [ "From Reddit, we collected 5,020 conversations, including 22,324 comments. On average, each conversation consists of 4.45 comments and the length of each comment is 58.0 tokens. 5,257 of the comments are labeled as hate speech and 17,067 are labeled as non-hate speech. A majority of the conversations, 3,847 (76.6%), contain hate speech. Each conversation with hate speech has 2.66 responses on average, for a total of 10,243 intervention responses. The average length of the intervention responses is 17.96 tokens.", "From Gab, we collected 11,825 conversations, consisting of 33,776 posts. On average, each conversation consists of 2.86 posts and the average length of each post is 35.6 tokens. 14,614 of the posts are labeled as hate speech and 19,162 are labeled as non-hate speech. Nearly all the conversations, 11,169 (94.5%), contain hate speech. 31,487 intervention responses were originally collected for conversations with hate speech, or 2.82 responses per conversation on average. The average length of the intervention responses is 17.27 tokens.", "Compared with the Gab dataset, there are fewer conversations and comments in the Reddit dataset, comments and conversations are longer, and the distribution of hate and non-hate speech labels is more imbalanced. Figure FIGREF20 illustrates the distributions of the top 10 keywords in the hate speech collected from Reddit and Gab separately. The Gab dataset and the Reddit dataset have similar popular hate keywords, but the distributions are very different. All the statistics shown above indicate that the characteristics of the data collected from these two sources are very different, thus the challenges of doing detection or generative intervention tasks on the dataset from these sources will also be different." ], [ "Removing duplicates, there are 21,747 unique intervention responses in the aggregated Gab dataset and 7,641 in the aggregated Reddit dataset. Despite the large diversity of the collected responses for intervention, we find workers tend to have certain strategies for intervention.", "", "Identify Hate Keywords: One of the most common strategies is to identify the inappropriate terms in the post and then urge the user to stop using that work. For example, “The C word and language attacking gender is unacceptable. Please refrain from future use.” This strategy is often used when the hatred in the post is mainly conveyed by specific hate keywords.", "", "Categorize Hate Speech: This is another common strategy used by the workers. The workers classify hate speech into different categories, such as racist, sexist, homophobic, etc. This strategy is often combined with identifying hate keywords or targets of hatred. For example, “The term \"\"fa**ot\"\" comprises homophobic hate, and as such is not permitted here.”", "", "Positive Tone Followed by Transitions: This is a strategy where the response consists of two parts combined with a transitional word, such as “but” and “even though”. The first part starts with affirmative terms, such as “I understand”, “You have the right to”, and “You are free to express”, showing kindness and understanding, while the second part is to alert the users that their post is inappropriate. For example, “I understand your frustration, but the term you have used is offensive towards the disabled community. Please be more aware of your words.”. 
Intuitively, compared with the response that directly warns, this strategy is likely more acceptable for the users and be more likely to clam down a quarrel full of hate speech.", "", "Suggest Proper Actions: Besides warning and discouraging the users from continuing hate speech, workers also suggest the actions that the user should take. This strategy can either be combined with other strategies mentioned above or be used alone. In the latter case, a negative tone can be greatly alleviated. For example, “I think that you should do more research on how resources are allocated in this country.”" ], [ "Our datasets can be used for various hate speech tasks. In this paper, we focus on generative hate speech intervention.", "The goal of this task is to generate a response to hate speech that can mitigate its use during a conversation. The objective can be formulated as the following equation:", "where $c$ is the conversation, $r$ is the corresponding intervention response, and $D$ is the dataset. This task is closely related to the response generation and dialog generation, though several differences exist including dialog length, language cadence, and word imbalances. As a baseline, we chose the most common methods of these two tasks, such as Seq2Seq and VAE, to determine the initial feasibility of automatically generate intervention responses. More recent Reinforcement Learning method for dialog generation BIBREF21 can also be applied to this task with slight modification. Future work will explore more complex, and unique models.", "Similar to BIBREF21, a generative model is considered as an agent. However, different from dialog generation, generative intervention does not have multiple turns of utterance, so the action of the agent is to select a token in the response. The state of the agent is given by the input posts and the previously generated tokens. Another result due to this difference is that the rewards with regard to ease of answering or information flow do not apply to this case, but the reward for semantic coherence does. Therefore, the reward of the agent is:", "where $rw(c,r)$ is the reward with regard to the conversation $c$ and its reference response $r$ in the dataset. $p(r|c)$ denotes the probability of generating response $r$ given the conversation $c$, and $p_{back}(c|r)$ denotes the backward probability of generating the conversation based on the response, which is parameterized by another generation network. The reward is a weighted combination of these two parts, which are observed after the agent finishing generating the response. We refer the readers to BIBREF21 for details." ], [ "We evaluate the commonly-used detection and generation methods with our dataset. Due to the different characteristics of the data collected from the two sources (Section SECREF4), we treat them as two independent datasets." ], [ "For binary hate speech detection, we experimented the following four different methods.", "", "Logistic Regression (LR): We evaluate the Logistic Regression model with L2 regularization. The penalty parameter C is set to 1. The input features are the Term Frequency Inverse Document Frequency (TF-IDF) values of up to 2-grams.", "", "Support Vector Machine (SVM): We evaluate the SVM model with linear kernels. We use L2 regularization and the coefficient is 1. The features are the same as in LR.", "", "Convolutional Neural Network (CNN): We use the CNN model for sentence classification proposed by BIBREF22 with default hyperparameters. 
The word embeddings are randomly initialized (CNN in Table TABREF27) or initialized with pretrained Word2Vec BIBREF23 embeddings on Google News (CNN$^\\ast $ in Table TABREF27).", "", "Recurrent Neural Network (RNN): The model we evaluated consists of 2-layer bidirectional Gated Recurrent Unit (GRU) BIBREF24 followed by a linear layer. Same as for CNN, we report the performance of RNN with two different settings of the word embeddings.", "The methods are evaluated on testing data randomly selected from the dataset with the ratio of 20%. The input data is not manipulated to manually balance the classes for any of the above methods. Therefore, the training and testing data retain the same distribution as the collected results (Section SECREF4). The methods are evaluated using F-1 score, Precision-Recall (PR) AUC, and Receiver-Operating-Characteristic (ROC) AUC.", "For generative hate speech intervention, we evaluated the following three methods.", "", "Seq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).", "", "Variational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. We assume the latent variable follows a multivariate Gaussian Distribution. KL annealing BIBREF27 is applied during training.", "", "Reinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. This network is used to parameterize the probability of a response given the conversation. Besides this backbone Seq2Seq model, another Seq2Seq model is used to generate the backward probability. This network is trained in a similar way as the backbone Seq2Seq model, but with a response as input and the corresponding conversation as the target. In our implementation, the function of the first part of the reward ($\\log p(r|c)$) is conveyed by the MLE loss. A curriculum learning strategy is adopted for the reward of $\\log p_{back}(c|r)$ as in BIBREF28. Same as in BIBREF21 and BIBREF28, a baseline strategy is employed to estimate the average reward. We parameterize it as a 3-layer MLP.", "The Seq2Seq model and VAE model are evaluated under two different settings. In one setting, the input for the generative model is the complete conversation, while in the other setting, the input is the filtered conversation, which only includes the posts labeled as hate speech. The filtered conversation was necessary to test the Reinforcement Learning model, as it is too challenging for the backward model to reconstruct the complete conversation based only on the intervention response.", "In our experiments on the generative hate speech intervention task, we do not consider conversations without hate speech. The testing dataset is then randomly selected from the resulting dataset with the ratio of 20%. Since each conversation can have multiple reference responses, we dis-aggregate the responses and construct a pair (conversation, reference response) for each of the corresponding references during training. Teacher forcing is used for each of the three methods. 
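For concreteness, the following is a minimal sketch of the Seq2Seq baseline trained with teacher forcing, roughly following the architecture described above (2-layer bidirectional GRU encoder, 2-layer GRU decoder, 3-layer MLP output). It is written in PyTorch purely for illustration; the vocabulary size, hidden sizes, learning rate, and the way the bidirectional encoder state is merged are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

PAD, BOS = 0, 1                      # assumed special token ids
VOCAB, EMB, HID = 8000, 256, 512     # illustrative hyperparameters

class Seq2SeqIntervention(nn.Module):
    """GRU encoder-decoder roughly matching the baseline described above:
    a 2-layer bidirectional GRU encoder and a 2-layer GRU decoder followed
    by a 3-layer MLP that predicts the next response token."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.encoder = nn.GRU(EMB, HID, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(EMB, HID, num_layers=2, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(HID, HID), nn.ReLU(),
            nn.Linear(HID, HID), nn.ReLU(),
            nn.Linear(HID, VOCAB),
        )

    def forward(self, conversation, response_in):
        # Encode the (complete or filtered) conversation token ids.
        _, h = self.encoder(self.embed(conversation))
        # Merge the two directions so the state fits the uni-directional decoder.
        h = h.reshape(2, 2, conversation.size(0), HID).sum(dim=1)
        # Teacher forcing: the decoder reads the gold response shifted right.
        out, _ = self.decoder(self.embed(response_in), h)
        return self.mlp(out)          # (batch, resp_len, VOCAB) logits

# One training step on toy tensors.
model = Seq2SeqIntervention()
criterion = nn.CrossEntropyLoss(ignore_index=PAD)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

conv = torch.randint(2, VOCAB, (4, 30))      # batch of conversations
resp = torch.randint(2, VOCAB, (4, 12))      # gold intervention responses
resp_in = torch.cat([torch.full((4, 1), BOS, dtype=torch.long),
                     resp[:, :-1]], dim=1)

optimizer.zero_grad()
logits = model(conv, resp_in)
loss = criterion(logits.reshape(-1, VOCAB), resp.reshape(-1))
loss.backward()
optimizer.step()
```

The VAE and RL baselines described above reuse this encoder-decoder backbone, adding respectively the mean/variance projections for the latent variable and the backward model with its reward.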
The automatic evaluation metrics include BLEU BIBREF29, ROUGE-L BIBREF30, and METEOR BIBREF31.", "In order to validate and compare the quality of the generated results from each model, we also conducted human evaluations as previous research has shown that automatic evaluation metrics often do not correlate with human preference BIBREF32. We randomly sampled 450 conversations from the testing dataset. We then generated responses using each of the above models trained with the filtered conversation setting. In each assignment, a Mechanical Turk worker is presented 10 conversations, along with corresponding responses generated by the three models. For each conversation, the worker is asked to evaluate the effectiveness of the generated intervention by selecting a response that can best mitigate hate speech. 9 of the 10 questions are filled with the sampled testing data and the generated results, while the other is artificially constructed to monitor response quality. After selecting the 10 best mitigation measures, the worker is asked to select which of the three methods has the best diversity of responses over all the 10 conversations. Ties are permitted for answers. Assignments failed on the quality check are rejected." ], [ "The experimental results of the detection task and the generative intervention task are shown in Table TABREF27 and Table TABREF29 separately. The results of the human evaluation are shown in Table TABREF30. Figure FIGREF25 shows examples of the generated responses.", "As shown in Table TABREF27 and TABREF29, all the classification and generative models perform better on the Gab dataset than on the Reddit dataset. We think this stems from the datasets' characteristics. First, the Gab dataset is larger and has a more balanced category distribution than the Reddit dataset. Therefore, it is inherently more challenging to train a classifier on the Reddit dataset. Further, the average lengths of the Reddit posts and conversations are much larger than those of Gab, potentially making the Reddit input nosier than the Gab input for both tasks. On both the Gab and Reddit datasets, the SVM classifier and the LR classifier achieved better performance than the CNN and RNN model with randomly initialized word embeddings. A possible reason is that without pretrained word embeddings, the neural network models tend to overfit on the dataset.", "For the generative intervention task, three models perform similarly on all three automatic evaluation metrics. As expected, the Seq2Seq model achieves higher scores with filtered conversation as input. However, this is not the case for the VAE model. This indicates that the two models may have different capabilities to capture important information in conversations.", "As shown in Table TABREF29, applying Reinforcement Learning does not lead to higher scores on the three automatic metrics. However, human evaluation (Table TABREF30) shows that the RL model creates responses that are potentially better at mitigating hate speech and are more diverse, which is consistent with BIBREF21. There is a larger performance difference with the Gab dataset, while the effectiveness and the diversity of the responses generated by the Seq2Seq model and the RL model are quite similar on the Reddit dataset. One possible reason is that the size of the training data from Reddit (around 8k) is only 30% the size of the training data from Gab. 
The inconsistency between the human evaluation results and the automatic ones indicates the automatic evaluation metrics listed in Table TABREF29 can hardly reflect the quality of the generated responses. As mentioned in Section SECREF4, annotators tend to have strategies for intervention. Therefore, generating the common parts of the most popular strategies for all the testing input can lead to high scores of these automatic evaluation metrics. For example, generating “Please do not use derogatory language.” for all the testing Gab data can achieve 4.2 on BLEU, 20.4 on ROUGE, and 18.2 on METEOR. However, this response is not considered as high-quality because it is almost a universal response to all the hate speech, regardless of the context and topic.", "Surprisingly, the responses generated by the VAE model have much worse diversity than the other two methods according to human evaluation. As indicated in Figure FIGREF25, the responses generated by VAE tend to repeat the responses related to some popular hate keyword. For example, “Use of the r-word is unacceptable in our discourse as it demeans and insults people with mental disabilities.” and “Please do not use derogatory language for intellectual disabilities.” are the generated responses for a large part of the Gab testing data. According to Figure FIGREF20, insults towards disabilities are the largest portion in the dataset, so we suspect that the performance of the VAE model is affected by the imbalanced keyword distribution.", "The sampled results in Figure FIGREF25 show that the Seq2Seq and the RL model can generate reasonable responses for intervention. However, as is to be expected with machine-generated text, in the other human evaluation we conducted, where Mechanical Turk workers were also presented with sampled human-written responses alongside the machine generated responses, the human-written responses were chosen as the most effective and diverse option a majority of the time (70% or more) for both datasets. This indicates that there is significant room for improvement while generating automated intervention responses.", "In our experiments, we only utilized the text of the posts, but more information is available and can be utilized, such as the user information and the title of a Reddit submission." ], [ "Towards the end goal of mitigating the problem of online hate speech, we propose the task of generative hate speech intervention and introduce two fully-labeled datasets collected from Reddit and Gab, with crowd-sourced intervention responses. The performance of the three generative models: Seq2Seq, VAE, and RL, suggests ample opportunity for improvement. We intend to make our dataset freely available to facilitate further exploration of hate speech intervention and better models for generative intervention." ], [ "This research was supported by the Intel AI Faculty Research Grant. The authors are solely responsible for the contents of the paper and the opinions expressed in this publication do not reflect those of the funding agencies." ] ], "section_name": [ "Introduction", "Related Work", "Dataset Collection ::: Ethics", "Dataset Collection ::: Data Filtering", "Dataset Collection ::: Crowd-Sourcing", "Dataset Collection ::: Data Quality", "Dataset Analysis ::: Statistics", "Dataset Analysis ::: Intervention Strategies", "Generative Intervention", "Experiments", "Experiments ::: Experimental Settings", "Experiments ::: Experimental Results and Discussion", "Conclusion", "Acknowledgments" ] }
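The two display equations referenced in the Generative Intervention section above (the training objective and the agent's reward) are missing from this copy. Plausible reconstructions, based only on the surrounding definitions of $c$, $r$, $D$, $p(r|c)$, and $p_{back}(c|r)$ and not on the authors' exact formulas, are:

```latex
% Training objective: maximize the conditional log-likelihood over the dataset.
\max_{\theta} \sum_{(c,\, r) \in D} \log p_{\theta}(r \mid c)

% Agent's reward: a weighted combination of the two parts described in the text;
% the mixing weights \lambda_1, \lambda_2 are assumed, in the style of BIBREF21.
rw(c, r) = \lambda_1 \log p(r \mid c) + \lambda_2 \log p_{\mathrm{back}}(c \mid r)
```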
{ "answers": [ { "annotation_id": [ "1dd207fb8b62a4d44310043c4f6d9716ae59539d", "614e5ad22f71740636465ee1c92d4acc9ff71d78", "915b3790c00e945b6650352589bee13299c8fef3" ], "answer": [ { "evidence": [ "In order to validate and compare the quality of the generated results from each model, we also conducted human evaluations as previous research has shown that automatic evaluation metrics often do not correlate with human preference BIBREF32. We randomly sampled 450 conversations from the testing dataset. We then generated responses using each of the above models trained with the filtered conversation setting. In each assignment, a Mechanical Turk worker is presented 10 conversations, along with corresponding responses generated by the three models. For each conversation, the worker is asked to evaluate the effectiveness of the generated intervention by selecting a response that can best mitigate hate speech. 9 of the 10 questions are filled with the sampled testing data and the generated results, while the other is artificially constructed to monitor response quality. After selecting the 10 best mitigation measures, the worker is asked to select which of the three methods has the best diversity of responses over all the 10 conversations. Ties are permitted for answers. Assignments failed on the quality check are rejected." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In order to validate and compare the quality of the generated results from each model, we also conducted human evaluations as previous research has shown that automatic evaluation metrics often do not correlate with human preference BIBREF32." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "In order to validate and compare the quality of the generated results from each model, we also conducted human evaluations as previous research has shown that automatic evaluation metrics often do not correlate with human preference BIBREF32. We randomly sampled 450 conversations from the testing dataset. We then generated responses using each of the above models trained with the filtered conversation setting. In each assignment, a Mechanical Turk worker is presented 10 conversations, along with corresponding responses generated by the three models. For each conversation, the worker is asked to evaluate the effectiveness of the generated intervention by selecting a response that can best mitigate hate speech. 9 of the 10 questions are filled with the sampled testing data and the generated results, while the other is artificially constructed to monitor response quality. After selecting the 10 best mitigation measures, the worker is asked to select which of the three methods has the best diversity of responses over all the 10 conversations. Ties are permitted for answers. Assignments failed on the quality check are rejected." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In each assignment, a Mechanical Turk worker is presented 10 conversations, along with corresponding responses generated by the three models. For each conversation, the worker is asked to evaluate the effectiveness of the generated intervention by selecting a response that can best mitigate hate speech." ], "unanswerable": false, "yes_no": true }, { "evidence": [ "In order to validate and compare the quality of the generated results from each model, we also conducted human evaluations as previous research has shown that automatic evaluation metrics often do not correlate with human preference BIBREF32. 
We randomly sampled 450 conversations from the testing dataset. We then generated responses using each of the above models trained with the filtered conversation setting. In each assignment, a Mechanical Turk worker is presented 10 conversations, along with corresponding responses generated by the three models. For each conversation, the worker is asked to evaluate the effectiveness of the generated intervention by selecting a response that can best mitigate hate speech. 9 of the 10 questions are filled with the sampled testing data and the generated results, while the other is artificially constructed to monitor response quality. After selecting the 10 best mitigation measures, the worker is asked to select which of the three methods has the best diversity of responses over all the 10 conversations. Ties are permitted for answers. Assignments failed on the quality check are rejected." ], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [ "In order to validate and compare the quality of the generated results from each model, we also conducted human evaluations as previous research has shown that automatic evaluation metrics often do not correlate with human preference BIBREF32. " ], "unanswerable": false, "yes_no": true } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "42f6c46b9f48778e843923c62f32bea3870c37ee", "7faf2ea72541d163fbc02b3dee29f5e55ce33cd1", "fafadcef097f478af46e87f1ce1625a9b6a03168" ], "answer": [ { "evidence": [ "For generative hate speech intervention, we evaluated the following three methods.", "Seq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).", "Variational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. We assume the latent variable follows a multivariate Gaussian Distribution. KL annealing BIBREF27 is applied during training.", "Reinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. This network is used to parameterize the probability of a response given the conversation. Besides this backbone Seq2Seq model, another Seq2Seq model is used to generate the backward probability. This network is trained in a similar way as the backbone Seq2Seq model, but with a response as input and the corresponding conversation as the target. In our implementation, the function of the first part of the reward ($\\log p(r|c)$) is conveyed by the MLE loss. A curriculum learning strategy is adopted for the reward of $\\log p_{back}(c|r)$ as in BIBREF28. Same as in BIBREF21 and BIBREF28, a baseline strategy is employed to estimate the average reward. We parameterize it as a 3-layer MLP." ], "extractive_spans": [ "Seq2Seq", "Variational Auto-Encoder (VAE)", "Reinforcement Learning (RL)" ], "free_form_answer": "", "highlighted_evidence": [ "or generative hate speech intervention, we evaluated the following three methods.\n\nSeq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. 
The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).\n\nVariational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. We assume the latent variable follows a multivariate Gaussian Distribution. KL annealing BIBREF27 is applied during training.\n\nReinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. This network is used to parameterize the probability of a response given the conversation. Besides this backbone Seq2Seq model, another Seq2Seq model is used to generate the backward probability. This network is trained in a similar way as the backbone Seq2Seq model, but with a response as input and the corresponding conversation as the target. " ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For generative hate speech intervention, we evaluated the following three methods.", "Seq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).", "Variational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. We assume the latent variable follows a multivariate Gaussian Distribution. KL annealing BIBREF27 is applied during training.", "Reinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. This network is used to parameterize the probability of a response given the conversation. Besides this backbone Seq2Seq model, another Seq2Seq model is used to generate the backward probability. This network is trained in a similar way as the backbone Seq2Seq model, but with a response as input and the corresponding conversation as the target. In our implementation, the function of the first part of the reward ($\\log p(r|c)$) is conveyed by the MLE loss. A curriculum learning strategy is adopted for the reward of $\\log p_{back}(c|r)$ as in BIBREF28. Same as in BIBREF21 and BIBREF28, a baseline strategy is employed to estimate the average reward. We parameterize it as a 3-layer MLP." ], "extractive_spans": [ "Seq2Seq BIBREF25, BIBREF24", "Variational Auto-Encoder (VAE) BIBREF26", "Reinforcement Learning (RL)" ], "free_form_answer": "", "highlighted_evidence": [ "For generative hate speech intervention, we evaluated the following three methods.\n\nSeq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).\n\nVariational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. We assume the latent variable follows a multivariate Gaussian Distribution. 
KL annealing BIBREF27 is applied during training.\n\nReinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. This network is used to parameterize the probability of a response given the conversation. Besides this backbone Seq2Seq model, another Seq2Seq model is used to generate the backward probability." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "For generative hate speech intervention, we evaluated the following three methods.", "Seq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).", "Variational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. We assume the latent variable follows a multivariate Gaussian Distribution. KL annealing BIBREF27 is applied during training.", "Reinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. This network is used to parameterize the probability of a response given the conversation. Besides this backbone Seq2Seq model, another Seq2Seq model is used to generate the backward probability. This network is trained in a similar way as the backbone Seq2Seq model, but with a response as input and the corresponding conversation as the target. In our implementation, the function of the first part of the reward ($\\log p(r|c)$) is conveyed by the MLE loss. A curriculum learning strategy is adopted for the reward of $\\log p_{back}(c|r)$ as in BIBREF28. Same as in BIBREF21 and BIBREF28, a baseline strategy is employed to estimate the average reward. We parameterize it as a 3-layer MLP." ], "extractive_spans": [ "Seq2Seq BIBREF25, BIBREF24", "Variational Auto-Encoder (VAE) BIBREF26", "Reinforcement Learning (RL)" ], "free_form_answer": "", "highlighted_evidence": [ "For generative hate speech intervention, we evaluated the following three methods.", "Seq2Seq BIBREF25, BIBREF24: The encoder consists of 2 bidirectional GRU layers. ", "Variational Auto-Encoder (VAE) BIBREF26: The structure of the VAE model is similar to that of the Seq2Seq model, except that it has two independent linear layers followed by the encoder to calculate the mean and variance of the distribution of the latent variable separately. ", "Reinforcement Learning (RL): We also implement the Reinforcement Learning method described in Section SECREF5. The backbone of this model is the Seq2Seq model, which follows the same Seq2Seq network structure described above. 
" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4", "258ee4069f740c400c0049a2580945a1cc7f044c", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "annotation_id": [ "8bfee0fc62a3848d4bb43b3fab920c981dd7837c", "ca0eea35f5e2b649e48bbe955fd6d998d0750d6f", "ca90e894dcaf148bb2adba0136499378410d35e8" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": null }, { "evidence": [ "Reddit: To retrieve high-quality conversational data that would likely include hate speech, we referenced the list of the whiniest most low-key toxic subreddits. Skipping the three subreddits that have been removed, we collect data from ten subreddits: r/DankMemes, r/Imgoingtohellforthis, r/KotakuInAction, r/MensRights, r/MetaCanada, r/MGTOW, r/PussyPass, r/PussyPassDenied, r/The_Donald, and r/TumblrInAction. For each of these subreddits, we retrieve the top 200 hottest submissions using Reddit's API. To further focus on conversations with hate speech in each submission, we use hate keywords BIBREF6 to identify potentially hateful comments and then reconstructed the conversational context of each comment. This context consists of all comments preceding and following a potentially hateful comment. Thus for each potentially hateful comment, we rebuild the conversation where the comment appears. Figure FIGREF14 shows an example of the collected conversation, where the second comment contains a hate keyword and is considered as potentially hateful. Because a conversation may contain more than one comments with hate keywords, we removed any duplicated conversations." ], "extractive_spans": [], "free_form_answer": " Potentially hateful comments are identified using hate keywords.", "highlighted_evidence": [ "To further focus on conversations with hate speech in each submission, we use hate keywords BIBREF6 to identify potentially hateful comments and then reconstructed the conversational context of each comment. This context consists of all comments preceding and following a potentially hateful comment. Thus for each potentially hateful comment, we rebuild the conversation where the comment appears." ], "unanswerable": false, "yes_no": null }, { "evidence": [ "If the worker thinks no hate speech exists in the conversation, then the answers to both questions are “n/a”. To provide context, the definition of hate speech from Facebook: “We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” is presented to the workers. Also, to prevent workers from using hate speech in the response or writing responses that are too general, such as “Please do not say that”, we provide additional instructions and rejected examples." ], "extractive_spans": [ "race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability." 
], "free_form_answer": "", "highlighted_evidence": [ "To provide context, the definition of hate speech from Facebook: “We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.”" ], "unanswerable": false, "yes_no": null } ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "34c35a1877e453ecaebcf625df3ef788e1953cc4", "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ], "nlp_background": [ "five", "five", "five" ], "paper_read": [ "somewhat", "somewhat", "somewhat" ], "question": [ "Do humans assess the quality of the generated responses?", "What models are used to generate responses?", "What types of hate speech are considered?" ], "question_id": [ "2dba0b83fc22995f83e7ac66cc8f68bcdcc70ee9", "a8cc891bb8dccf0d32c1c9cd1699d5ead0eed711", "8330242b56b63708a23c6a92db4d4bcf927a4576" ], "question_writer": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "search_query": [ "Hate speech", "Hate speech", "Hate speech" ], "topic_background": [ "research", "research", "research" ] }
{ "caption": [ "Figure 1: An illustration of hate speech conversation between User 1 and User 2 and the interventions collected for our datasets. The check and the cross icons on the right indicate a normal post and a hateful post. The utterance following the human icon is a humanwritten intervention, while the utterance following the computer icon is machine-generated.", "Table 1: Comparison of our datasets with previous hate speech datasets. Conv.: Conversation. Interv.: Intervention.", "Figure 2: An example of the aggregated data. The first column is the conversation text. Indexes are added to each post. Indentations before each post indicate the structure of replies. The second column is the indexes of the human-labeled hateful post. Each bullet point in the third column is a human-written response.", "Figure 3: The distributions of the top 10 keywords in the hate speech collected from Reddit and Gab. Hate keywords are masked.", "Figure 4: Examples of the generated intervention responses. The hateful terms in the conversation are masked.", "Table 2: Experimental results for the detection task. PR is Precision-Recall AUC and ROC is ROC AUC. The models marked with ∗ use pretrained Word2Vec embeddings. The best results are in bold.", "Table 3: Experimental results for generative intervention task. Inp. Set.: Input Setting (Section 6.1). B: BLEU. R: ROUGE-L. M: METEOR. Best results are in bold.", "Table 4: Human evaluation results. Table values are the percentage of the answers. Eff.: Effectiveness, evaluates how well the generated responses can mitigate hate speech. Div: Diversity, evaluates how many different responses are generated. Best results are in bold." ], "file": [ "1-Figure1-1.png", "3-Table1-1.png", "4-Figure2-1.png", "4-Figure3-1.png", "7-Figure4-1.png", "7-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png" ] }
[ "What types of hate speech are considered?" ]
[ [ "1909.04251-Dataset Collection ::: Data Filtering-0", "1909.04251-Dataset Collection ::: Crowd-Sourcing-4" ] ]
[ " Potentially hateful comments are identified using hate keywords." ]
84