id (string, length 10) | title (string, lengths 19-145) | abstract (string, lengths 273-1.91k) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64, 0-887)
---|---|---|---|---|---|---|---|---|---|
1603.04553 | Unsupervised Ranking Model for Entity Coreference Resolution | Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our unsupervised system achieves 58.44% F1 score of the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012), outperforming the Stanford deterministic system (Lee et al., 2013) by 3.01%. | {
"paragraphs": [
[
"Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 all benefit from entity coreference information.",
"Entity coreference resolution is the task of identifying mentions (i.e., noun phrases) in a text or dialogue that refer to the same real-world entities. In recent years, several supervised entity coreference resolution systems have been proposed, which, according to ng:2010:ACL, can be categorized into three classes — mention-pair models BIBREF7 , entity-mention models BIBREF8 , BIBREF9 , BIBREF10 and ranking models BIBREF11 , BIBREF12 , BIBREF13 — among which ranking models recently obtained state-of-the-art performance. However, the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages BIBREF14 . That makes unsupervised approaches, which only require unannotated text for training, a desirable solution to this problem.",
"Several unsupervised learning algorithms have been applied to coreference resolution. haghighi-klein:2007:ACLMain presented a mention-pair nonparametric fully-generative Bayesian model for unsupervised coreference resolution. Based on this model, ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. poon-domingos:2008:EMNLP proposed an entity-mention model that is able to perform joint inference across mentions by using Markov Logic. Unfortunately, these unsupervised systems' performance on accuracy significantly falls behind those of supervised systems, and are even worse than the deterministic rule-based systems. Furthermore, there is no previous work exploring the possibility of developing an unsupervised ranking model which achieved state-of-the-art performance under supervised settings for entity coreference resolution.",
"In this paper, we propose an unsupervised generative ranking model for entity coreference resolution. Our experimental results on the English data from the CoNLL-2012 shared task BIBREF0 show that our unsupervised system outperforms the Stanford deterministic system BIBREF1 by 3.01% absolute on the CoNLL official metric. The contributions of this work are (i) proposing the first unsupervised ranking model for entity coreference resolution. (ii) giving empirical evaluations of this model on benchmark data sets. (iii) considerably narrowing the gap to supervised coreference resolution accuracy."
],
[
"In the following, $D = \\lbrace m_0, m_1, \\ldots , m_n\\rbrace $ represents a generic input document which is a sequence of coreference mentions, including the artificial root mention (denoted by $m_0$ ). The method to detect and extract these mentions is discussed later in Section \"Mention Detection\" . Let $C = \\lbrace c_1, c_2, \\ldots , c_n\\rbrace $ denote the coreference assignment of a given document, where each mention $m_i$ has an associated random variable $c_i$ taking values in the set $\\lbrace 0, i, \\ldots , i-1\\rbrace $ ; this variable specifies $m_i$ 's selected antecedent ( $c_i \\in \\lbrace 1, 2, \\ldots , i-1\\rbrace $ ), or indicates that it begins a new coreference chain ( $c_i = 0$ )."
],
[
"The following is a straightforward way to build a generative model for coreference: ",
"$$\\begin{array}{rcl}\nP(D, C) & = & P(D|C)P(C) \\\\\n& = & \\prod \\limits _{j=1}^{n}P(m_j|m_{c_j})\\prod \\limits _{j=1}^{n}P(c_j|j)\n\\end{array}$$ (Eq. 3) ",
"where we factorize the probabilities $P(D|C)$ and $P(C)$ into each position $j$ by adopting appropriate independence assumptions that given the coreference assignment $c_j$ and corresponding coreferent mention $m_{c_j}$ , the mention $m_j$ is independent with other mentions in front of it. This independent assumption is similar to that in the IBM 1 model on machine translation BIBREF15 , where it assumes that given the corresponding English word, the aligned foreign word is independent with other English and foreign words. We do not make any independent assumptions among different features (see Section \"Features\" for details).",
"Inference in this model is efficient, because we can compute $c_j$ separately for each mention: $\nc^*_j = \\operatornamewithlimits{argmax}\\limits _{c_j} P(m_j|m_{c_j}) P(c_j|j)\n$ ",
"The model is a so-called ranking model because it is able to identify the most probable candidate antecedent given a mention to be resolved."
],
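Because the antecedent of each mention can be chosen independently under this factorization, decoding reduces to a per-mention argmax. The following Python sketch illustrates that inference step for the basic model; the list-of-mentions input and the dictionary `t` holding the emission probabilities are illustrative assumptions, not the authors' implementation.

```python
def resolve_antecedents(mentions, t):
    """Pick the most probable antecedent for each mention under the basic
    generative ranking model: c_j* = argmax_k P(m_j | m_k) * P(c_j = k | j).

    `mentions[0]` is the artificial root mention m_0; `t[(m_j, m_k)]` is a
    hypothetical lookup table for the emission probability t(m_j | m_k).
    """
    assignments = {}
    for j in range(1, len(mentions)):
        # P(c_j | j) is uniform over the j candidate positions {0, ..., j-1},
        # so it does not change the argmax, but it is kept here for clarity.
        prior = 1.0 / j
        scores = [t.get((mentions[j], mentions[k]), 1e-12) * prior
                  for k in range(j)]
        assignments[j] = max(range(j), key=lambda k: scores[k])
    return assignments
```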
[
"According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:",
" $\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .",
" $\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.",
" $\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions.",
"Now, we can extend the generative model in Eq. 3 to: $\n\\begin{array}{rcl}\n& & P(D, C) = P(D, C, \\Pi ) \\\\\n& = & \\prod \\limits _{j=1}^{n}P(m_j|m_{c_j}, \\pi _j) P(c_j|\\pi _j, j) P(\\pi _j|j)\n\\end{array}\n$ ",
"where we define $P(\\pi _j|j)$ to be uniform distribution. We model $P(m_j|m_{c_j}, \\pi _j)$ and $P(c_j|\\pi _j, j)$ in the following way: $\n\\begin{array}{l}\nP(m_j|m_{c_j}, \\pi _j) = t(m_j|m_{c_j}, \\pi _j) \\\\\nP(c_j|\\pi _j, j) = \\left\\lbrace \\begin{array}{ll}\nq(c_j|\\pi _j, j) & \\pi _j = attr \\\\\n\\frac{1}{j} & \\textrm {otherwise}\n\\end{array}\\right.\n\\end{array}\n$ ",
"where $\\theta = \\lbrace t, q\\rbrace $ are parameters of our model. Note that in the attribute-matching mode ( $\\pi _j = attr$ ) we model $P(c_j|\\pi _j, j)$ with parameter $q$ , while in the other two modes, we use the uniform distribution. It makes sense because the position information is important for coreference resolved by matching attributes of two mentions such as resolving pronoun coreference, but not that important for those resolved by matching text or special relations like two mentions referring the same person and matching by the name. [t] Learning Model with EM Initialization: Initialize $\\theta _0 = \\lbrace t_0, q_0\\rbrace $ ",
" $t=0$ to $T$ set all counts $c(\\ldots ) = 0$ ",
"each document $D$ $j=1$ to $n$ $k=0$ to $j - 1$ $L_{jk} = \\frac{t(m_j|m_k,\\pi _j)q(k|\\pi _j, j)}{\\sum \\limits _{i = 0}^{j-1} t(m_j|m_i,\\pi _j)q(i|\\pi _j, j)}$ ",
" $c(m_j, m_k, \\pi _j) \\mathrel {+}= L_{jk}$ ",
" $c(m_k, \\pi _j) \\mathrel {+}= L_{jk}$ ",
" $c(k, j, \\pi _j) \\mathrel {+}= L_{jk}$ ",
" $c(j, \\pi _j) \\mathrel {+}= L_{jk}$ Recalculate the parameters $t(m|m^{\\prime }, \\pi ) = \\frac{c(m, m^{\\prime }, \\pi )}{c(m^{\\prime }, \\pi )}$ ",
" $q(k, j, \\pi ) = \\frac{c(k, j, \\pi )}{c(j, \\pi )}$ "
],
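To make the update loop above concrete, here is a minimal Python sketch of one EM iteration that follows the pseudo-code: an E-step that computes the posteriors $L_{jk}$ and accumulates expected counts, followed by an M-step that renormalizes them. The document representation (a list of (mention, mode) pairs with the artificial root at index 0) and the dictionary-based parameter tables are assumptions made for illustration, not the authors' code.

```python
from collections import defaultdict

def em_iteration(documents, t, q):
    """One EM iteration mirroring the pseudo-code above (a sketch, not the
    released implementation). Each document is a list of (mention, mode)
    pairs with the artificial root at index 0; `t` and `q` are dictionaries
    holding the current parameter values."""
    c_mm = defaultdict(float)   # c(m_j, m_k, pi)
    c_m = defaultdict(float)    # c(m_k, pi)
    c_kj = defaultdict(float)   # c(k, j, pi)
    c_j = defaultdict(float)    # c(j, pi)
    for doc in documents:
        for j in range(1, len(doc)):
            m_j, pi_j = doc[j]
            # E-step: posterior L_jk over candidate antecedents k < j,
            # with a uniform fallback 1/j when q has no entry (non-attr modes).
            scores = [t.get((m_j, doc[k][0], pi_j), 1e-12) *
                      q.get((k, j, pi_j), 1.0 / j) for k in range(j)]
            z = sum(scores)
            for k in range(j):
                L_jk = scores[k] / z
                c_mm[(m_j, doc[k][0], pi_j)] += L_jk
                c_m[(doc[k][0], pi_j)] += L_jk
                c_kj[(k, j, pi_j)] += L_jk
                c_j[(j, pi_j)] += L_jk
    # M-step: renormalize the expected counts into new parameters.
    new_t = {(m, mp, pi): c / c_m[(mp, pi)] for (m, mp, pi), c in c_mm.items()}
    new_q = {(k, j, pi): c / c_j[(j, pi)] for (k, j, pi), c in c_kj.items()}
    return new_t, new_q
```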
[
"In this section, we describe the features we use to represent mentions. Specifically, as shown in Table 1 , we use different features under different resolution modes. It should be noted that only the Distance feature is designed for parameter $q$ , all other features are designed for parameter $t$ ."
],
[
"For model learning, we run EM algorithm BIBREF19 on our Model, treating $D$ as observed data and $C$ as latent variables. We run EM with 10 iterations and select the parameters achieving the best performance on the development data. Each iteration takes around 12 hours with 10 CPUs parallelly. The best parameters appear at around the 5th iteration, according to our experiments.The detailed derivation of the learning algorithm is shown in Appendix A. The pseudo-code is shown is Algorithm \"Resolution Mode Variables\" . We use uniform initialization for all the parameters in our model.",
"Several previous work has attempted to use EM for entity coreference resolution. cherry-bergsma:2005 and charniak-elsner:2009 applied EM for pronoun anaphora resolution. ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. Recently, moosavi2014 proposed an unsupervised model utilizing the most informative relations and achieved competitive performance with the Stanford system."
],
[
"The basic rules we used to detect mentions are similar to those of Lee:2013:CL, except that their system uses a set of filtering rules designed to discard instances of pleonastic it, partitives, certain quantified noun phrases and other spurious mentions. Our system keeps partitives, quantified noun phrases and bare NP mentions, but discards pleonastic it and other spurious mentions."
],
[
"Datasets. Due to the availability of readily parsed data, we select the APW and NYT sections of Gigaword Corpus (years 1994-2010) BIBREF20 to train the model. Following previous work BIBREF3 , we remove duplicated documents and the documents which include fewer than 3 sentences. The development and test data are the English data from the CoNLL-2012 shared task BIBREF0 , which is derived from the OntoNotes corpus BIBREF21 . The corpora statistics are shown in Table 2 . Our system is evaluated with automatically extracted mentions on the version of the data with automatic preprocessing information (e.g., predicted parse trees).",
"Evaluation Metrics. We evaluate our model on three measures widely used in the literature: MUC BIBREF22 , B $^{3}$ BIBREF23 , and Entity-based CEAF (CEAF $_e$ ) BIBREF24 . In addition, we also report results on another two popular metrics: Mention-based CEAF (CEAF $_m$ ) and BLANC BIBREF25 . All the results are given by the latest version of CoNLL-2012 scorer "
],
[
"Table 3 illustrates the results of our model together as baseline with two deterministic systems, namely Stanford: the Stanford system BIBREF10 and Multigraph: the unsupervised multigraph system BIBREF26 , and one unsupervised system, namely MIR: the unsupervised system using most informative relations BIBREF27 . Our model outperforms the three baseline systems on all the evaluation metrics. Specifically, our model achieves improvements of 2.93% and 3.01% on CoNLL F1 score over the Stanford system, the winner of the CoNLL 2011 shared task, on the CoNLL 2012 development and test sets, respectively. The improvements on CoNLL F1 score over the Multigraph model are 1.41% and 1.77% on the development and test sets, respectively. Comparing with the MIR model, we obtain significant improvements of 2.62% and 3.02% on CoNLL F1 score.",
"To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ."
],
[
"We proposed a new generative, unsupervised ranking model for entity coreference resolution into which we introduced resolution mode variables to distinguish mentions resolved by different categories of information. Experimental results on the data from CoNLL-2012 shared task show that our system significantly improves the accuracy on different evaluation metrics over the baseline systems.",
"One possible direction for future work is to differentiate more resolution modes. Another one is to add more precise or even event-based features to improve the model's performance."
],
[
"This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.",
"Appendix A. Derivation of Model Learning",
"Formally, we iteratively estimate the model parameters $\\theta $ , employing the following EM algorithm:",
"For simplicity, we denote: $\n{\\small \\begin{array}{rcl}\nP(C|D; \\theta ) & = & \\tilde{P}(C|D) \\\\\nP(C|D; \\theta ^{\\prime }) & = & P(C|D)\n\\end{array}}\n$ ",
"In addition, we use $\\tau (\\pi _j|j)$ to denote the probability $P(\\pi _j|j)$ which is uniform distribution in our model. Moreover, we use the following notation for convenience: $\n{\\small \\theta (m_j, m_k, j, k, \\pi _j) = t(m_j|m_k, \\pi _j) q(k|\\pi _j, j) \\tau (\\pi _j|j)\n}\n$ ",
"Then, we have $\n{\\scriptsize {\n\\begin{array}{rl}\n& E_{\\tilde{P}(c|D)} [\\log P(D, C)] \\\\\n= & \\sum \\limits _{C} \\tilde{P}(C|D) \\log P(D, C) \\\\\n= & \\sum \\limits _{C} \\tilde{P}(C|D) \\big (\\sum \\limits _{j=1}^{n} \\log t(m_j|m_{c_j}, \\pi _j) + \\log q(c_j|\\pi _j, j) + \\log \\tau (\\pi _j|j) \\big ) \\\\\n= & \\sum \\limits _{j=1}^{n} \\sum \\limits _{k=0}^{j-1} L_{jk} \\big (\\log t(m_j|m_k, \\pi _j) + \\log q(k|\\pi _j, j) + \\log \\tau (\\pi _j|j) \\big )\n\\end{array}}}\n$ ",
"Then the parameters $t$ and $q$ that maximize $E_{\\tilde{P}(c|D)} [\\log P(D, C)]$ satisfy that $\n{\\small \\begin{array}{rcl}\nt(m_j|m_k, \\pi _j) & = & \\frac{L_{jk}}{\\sum \\limits _{i = 1}^{n} L_{ik}} \\\\\nq(k|\\pi _j, j) & = & \\frac{L_{jk}}{\\sum \\limits _{i = 0}^{j-1} L_{ji}}\n\\end{array}}\n$ ",
"where $L_{jk}$ can be calculated by $\n{\\small \\begin{array}{rcl}\nL_{jk} & = & \\sum \\limits _{C, c_j=k} \\tilde{P}(C|D) = \\frac{\\sum \\limits _{C, c_j=k} \\tilde{P}(C, D)}{\\sum \\limits _{C} \\tilde{P}(C, D)} \\\\\n& = & \\frac{\\sum \\limits _{C, c_j=k}\\prod \\limits _{i = 1}^{n}\\tilde{\\theta }(m_i, m_{c_i}, c_i, i, \\pi _i)}{\\sum \\limits _{C}\\prod \\limits _{i = 1}^{n}\\tilde{\\theta }(m_i, m_{c_i}, c_i, i, \\pi _i)} \\\\\n& = & \\frac{\\tilde{\\theta }(m_j, m_k, k, j, \\pi _j)\\sum \\limits _{C(-j)}\\tilde{P}(C(-j)|D)}{\\sum \\limits _{i=0}^{j-1}\\tilde{\\theta }(m_j, m_i, i, j, \\pi _j)\\sum \\limits _{C(-j)}\\tilde{P}(C(-j)|D)} \\\\\n& = & \\frac{\\tilde{\\theta }(m_j, m_k, k, j, \\pi _j)}{\\sum \\limits _{i=0}^{j-1}\\tilde{\\theta }(m_j, m_i, i, j, \\pi _j)} \\\\\n& = & \\frac{\\tilde{t}(m_j|m_k, \\pi _j) \\tilde{q}(k|\\pi _j, j) \\tilde{\\tau }(\\pi _j|j)}{\\sum \\limits _{i=0}^{j-1}\\tilde{t}(m_j|m_i, \\pi _j) \\tilde{q}(i|\\pi _j, j) \\tilde{\\tau }(\\pi _j|j)} \\\\\n& = & \\frac{\\tilde{t}(m_j|m_k, \\pi _j) \\tilde{q}(k|\\pi _j, j)}{\\sum \\limits _{i=0}^{j-1}\\tilde{t}(m_j|m_i, \\pi _j) \\tilde{q}(i|\\pi _j, j)}\n\\end{array}}\n$ ",
"where $C(-j) = \\lbrace c_1, \\ldots , c_{j-1}, c_{j+1}, \\ldots , c_{n}\\rbrace $ . The above derivations correspond to the learning algorithm in Algorithm \"Resolution Mode Variables\" . "
]
],
"section_name": [
"Introduction",
"Notations and Definitions",
"Generative Ranking Model",
"Resolution Mode Variables",
"Features",
"Model Learning",
"Mention Detection",
"Experimental Setup",
"Results and Comparison",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"f4cf4054065d62aef6d53f8571b081345695a0b6"
],
"answer": [
{
"evidence": [
"According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:",
"$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .",
"$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.",
"$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:\n\n$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .\n\n$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.\n\n$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
},
{
"annotation_id": [
"cfe30b450534f64f88a0f4a8eb5ec6c9697074e1"
],
"answer": [
{
"evidence": [
"According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:",
"$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .",
"$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.",
"$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions."
],
"extractive_spans": [],
"free_form_answer": "Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved.",
"highlighted_evidence": [
"Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:\n\n$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .\n\n$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.\n\n$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"0d325f52efb19aff203c0364700f5b861a17176a"
],
"answer": [
{
"evidence": [
"To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ."
],
"extractive_spans": [],
"free_form_answer": "No, supervised models perform better for this task.",
"highlighted_evidence": [
"Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Are resolution mode variables hand crafted?",
"What are resolution model variables?",
"Is the model presented in the paper state of the art?"
],
"question_id": [
"80de3baf97a55ea33e0fe0cafa6f6221ba347d0a",
"f5707610dc8ae2a3dc23aec63d4afa4b40b7ec1e",
"e76139c63da0f861c097466983fbe0c94d1d9810"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"coreference",
"coreference",
"coreference"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Feature set for representing a mention under different resolution modes. The Distance feature is for parameter q, while all other features are for parameter t.",
"Table 2: Corpora statistics. “ON-Dev” and “ON-Test” are the development and testing sets of the OntoNotes corpus.",
"Table 3: F1 scores of different evaluation metrics for our model, together with two deterministic systems and one unsupervised system as baseline (above the dashed line) and seven supervised systems (below the dashed line) for comparison on CoNLL 2012 development and test datasets."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What are resolution model variables?",
"Is the model presented in the paper state of the art?"
] | [
[
"1603.04553-Resolution Mode Variables-0"
],
[
"1603.04553-Results and Comparison-1"
]
] | [
"Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved.",
"No, supervised models perform better for this task."
] | 189 |
1709.10217 | The First Evaluation of Chinese Human-Computer Dialogue Technology | In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. We detail the evaluation scheme, tasks, metrics and how the data for training, development and test were collected and annotated. The evaluation includes two tasks, namely user intent classification and online testing of task-oriented dialogue. To account for the different sources of data for training and development, the first task is further divided into two subtasks. Both tasks come from real problems encountered when using applications developed by industry. The evaluation data is provided by the iFLYTEK Corporation. We also publish the evaluation results to present the current performance of the participants in the two tasks of Chinese human-computer dialogue technology. Moreover, we analyze the existing problems of human-computer dialogue as well as the evaluation scheme itself. | {
"paragraphs": [
[
"Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc.",
"Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system.",
"From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue.",
"To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail.",
"The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections."
],
[
"The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue."
],
[
"In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information.",
"In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance.",
"It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric."
],
[
"For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following:",
"“æ¥è¯¢æ天ä»åå°æ»¨å°å京çæé´è½¯å§ç«è½¦ç¥¨ï¼ä¸ä¸éºåå¯ã",
"Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.”",
"In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination.",
"We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following.",
"Task completion ratio: The number of completed tasks divided by the number of total tasks.",
"User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.",
"Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.",
"Number of dialogue turns: The number of utterances in a task-completed dialogue.",
"Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide.",
"For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30."
],
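As a concrete illustration of how the five metrics above could be aggregated from per-dialogue tester judgments, the sketch below averages hypothetical annotation records and applies the 30-turn penalty rule; the field names are assumptions made for illustration, and the snippet is not part of the released evaluation toolkit.

```python
def aggregate_task2_scores(dialogues):
    """Aggregate per-dialogue tester judgments into the five task 2 metrics.

    Each dialogue record is assumed to contain: `completed` (bool),
    `satisfaction` (-2..2), `fluency` (-1..1), `turns` (int) and
    `guidance` (0 or 1). This is an illustrative sketch only."""
    n = len(dialogues)
    completion_ratio = sum(d["completed"] for d in dialogues) / n
    satisfaction = sum(d["satisfaction"] for d in dialogues) / n
    fluency = sum(d["fluency"] for d in dialogues) / n
    # Penalty rule: an unfinished task is forcibly ended and counted as 30 turns.
    turns = sum(min(d["turns"], 30) if d["completed"] else 30
                for d in dialogues) / n
    guidance = sum(d["guidance"] for d in dialogues) / n
    return {
        "task_completion_ratio": completion_ratio,
        "user_satisfaction": satisfaction,
        "response_fluency": fluency,
        "avg_dialogue_turns": turns,
        "guidance_ability": guidance,
    }
```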
[
"In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation.",
"For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test.",
"For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017."
],
[
"There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper.",
"Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2."
],
[
"In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks."
],
[
"We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the research center for social computing and information retrieval for their support on the data annotation, establishing the system testing environment and the communication to the participants and help connect their systems to the testing environment."
]
],
"section_name": [
"Introduction",
"The First Evaluation of Chinese Human-Computer Dialogue Technology",
"Task 1: User Intent Classification",
"Task 2: Online Testing of Task-oriented Dialogue",
"Evaluation Data",
"Evaluation Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"38e82d8bcf6c074c9c9690831b23216b9e65f5e8"
],
"answer": [
{
"evidence": [
"From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue."
],
"extractive_spans": [
"no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9db4359f2b369a8c04c24e66e99cfcf8d9a8b0c2"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"6227c4c03516328f445fb939a101273c7ca1450d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"785eb17b1dacacf3f1abf57eb7ab48225281bd10"
],
"answer": [
{
"evidence": [
"In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance."
],
"extractive_spans": [
"two"
],
"free_form_answer": "",
"highlighted_evidence": [
"In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0d7edc2e80198c1a663b10d64a1cb930426b3f41"
],
"answer": [
{
"evidence": [
"There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper.",
"Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2.",
"FLOAT SELECTED: Table 4: Top 5 results of the closed test of the task 1.",
"FLOAT SELECTED: Table 5: Top 5 results of the open test of the task 1.",
"FLOAT SELECTED: Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively."
],
"extractive_spans": [],
"free_form_answer": "For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2",
"highlighted_evidence": [
"Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively.",
"Therefore, Table TABREF16 shows the complete results of the task 2.",
"FLOAT SELECTED: Table 4: Top 5 results of the closed test of the task 1.",
"FLOAT SELECTED: Table 5: Top 5 results of the open test of the task 1.",
"FLOAT SELECTED: Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4cd97a7c1f31679ac84b6c96290bfbac66d90c41"
],
"answer": [
{
"evidence": [
"It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric.",
"We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following.",
"Task completion ratio: The number of completed tasks divided by the number of total tasks.",
"User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.",
"Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.",
"Number of dialogue turns: The number of utterances in a task-completed dialogue.",
"Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide."
],
"extractive_spans": [
"For task 1, we use F1-score",
"Task completion ratio",
"User satisfaction degree",
"Response fluency",
"Number of dialogue turns",
"Guidance ability for out of scope input"
],
"free_form_answer": "",
"highlighted_evidence": [
"For task 1, we use F1-score as evaluation metric.",
"We use manual evaluation for task 2.",
"There are five evaluation metrics for task 2 as following.\n\nTask completion ratio: The number of completed tasks divided by the number of total tasks.\n\nUser satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.\n\nResponse fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.\n\nNumber of dialogue turns: The number of utterances in a task-completed dialogue.\n\nGuidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"What problems are found with the evaluation scheme?",
"How is the data annotated?",
"What collection steps do they mention?",
"How many intents were classified?",
"What was the result of the highest performing system?",
"What metrics are used in the evaluation?"
],
"question_id": [
"b8b588ca1e876b3094ae561a875dd949c8965b2e",
"2ec640e6b4f1ebc158d13ee6589778b4c08a04c8",
"ab0bb4d0a9796416d3d7ceba0ba9ab50c964e9d6",
"0460019eb2186aef835f7852fc445b037bd43bb7",
"96c09ece36a992762860cde4c110f1653c110d96",
"a9cc4b17063711c8606b8fc1c5eaf057b317a0c9"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: A brief comparison of the open domain chit-chat system and the task-oriented dialogue system.",
"Table 1: An example of user intent with category information.",
"Table 2: An example of the task-oriented human-computer dialogue.",
"Table 3: The statistics of the released data for task 1.",
"Table 4: Top 5 results of the closed test of the task 1.",
"Table 5: Top 5 results of the open test of the task 1.",
"Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png"
]
} | [
"What was the result of the highest performing system?"
] | [
[
"1709.10217-5-Table6-1.png",
"1709.10217-Evaluation Results-1",
"1709.10217-4-Table5-1.png",
"1709.10217-Evaluation Results-0",
"1709.10217-4-Table4-1.png"
]
] | [
"For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2"
] | 190 |
1901.02262 | Multi-style Generative Reading Comprehension | This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike most studies on RC that have focused on extracting an answer span from the provided passages, our model instead focuses on generating a summary from the question and multiple passages. This serves to cover various answer styles required for real-world applications. Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved. This also enables our model to give an answer in the target style. Experiments show that our model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the transfer of the style-independent NLG capability to the target style is the key to its success. | {
"paragraphs": [
[
"Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 .",
"The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions.",
"Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model.",
"In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities."
],
[
"The task considered in this paper, is defined as:",
"Problem 1 Given a question with $J$ words $x^q = \\lbrace x^q_1, \\ldots , x^q_J\\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \\lbrace x^{p_k}_1, \\ldots , x^{p_k}_{L}\\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \\lbrace y_1, \\ldots , y_T \\rbrace $ conditioned on the style.",
"In short, for inference, given a set of 3-tuples $(x^q, \\lbrace x^{p_k}\\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \\lbrace x^{p_k}\\rbrace , s, y, a, \\lbrace r^{p_k}\\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise."
],
[
"Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style.",
"Masque directly models the conditional probability $p(y|x^q, \\lbrace x^{p_k}\\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. It consists of the following modules.",
" 1 The question-passages reader (§ \"Question-Passages Reader\" ) models interactions between the question and passages.",
" 2 The passage ranker (§ \"Passage Ranker\" ) finds relevant passages to the question.",
" 3 The answer possibility classifier (§ \"Answer Possibility Classifier\" ) identifies answerable questions.",
" 4 The answer sentence decoder (§ \"Answer Sentence Decoder\" ) outputs a sequence of words conditioned on the style."
],
[
"Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured.",
"Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \\in \\mathbb {R}^{d_\\mathrm {word} \\times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages.",
"This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \\in \\mathbb {R}^{d \\times L}$ for the $k$ -th passage and $E^q \\in \\mathbb {R}^{d \\times J}$ for the question.",
"It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\\mathrm {LayerNorm}(f(x)+x)$ , where $\\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence.",
"This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism.",
"It first computes a similarity matrix $U^{p_k} \\in \\mathbb {R}^{L{\\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where ",
"$$U^{p_k}_{lj} = {w^a}^\\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \\odot E^q_j ]$$ (Eq. 15) ",
" indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \\in \\mathbb {R}^{3d}$ are learnable parameters. The $\\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \\mathrm {softmax}_j({U^{p_k}}^\\top ) \\in \\mathbb {R}^{J\\times L}$ and $B^{p_k} = \\mathrm {softmax}_{l}(U^{p_k}) \\in \\mathbb {R}^{L \\times J}$ . We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \\rightarrow p_k} \\in \\mathbb {R}^{5d \\times L}$ : ",
"$$\\nonumber [E^{p_k}; \\bar{A}^{p_k}; \\bar{\\bar{A}}^{p_k}; E^{p_k} \\odot \\bar{A}^{p_k}; E^{p_k} \\odot \\bar{\\bar{A}}^{p_k}]$$ (Eq. 16) ",
" and passage-to-question ones $G^{p \\rightarrow q} \\in \\mathbb {R}^{5d \\times J}$ : ",
"$$\\begin{split}\n\\nonumber & [ E^{q} ; \\max _k(\\bar{B}^{p_k}); \\max _k(\\bar{\\bar{B}}^{p_k}); \\\\\n&\\hspace{10.0pt} E^{q} \\odot \\max _k(\\bar{B}^{p_k}); E^{q} \\odot \\max _k(\\bar{\\bar{B}}^{p_k}) ] \\mathrm {\\ \\ where}\n\\end{split}\\\\\n\\nonumber &\\bar{A}^{p_k} = E^q A^{p_k}\\in \\mathbb {R}^{d \\times L}, \\ \\bar{B}^{p_k} = E^{p_k} B^{p_k} \\in \\mathbb {R}^{d \\times J} \\\\\n\\nonumber &\\bar{\\bar{A}}^{p_k} = \\bar{B}^{p_k} A^{p_k} \\in \\mathbb {R}^{d \\times L}, \\ \\bar{\\bar{B}}^{p_k} = \\bar{A}^{p_k} B^{p_k} \\in \\mathbb {R}^{d \\times J}.$$ (Eq. 17) ",
"This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \\in \\mathbb {R}^{d \\times J}$ from $G^{p \\rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \\in \\mathbb {R}^{d \\times L}$ from $G^{q \\rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\\lbrace M^{p_k}\\rbrace $ , are passed on to the answer sentence decoder; the $\\lbrace M^{p_k}\\rbrace $ are also passed on to the passage ranker and answer possibility classifier."
],
[
"The passage ranker maps the output of the modeling layer, $\\lbrace M^{p_k}\\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: ",
"$$\\beta ^{p_k} = \\mathrm {sigmoid}({w^r}^\\top M^{p_k}_1),$$ (Eq. 20) ",
" where $w^r \\in \\mathbb {R}^{d}$ are learnable parameters."
],
[
"The answer possibility classifier maps the output of the modeling layer, $\\lbrace M^{p_k}\\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: ",
"$$P(a) = \\mathrm {sigmoid}({w^c}^\\top [M^{p_1}_1; \\ldots ; M^{p_K}_1]),$$ (Eq. 22) ",
" where $w^c \\in \\mathbb {R}^{Kd}$ are learnable parameters."
],
[
"Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step.",
"Let $y = \\lbrace y_1, \\ldots , y_{T}\\rbrace $ represent one-hot vectors of words in the answer. This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ .",
"Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style.",
"This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\\lbrace s_1, \\ldots , s_T\\rbrace $ .",
"In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\\mathrm {all}}$ , respectively. The $M^{p_\\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, ",
"$$M^{p_\\mathrm {all}} = [M^{p_1}, \\ldots , M^{p_K}] \\in \\mathbb {R}^{d \\times KL}.$$ (Eq. 27) ",
" The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages.",
"Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview.",
"Let the extended vocabulary, $V_\\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: ",
"$$P^v(y_t) =\\mathrm {softmax}({W^2}^\\top (W^1 s_t + b^1)),$$ (Eq. 31) ",
" where the output embedding $W^2 \\in \\mathbb {R}^{d_\\mathrm {word} \\times V_\\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \\in \\mathbb {R}^{d_\\mathrm {word} \\times d}$ and $b^1 \\in \\mathbb {R}^{d_\\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ .",
"The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack.",
"The layer takes $s_t$ as the query and outputs $\\alpha ^q_t \\in \\mathbb {R}^J$ ( $\\alpha ^p_t \\in \\mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \\in \\mathbb {R}^d$ ( $c^p_t \\in \\mathbb {R}^d$ ) as the context vectors for the question (passages): ",
"$$e^q_j &= {w^q}^\\top \\tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\\\\n\\alpha ^q_t &= \\mathrm {softmax}(e^q), \\\\\nc^q_t &= \\textstyle \\sum _j \\alpha ^q_{tj} M_j^q, \\\\\ne^{p_k}_l &= {w^p}^\\top \\tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\\\\n\\alpha ^p_t &= \\mathrm {softmax}([e^{p_1}; \\ldots ; e^{p_K}]), \\\\\nc^p_t &= \\textstyle \\sum _{l} \\alpha ^p_{tl} M^{p_\\mathrm {all}}_{l},$$ (Eq. 33) ",
" where $w^q$ , $w^p \\in \\mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \\in \\mathbb {R}^{d \\times d}$ , and $b^q$ , $b^p \\in \\mathbb {R}^d$ are learnable parameters.",
" $P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: ",
"$$P^q(y_t) &= \\textstyle \\sum _{j: x^q_j = y_t} \\alpha ^q_{tj}, \\\\\nP^p(y_t) &= \\textstyle \\sum _{l: x^{p_{k(l)}}_{l} = y_t} \\alpha ^p_{tl},$$ (Eq. 34) ",
" where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages.",
"The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: ",
"$$P(y_t) = \\lambda ^v P^v(y_t) + \\lambda ^q P^q(y_t) + \\lambda ^p P^p(y_t),$$ (Eq. 36) ",
" where the mixture weights are given by ",
"$$\\lambda ^v, \\lambda ^q, \\lambda ^p = \\mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) ",
" $W^m \\in \\mathbb {R}^{3 \\times 3d}$ , $b^m \\in \\mathbb {R}^3$ are learnable parameters.",
"In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\\beta ^{p_k}$ and word-level attentions $\\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: ",
"$$\\alpha ^p_{tl} & := \\frac{\\alpha ^p_{tl} \\beta ^{p_{k(l)} }}{\\sum _{l^{\\prime }} \\alpha ^p_{tl^{\\prime }} \\beta ^{p_{k(l^{\\prime })}}}.$$ (Eq. 39) "
],
[
"We define the training loss as the sum of losses in ",
"$$L(\\theta ) = L_\\mathrm {dec} + \\gamma _\\mathrm {rank} L_\\mathrm {rank} + \\gamma _\\mathrm {cls} L_\\mathrm {cls}$$ (Eq. 41) ",
" where $\\theta $ is the set of all learnable parameters, and $\\gamma _\\mathrm {rank}$ and $\\gamma _\\mathrm {cls}$ are balancing parameters.",
"The loss of the decoder, $L_\\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\\mathrm {able}$ answerable examples: ",
"$$L_\\mathrm {dec} = - \\frac{1}{N_\\mathrm {able}}\\sum _{(a,y)\\in \\mathcal {D}} \\frac{a}{T} \\sum _t \\log P(y_{t}),$$ (Eq. 42) ",
" where $\\mathcal {D}$ is the training dataset.",
"The losses of the passage ranker, $L_\\mathrm {rank}$ , and the answer possibility classifier, $L_\\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: ",
"$$L_\\mathrm {rank} = - \\frac{1}{NK} \\sum _k \\sum _{r^{p_k}\\in \\mathcal {D}}\n\\biggl (\n\\begin{split}\n&r^{p_k} \\log \\beta ^{p_k} + \\\\\n&(1-r^{p_k}) \\log (1-\\beta ^{p_k})\n\\end{split}\n\\biggr ),\\\\\nL_\\mathrm {cls} = - \\frac{1}{N} \\sum _{a \\in \\mathcal {D}}\n\\biggl (\n\\begin{split}\n&a \\log P(a) + \\\\\n&(1-a) \\log (1-P(a))\n\\end{split}\n\\biggr ).$$ (Eq. 43) "
],
[
"We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL.",
"We trained our model on a machine with eight NVIDIA P100 GPUs. Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\\mathrm {ext}$ was 5,000.",
"We used the Adam optimization BIBREF27 with $\\beta _1 = 0.9$ , $\\beta _2 = 0.999$ , and $\\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \\times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\\lambda _\\mathrm {rank}$ and $\\lambda _\\mathrm {cls}$ , were set to 0.5 and 0.1.",
"We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9."
],
[
"Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 .",
"Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers.",
"Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder.",
"Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader.",
"We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans.",
"Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking.",
"Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1.",
"Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type.",
"Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary.",
"Appendix \"Reading Comprehension Examples generated by Masque from MS MARCO 2.1\" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs."
],
[
"We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 .",
"The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification."
]
],
"section_name": [
"Introduction",
"Problem Formulation",
"Proposed Model",
"Question-Passages Reader",
"Passage Ranker",
"Answer Possibility Classifier",
"Answer Sentence Decoder",
"Loss Function",
"Setup",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0d82c8d3a311a9f695cae5bd50584efe3d67651c"
],
"answer": [
{
"evidence": [
"Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 ."
],
"extractive_spans": [
"Rouge-L",
"Bleu-1"
],
"free_form_answer": "",
"highlighted_evidence": [
"In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"522ec998f1f29f60ee09a84c6d9dc833d55f516d"
],
"answer": [
{
"evidence": [
"Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"12db0a9ba3a68b18fe3f729a111881ea824c1e0d"
],
"answer": [
{
"evidence": [
"We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL."
],
"extractive_spans": [],
"free_form_answer": "well-formed sentences vs concise answers",
"highlighted_evidence": [
"The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"547a1dfd18e1e0bd505d93780bde493332c5084a"
],
"answer": [
{
"evidence": [
"We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"2d9168d8c9582e71772671fec99190636993f9bc"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.",
"FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set."
],
"extractive_spans": [],
"free_form_answer": "BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.",
"FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"dacf7e1b0d991a27991f3094a5420d21280d2856"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set."
],
"extractive_spans": [],
"free_form_answer": "Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"18e57e4cefebf25073f74efc3da5763404b4ecd5"
],
"answer": [
{
"evidence": [
"We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL."
],
"extractive_spans": [],
"free_form_answer": "well-formed sentences vs concise answers",
"highlighted_evidence": [
"The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How do they measure the quality of summaries?",
"Does their model also take the expected answer style as input?",
"What do they mean by answer styles?",
"Is there exactly one \"answer style\" per dataset?",
"What are the baselines that Masque is compared against?",
"What is the performance achieved on NarrativeQA?",
"What is an \"answer style\"?"
],
"question_id": [
"6ead576ee5813164684a8cdda36e6a8c180455d9",
"0117aa1266a37b0d2ef429f1b0653b9dde3677fe",
"5455b3cdcf426f4d5fc40bc11644a432fa7a5c8f",
"6c80bc3ed6df228c8ca6e02c0a8a1c2889498688",
"2d274c93901c193cf7ad227ab28b1436c5f410af",
"e63bde5c7b154fbe990c3185e2626d13a1bad171",
"cb8a6f5c29715619a137e21b54b29e9dd48dad7d"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"reading comprehension",
"reading comprehension",
"reading comprehension",
"reading comprehension",
"reading comprehension",
"reading comprehension",
"reading comprehension"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: Visualization of how our model generates an answer on MS MARCO. Given an answer style (top: NLG, bottom: Q&A), the model controls the mixture of three distributions for generating words from a vocabulary and copying words from the question and multiple passages at each decoding step.",
"Figure 2: Masque model architecture.",
"Figure 3: Multi-source pointer-generator mechanism. For each decoding step t, mixture weights λv, λq, λp for the probability of generating words from the vocabulary and copying words from the question and the passages are calculated. The three distributions are weighted and summed to obtain the final distribution.",
"Table 1: Numbers of questions used in the experiments.",
"Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.",
"Table 3: Ablation test results on the NLG dev. set. The models were trained with the subset listed in “Train”.",
"Table 4: Passage ranking results on the ANS dev. set.",
"Figure 4: Precision-recall curve for answer possibility classification on the ALL dev. set.",
"Figure 5: Lengths of answers generated by Masque broken down by the answer style and query type on the NLG dev. set. The error bars indicate standard errors.",
"Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Figure3-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"8-Table5-1.png"
]
} | [
"What do they mean by answer styles?",
"What are the baselines that Masque is compared against?",
"What is the performance achieved on NarrativeQA?",
"What is an \"answer style\"?"
] | [
[
"1901.02262-Setup-0"
],
[
"1901.02262-8-Table5-1.png",
"1901.02262-6-Table2-1.png"
],
[
"1901.02262-8-Table5-1.png"
],
[
"1901.02262-Setup-0"
]
] | [
"well-formed sentences vs concise answers",
"BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D",
"Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87",
"well-formed sentences vs concise answers"
] | 191 |
1906.03338 | Dissecting Content and Context in Argumentative Relation Analysis | When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.). We show that this dependency is much stronger than previously assumed. In fact, we show that by completely masking the EAU text spans and only feeding information from their context, a competitive system may function even better. We argue that an argument analysis system that relies more on discourse context than the argument's content is unsafe, since it can easily be tricked. To alleviate this issue, we separate argumentative units from their context such that the system is forced to model and rely on an EAU's content. We show that the resulting classification system is more robust, and argue that such models are better suited for predicting argumentative relations across documents. | {
"paragraphs": [
[
"In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 .",
"Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”:",
"This example is modeled in Figure FIGREF3 .",
"It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other.",
"In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover)."
],
[
"It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models."
],
[
"In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. In order to test our hypotheses, we propose to group all features into three distinct types."
],
[
"Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context.",
"The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling).",
" BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 ",
" DISPLAYFORM0 ",
"which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios.",
"Another way of framing the task, is to learn a function DISPLAYFORM0 ",
"Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown.",
"Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 ",
"The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph."
],
[
"Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ).",
"For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below.",
"These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators.",
"Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 .",
"If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features.",
"These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span.",
"For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 .",
"We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector.",
"Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors."
],
[
"Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view).",
"The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.",
"The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ).",
"At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings.",
"In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work.",
"A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 :",
"in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources.",
"In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled.",
"In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled.",
"We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors.",
"In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates.",
"The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model.",
"We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 .",
"It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself."
],
[
"While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”).",
"Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks."
],
[
"We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems."
],
[
"This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg. "
]
],
"section_name": [
"Introduction",
"Related Work",
"Argumentative Relation Prediction: Models and Features",
"Models",
"Feature implementation",
"Results",
"Discussion",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"5bd1279173e673acdbf3c6fb54244548d0a580c2"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4495a8db2cca0ea3f8739bb39a50d3102f573607"
],
"answer": [
{
"evidence": [
"The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model."
],
"extractive_spans": [
"performances of a purely content-based model naturally stays stable"
],
"free_form_answer": "",
"highlighted_evidence": [
"While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f607b9d41b945da87473a2955ebb329d6fb80f51"
],
"answer": [
{
"evidence": [
"The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.",
"The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 )."
],
"extractive_spans": [
"BIBREF13",
"majority baseline"
],
"free_form_answer": "",
"highlighted_evidence": [
"The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.",
"The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0db6d0334e20d45c98db1f1c6092c84d70a5da30"
],
"answer": [
{
"evidence": [
"Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors.",
"Results"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level.",
"highlighted_evidence": [
"This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors.\n\nResults"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"367804869bfc09365b3a9eb9790561cb929a9047"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"How do they demonstrate the robustness of their results?",
"What baseline and classification systems are used in experiments?",
"How are the EAU text spans annotated?",
"How are elementary argumentative units defined?"
],
"question_id": [
"a22b900fcd76c3d36b5679691982dc6e9a3d34bf",
"fb2593de1f5cc632724e39d92e4dd82477f06ea1",
"476d0b5579deb9199423bb843e584e606d606bc7",
"eddabb24bc6de6451bcdaa7940f708e925010912",
"f0946fb9df9839977f4d16c43476e4c2724ff772"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: A graph representation of a topic (node w/ dashed line), two argumentative premise units (nodes w/ solid line), premise-topic relations (positive or negative) and premise-premise relations (here: attacks).",
"Figure 2: Production rule extraction from constituency parse for two different argumentative units.",
"Table 1: Data set statistics.",
"Table 2: Baseline system replication results.",
"Table 3: Argumentative relation classification models h, f, g with different access to content and context; models of type CI (content-ignorant) have no access to the EAU span. †: significantly better than mfs baseline (p < 0.005); ‡ significantly better than content-based (p < 0.005).",
"Figure 3: Single-document (top) vs. cross-document (bottom) argumentative relation classification. Black edge: gold label; purple edge: predicted label.",
"Figure 4: Randomized-context test set: models are applied to testing instances with randomly flipped contexts. No-context test set: models can only access the EAU span of a testing instance. A bar below/above zero means that a system that can access context (content-ignorant CI or full-access FA) is worse/better than the content-based baseline CB that only has access to the EAU span (its performance is not affected by modified context, cf. Tab. 3).",
"Figure 5: ANOVA F score percentiles for contentbased vs. content-ignorant features in the training data. A higher feature score suggests greater predictive capacity."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"How are the EAU text spans annotated?"
] | [
[
"1906.03338-Feature implementation-8"
]
] | [
"Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level."
] | 193 |
1602.08741 | Gibberish Semantics: How Good is Russian Twitter in Word Semantic Similarity Task? | The most studied and most successful language models were developed and evaluated mainly for English and other closely related European languages, such as French, German, etc. It is important to study the applicability of these models to other languages. The use of vector space models for Russian was recently studied for multiple corpora, such as Wikipedia, RuWac, and lib.ru. These models were evaluated against the word semantic similarity task. To our knowledge, Twitter has not been considered as a corpus for this task; with this work we fill the gap. Results for vectors trained on the Twitter corpus are comparable in accuracy with other single-corpus trained models, although the best performance is currently achieved by a combination of multiple corpora. | {
"paragraphs": [
[
"Word semantic similarity task is an important part of contemporary NLP. It can be applied in many areas, like word sense disambiguation, information retrieval, information extraction and others. It has long history of improvements, starting with simple models, like bag-of-words (often weighted by TF-IDF score), continuing with more complex ones, like LSA BIBREF0 , which attempts to find “latent” meanings of words and phrases, and even more abstract models, like NNLM BIBREF1 . Latest results are based on neural network experience, but are far more simple: various versions of Word2Vec, Skip-gram and CBOW models BIBREF2 , which currently show the State-of-the-Art results and have proven success with morphologically complex languages like Russian BIBREF3 , BIBREF4 .",
"These are corpus-based approaches, where one computes or trains the model from a large corpus. They usually consider some word context, like in bag-of-words, where model is simple count of how often can some word be seen in context of a word being described. This model anyhow does not use semantic information. A step in semantic direction was made by LSA, which requires SVD transformation of co-occurrence matrix and produces vectors with latent, unknown structure. However, this method is rather computationally expensive, and can rarely be applied to large corpora. Distributed language model was proposed, where every word is initially assigned a random fixed-size vector. During training semantically close vectors (or close by means of context) become closer to each other; as matter of closeness the cosine similarity is usually chosen. This trick enables usage of neural networks and other machine learning techniques, which easily deal with fixed-size real vectors, instead of large and sparse co-occurrence vectors.",
"It is worth mentioning non-corpus based techniques to estimate word semantic similarity. They usually make use of knowledge databases, like WordNet, Wikipedia, Wiktionary and others BIBREF5 , BIBREF6 . It was shown that Wikipedia data can be used in graph-based methods BIBREF7 , and also in corpus-based ones. In this paper we are not focusing on non-corpus based techniques.",
"In this paper we concentrate on usage of Russian Twitter stream as training corpus for Word2Vec model in semantic similarity task, and show results comparable with current (trained on a single corpus). This research is part of molva.spb.ru project, which is a trending topic detection engine for Russian Twitter. Thus the choice of language of interest is narrowed down to only Russian, although there is strong intuition that one can achieve similar results with other languages."
],
[
"The primary goal of this paper is to prove usefulness of Russian Twitter stream as word semantic similarity resource. Twitter is a popular social network, or also called \"microblogging service\", which enables users to share and interact with short messages instantly and publicly (although private accounts are also available). Users all over the world generate hundreds of millions of tweets per day, all over the world, in many languages, generating enormous amount of verbal data.",
"Traditional corpora for the word semantic similarity task are News, Wikipedia, electronic libraries and others (e.g. RUSSE workshop BIBREF4 ). It was shown that type of corpus used for training affects the resulting accuracy. Twitter is not usually considered, and intuition behind this is that probably every-day language is too simple and too occasional to produce good results. On the other hand, the real-time nature of this user message stream seems promising, as it may reveal what certain word means in this given moment.",
"The other counter-argument against Twitter-as-Dataset is the policy of Twitter, which disallows publication of any dump of Twitter messages larger than 50K . However, this policy permits publication of Twitter IDs in any amount. Thus the secondary goal of this paper is to describe how to create this kind of dataset from scratch. We provide the sample of Twitter messages used, as well as set of Twitter IDs used during experiments ."
],
[
"Semantic similarity and relatedness task received significant amount of attention. Several \"Gold standard\" datasets were produced to facilitate the evaluation of algorithms and models, including WordSim353 BIBREF8 , RG-65 BIBREF9 for English language and others. These datasets consist of several pairs of words, where each pair receives a score from human annotators. The score represents the similarity between two words, from 0% (not similar) to 100% (identical meaning, words are synonyms). Usually these scores are filled out by a number of human annotators, for instance, 13 in case of WordSim353 . The inter-annotator agreement is measured and the mean value is put into dataset.",
"Until recent days there was no such dataset for Russian language. To mitigate this the “RUSSE: The First Workshop on Russian Semantic Similarity” BIBREF4 was conducted, producing RUSSE Human-Judgements evaluation dataset (we will refer to it as HJ-dataset). RUSSE dataset was constructed the following way. Firstly, datasets WordSim353, MC BIBREF10 and RG-65 were combined and translated. Then human judgements were obtained by crowdsourcing (using custom implementation). Final size of the dataset is 333 word pairs, it is available on-line.",
"The RUSSE contest was followed by paper from its organizers BIBREF4 and several participators BIBREF3 , BIBREF11 , thus filling the gap in word semantic similarity task for Russian language. In this paper we evaluate a Word2Vec model, trained on Russian Twitter corpus against RUSSE HJ-dataset, and show results comparable to top results of other RUSSE competitors."
],
[
"In this section we describe how we receive data from Twitter, how we filter it and how we feed it to the model."
],
[
"Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.",
"In our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.",
"However, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:",
"russian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да.",
"We evaluated our search query on data obtained from “sample” endpoint, and 95% of Tweets matched it. We consider this coverage as reasonable and now on use “filter” endpoint with the query and language filtering described above. In this paper we work with Tweet stream acquired from 2015/07/21 till 2015/08/04. We refer to parts of the dataset by the day of acquisition: 2015/07/21, etc. Tweet IDs used in our experiments are listed on-line."
],
[
"Corpus-based algorithms like BoW and Word2Vec require text to be tokenized, and sometimes to be stemmed as well. It is common practice to filter out Stop-Words (e.g. BIBREF11 ), but in this work we don’t use it. Morphological richness of Russian language forces us to use stemming, even though models like Word2Vec does not require it. In our experiments stemmed version performs significantly better than unstemmed, so we only report results of stemmed one. To do stemming we use Yandex Tomita Parser , which is an extractor of simple facts from text in Russian language. It is based on Yandex stemmer mystem BIBREF12 . It requires a set of grammar rules and facts (i.e. simple data structures) to be extracted. In this paper we use it with one simple rule:",
"S -> Word interp (SimpleFact.Word);",
"This rule tells parser to interpret each word it sees and return it back immediately. We use Tomita Parser as we find it more user-friendly than mystem. Tomita Parser performs following operations: sentence splitting, tokenization, stemming, removing punctuation marks, transforming words to lowercase. Each Tweet is transformed into one or several lines of tab-separated sequences of words (if there are several sentences or lines in a Tweet). Twitter-specific “Hashtags” and “User mentions” are treated by Tomita Parser as normal words, except that “@” and “#” symbols are stripped off.",
"HJ-dataset contains non-lemmatized words. This is understandable, because the task of this dataset was oriented to human annotators. In several cases plural form is used (consider this pair: \"russianтигр, russianкошачьи\"). In order to compute similarity for those pairs, and having in mind that Twitter data is pre-stemmed, we have to stem HJ-dataset with same parser as well."
],
[
"We use Word2Vec to obtain word vectors from Twitter corpus. In this model word vectors are initialized randomly for each unique word and are fed to a sort of neural network. Authors of Word2Vec propose two different models: Skip-gram and CBOW. The first one is trained to predict the context of the word given just the word vector itself. The second one is somewhat opposite: it is trained to predict the word vector given its context. In our study CBOW always performs worse than Skip-gram, hence we describe only results with Skip-gram model. Those models have several training parameters, namely: vector size, size of vocabulary (or minimal frequency of a word), context size, threshold of downsampling, amount of training epochs. We choose vector size based on size of corpus. We use “context size” as “number of tokens before or after current token”. In all experiments presented in this paper we use one training epoch.",
"There are several implementations of Word2Vec available, including original C utility and a Python library gensim. We use the latter one as we find it more convenient. Output of Tomita Parser is fed directly line-by-line to the model. It produces the set of vectors, which we then query to obtain similarity between word vectors, in order to compute the correlation with HJ-dataset. To compute correlation we use Spearman coefficient, since it was used as accuracy measure in RUSSE BIBREF4 ."
],
[
"In this section we describe properties of data obtained from Twitter, describe experiment protocols and results."
],
[
"In order to train Word2Vec model for semantic similarity task we collected Twitter messages for 15 full days, from 2015/07/21 till 2015/08/04. Each day contains on average 3M of Tweets and 40M of tokens. All properties measured are shown in Table 1. Our first observation was that given one day of Twitter data we cannot estimate all of the words from HJ-dataset, because they appear too rarely. We fixed the frequency threshold on value of 40 occurrences per day and counted how many words from HJ-dataset are below this threshold.",
"Our second observation was that words \"missing\" from HJ-dataset are different from day to day. This is not very surprising having in mind the dynamic nature of Twitter data. Thus estimation of word vectors is different from day to day. In order to estimate the fluctuation of this semantic measure, we conduct training of Word2Vec on each day in our corpus. We fix vector size to 300, context size to 5, downsampling threshold to 1e-3, and minimal word occurrence threshold (also called min-freq) to 40. The results are shown in Table 2. Mean Spearman correlation between daily Twitter splits and HJ-dataset is 0.36 with std.dev. of 0.04. Word pairs for missing words (infrequent ones) were excluded. We also create superset of all infrequent words, i.e. words having frequency below 40 in at least one daily split. This set contains 50 words and produces 76 \"infrequent word\" pairs (out of 333). Every pair containing at least one infrequent word was excluded. On that subset of HJ-dataset mean correlation is 0.29 with std.dev. of 0.03. We consider this to be reasonably stable result."
],
[
"Word2Vec model was designed to be trained on large corpora. There are results of training it in reasonable time with corpus size of 1 billion of tokens BIBREF2 . It was mentioned that accuracy of estimated word vectors improves with size of corpus. Twitter provides an enormous amount of data, thus it is a perfect job for Word2Vec. We fix parameters for the model with following values: vector size of 300, min-freq of 40, context size of 5 and downsampling of 1e-3. We train our model subsequently with 1, 7 and 15 days of Twitter data (each starting with 07/21 and followed by subsequent days) . The largest corpus of 15 days contains 580M tokens. Results of training are shown in Table 3. In this experiment the best result belongs to 7-day corpus with 0.56 correlation with HJ-dataset, and 15-day corpus has a little less, 0.55. This can be explained by following: in order to achieve better results with Word2Vec one should increase both corpus and vector sizes. Indeed, training model with vector size of 600 on full Twitter corpus (15 days) shows the best result of 0.59. It is also worth noting that number of \"missing\" pairs is negligible in 7-days corpus: the only missing word (and pair) is \"russianйель\", Yale, the name of university in the USA. There are no \"missing\" words in 15-days corpus.",
"Training the model on 15-days corpus took 8 hours on our machine with 2 cores and 4Gb of RAM. We have an intuition that further improvements are possible with larger corpus. Comparing our results to ones reported by RUSSE participants, we conclude that our best result of 0.598 is comparable to other results, as it (virtually) encloses the top-10 of results. However, best submission of RUSSE has huge gap in accuracy of 0.16, compared to our Twitter corpus. Having in mind that best results in RUSSE combine several corpora, it is reasonable to compare Twitter results to other single-corpus results. For convenience we replicate results for these corpora, originally presented in BIBREF4 , alongside with our result in Table 5. Given these considerations we conclude that with size of Twitter corpus of 500M one can achieve reasonably good results on task of word semantic similarity."
],
[
"Authors of Word2Vec BIBREF2 and Paragraph Vector BIBREF13 advise to determine the optimal context size for each distinct training session. In our Twitter corpus average length of the sentence appears to be 9.8 with std.dev. of 4.9; it means that most of sentences have less than 20 tokens. This is one of peculiarities of Twitter data: Tweets are limited in size, hence sentences are short. Context size greater than 10 is redundant. We choose to train word vectors with 3 different context size values: 2, 5, 10. We make two rounds of training: first one, with Twitter data from days from 07/21 till 07/25, and second, from 07/26 till 07/30. Results of measuring correlation with HJ-dataset are shown in Table 4. According to these results context size of 5 is slightly better than others, but the difference is negligible compared to fluctuation between several attempts of training."
],
[
"Vector space model is capable to give more information than just measure of semantic distance of two given words. It was shown that word vectors can have multiple degrees of similarity. In particular, it is possible to model simple relations, like \"country\"-\"capital city\", gender, syntactic relations with algebraic operations over these vectors. Authors of BIBREF2 propose to assess quality of these vectors on task of exact prediction of these word relations. However, word vectors learned from Twitter seem to perform poorly on this task. We don’t make systematic research on this subject here because it goes outside of the scope of the current paper, though it is an important direction of future studies.",
"Twitter post often contains three special types of words: user mentions, hashtags and hyperlinks. It can be beneficial to filter them (consider as Stop-Words). In results presented in this paper, and in particular in Tables 3 and 4, we don’t filter such words. It is highly controversial if one should remove hashtags from analysis since they are often valid words or multiwords. It can also be beneficial, in some tasks, to estimate word vectors for a username. Hyperlinks in Twitter posts are mandatory shortened. It is not clear how to treat them: filter out completely, keep them or even un-short them. However, some of our experiments show that filtering of \"User Mentions\" and hyperlinks can improve accuracy on the word semantic relatedness task by 3-5%."
],
[
"In this paper we investigated the use of Twitter corpus for training Word2Vec model for task of word semantic similarity. We described a method to obtain stream of Twitter messages and prepare them for training. We use HJ-dataset, which was created for RUSSE contest BIBREF4 to measure correlation between similarity of word vectors and human judgements on word pairs similarity. We achieve results comparable with results obtained while training Word2Vec on traditional corpora, like Wikipedia and Web pages BIBREF3 , BIBREF11 . This is especially important because Twitter data is highly dynamic, and traditional sources are mostly static (rarely change over time). Thus verbal data acquired from Twitter may be used to estimate word vectors for neologisms, or determine other changes in word semantic, as soon as they appear in human speech."
]
],
"section_name": [
"Introduction",
"Goals of this paper",
"Previous work",
"Data processing",
"Acquiring data",
"Corpus preprocessing",
"Training the model",
"Experimental results",
"Properties of the data",
"Determining optimal corpus size",
"Determining optimal context size",
"Some further observations",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0e0ced62aefb27fde1a0ab5b1516b4455bf569bb"
],
"answer": [
{
"evidence": [
"Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.",
"In our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.",
"However, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:",
"russian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да."
],
"extractive_spans": [],
"free_form_answer": "They collected tweets in Russian language using a heuristic query specific to Russian",
"highlighted_evidence": [
"Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.\n\nIn our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.\n\nHowever, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:\n\nrussian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"Which Twitter corpus was used to train the word vectors?"
],
"question_id": [
"e51d0c2c336f255e342b5f6c3cf2a13231789fed"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter"
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1. Properties of Twitter corpus (15 full days)",
"Table 2. Properties of Twitter corpus (average on daily slices)",
"Table 3. Properties of Twitter corpus (different size)",
"Table 4. RSpearman for different context size",
"Table 5. Comparison with current single-corpus trained results"
],
"file": [
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png"
]
} | [
"Which Twitter corpus was used to train the word vectors?"
] | [
[
"1602.08741-Acquiring data-0",
"1602.08741-Acquiring data-3",
"1602.08741-Acquiring data-1",
"1602.08741-Acquiring data-2"
]
] | [
"They collected tweets in Russian language using a heuristic query specific to Russian"
] | 194 |
1911.12579 | A New Corpus for Low-Resourced Sindhi Language with Word Embeddings | Representing words and phrases as dense vectors of real numbers which encode semantic and syntactic properties is a vital constituent of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned on a large unlabeled corpus. Sindhi is a morphologically rich language spoken by a large population in Pakistan and India, yet it lacks corpora, which play an essential role as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using web-scrappy. Due to the unavailability of open-source preprocessing tools for Sindhi, the preprocessing of such a large corpus becomes a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of the cosine similarity matrix and WordSim-353 are employed for the evaluation of the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently revealed Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe as compared to the SdfastText word representations. | {
"paragraphs": [
[
"Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being taught as a compulsory subject in Schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of Sindhi native speakers. It is also spoken in other countries except for Pakistan and India, where native Sindhi speakers have migrated, such as America, Canada, Hong Kong, British, Singapore, Tanzania, Philippines, Kenya, Uganda, and South, and East Africa. Sindhi has rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. The Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, the Sindhi-Devanagari is also a popular writing system in India being written in left to right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though, Sindhi has great historical and literal background, presently spoken by nearly 75 million people BIBREF1. The research on SNLP was coined in 2002, however, IT grabbed research attention after the development of its Unicode system BIBREF3. But still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources of the raw and annotated corpus, which can be utilized for training robust word embeddings or the use of machine learning algorithms. Since the development of annotated datasets requires time and human resources.",
"The Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources integrated in their software tools including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resources BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language independent NLP applications including semantic analysis, sentiment analysis, parts of the speech tagging, named entity recognition, machine translation BIBREF11, multitasking BIBREF12, BIBREF13. Presently Sindhi Persian-Arabic is frequently used for online communication, newspapers, public institutions in Pakistan, and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpus BIBREF14, BIBREF15, annotated corpus BIBREF16, BIBREF17, BIBREF1, BIBREF18. In the best of our knowledge, Sindhi lacks the large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP).",
"One way to to break out this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. The word embedding is a new term of semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for the mapping of words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationship with neighboring words in a geometric way BIBREF22 BIBREF23. Such as “Einstein” and “Scientist” would have greater similarity compared with “Einstein” and “doctor.” In this way, word embeddings accomplish the important linguistic concept of “a word is characterized by the company it keeps\". More recently NN based models yield state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the word embeddings. One of the advantages of such techniques is they use unsupervised approaches for learning representations and do not require annotated corpus which is rare for low-resourced Sindhi language. Such representions can be trained on large unannotated corpora, and then generated representations can be used in the NLP tasks which uses a small amount of labelled data.",
"In this paper, we address the problems of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed for the filtration of noisy text, e.g., the HTML tags and vocabulary of the English language. The statistical analysis is also presented for the letter, word frequencies and identification of stop-words. Finally, the corpus is utilized to generate Sindhi word embeddings using state-of-the-art GloVe BIBREF26 SG and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation method BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated English WordSim353 word pairs into Sindhi using bilingual English to Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with recently revealed Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of large corpus and generating word embeddings along with systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows:",
"We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words.",
"We develop a text cleaning pipeline for the preprocessing of the raw corpus.",
"Generate word embeddings using GloVe, CBoW, and SG Word2Vec algorithms also evaluate and compare them using the intrinsic evaluation approaches of cosine similarity matrix and WordSim353.",
"We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings.",
"The remaining sections of the paper are organized as; Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consist of statistical analysis of the developed corpus. Section SECREF5 present the experimental setup. The intrinsic evaluation results along with comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion."
],
[
"The natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or softwares. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources integrated in the software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and multilingual toolkit BIBREF9. But Sindhi language is at an early stage for the development of such resources and software tools.",
"The corpus construction for NLP mainly involves important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with the corpus development along with orthographical and morphological features in the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts of speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and machine translation system. But the corpus is acquired only form Wikipedia-dumps. A survey-based study BIBREF4 provides all the progress made in the Sindhi Natural Language Processing (SNLP) with the complete gist of adopted techniques, developed tools and available resources which show that work on resource development on Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources is taken BIBREF16 by open sourcing annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work is presented in Table TABREF9 on the corpus development, word segmentation, and word embeddings, respectively.",
"The power of word embeddings in NLP was empirically estimated by proposing a neural language model BIBREF21 and multitask learning BIBREF12, but recently usage of word embeddings in deep neural algorithms has become integral element BIBREF33 for performance acceleration in deep NLP applications. The CBoW and SG BIBREF27 BIBREF20 popular word2vec neural architectures yielded high quality vector representations in lower computational cost with integration of character-level learning on large corpora in terms of semantic and syntactic word similarity later extended BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words and efficient representation of phrases as well. BIBREF34 proposed NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in an intrinsic evaluation and downstream NLP tasks.",
"The performance of Word embeddings is evaluated using intrinsic BIBREF23 BIBREF29 and extrinsic evaluation BIBREF28 methods. The performance of word embeddings can be measured with intrinsic and extrinsic evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight to find data-driven relevance judgment. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks annotated corpus for such type of evaluation. Moreover, extrinsic evaluation is time consuming and difficult to interpret. Therefore, we opt intrinsic evaluation method BIBREF28 to get a quick insight into the quality of proposed Sindhi word embeddings by measuring the cosine distance between similar words and using WordSim353 dataset. A study reveals that the choice of optimized hyper-parameters BIBREF35 has a great impact on the quality of pretrained word embeddings as compare to desing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using CBoW, SG and GloVe models. The embedding visualization is also useful to visualize the similarity of word clusters. Therefore, we use t-SNE BIBREF36 dimensionality reduction algorithm for compressing high dimensional embedding into 2-dimensional $x$,$y$ coordinate pairs with PCA BIBREF37. The PCA is useful to combine input features by dropping the least important features while retaining the most valuable features."
],
[
"This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings."
],
[
"We initiate this work from scratch by collecting large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization."
],
[
"The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter."
],
[
"The preprocessing of text corpus obtained from multiple web resources is a challenging task specially it becomes more complicated when working on low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline depicted in Figure FIGREF22 for the filtration of unwanted data and vocabulary of other languages such as English to prepare input for word embeddings. Whereas, the involved preprocessing steps are described in detail below the Figure FIGREF22. Moreover, we reveal the list of Sindhi stop words BIBREF38 which is labor intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. The partial list of Sindhi stop words is given in TABREF61. We use python programming language for designing the preprocessing pipeline using regex and string functions.",
"Input: The collected text documents were concatenated for the input in UTF-8 format.",
"Replacement symbols: The punctuation marks of a full stop, hyphen, apostrophe, comma, quotation, and exclamation marks replaced with white space for authentic tokenization because without replacing these symbols with white space the words were found joined with their next or previous corresponding words.",
"Filtration of noisy data: The text acquisition from web resources contain a huge amount of noisy data. Therefore, we filtered out unimportant data such as the rest of the punctuation marks, special characters, HTML tags, all types of numeric entities, email, and web addresses.",
"Normalization: In this step, We tokenize the corpus then normalize to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were only filtered out for preparing input for GloVe. However, the sub-sampling approach in CBoW and SG can discard most frequent or stop words automatically."
],
[
"The NN based approaches have produced state-of-the-art performance in NLP with the usage of robust word embedings generated from the large unlabelled corpus. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not only limited to boost statistical NLP applications but can also be used to develop language resources such as automatic construction of WordNet BIBREF39 using the unsupervised approach.",
"The word embedding can be precisely defined as the encoding of vocabulary $V$ into $N$ and the word $w$ from $V$ to vector $\\overrightarrow{w} $ into $N$-dimensional embedding space. They can be broadly categorized into predictive and count based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector of each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24, well-known as word2vec rely on simple two layered NN architecture which uses linear activation function in hidden layer and softmax in the output layer. The work2vec model treats each word as a bag-of-character n-gram."
],
[
"The GloVe is a log-bilinear regression model BIBREF26 which combines two methods of local context window and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using the harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\\frac{1}{4}$. The Glove’s implementation represents word $w \\in V_{w}$ and context $c \\in V_{c}$ in $D$-dimensional vectors $\\overrightarrow{w}$ and $\\overrightarrow{c}$ in a following way,",
"Where, $b^{\\overrightarrow{w}}$ is row vector $\\left|V_{w}\\right|$ and $b^{\\overrightarrow{c}}$ is $\\left|V_{c}\\right|$ is column vector."
],
[
"The standard CBoW is the inverse of SG BIBREF27 model, which predicts input word on behalf of the context. The length of input in the CBoW model depends on the setting of context window size which determines the distance to the left and right of the target word. Hence the context is a window that contain neighboring words such as by giving $w=\\left\\lbrace w_{1}, w_{2}, \\dots \\dots w_{t}\\right\\rbrace $ a sequence of words $T$, the objective of the CBoW is to maximize the probability of given neighboring words such as,",
"Where, $c_{t}$ is context of $t^{\\text{th}}$ word for example with window $w_{t-c}, \\ldots w_{t-1}, w_{t+1}, \\ldots w_{t+c}$ of size $2 c$."
],
[
"The SG model predicts surrounding words by giving input word BIBREF20 with training objective of learning good word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize average log-probability of words $w=\\left\\lbrace w_{1}, w_{2}, \\dots \\dots w_{t}\\right\\rbrace $ across the entire training corpus,",
"Where, $c_{t}$ denotes the context of words indices set of nearby $w_{t}$ words in the training corpus."
],
[
"Th sub-sampling BIBREF20 approach is useful to dilute most frequent or stop words, also accelerates learning rate, and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ’that’ do not have more importance, but these words appear very frequently in the text. However, considering all the words equally would also lead to over-fitting problem of model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to count the imbalance between rare and repeated words. The sub-sampling technique randomly removes most frequent words with some threshold $t$ and probability $p$ of words and frequency $f$ of words in the corpus.",
"Where each word$w_{i}$ is discarded with computed probability in training phase, $f(w_i )$ is frequency of word $w_{i}$ and $t>0$ are parameters."
],
[
"The traditional word embedding models usually use a fixed size of a context window. For instance, if the window size ws=6, then the target word apart from 6 tokens will be treated similarity as the next word. The scheme is used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ this weighting scheme. The GloVe model weights the contexts using a harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\\frac{1}{4}$. However, CBoW and SG implementation equally consider the contexts by dividing the ws with the distance from target word, e.g. ws=6 will weigh its context by $\\frac{6}{6} \\frac{5}{6} \\frac{4}{6} \\frac{3}{6} \\frac{2}{6} \\frac{1}{6}$."
],
[
"The sub-word model BIBREF24 can learn the internal structure of words by sharing the character representations across words. In that way, the vector for each word is made of the sum of those character $n-gram$. Such as, a vector of a word “table” is a sum of $n-gram$ vectors by setting the letter $n-gram$ size $min=3$ to $max=6$ as, $<ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>$, we can get all sub-words of \"table\" with minimum length of $minn=3$ and maximum length of $maxn=6$. The $<$ and $>$ symbols are used to separate prefix and suffix words from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. In addition to character $n-grams$, the input word $w$ is also included in the set of character $n-gram$, to learn the representation of each word. We obtain scoring function using a input dictionary of $n-grams$ with size $K$ by giving word $w$ , where $K_{w} \\subset \\lbrace 1, \\ldots , K\\rbrace $. A word representation $Z_{k}$ is associated to each $n-gram$ $Z$. Hence, each word is represented by the sum of character $n-gram$ representations, where, $s$ is the scoring function in the following equation,"
],
[
"The position-dependent weighting approach BIBREF40 is used to avoid direct encoding of representations for words and their positions which can lead to over-fitting problem. The approach learns positional representations in contextual word representations and used to reweight word embedding. Thus, it captures good contextual representations at lower computational cost,",
"Where, $p$ is individual position in context window associated with $d_{p}$ vector. Afterwards the context vector reweighted by their positional vectors is average of context words. The relative positional set is $P$ in context window and $v_{C}$ is context vector of $w_{t}$ respectively."
],
[
"The use sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word similarity tasks. The CBoW and SG have $k$ (number of negatives) BIBREF27 BIBREF20 hyperparameter, which affects the value that both models try to optimize for each $(w, c): P M I(w, c)-\\log k$. Parameter $k$ has two functions of better estimation of negative examples, and it performs as before observing the probability of positive examples (actual occurrence of $w,c$)."
],
[
"Before creating a context window, the automatic deletion of rare words also leads to performance gain in CBoW, SG and GloVe models, which further increases the actual size of context windows."
],
[
"The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach states BIBREF35 that the words are similar if they appear in the similar context. We measure word similarity of proposed Sindhi word embeddings using dot product method and WordSim353."
],
[
"The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them which can be derived by using the Euclidean dot product method. The dot product is a multiplication of each component from both vectors added together. The result of a dot product between two vectors isn’t another vector but a single value or a scalar. The dot product for two vectors can be defined as: $\\overrightarrow{a}=\\left(a_{1}, a_{2}, a_{3}, \\dots , a_{n}\\right)$ and $\\overrightarrow{b}=\\left({b}_{1}, {b}_{2}, {b}_{3}, \\ldots , {b}_{n}\\right)$ where $a_{n}$ and $b_{n}$ are the components of the vector and $n$ is dimension of vectors such as,",
"However, the cosine of two non-zero vectors can be derived by using the Euclidean dot product formula,",
"Given $a_{i}$ two vectors of attributes $a$ and $b$, the cosine similarity, $\\cos ({\\theta })$, is represented using a dot product and magnitude as,",
"where $a_{i}$ and $b_{i}$ are components of vector $\\overrightarrow{a}$ and $\\overrightarrow{b}$, respectively."
],
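A minimal NumPy sketch of the dot-product and cosine-similarity computation defined above; the vectors are toy values, not actual embeddings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """cos(theta) = (a . b) / (||a|| * ||b||)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for two word embeddings.
a = np.array([0.2, 0.7, 0.1])
b = np.array([0.3, 0.6, 0.2])
print(round(cosine_similarity(a, b), 3))
```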
[
"The WordSim353 BIBREF42 is popular for the evaluation of lexical similarity and relatedness. The similarity score is assigned with 13 to 16 human subjects with semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using English to Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison which is used to used to discover the strength of linear or nonlinear relationships if there are no repeated data values. A perfect Spearman’s correlation of $+1$ or $-1$ discovers the strength of a link between two sets of data (word-pairs) when observations are monotonically increasing or decreasing functions of each other in a following way,",
"where $r_s$ is the rank correlation coefficient, $n$ denote the number of observations, and $d^i$ is the rank difference between $i^{th}$ observations."
],
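The rank-difference form of Spearman's correlation given above can be computed as follows; the sketch assumes no tied values, as stated in the text, and the ratings are invented for illustration.

```python
def spearman_no_ties(x, y):
    """Spearman's rho via the rank-difference formula (assumes no tied values)."""
    n = len(x)
    rank = lambda v: {val: i + 1 for i, val in enumerate(sorted(v))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

human = [9.0, 7.5, 6.0, 3.1, 1.2]        # invented human similarity ratings
model = [0.81, 0.74, 0.52, 0.35, 0.10]   # invented model cosine similarities
print(spearman_no_ties(human, model))    # 1.0 for perfectly monotonic data
```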
[
"The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens."
],
[
"The frequency of letter occurrences in human language is not arbitrarily organized but follow some specific rules which enable us to describe some linguistic regularities. The Zipf’s law BIBREF43 suggests that if the frequency of letter or word occurrence ranked in descending order such as,",
"Where, $F_{r}$ is the letter frequency of rth rank, $a$ and $b$ are parameters of input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; however, the corpus contains 187,620,276 total number of the character set. Sindhi Persian-Arabic alphabet consists of 52 letters but in the vocabulary 59 letters are detected, additional seven letters are modified uni-grams and standalone honorific symbols."
],
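A small sketch of how comparative letter frequencies and their descending-rank ordering (the quantities behind the Zipf-style analysis above) can be computed; the sample string is a placeholder, not the Sindhi corpus.

```python
from collections import Counter

def letter_frequencies(text: str):
    """Rank letters by frequency; comparative frequency = count / total letters."""
    counts = Counter(ch for ch in text if ch.isalpha())
    total = sum(counts.values())
    return [(rank, ch, cnt / total)
            for rank, (ch, cnt) in enumerate(counts.most_common(), start=1)]

sample = "this is a tiny illustrative sample, not the Sindhi corpus"
for rank, ch, rel in letter_frequencies(sample)[:5]:
    print(rank, repr(ch), round(rel, 3))
```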
[
"We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram in a word. The letter n-gram frequency is carefully analyzed in order to find the length of words which is essential to develop NLP systems, including learning of word embeddings such as choosing the minimum or maximum length of sub-word for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are most frequent, mostly consists of stop words and secondly, 4-gram words have a higher frequency."
],
[
"The word frequency count is an observation of word occurrences in the text. The commonly used words are considered to be with higher frequency, such as the word “the\" in English. Similarly, the frequency of rarely used words to be lower. Such frequencies can be calculated at character or word-level. We calculate word frequencies by counting a word $w$ occurrence in the corpus $c$, such as,",
"Where the frequency of $w$ is the sum of every occurrence $k$ of $w$ in $c$."
],
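The word-frequency count described above amounts to a simple tally over corpus tokens; a minimal sketch with placeholder tokens follows.

```python
from collections import Counter

def word_frequencies(corpus_tokens):
    """Count how often each word w occurs in the corpus c."""
    return Counter(corpus_tokens)

tokens = "the cat sat on the mat the cat".split()   # placeholder corpus
freq = word_frequencies(tokens)
print(freq.most_common(3))   # [('the', 3), ('cat', 2), ...]
```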
[
"The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of the NLP model BIBREF38, such as sentiment analysis and text classification. But the construction of such words list is time consuming and requires user decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of Sindhi linguistic expert because all the frequent words are not stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words is 340 in our developed corpus. The partial list of most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequency. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words for preparing input for the GloVe model. However, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words in CBoW and SG models."
],
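The frequency-based first step of the stop-word procedure can be sketched as below; the cut-off of 340 most frequent words and the toy tokens are illustrative, and the manual linguistic filtering described above is not automated here.

```python
from collections import Counter

def stopword_candidates(tokens, top_n=340):
    """Most frequent words as stop-word *candidates*; the final list still
    requires manual linguistic judgment, as described above."""
    return [w for w, _ in Counter(tokens).most_common(top_n)]

tokens = "the cat sat on the mat and the dog sat too".split()   # toy tokens
print(stopword_candidates(tokens, top_n=3))
```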
[
"Hyperparameter optimization BIBREF23is more important than designing a novel algorithm. We carefully choose to optimize the dictionary and algorithm-based parameters of CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until the optimization of most suitable hyperparameters depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on The high cosine similarity score in retrieving nearest neighboring words, the semantic, syntactic similarity between word pairs, WordSim353, and visualization of the distance between twenty nearest neighbours using t-SNE respectively. All the experiments are conducted on GTX 1080-TITAN GPU."
],
[
"The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and Glove BIBREF26 word embedding algorithms are evaluated by parameter tuning for development of Sindhi word embeddings. These parameters can be categories into dictionary and algorithm based, respectively. The integration of character n-gram in learning word representations is an ideal method especially for rich morphological languages because this approach has the ability to compute rare and misspelled words. Sindhi is also a rich morphological language. Therefore more robust embeddings became possible to train with the hyperparameter optimization of SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of three algorithms individually which are discussed as follows:",
"Number of Epochs: Generally, more epochs on the corpus often produce better results but more epochs take long training time. Therefore, we evaluate 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs constantly produce good results.",
"Learning rate (lr): We tried lr of $0.05$, $0.1$, and $0.25$, the optimal lr $(0.25)$ gives the better results for training all the embedding models.",
"Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ using WordSim353 on different $ws$, and the optimal $300-D$ are evaluated with cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensions have little affect on the quality of the intrinsic evaluation process. However, the selection of embedding dimensions might have more impact on the accuracy in certain downstream NLP applications. The lower embedding dimensions are faster to train and evaluate.",
"Character n-grams: The selection of minimum (minn) and the maximum (maxn) length of character $n-grams$ is an important parameter for learning character-level representations of words in CBoW and SG models. Therefore, the n-grams from $3-9$ were tested to analyse the impact on the accuracy of embedding. We optimized the length of character n-grams from $minn=2$ and $maxn=7$ by keeping in view the word frequencies depicted in Table TABREF57.",
"Window size (ws): The large ws means considering more context words and similarly less ws means to limit the size of context words. By changing the size of the dynamic context window, we tried the ws of 3, 5, 7 the optimal ws=7 yield consistently better performance.",
"Negative Sampling (NS): : The more negative examples yield better results, but more negatives take long training time. We tried 10, 20, and 30 negative examples for CBoW and SG. The best negative examples of 20 for CBoW and SG significantly yield better performance in average training time.",
"Minimum word count (minw): We evaluated the range of minimum word counts from 1 to 8 and analyzed that the size of input vocabulary is decreasing at a large scale by ignoring more words similarly the vocabulary size was increasing by considering rare words. Therefore, by ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results with the vocabulary of 200,000 words.",
"Loss function (ls): we use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG and default loss function for GloVe BIBREF26.",
"The recommended verbosity level, number of buckets, sampling threshold, number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26."
],
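The following sketch shows how the optimized settings listed above could be passed to gensim's fastText re-implementation; this is an assumption for illustration (the authors use the original fastText and GloVe toolkits, and GloVe is not covered by this snippet), and the tiny placeholder corpus only exists to make the snippet runnable.

```python
# Sketch only: gensim's FastText stands in for the original fastText toolkit.
from gensim.models import FastText

# Placeholder corpus; a real run would stream the tokenized Sindhi corpus.
sentences = [["sindhi", "embedding", "example"]] * 10

model = FastText(
    sentences=sentences,
    sg=1,              # 1 = Skip-gram, 0 = CBoW
    vector_size=300,   # embedding dimensions D
    window=7,          # context window ws
    alpha=0.25,        # learning rate lr
    epochs=40,         # number of epochs
    negative=20,       # negative samples for SG (the paper uses hs for CBoW)
    min_count=4,       # minimum word count minw
    min_n=2, max_n=7,  # character n-gram range
)
print(model.wv.most_similar("sindhi", topn=8))
```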
[
"The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between all embedding dimensions of their distinct relevance to query word. The words with similar context get high cosine similarity and geometrical relatedness to Euclidean distance, which is a common and primary method to measure the distance between a set of words and nearest neighbors. Each word contains the most similar top eight nearest neighboring words determined by the highest cosine similarity score using Eq. DISPLAY_FORM48. We present the English translation of both query and retrieved words also discuss with their English meaning for ease of relevance judgment between the query and retrieved words.To take a closer look at the semantic and syntactic relationship captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words Friday, Spring, Cricket, Red, Scientist taken from the vocabulary. As the first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday in an unordered sequence. The SdfastText returns five names of days Sunday, Thursday, Monday, Tuesday and Wednesday respectively. The GloVe model also returns five names of days. However, CBoW and SG gave six names of days except Wednesday along with different writing forms of query word Friday being written in the Sindhi language which shows that CBoW and SG return more relevant words as compare to SdfastText and GloVe. The CBoW returned Add and GloVe returns Honorary words which are little similar to the querry word but SdfastText resulted two irrelevant words Kameeso (N) which is a name (N) of person in Sindhi and Phrase is a combination of three Sindhi words which are not tokenized properly. Similarly, nearest neighbors of second query word Spring are retrieved accurately as names and seasons and semantically related to query word Spring by CBoW, SG and Glove but SdfastText returned four irrelevant words of Dilbahar (N), Pharase, Ashbahar (N) and Farzana (N) out of eight. The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N) that is a popular national game in Pakistan. Including Kabadi (N) all the returned words by CBoW, SG and GloVe are related to Cricket game or names of other games. But the first word in SdfastText contains a punctuation mark in retrieved word Gone.Cricket that are two words joined with a punctuation mark (.), which shows the tokenization error in preprocessing step, sixth retrieved word Misspelled is a combination of three words not related to query word, and Played, Being played are also irrelevant and stop words. Moreover, fourth query word Red gave results that contain names of closely related to query word and different forms of query word written in the Sindhi language. The last returned word Unknown by SdfastText is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also contains semantically related words by CBoW, SG, and GloVe, but the first Urdu word given by SdfasText belongs to the Urdu language which means that the vocabulary may also contain words of other languages. Another unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. More interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and The authentic tokenization in the preprocessing step presented in Figure FIGREF22. 
However, SdfastText has returned tri-gram words of Phrase in query words Friday, Spring, a Misspelled word in Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe demonstrate high semantic relatedness in retrieving the top eight nearest neighbor words."
],
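A minimal sketch of the nearest-neighbor query described above: rows of a (hypothetical) embedding matrix are normalized once and ranked by cosine similarity against the query word.

```python
import numpy as np

def nearest_neighbors(query, vocab, E, topn=8):
    """Top-n nearest words to `query` by cosine similarity over embedding matrix E."""
    idx = {w: i for i, w in enumerate(vocab)}
    En = E / np.linalg.norm(E, axis=1, keepdims=True)   # row-normalize once
    scores = En @ En[idx[query]]                        # cosine with every word
    order = np.argsort(-scores)
    return [(vocab[i], float(scores[i])) for i in order if vocab[i] != query][:topn]

vocab = ["friday", "saturday", "sunday", "cricket"]     # hypothetical vocabulary
E = np.random.default_rng(0).normal(size=(len(vocab), 300))  # placeholder embeddings
print(nearest_neighbors("friday", vocab, E, topn=3))
```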
[
"Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings.",
"Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able."
],
[
"We evaluate the performance of our proposed word embeddings using the WordSim353 dataset by translation English word pairs to Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meaning of six terms, so we left these terms untranslated. So our final Sindhi WordSim353 consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results using Eq. DISPLAY_FORM51 on different dimensional embeddings on the translated WordSim353. The Table TABREF80 presents complete results with the different ws for CBoW, SG and GloVe in which the ws=7 subsequently yield better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving the performance of 0.629 with ws=7. In comparison with English BIBREF27 achieved the average semantic and syntactic similarity of 0.637, 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationship."
],
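The WordSim353 evaluation loop sketched below pairs model cosine similarities with human ratings and reports their Spearman correlation via scipy; the word pairs, ratings and random vectors are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_wordsim(pairs, human_scores, get_vector):
    """Spearman correlation between human ratings and model cosine similarities."""
    model_scores = []
    for w1, w2 in pairs:
        a, b = get_vector(w1), get_vector(w2)
        model_scores.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    rho, _ = spearmanr(human_scores, model_scores)
    return rho

# Hypothetical usage with random vectors standing in for trained embeddings.
rng = np.random.default_rng(0)
vectors = {}
get_vector = lambda w: vectors.setdefault(w, rng.normal(size=300))
pairs = [("tiger", "cat"), ("book", "paper"), ("king", "cabbage")]
print(evaluate_wordsim(pairs, [7.35, 7.46, 0.23], get_vector))
```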
[
"We use t-Distributed Stochastic Neighboring (t-SNE) dimensionality BIBREF36 reduction algorithm with PCA BIBREF37 for exploratory embeddings analysis in 2-dimensional map. The t-SNE is a non-linear dimensionality reduction algorithm for visualization of high dimensional datasets. It starts the probability calculation of similar word clusters in high-dimensional space and calculates the probability of similar points in the corresponding low-dimensional space. The purpose of t-SNE for visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. The t-SNE has a perplexity (PPL) tunable parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 on 5000-iterations of 300-D models. We use the same query words (see Table TABREF74) by retrieving the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for the clear visualization of a similar group of words. The closer word clusters show the high similarity between the query and retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closer to their group of semantically related words. Secondly, the CBoW model depicted in Fig. FIGREF82 and GloVe Fig. FIGREF84 also show the better cluster formation of words than SdfastText Fig. FIGREF85, respectively."
],
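A sketch of the visualization pipeline described above using scikit-learn's PCA and t-SNE; the random embedding matrix is a stand-in for trained vectors, and the paper's 5000 iterations are left at the library default here for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical embedding matrix: 100 words x 300 dimensions (placeholder values).
rng = np.random.default_rng(0)
words = [f"word{i}" for i in range(100)]
E = rng.normal(size=(100, 300))

reduced = PCA(n_components=50).fit_transform(E)            # PCA first, as described
coords = TSNE(n_components=2, perplexity=20).fit_transform(reduced)

plt.scatter(coords[:, 0], coords[:, 1], s=8)
for w, (x, y) in zip(words[:20], coords[:20]):             # label a few points
    plt.annotate(w, (x, y), fontsize=7)
plt.savefig("tsne_embeddings.png", dpi=150)
```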
[
"In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet."
],
[
"In this paper, we mainly present three novel contributions of large corpus development contains large vocabulary of more than 61 million tokens, 908,456 unique words. Secondly, the list of Sindhi stop words is constructed by finding their high frequency and least importance with the help of Sindhi linguistic expert. Thirdly, the unsupervised Sindhi word embeddings are generated using state-of-the-art CBoW, SG and GloVe algorithms and evaluated using popular intrinsic evaluation approaches of cosine similarity matrix and WordSim353 for the first time in Sindhi language processing. We translate English WordSim353 using the English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are also compared with recently revealed SdfastText word representations.",
"Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationship, country, and capital and WordSim353. The SG yields the best performance than CBoW and GloVe models subsequently. However, the performance of GloVe is low on the same vocabulary because of character-level learning of word representations and sub-sampling approaches in SG and CBoW. Our proposed Sindhi word embeddings have surpassed SdfastText in the intrinsic evaluation matrix. Also, the vocabulary of SdfastText is limited because they are trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of proposed word embeddings on the Sindhi text classification task in the future. The proposed resources along with systematic evaluation will be a sophisticated addition to the computational resources for statistical Sindhi language processing."
]
],
"section_name": [
"Introduction",
"Related work",
"Methodology",
"Methodology ::: Task description",
"Methodology ::: Corpus acquisition",
"Methodology ::: Preprocessing",
"Methodology ::: Word embedding models",
"Methodology ::: GloVe",
"Methodology ::: Continuous bag-of-words",
"Methodology ::: Skip gram",
"Methodology ::: Hyperparameters ::: Sub-sampling",
"Methodology ::: Hyperparameters ::: Dynamic context window",
"Methodology ::: Hyperparameters ::: Sub-word model",
"Methodology ::: Hyperparameters ::: Position-dependent weights",
"Methodology ::: Hyperparameters ::: Shifted point-wise mutual information",
"Methodology ::: Hyperparameters ::: Deleting rare words",
"Methodology ::: Evaluation methods",
"Methodology ::: Evaluation methods ::: Cosine similarity",
"Methodology ::: Evaluation methods ::: WordSim353",
"Statistical analysis of corpus",
"Statistical analysis of corpus ::: Letter occurrences",
"Statistical analysis of corpus ::: Letter n-grams frequency",
"Statistical analysis of corpus ::: Word Frequencies",
"Statistical analysis of corpus ::: Stop words",
"Experiments and results",
"Experiments and results ::: Hyperparameter optimization",
"Word similarity comparison of Word Embeddings ::: Nearest neighboring words",
"Word similarity comparison of Word Embeddings ::: Word pair relationship",
"Word similarity comparison of Word Embeddings ::: Comparison with WordSim353",
"Word similarity comparison of Word Embeddings ::: Visualization",
"Discussion and future work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ff8fd9518421abfced12a1541e4f26b5185fc32c"
],
"answer": [
{
"evidence": [
"Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings.",
"Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able."
],
"extractive_spans": [],
"free_form_answer": "Proposed SG model vs SINDHI FASTTEXT:\nAverage cosine similarity score: 0.650 vs 0.388\nAverage semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391",
"highlighted_evidence": [
"The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText.",
"Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"80d7f5da1461b4437290ddc0e2474bd1cd298e64"
],
"answer": [
{
"evidence": [
"In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition.",
"Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6d40c2912577783189a8fe21a2a3f6b5d1f11cea"
],
"answer": [
{
"evidence": [
"The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens.",
"FLOAT SELECTED: Table 2: Complete statistics of collected corpus from multiple resources."
],
"extractive_spans": [],
"free_form_answer": "908456 unique words are available in collected corpus.",
"highlighted_evidence": [
"The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens.",
"FLOAT SELECTED: Table 2: Complete statistics of collected corpus from multiple resources."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0e1c5eb88cfe7910e0f9f0990a926496818ae6cb"
],
"answer": [
{
"evidence": [
"The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter."
],
"extractive_spans": [
"daily Kawish and Awami Awaz Sindhi newspapers",
"Wikipedia dumps",
"short stories and sports news from Wichaar social blog",
"news from Focus Word press blog",
"historical writings, novels, stories, books from Sindh Salamat literary website",
"novels, history and religious books from Sindhi Adabi Board",
" tweets regarding news and sports are collected from twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How does proposed word embeddings compare to Sindhi fastText word representations?",
"Are trained word embeddings used for any other NLP task?",
"How many uniue words are in the dataset?",
"How is the data collected, which web resources were used?"
],
"question_id": [
"5b6aec1b88c9832075cd343f59158078a91f3597",
"a6717e334c53ebbb87e5ef878a77ef46866e3aed",
"a1064307a19cd7add32163a70b6623278a557946",
"8cb9006bcbd2f390aadc6b70d54ee98c674e45cc"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparison of existing and proposed work on Sindhi corpus construction and word embeddings.",
"Figure 1: Employed preprocessing pipeline for text cleaning",
"Table 2: Complete statistics of collected corpus from multiple resources.",
"Figure 2: Frequency distribution of letter occurrences",
"Table 3: Length of letter n-grams in words, distinct words, frequency and percentage in corpus.",
"Table 4: Partial list of most frequent Sindhi stop words along with frequency in the developed corpus.",
"Figure 3: Most frequent words after filtration of stop words",
"Table 5: Optimized parameters for CBoW, SG and GloVe models.",
"Table 6: Eight nearest neighboring words of each query word with English translation.",
"Table 7: Word pair relationship using cosine similarity (higher is better).",
"Table 8: Cosine similarity score between country and capital.",
"Table 9: Comparison of semantic and syntactic accuracy of proposed word embeddings using WordSim-353 dataset on 300−D embedding choosing various window size (ws).",
"Figure 4: Visualization of Sindhi CBoW word embeddings",
"Figure 5: Visualization of Sindhi SG word embeddings",
"Figure 6: visualization of Sindhi GloVe word embeddings",
"Figure 7: Visualization of SdfastText word embeddings"
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png",
"8-Table2-1.png",
"9-Figure2-1.png",
"10-Table3-1.png",
"11-Table4-1.png",
"12-Figure3-1.png",
"12-Table5-1.png",
"14-Table6-1.png",
"15-Table7-1.png",
"15-Table8-1.png",
"16-Table9-1.png",
"17-Figure4-1.png",
"17-Figure5-1.png",
"17-Figure6-1.png",
"18-Figure7-1.png"
]
} | [
"How does proposed word embeddings compare to Sindhi fastText word representations?",
"How many uniue words are in the dataset?"
] | [
[
"1911.12579-Word similarity comparison of Word Embeddings ::: Word pair relationship-0",
"1911.12579-Word similarity comparison of Word Embeddings ::: Word pair relationship-1"
],
[
"1911.12579-Statistical analysis of corpus-0",
"1911.12579-8-Table2-1.png"
]
] | [
"Proposed SG model vs SINDHI FASTTEXT:\nAverage cosine similarity score: 0.650 vs 0.388\nAverage semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391",
"908456 unique words are available in collected corpus."
] | 195 |
1707.00110 | Efficient Attention using a Fixed-Size Memory Representation | The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments. | {
"paragraphs": [
[
"Sequence-to-sequence models BIBREF0 , BIBREF1 have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) BIBREF2 , BIBREF3 , text summarization BIBREF4 , BIBREF5 , speech recognition BIBREF6 , BIBREF7 , image captioning BIBREF8 , and conversational modeling BIBREF9 , BIBREF10 .",
"The most popular approaches are based on an encoder-decoder architecture consisting of two recurrent neural networks (RNNs) and an attention mechanism that aligns target to source tokens BIBREF2 , BIBREF11 . The typical attention mechanism used in these architectures computes a new attention context at each decoding step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token.",
"Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step. We thus propose an alternative attention mechanism (section \"Memory-Based Attention Model\" ) that leads to smaller computational time complexity. Our method predicts $K$ attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (section \"Experiments\" ) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (section \"Visualizing Attention\" ), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source."
],
[
"Our models are based on an encoder-decoder architecture with attention mechanism BIBREF2 , BIBREF11 . An encoder function takes as input a sequence of source tokens $\\mathbf {x} = (x_1, ..., x_m)$ and produces a sequence of states $\\mathbf {s} = (s_1, ..., s_m)$ .The decoder is an RNN that predicts the probability of a target sequence $\\mathbf {y} = (y_1, ..., y_T \\mid \\mathbf {s})$ . The probability of each target token $y_i \\in \\lbrace 1, ... ,|V|\\rbrace $ is predicted based on the recurrent state in the decoder RNN, $h_i$ , the previous words, $y_{<i}$ , and a context vector $c_i$ . The context vector $c_i$ , also referred to as the attention vector, is calculated as a weighted average of the source states. ",
"$$c_i & = \\sum _{j}{\\alpha _{ij} s_j} \\\\\n{\\alpha }_{i} & = \\text{softmax}(f_{att}(h_i, \\mathbf {s}))$$ (Eq. 3) ",
"Here, $f_{att}(h_i, \\mathbf {s})$ is an attention function that calculates an unnormalized alignment score between the encoder state $s_j$ and the decoder state $h_i$ . Variants of $f_{att}$ used in BIBREF2 and BIBREF11 are: $\nf_{att}(h_i, s_j)=\n{\\left\\lbrace \\begin{array}{ll}\nv_a^T \\text{tanh}(W_a[h_i, s_j]),& \\emph {Bahdanau} \\\\\nh_i^TW_as_j & \\emph {Luong}\n\\end{array}\\right.}\n$ ",
"where $W_a$ and $v_a$ are model parameters learned to predict alignment. Let $|S|$ and $|T|$ denote the lengths of the source and target sequences respectively and $D$ denoate the state size of the encoder and decoder RNN. Such content-based attention mechanisms result in inference times of $O(D^2|S||T|)$ , as each context vector depends on the current decoder state $h_i$ and all encoder states, and requires an $O(D^2)$ matrix multiplication.",
"The decoder outputs a distribution over a vocabulary of fixed-size $|V|$ : ",
"$$P(y_i \\vert y_{<i}, \\mathbf {x}) = \\text{softmax}(W[s_i; c_i] + b)$$ (Eq. 5) ",
" The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent."
],
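A minimal NumPy sketch of the standard content-based attention described above, using the Luong-style bilinear score $h_i^T W_a s_j$; shapes and random values are illustrative only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def luong_attention(h, S, W_a):
    """Score every encoder state s_j against the current decoder state h,
    normalize, and return the weighted attention context."""
    scores = S @ (W_a.T @ h)        # f_att(h, s_j) = h^T W_a s_j for each j
    alpha = softmax(scores)         # alignment weights over source positions
    return alpha @ S, alpha         # context c = sum_j alpha_j s_j

# Toy shapes: state size D, source length |S|.
D, S_len = 4, 6
rng = np.random.default_rng(0)
S = rng.normal(size=(S_len, D))     # encoder states s_1..s_m (placeholders)
h = rng.normal(size=D)              # current decoder state h_i (placeholder)
W_a = rng.normal(size=(D, D))
c, alpha = luong_attention(h, S, W_a)
print(c.shape, alpha.round(2))
```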
[
"Our proposed model is shown in Figure 1 . During encoding, we compute an attention matrix $C \\in \\mathbb {R}^{K \\times D}$ , where $K$ is the number of attention vectors and a hyperparameter of our method, and $D$ is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector $\\alpha _t \\in \\mathbb {R}^K$ at each encoding time step $t$ . $C$ is then a linear combination of the encoder states, weighted by $\\alpha _t$ : ",
"$$C_k & = \\sum _{t=0}^{|S|}{\\alpha _{tk} s_t} \\\\\n\\alpha _t & = \\text{softmax}(W_\\alpha s_t) ,$$ (Eq. 7) ",
" where $W_{\\alpha }$ is a parameter matrix in $\\mathbb {R}^{K\\times D}$ .",
"The computational time complexity for this operation is $O(KD|S|)$ . One can think of C as compact fixed-length memory that the decoder will perform attention over. In contrast, standard approaches use a variable-length set of encoder states for attention. At each decoding step, we similarly predict $K$ scores $\\beta \\in \\mathbb {R}^K$ . The final attention context $c$ is a linear combination of the rows in $C$ weighted by the scores. Intuitively, each decoder step predicts how important each of the $K$ attention vectors is. ",
"$$c & = \\sum _{i=0}^{K}{\\beta _i C_i} \\\\\n\\beta & = \\text{softmax}(W_\\beta h)$$ (Eq. 8) ",
" Here, $h$ is the current state of the decoder, and $W_\\beta $ is a learned parameter matrix. Note that we do not access the encoder states at each decoder step. We simply take a linear combination of the attention matrix $C$ pre-computed during encoding - a much cheaper operation that is independent of the length of the source sequence. The time complexity of this computation is $O(KD|T|)$ as multiplication with the $K$ attention matrices needs to happen at each decoding step.",
"Summing $O(KD|S|)$ from encoding and $O(KD|T|)$ from decoding, we have a total linear computational complexity of $O(KD(|S| + |T|)$ . As $D$ is typically very large, 512 or 1024 units in most applications, we expect our model to be faster than the standard attention mechanism running in $O(D^2|S||T|)$ . For long sequences (as in summarization, where |S| is large), we also expect our model to be faster than the cheaper dot-based attention mechanism, which needs $O(D|S||T|)$ computation time and requires encoder and decoder states sizes to match.",
"We also experimented with using a sigmoid function instead of the softmax to score the encoder and decoder attention scores, resulting in 4 possible combinations. We call this choice the scoring function. A softmax scoring function calculates normalized scores, while the sigmoid scoring function results in unnormalized scores that can be understood as gates."
],
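The memory-based attention above (Eqs. 7 and 8) can be sketched as follows; dimensions are toy values, and the random matrices stand in for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: source length |S|, state size D, K attention contexts.
S_len, D, K = 10, 8, 4
rng = np.random.default_rng(0)
s = rng.normal(size=(S_len, D))          # encoder states s_t (placeholders)
W_alpha = rng.normal(size=(K, D))        # learned parameters (placeholders)
W_beta = rng.normal(size=(K, D))

# Encoding: score each encoder state with K heads and build the memory C (K x D).
alpha = softmax(s @ W_alpha.T, axis=1)   # alpha_t = softmax(W_alpha s_t), one row per t
C = alpha.T @ s                          # C_k = sum_t alpha_{tk} s_t

# Decoding: each step only looks at the K x D memory, never the encoder states.
h = rng.normal(size=D)                   # current decoder state (placeholder)
beta = softmax(W_beta @ h)               # beta = softmax(W_beta h)
c = beta @ C                             # context c = sum_k beta_k C_k
print(C.shape, c.shape)                  # (4, 8) (8,)
```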
[
"Our memory-based attention model can be understood intuitively in two ways. We can interpret it as \"predicting\" the set of attention contexts produced by a standard attention mechanism during encoding. To see this, assume we set $K \\approx |T|$ . In this case, we predict all $|T|$ attention contexts during the encoding stage and learn to choose the right one during decoding. This is cheaper than computing contexts one-by-one based on the decoder and encoder content. In fact, we could enforce this objective by first training a regular attention model and adding a regularization term to force the memory matrix $C$ to be close to the $T\\times D$ vectors computed by the standard attention. We leave it to future work to explore such an objective.",
"Alternatively, we can interpret our mechanism as first predicting a compact $K \\times D$ memory matrix, a representation of the source sequence, and then performing location-based attention on the memory by picking which row of the matrix to attend to. Standard location-based attention mechanism, by contrast, predicts a location in the source sequence to focus on BIBREF11 , BIBREF8 ."
],
[
"In the above formulation, the predictions of attention contexts are symmetric. That is, $C_i$ is not forced to be different from $C_{j\\ne i}$ . While we would hope for the model to learn to generate distinct attention contexts, we now present an extension that pushes the model into this direction. We add position encodings to the score matrix that forces the first few context vector $C_1, C_2, ...$ to focus on the beginning of the sequence and the last few vectors $...,C_{K-1}, C_K$ to focus on the end (thereby encouraging in-between vectors to focus on the middle).",
"Explicitly, we multiply the score vector $\\alpha $ with position encodings $l_s\\in \\mathbb {R}^{K}$ : ",
"$$C^{PE} & = \\sum _{s=0}^{|S|}{\\alpha ^{PE} h_s} \\\\\n\\alpha ^{PE}_s & = \\text{softmax}(W_\\alpha h_s \\circ l_s)$$ (Eq. 11) ",
"To obtain $l_s$ we first calculate a constant matrix $L$ where we define each element as ",
"$$L_{ks} & = (1-k/K)(1-s/\\mathcal {S})+\\frac{k}{K}\\frac{s}{\\mathcal {S}},$$ (Eq. 12) ",
" adapting a formula from BIBREF13 . Here, $k\\in \\lbrace 1,2,...,K\\rbrace $ is the context vector index and $\\mathcal {S}$ is the maximum sequence length across all source sequences. The manifold is shown graphically in Figure 2 . We can see that earlier encoder states are upweighted in the first context vectors, and later states are upweighted in later vectors. The symmetry of the manifold and its stationary point having value 0.5 both follow from Eq. 12 . The elements of the matrix that fall beyond the sequence lengths are then masked out and the remaining elements are renormalized across the timestep dimension. This results in the jagged array of position encodings $\\lbrace l_{ks}\\rbrace $ ."
],
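The position-encoding manifold of Eq. 12 can be generated as below; the per-sequence masking and renormalization mentioned in the text are omitted from this sketch.

```python
import numpy as np

def position_encoding_matrix(K: int, S_max: int) -> np.ndarray:
    """L[k, s] = (1 - k/K)(1 - s/S) + (k/K)(s/S), as in Eq. 12."""
    k = np.arange(1, K + 1)[:, None] / K           # context vector index k / K
    s = np.arange(1, S_max + 1)[None, :] / S_max   # position s / max length S
    return (1 - k) * (1 - s) + k * s

# Toy sizes; masking and renormalization per sequence length are not shown.
L = position_encoding_matrix(K=4, S_max=10)
print(L.round(2))   # early rows upweight early positions, late rows the end
```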
[
"Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 to $L$ , for $L\\in \\lbrace 10, 50, 100, 200\\rbrace $ unique to each dataset.",
"All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with a Nvidia K40m GPU. We use a 2-layer 256-unit, a bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam",
"size of 10 BIBREF18 .",
"Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin.",
"That we are able to represent the source sequence with a fixed size matrix with fewer than $|S|$ rows suggests that traditional attention mechanisms may be representing the source with redundancies and wasting computational resources. This makes intuitive sense for the toy task, which should require a relatively simple representation.",
"The last column shows that our technique significantly speeds up the inference process. The gap in inference speed increases as sequences become longer. We measured inference time on the full validation set of 1,000 examples, not including data loading or model construction times.",
"Figure 3 shows the learning curves for sequence length 200. We see that $K=1$ is unable to fit the data distribution, while $K\\in \\lbrace 32, 64\\rbrace $ fits the data almost as quickly as the attention-based model. Figure 3 shows the effect of varying the encoder and decoder scoring functions between softmax and sigmoid. All combinations manage to fit the data, but some converge faster than others. In section \"Visualizing Attention\" we show that distinct alignments are learned by different function combinations."
],
[
"Next, we explore if the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs their own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . We use newstest2015 as a validation set, and report BLEU on newstest2016.",
"We use a similar setup to the Toy Copy task, but use 512 RNN and embedding units, train using 8 distributed workers with 1 GPU each, and train for at most 1M steps. We save checkpoints every 30 minutes during training, and choose the best based on the validation BLEU score.",
"Table 2 compares our approach with and without position encodings, and with varying values for hyperparameter $K$ , to baseline models with regular attention mechanism. Learning curves are shown in Figure 4 . We see that our memory attention model with sufficiently high $K$ performs on-par with, or slightly better, than the attention-based baseline model despite its simpler nature. Across the board, models with $K=64$ performed better than corresponding models with $K=32$ , suggesting that using a larger number of attention vectors can capture a richer understanding of source sequences. Position encodings also seem to consistently improve model performance.",
"Table 3 shows that our model results in faster decoding time even on a complex dataset with a large vocabulary of 16k. We measured decoding time over the full validation set, not including time used for model setup and data loading, averaged across 10 runs. The average sequence length for examples in this data was 35, and we expect more significant speedups for tasks with longer sequences, as suggested by our experiments on toy data. Note that in our NMT examples/experiments, $K\\approx T$ , but we obtain computational savings from the fact that $K \\ll D$ . We may be able to set $K \\ll T$ , as in toy copying, and still get very good performance in other tasks. For instance, in summarization the source is complex but the representation of the source required to perform the task is \"simple\" (i.e. all that is needed to generate the abstract).",
"Figure 5 shows the effect of using sigmoid and softmax function in the encoders and decoders. We found that softmax/softmax consistently performs badly, while all other combinations perform about equally well. We report results for the best combination only (as chosen on the validation set), but we found this choice to only make a minor difference."
],
[
"A useful property of the standard attention mechanism is that it produces meaningful alignment between source and target sequences. Often, the attention mechanism learns to progressively focus on the next source token as it decodes the target. These visualizations can be an important tool in debugging and evaluating seq2seq models and are often used for unknown token replacement.",
"This raises the question of whether or not our proposed memory attention mechanism also learns to generate meaningful alignments. Due to limiting the number of attention contexts to a number that is generally less than the sequence length, it is not immediately obvious what each context would learn to focus on. Our hope was that the model would learn to focus on multiple alignments at the same time, within the same attention vector. For example, if the source sequence is of length 40 and we have $K=10$ attention contexts, we would hope that $C_1$ roughly focuses on tokens 1 to 4, $C_2$ on tokens 5 to 8, and so on. Figures 6 and 7 show that this is indeed the case. To generate this visualization we multiply the attention scores $\\alpha $ and $\\beta $ from the encoder and decoder. Figure 8 shows a sample translation task visualization.",
"Figure 6 suggests that our model learns distinct ways to use its memory depending on the encoder and decoder functions. Interestingly, using softmax normalization results in attention maps typical of those derived from using standard attention, i.e. a relatively linear mapping between source and target tokens. Meanwhile, using sigmoid gating results in what seems to be a distributed representation of the source sequences across encoder time steps, with multiple contiguous attention contexts being accessed at each decoding step."
],
[
"Our contributions build on previous work in making seq2seq models more computationally efficient. BIBREF11 introduce various attention mechanisms that are computationally simpler and perform as well or better than the original one presented in BIBREF2 . However, these typically still require $O(D^2)$ computation complexity, or lack the flexibility to look at the full source sequence. Efficient location-based attention BIBREF8 has also been explored in the image recognition domain.",
" BIBREF3 presents several enhancements to the standard seq2seq architecture that allow more efficient computation on GPUs, such as only attending on the bottom layer. BIBREF20 propose a linear time architecture based on stacked convolutional neural networks. BIBREF21 also propose the use of convolutional encoders to speed up NMT. BIBREF22 propose a linear attention mechanism based on covariance matrices applied to information retrieval. BIBREF23 enable online linear time attention calculation by enforcing that the alignment between input and output sequence elements be monotonic. Previously, monotonic attention was proposed for morphological inflection generation by BIBREF24 ."
],
[
"In this work, we propose a novel memory-based attention mechanism that results in a linear computational time of $O(KD(|S| + |T|))$ during decoding in seq2seq models. Through a series of experiments, we demonstrate that our technique leads to consistent inference speedups as sequences get longer, and can fit complex data distributions such as those found in Neural Machine Translation. We show that our attention mechanism learns meaningful alignments despite being constrained to a fixed representation after encoding. We encourage future work that explores the optimal values of $K$ for various language tasks and examines whether or not it is possible to predict $K$ based on the task at hand. We also encourage evaluating our models on other tasks that must deal with long sequences but have compact representations, such as summarization and question-answering, and further exploration of their effect on memory and training speed."
]
],
"section_name": [
"Introduction",
"Sequence-to-Sequence Model with Attention",
"Memory-Based Attention Model",
"Model Interpretations",
"Position Encodings (PE)",
"Toy Copying Experiment",
"Machine Translation",
"Visualizing Attention",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0e7135bdd269d4e83630b27b6ae64fbe62e9e5d4"
],
"answer": [
{
"evidence": [
"Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin.",
"All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with a Nvidia K40m GPU. We use a 2-layer 256-unit, a bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam"
],
"extractive_spans": [],
"free_form_answer": "standard parametrized attention and a non-attention baseline",
"highlighted_evidence": [
"Both beat the non-attention baseline by a significant margin.",
"For the attention baseline, we use the standard parametrized attention BIBREF2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
},
{
"annotation_id": [
"3dc877f4b4aaad7a07dbfb97b365bf847acd1161"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention."
],
"extractive_spans": [],
"free_form_answer": "Ranges from 44.22 to 100.00 depending on K and the sequence length.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7b010301cc5c61449c64ae40c8e41551fe35d67c"
],
"answer": [
{
"evidence": [
"Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 to $L$ , for $L\\in \\lbrace 10, 50, 100, 200\\rbrace $ unique to each dataset.",
"Next, we explore if the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs their own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . We use newstest2015 as a validation set, and report BLEU on newstest2016."
],
"extractive_spans": [],
"free_form_answer": "Sequence Copy Task and WMT'17",
"highlighted_evidence": [
"To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 .",
"For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which baseline methods are used?",
"How much is the BLEU score?",
"Which datasets are used in experiments?"
],
"question_id": [
"2d3bf170c1647c5a95abae50ee3ef3b404230ce4",
"6e8c587b6562fafb43a7823637b84cd01487059a",
"ab9453fa2b927c97b60b06aeda944ac5c1bfef1e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"efficient",
"efficient",
"efficient"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Memory Attention model architecture. K attention vectors are predicted during encoding, and a linear combination is chosen during decoding. In our example,K=3.",
"Figure 2: Surface for the position encodings.",
"Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention.",
"Figure 3: Training Curves for the Toy Copy task",
"Figure 4: Comparing training curves for en-fi and en-tr with sigmoid encoder scoring and softmax decoder scoring and position encoding. Note that en-tr curves converged very quickly.",
"Table 2: BLEU scores on WMT’17 translation datasets from the memory attention models and regular attention baselines. We picked the best out of the four scoring function combinations on the validation set. Note that en-tr does not have an official test set. Best test scores on each dataset are highlighted.",
"Table 3: Decoding time, averaged across 10 runs, for the en-de validation set (2169 examples) with average sequence length of 35. Results are similar for both PE and non-PE models.",
"Figure 5: Comparing training curves for en-fi for different encoder/decoder scoring functions for our models atK=64.",
"Figure 6: Attention scores at each step of decoding for on a sample from the sequence length 100 toy copy dataset. Individual attention vectors are highlighted in blue. (y-axis: source tokens; x-axis: target tokens)",
"Figure 7: Attention scores at each step of decoding for K = 4 on a sample with sequence length 11. The subfigure on the left color codes each individual attention vector. (y-axis: source; x-axis: target)",
"Figure 8: Attention scores at each step of decoding for en-de WMT translation task using model with sigmoid scoring functions and K=32. The left subfigure displays each individual attention vector separately while the right subfigure displays the full combined attention. (y-axis: source; x-axis: target)"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Table1-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure5-1.png",
"8-Figure6-1.png",
"8-Figure7-1.png",
"8-Figure8-1.png"
]
} | [
"Which baseline methods are used?",
"How much is the BLEU score?",
"Which datasets are used in experiments?"
] | [
[
"1707.00110-Toy Copying Experiment-3",
"1707.00110-Toy Copying Experiment-1"
],
[
"1707.00110-4-Table1-1.png"
],
[
"1707.00110-Machine Translation-0",
"1707.00110-Toy Copying Experiment-0"
]
] | [
"standard parametrized attention and a non-attention baseline",
"Ranges from 44.22 to 100.00 depending on K and the sequence length.",
"Sequence Copy Task and WMT'17"
] | 199 |
1909.01013 | Duality Regularization for Unsupervised Bilingual Lexicon Induction | Unsupervised bilingual lexicon induction naturally exhibits duality, which results from symmetry in back-translation. For example, EN-IT and IT-EN induction can be mutually primal and dual problems. Current state-of-the-art methods, however, consider the two tasks independently. In this paper, we propose to train primal and dual models jointly, using regularizers to encourage consistency in back translation cycles. Experiments across 6 language pairs show that the proposed method significantly outperforms competitive baselines, obtaining the best-published results on a standard benchmark. | {
"paragraphs": [
[
"Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF8, BIBREF9.",
"Recent research has attempted to induce unsupervised bilingual lexicons by aligning monolingual word vector spaces BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Given a pair of languages, their word alignment is inherently a bi-directional problem (e.g. English-Italian vs Italian-English). However, most existing research considers mapping from one language to another without making use of symmetry. Our experiments show that separately learned UBLI models are not always consistent in opposite directions. As shown in Figure 1a, when the model of BIBREF11 Conneau18a is applied to English and Italian, the primal model maps the word “three” to the Italian word “tre”, but the dual model maps “tre” to “two” instead of “three”.",
"We propose to address this issue by exploiting duality, encouraging forward and backward mappings to form a closed loop (Figure 1b). In particular, we extend the model of BIBREF11 Conneau18a by using a cycle consistency loss BIBREF16 to regularize two models in opposite directions. Experiments on two benchmark datasets show that the simple method of enforcing consistency gives better results in both directions. Our model significantly outperforms competitive baselines, obtaining the best published results. We release our code at xxx."
],
[
"UBLI. A typical line of work uses adversarial training BIBREF17, BIBREF10, BIBREF18, BIBREF11, matching the distributions of source and target word embeddings through generative adversarial networks BIBREF19. Non-adversarial approaches have also been explored. For instance, BIBREF15 Mukherjee18EMNLP use squared-loss mutual information to search for optimal cross-lingual word pairing. BIBREF13 and BIBREF20 exploit the structural similarity of word embedding spaces to learn word mappings. In this paper, we choose BIBREF11 Conneau18a as our baseline as it is theoretically attractive and gives strong results on large-scale datasets.",
"Cycle Consistency. Forward-backward consistency has been used to discover the correspondence between unpaired images BIBREF21, BIBREF22. In machine translation, similar ideas were exploited, BIBREF23, BIBREF24 and BIBREF25 use dual learning to train two “opposite” language translators by minimizing the reconstruction loss. BIBREF26 consider back-translation, where a backward model is used to build synthetic parallel corpus and a forward model learns to generate genuine text based on the synthetic output.",
"Closer to our method, BIBREF27 jointly train two autoencoders to learn supervised bilingual word embeddings. BIBREF28 use sinkhorn distance BIBREF29 and back-translation to align word embeddings. However, they cannot perform fully unsupervised training, relying on WGAN BIBREF30 for providing initial mappings. Concurrent with our work, BIBREF31 build a adversarial autoencoder with cycle consistency loss and post-cycle reconstruction loss. In contrast to these works, our method is fully unsupervised, simpler, and empirically more effective."
],
[
"We take BIBREF11 as our baseline, introducing a novel regularizer to enforce cycle consistency. Let $X=\\lbrace x_1,...,x_n\\rbrace $ and $Y=\\lbrace y_1,...,y_m\\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\\mathcal {F}:X\\rightarrow Y$ such that for each $x_i$, $\\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\\mathcal {G}:Y\\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings."
],
[
"BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, take the primal UBLI task as an example, the linear mapping $\\mathcal {F}$ tries to generate “fake” word embeddings $\\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minmax game min$_{\\mathcal {F}}$max$_{D{_y}}\\ell _{adv}(\\mathcal {F},D_y,X,Y)$, where",
"$P_{D_y}(src|y_j)$ is a model probability from $D_y$ to distinguish whether word embedding $y_j$ is coming from the target language (src = 1) or the primal mapping $\\mathcal {F}$ (src = 0). Similarly, the dual UBLI problem can be formulated as min$_{\\mathcal {G}}$max$_{D_x}\\ell _{adv}(\\mathcal {G},D_x,Y,X)$, where $\\mathcal {G}$ is the dual mapping, and $D_x$ is a source discriminator.",
"Theoretically, a unique solution for above minmax game exists, with the mapping and the discriminator reaching a nash equilibrium. Since the adversarial training happens at the distribution level, no cross-lingual supervision is required."
],
[
"We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4.",
"Cycle Consistency Loss. We introduce",
"where $\\Delta $ denotes the discrepancy criterion, which is set as the average cosine similarity in our model.",
"Full objective. The final objective is:"
],
[
"We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find in adversarial training that the single-direction criterion $S(\\mathcal {F}, X, Y)$ by BIBREF11 does not always work well. To address this, we make a simple extension by calculating the weighted average of forward and backward scores:",
"Where $\\lambda $ is a hyperparameter to control the importance of the two objectives. Here $S$ first generates bilingual lexicons by learned mappings, and then computes the average cosine similarity of these translations."
],
[
"We perform two sets of experiments, to investigate the effectiveness of our duality regularization in isolation (Section SECREF16) and to compare our final models with the state-of-the-art methods in the literature (Section SECREF18), respectively."
],
[
"Dataset and Setup. Our datasets includes: (i) The Multilingual Unsupervised and Supervised Embeddings (MUSE) dataset released by BIBREF11 Conneau18a. (ii) the more challenging Vecmap dataset from BIBREF32 Dinu15 and the extensions of BIBREF33 Artetxe17ACL. We follow the evaluation setups of BIBREF11, utilizing cross-domain similarity local scaling (CSLS) for retrieving the translation of given source words. Following a standard evaluation practice BIBREF34, BIBREF35, BIBREF11, we report precision at 1 scores (P@1). Given the instability of existing methods, we follow BIBREF13 to perform 10 runs for each method and report the best and the average accuracies."
],
[
"We compare our method with BIBREF11 (Adv-C) under the same settings. As shown in Table TABREF12, our model outperforms Adv-C on both MUSE and Vecmap for all language pairs (except ES-EN). In addition, the proposed approach is less sensitive to initialization, and thus more stable than Adv-C over multiple runs. These results demonstrate the effectiveness of dual learning. Our method is also superior to Adv-C for the low-resource language pairs English $\\leftrightarrow $ Malay (MS) and English $\\leftrightarrow $ English-Esperanto (EO). Adv-C gives low performances on ES-EN, DE-EN, but much better results on the opposite directions on Vecmap. This is likely because the separate models are highly under-constrained, and thus easy to get stuck in poor local optima. In contrast, our method gives comparable results on both directions for the two languages, thanks to the use of information symmetry.",
"Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization."
],
[
"In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$).",
"Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.",
"Additionally, we observe that our unsupervised method performs competitively and even better compared with strong supervised and semi-supervised approaches. Ours-Procrustes obtains comparable results with Procrustes on EN-IT and gives strong results on EN-DE, EN-FI, EN-ES and the opposite directions. Ours-GeoMM$_{semi}$ obtains the state-of-the-art results on all tested language pairs except EN-FI, with the additional advantage of being fully unsupervised."
],
[
"We investigated a regularization method to enhance unsupervised bilingual lexicon induction, by encouraging symmetry in lexical mapping between a pair of word embedding spaces. Results show that strengthening bi-directional mapping consistency significantly improves the effectiveness over the state-of-the-art method, leading to the best results on a standard benchmark."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Approach ::: Baseline Adversarial Model",
"Approach ::: Regularizers for Dual Models",
"Approach ::: Model Selection",
"Experiments",
"Experiments ::: Experimental Settings",
"Experiments ::: The Effectiveness of Dual Learning",
"Experiments ::: Comparison with the State-of-the-art",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"b9a984425cbc2d5d4e9ee47b1389f956badcb464"
],
"answer": [
{
"evidence": [
"We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4."
],
"extractive_spans": [
"an adversarial loss ($\\ell _{adv}$) for each model as in the baseline",
"a cycle consistency loss ($\\ell _{cycle}$) on each side"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0e8bac71d1d4d344b19e68d3a517f0602009c7b8"
],
"answer": [
{
"evidence": [
"Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.",
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs."
],
"extractive_spans": [],
"free_form_answer": "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43",
"highlighted_evidence": [
"Table TABREF15 shows the final results on Vecmap.",
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"208ff0e360529ceb1220d1c11abc0b48d2208cd3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.",
"Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima."
],
"extractive_spans": [],
"free_form_answer": "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.",
"Table TABREF15 shows the final results on Vecmap."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"55e2519b0e80ebeca6f4334336688963a9a7da25"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"259abfe9d7fa091be049c2554871e822c006e168"
],
"answer": [
{
"evidence": [
"Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization.",
"FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap."
],
"extractive_spans": [],
"free_form_answer": "EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI",
"highlighted_evidence": [
"Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12.",
"FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a2a38b25d3dca1acd3bc852e88bb4ee8038f3cee"
],
"answer": [
{
"evidence": [
"In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$)."
],
"extractive_spans": [
"Procrustes",
"GPA",
"GeoMM",
"GeoMM$_{semi}$",
"Adv-C-Procrustes",
"Unsup-SL",
"Sinkhorn-BT"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What regularizers were used to encourage consistency in back translation cycles?",
"What are new best results on standard benchmark?",
"How better is performance compared to competitive baselines?",
"How big is data used in experiments?",
"What 6 language pairs is experimented on?",
"What are current state-of-the-art methods that consider the two tasks independently?"
],
"question_id": [
"3a8d65eb8e1dbb995981a0e02d86ebf3feab107a",
"d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636",
"54c7fc08598b8b91a8c0399f6ab018c45e259f79",
"5112bbf13c7cf644bf401daecb5e3265889a4bfc",
"03ce42ff53aa3f1775bc57e50012f6eb1998c480",
"ebeedbb8eecdf118d543fdb5224ae610eef212c8"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: (a) Inconsistency between primal model F and the dual model G. (b) An ideal scenario.",
"Figure 2: The proposed framework. (a)X → F(X)→ G(F(X))→ X; (b) Y → G(Y )→ F(G(Y ))→ Y .",
"Table 1: Accuracy on MUSE and Vecmap.",
"Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Table4-1.png"
]
} | [
"What are new best results on standard benchmark?",
"How better is performance compared to competitive baselines?",
"What 6 language pairs is experimented on?"
] | [
[
"1909.01013-4-Table4-1.png",
"1909.01013-Experiments ::: Comparison with the State-of-the-art-1"
],
[
"1909.01013-4-Table4-1.png",
"1909.01013-Experiments ::: Comparison with the State-of-the-art-1"
],
[
"1909.01013-Experiments ::: The Effectiveness of Dual Learning-1",
"1909.01013-3-Table1-1.png"
]
] | [
"New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43",
"Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06",
"EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI"
] | 200 |
1910.10408 | Controlling the Output Length of Neural Machine Translation | The recent advances introduced by neural machine translation (NMT) are rapidly expanding the application fields of machine translation, as well as reshaping the quality level to be targeted. In particular, if translations have to fit some given layout, quality should not only be measured in terms of adequacy and fluency, but also length. Exemplary cases are the translation of document files, subtitles, and scripts for dubbing, where the output length should ideally be as close as possible to the length of the input text. This paper addresses for the first time, to the best of our knowledge, the problem of controlling the output length in NMT. We investigate two methods for biasing the output length with a transformer architecture: i) conditioning the output to a given target-source length-ratio class and ii) enriching the transformer positional embedding with length information. Our experiments show that both methods can induce the network to generate shorter translations, as well as acquiring interpretable linguistic skills. | {
"paragraphs": [
[
"The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence.",
"Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6.",
"Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions).",
"In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string.",
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
[
"Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization."
],
[
"Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\\text{PE}$):",
"for $i=1,\\ldots ,d/2$."
],
[
"Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length."
],
[
"We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length."
],
[
"Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group."
],
[
"Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:",
"where $i=1,\\ldots ,d/2$.",
"Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:",
"where $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9."
],
[
"We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length."
],
[
"Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences."
],
[
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder.",
"In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules.",
"For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding."
],
[
"We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29.",
"Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies."
],
[
"To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations.",
"The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$)."
],
[
"We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios."
],
[
"The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00.",
"Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality.",
"Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness.",
"Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences."
],
[
"Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\\text{src}$ and LR$^\\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal.",
"Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\\sim 0.7$ points when using long. This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\\text{src}$=1.05), which are also much shorter than the reference (LR$^\\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality.",
"Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline.",
"Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal.",
"Controlling output length. In order to achieve LR$^\\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11).",
"Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\\sim 0.11$ for relative encoding and $\\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used."
],
[
"After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$).",
"Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set."
],
[
"As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT.",
"In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9.",
"The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35."
],
[
"In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse."
]
],
"section_name": [
"Introduction",
"Background",
"Background ::: Transformer",
"Background ::: Length encoding in summarization",
"Methods",
"Methods ::: Length Token Method",
"Methods ::: Length Encoding Method",
"Methods ::: Combining the two methods",
"Methods ::: Fine-Tuning for length control",
"Experiments ::: Data and Settings",
"Experiments ::: Models",
"Experiments ::: Evaluation",
"Results",
"Results ::: Small Data condition",
"Results ::: Large data condition",
"Results ::: Human Evaluation and Analysis",
"Related works",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0f04331cbdb88dc33e06b6b970c11db7cc4e842d"
],
"answer": [
{
"evidence": [
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d897b5cc9f257c8fd1a930a6bc1b7e1d73005efb"
],
"answer": [
{
"evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder."
],
"extractive_spans": [
"English$\\rightarrow $Italian/German portions of the MuST-C corpus",
"As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6c4be2329714531078bea6390c6892868f51944e"
],
"answer": [
{
"evidence": [
"Methods ::: Length Encoding Method",
"Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:",
"where $i=1,\\ldots ,d/2$.",
"Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:",
"where $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9."
],
"extractive_spans": [],
"free_form_answer": "They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative).",
"highlighted_evidence": [
"Methods ::: Length Encoding Method\nInspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:\n\nwhere $i=1,\\ldots ,d/2$.\n\nSimilarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:\n\nwhere $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f51792ec82eea4ff8587745ac8140a8357572bed"
],
"answer": [
{
"evidence": [
"Methods ::: Length Token Method",
"Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group."
],
"extractive_spans": [],
"free_form_answer": "They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group.",
"highlighted_evidence": [
"Methods ::: Length Token Method\nOur first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"498073e28e7f3074adbd65f4b3680a421b721175"
],
"answer": [
{
"evidence": [
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
"extractive_spans": [
"two translation directions (En-It and En-De)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source.",
"En-It, En-De in both directions"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6bfc48103d84dc0223b89994e5583504b0fb8bf8"
],
"answer": [
{
"evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder."
],
"extractive_spans": [
"English$\\rightarrow $Italian/German portions of the MuST-C corpus",
"As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"223910aa36816d4bd67012d8c487b2f175bfea2e"
],
"answer": [
{
"evidence": [
"Methods ::: Combining the two methods",
"We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Methods ::: Combining the two methods\nWe further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"two",
"two",
"two"
],
"paper_read": [
"",
"",
"",
"",
"no",
"no",
"no"
],
"question": [
"Do they conduct any human evaluation?",
"What dataset do they use for experiments?",
"How do they enrich the positional embedding with length information",
"How do they condition the output to a given target-source class?",
"Which languages do they focus on?",
"What dataset do they use?",
"Do they experiment with combining both methods?"
],
"question_id": [
"22c36082b00f677e054f0f0395ed685808965a02",
"85a7dbf6c2e21bfb7a3a938381890ac0ec2a19e0",
"90bc60320584ebba11af980ed92a309f0c1b5507",
"f52b2ca49d98a37a6949288ec5f281a3217e5ae8",
"228425783a4830e576fb98696f76f4c7c0a1b906",
"9d1135303212356f3420ed010dcbe58203cc7db4",
"d8bf4a29c7af213a9a176eb1503ec97d01cc8f51"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: German and Italian human and machine translations (MT) are usually longer than their English source (SRC). We investigate enhanced NMT (MT*) that can also generate translations shorter than the source length. Text in red exceeds the length of the source, while underlined words point out the different translation strategy of the enhanced NMT model.",
"Figure 2: Training NMT with three length ratio classes permits to get outputs of different length at inference time.",
"Figure 3: Transformer architecture with decoder input enriched with (relative) length embedding computed according to the desired target string length (12 characters in the example).",
"Table 1: Train, validation and test data size in number of examples.",
"Table 2: Train data category after assigning the length tokens (normal, short and long).",
"Table 3: Performance of the baseline and models with length information trained from scratch and or by fine-tuning, in terms of BLEU, BLEU∗, mean length ratio of the output against the source (LRsrc) and the reference (LRref ). italics shows the best performing model under each category, while bold shows the wining strategy.",
"Table 4: Large scale experiments comparing the baseline, length token, length encoding and their combination.",
"Table 5: Results for En-It with Tok+Enc Rel by scaling the target length with different constant factors.",
"Table 6: Manual evaluation on En-It (large data) ranking translation quality of the baseline (standard) and token short translation against the reference translation.",
"Table 7: Examples of shorter translation fragments obtained by paraphrasing (italics), drop of words (red), and change of verb tense (underline)."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"4-Figure3-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png"
]
} | [
"How do they enrich the positional embedding with length information",
"How do they condition the output to a given target-source class?"
] | [
[
"1910.10408-Methods ::: Length Encoding Method-0",
"1910.10408-Methods ::: Length Encoding Method-3",
"1910.10408-Methods ::: Length Encoding Method-2",
"1910.10408-Methods ::: Length Encoding Method-1"
],
[
"1910.10408-Methods ::: Length Token Method-0"
]
] | [
"They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative).",
"They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group."
] | 203 |
2002.00876 | Torch-Struct: Deep Structured Prediction Library | The literature on structured prediction for NLP describes a rich collection of distributions and algorithms over sequences, segmentations, alignments, and trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based frameworks. Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines and case-studies demonstrate the benefits of the library. Torch-Struct is available at this https URL. | {
"paragraphs": [
[
"Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such linear programming relaxations and greedy search.",
"Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, approximate translation decoding with beam search BIBREF9, among many others.",
"In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks.",
"The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus particularly on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has compounded with the complexity of research in deep structured prediction.",
"With this challenge in mind, we introduce Torch-Struct with three specific contributions:",
"Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.",
"Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.",
"Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization.",
"In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases."
],
[
"Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. We begin by motivating this approach with a case study."
],
[
"While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case.",
"To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$, e.g.",
"Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27.",
"A popular approach is a latent-tree RL model which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model, $p(z|x ;\\phi )$,",
"Computing the expectation is intractable so policy gradient is used. First a tree is sampled $\\tilde{z} \\sim p(z | x;\\phi )$, then the gradient with respect to $\\phi $ is approximated as,",
"where $b$ is a variance reduction baseline. A common choice is the self-critical baseline BIBREF28,",
"Finally an entropy regularization term is added to the objective encourage exploration of different trees, $ O + \\lambda \\mathbb {H}(p(z\\ |\\ x;\\phi ))$.",
"Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\\ | x; \\phi )$:",
"[description]font=",
"[itemsep=-2pt]",
"Policy gradient, $\\tilde{z} \\sim p(z \\ |\\ x ; \\phi )$",
"Score policy samples, $p(z \\ | \\ x; \\phi )$",
"Backpropagation, $\\frac{\\partial }{\\partial \\phi } p(z\\ |\\ x; \\phi )$",
"Self-critical, $\\arg \\max _z p(z \\ |\\ x;\\phi )$",
"Objective regularizer, $\\mathbb {H}(p(z\\ |\\ x;\\phi ))$",
"For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near perfect accuracy on the ListOps dataset."
],
[
"The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\\ell $, the user can request samples $z \\sim \\textsc {CRF}(\\ell )$, probabilities $\\textsc {CRF}(z;\\ell )$, modes $\\arg \\max _z \\textsc {CRF}(\\ell )$, or other distributional properties such as $\\mathbb {H}(\\textsc {CRF}(\\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning.",
"Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \\ | \\ y ;\\phi )$ from the previous section. The distribution takes in log-potentials $\\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree. This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees.",
"Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API, and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model."
],
[
"We now describe the technical approach underlying the library. To establish notation first consider the implementation of a categorical distribution, Cat($\\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\\cal Z$ and probabilities given by the softmax,",
"Define the log-partition as $A(\\ell ) = \\mathrm {LSE}(\\ell )$, i.e. log of the denominator, where $\\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution, requires enumerating $\\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities,",
"Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\\ell ) = \\log \\max _{j=1}^K \\exp \\ell _j$ then: $\\mathbb {I}(z^*_i = 1) = \\frac{\\partial }{\\partial \\ell _i} A^*(\\ell ) $.",
"Conditional random fields, CRF($\\ell $), extend the softmax to combinatorial spaces where ${\\cal Z}$ is exponentially sized. Each $z$, is now represented as a binary vector over polynomial-sized set of parts, $\\cal P$, i.e. ${\\cal Z} \\subset \\lbrace 0, 1\\rbrace ^{|\\cal P|}$. Similarly log-potentials are now defined over parts $\\ell \\in \\mathbb {R}^{|\\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as,",
"Computing probabilities or sampling from this distribution, requires computing the log-partition term $A$. In general computing this term is now intractable, however for many core algorithms in NLP there are exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2).",
"Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by,",
"Similarly derivatives of $A^*$ correspond to whether a part appears in the argmax structure. $\\mathbb {I}(z^*_p = 1) = \\frac{\\partial }{\\partial \\ell _p} A^*(\\ell ) $.",
"While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as viterbi-backpointers BIBREF31. In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations."
],
[
"Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\\textsc {CRF}(\\ell )$, is constructed by providing $\\ell \\in \\mathbb {R}^{|{\\cal P}|}$ where the parts $\\cal P$ are specific to the type of distribution. Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section, to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2.",
"To make the approach concrete, we consider the example of a linear-chain CRF.",
"latent](a)$z_1$; latent, right = of a](b)$z_2$; latent, right = of b](c)$z_3$; (a) – (b) – (c);",
"The model has $C$ labels per node with a length $T=2$ edges utilizing a first-order linear-chain (Markov) model. This model has $2\\times C \\times C$ parts corresponding to edges in the chain, and thus requires $\\ell \\in \\mathbb {R}^{2\\times C \\times C}$. The log-partition function $A(\\ell )$ factors into two reduce computations,",
"Computing this function left-to-right using dynamic programming yield the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge.",
"We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\\oplus , \\otimes )$ with commutative $\\oplus $, distribution, and appropriate identities. The log-partition utilizes $\\oplus , \\otimes = \\mathrm {LSE}, +$, but we can substitute alternatives.",
"For instance, utilizing the log-max semiring $(\\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part, (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\\bigoplus $ to instead compute a sample.",
"Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case-study can be computed with variant semirings, negating the need for specialized algorithms."
],
[
"Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms."
],
[
"The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \\bigoplus _c \\ell _{t, \\cdot , c} \\otimes \\ell _{t^{\\prime }, c, \\cdot }$. Under this approach, we only need $O(\\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models."
],
[
"Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,",
"In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing $C_r[i, d] = C[i, i+d]$ and one left facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be written.",
"Unlike the original, this formula can easily be computed as a vectorized semiring dot product. This allows use to compute $C_r[\\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed."
],
[
"The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing,",
"where $q = \\max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU we utilize the TVM language BIBREF36 to layout the CUDA loops and tune it to hardware."
],
[
"We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct.",
"In the future, we hope to support research and production applications employing structured models. We also believe the library provides a strong foundation for building generic tools for interpretablity, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components."
],
[
"We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS."
]
],
"section_name": [
"Introduction",
"Related Work",
"Motivating Case Study",
"Library Design",
"Technical Approach ::: Conditional Random Fields",
"Technical Approach ::: Dynamic Programming and Semirings",
"Optimizations",
"Optimizations ::: a) Parallel Scan Inference",
"Optimizations ::: b) Vectorized Parsing",
"Optimizations ::: c) Semiring Matrix Operations",
"Conclusion and Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"83b0d2c9df28b611f74cbc625a6fa50df1bba8ae"
],
"answer": [
{
"evidence": [
"The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\\ell $, the user can request samples $z \\sim \\textsc {CRF}(\\ell )$, probabilities $\\textsc {CRF}(z;\\ell )$, modes $\\arg \\max _z \\textsc {CRF}(\\ell )$, or other distributional properties such as $\\mathbb {H}(\\textsc {CRF}(\\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"363475920554b38997e8edef0aafd969ed8e7fcc"
],
"answer": [
{
"evidence": [
"With this challenge in mind, we introduce Torch-Struct with three specific contributions:",
"Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.",
"Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.",
"Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization."
],
"extractive_spans": [],
"free_form_answer": "It uses deep learning framework (pytorch)",
"highlighted_evidence": [
"With this challenge in mind, we introduce Torch-Struct with three specific contributions:\n\nModularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.\n\nCompleteness: a broad array of classical algorithms are implemented and new models can easily be added in Python.\n\nEfficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"41a5e7f9002bc00be615405addaa6e72f4201759"
],
"answer": [
{
"evidence": [
"Optimizations ::: a) Parallel Scan Inference",
"The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \\bigoplus _c \\ell _{t, \\cdot , c} \\otimes \\ell _{t^{\\prime }, c, \\cdot }$. Under this approach, we only need $O(\\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models.",
"Optimizations ::: b) Vectorized Parsing",
"Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,",
"Optimizations ::: c) Semiring Matrix Operations",
"The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing,"
],
"extractive_spans": [
"Typical implementations of dynamic programming algorithms are serial in the length of the sequence",
"Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized",
"Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient"
],
"free_form_answer": "",
"highlighted_evidence": [
"Parallel Scan Inference\nThe commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence.",
"Vectorized Parsing\nComputational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized.",
"Semiring Matrix Operations\nThe two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sized $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0f255bdea6c34801b2ab038ea6710f9481bc417a"
],
"answer": [
{
"evidence": [
"Optimizations ::: a) Parallel Scan Inference",
"Optimizations ::: b) Vectorized Parsing",
"Optimizations ::: c) Semiring Matrix Operations",
"Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms."
],
"extractive_spans": [
"Parallel Scan Inference",
"Vectorized Parsing",
"Semiring Matrix Operations"
],
"free_form_answer": "",
"highlighted_evidence": [
"a) Parallel Scan Inference",
"b) Vectorized Parsing",
"c) Semiring Matrix Operations",
"Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Does API provide ability to connect to models written in some other deep learning framework?",
"Is this library implemented into Torch or is framework agnostic?",
"What baselines are used in experiments?",
"What general-purpose optimizations are included?"
],
"question_id": [
"1d9b953a324fe0cfbe8e59dcff7a44a2f93c568d",
"093039f974805952636c19c12af3549aa422ec43",
"8df89988adff57279db10992846728ec4f500eaa",
"94edac71eea1e78add678fb5ed2d08526b51016b"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Distribution of binary trees over an 1000- token sequence. Coloring shows the marginal probabilities of every span. Torch-Struct is an optimized collection of common CRF distributions used in NLP designed to integrate with deep learning frameworks.",
"Table 1: Models and algorithms implemented in Torch-Struct. Notation is developed in Section 5. Parts are described in terms of sequence lengths N,M , label size C, segment length K, and layers / grammar size L,G. Lines of code (LoC) is from the log-partition (A(`)) implementation. T/S is the tokens per second of a batched computation, computed with batch 32, N = 25, C = 20,K = 5, L = 3 (K80 GPU run on Google Colab).",
"Figure 2: Latent Tree CRF example. (a) Logpotentials ` for each part/span. (b) Marginals for CRF(`) computed by backpropagation. (c) Mode tree argmaxz CRF(z; `). (d) Sampled tree z ∼ CRF(`).",
"Table 2: (Top) Semirings implemented in Torch-Struct. Backprop/Gradients gives overridden backpropagation computation and value computed by this combination. (Bot) Example of gradients from different semirings on sequence alignment with dynamic time warping.",
"Figure 3: Speed impact of optimizations. Time is given in seconds for 10 runs with batch 16 executed on Google Colab. (a) Speed of a linear-chain forward with 20 classes for lengths up to 500. Compares left-to-right ordering to parallel scan. (b) Speed of CKY inside with lengths up to 80. Compares inner loop versus vectorization. (c) Speed of linear-chain forward of length 20 with up to 100 classes. Compares broadcast-reduction versus CUDA semiring kernel. (Baseline memory is exhausted after 100 classes.)",
"Figure 4: Parallel scan implementation of the linearchain CRF inference algorithm. Here ⊕ ⊗ represents a semiring matrix operation and I is padding."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png"
]
} | [
"Is this library implemented into Torch or is framework agnostic?"
] | [
[
"2002.00876-Introduction-4",
"2002.00876-Introduction-6",
"2002.00876-Introduction-7",
"2002.00876-Introduction-5"
]
] | [
"It uses deep learning framework (pytorch)"
] | 205 |
1905.13413 | Improving Open Information Extraction via Iterative Rank-Aware Learning | Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank. | {
"paragraphs": [
[
"Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$ -tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-based BIBREF0 and syntax-driven systems BIBREF1 , BIBREF2 , and recently has used neural networks for supervised learning BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .",
"A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.",
"To calibrate open IE confidences and make them more globally comparable across different sentences, we propose an iterative rank-aware learning approach, as outlined in fig:arch. Given extractions generated by the model as training samples, we use a binary classification loss to explicitly increase the confidences of correct extractions and decrease those of incorrect ones. Without adding additional model components, this training paradigm naturally leads to a better open IE model, whose extractions can be further included as training samples. We further propose an iterative learning procedure that gradually improves the model by incrementally adding extractions to the training data. Experiments on the OIE2016 dataset BIBREF8 indicate that our method significantly outperforms both neural and non-neural models."
],
[
"We briefly revisit the formulation of open IE and the neural network model used in our paper."
],
[
"Given sentence $\\mathbf {s}=(w_1, w_2, ..., w_n)$ , the goal of open IE is to extract assertions in the form of tuples $\\mathbf {r}=(\\mathbf {p}, \\mathbf {a}_1, \\mathbf {a}_2, ..., \\mathbf {a}_m)$ , composed of a single predicate and $m$ arguments. Generally, these components in $\\mathbf {r}$ need not to be contiguous, but to simplify the problem we assume they are contiguous spans of words from $\\mathbf {s}$ and there is no overlap between them.",
"Methods to solve this problem have recently been formulated as sequence-to-sequence generation BIBREF4 , BIBREF5 , BIBREF6 or sequence labeling BIBREF3 , BIBREF7 . We adopt the second formulation because it is simple and can take advantage of the fact that assertions only consist of words from the sentence. Within this framework, an assertion $\\mathbf {r}$ can be mapped to a unique BIO BIBREF3 label sequence $\\mathbf {y}$ by assigning $O$ to the words not contained in $\\mathbf {r}$ , $B_{p}$ / $I_{p}$ to the words in $\\mathbf {p}$ , and $B_{a_i}$ / $I_{a_i}$ to the words in $\\mathbf {a}_i$ respectively, depending on whether the word is at the beginning or inside of the span.",
"The label prediction $\\hat{\\mathbf {y}}$ is made by the model given a sentence associated with a predicate of interest $(\\mathbf {s}, v)$ . At test time, we first identify verbs in the sentence as candidate predicates. Each sentence/predicate pair is fed to the model and extractions are generated from the label sequence."
],
[
"Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $\n\\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)].\n$ ",
"The probability of the label at each position is calculated independently using a softmax function: $\nP(y_t|\\mathbf {s}, v) \\propto \\text{exp}(\\mathbf {W}_{\\text{label}}\\mathbf {h}_t + \\mathbf {b}_{\\text{label}}),\n$ ",
"where $\\mathbf {h}_t$ is the hidden state of the last layer. At decoding time, we use the Viterbi algorithm to reject invalid label transitions BIBREF9 , such as $B_{a_2}$ followed by $I_{a_1}$ .",
"We use average log probability of the label sequence BIBREF5 as its confidence: ",
"$$c(\\mathbf {s}, v, \\hat{\\mathbf {y}}) = \\frac{\\sum _{t=1}^{|\\mathbf {s}|}{\\log {P(\\hat{y_t}|\\mathbf {s}, v)}}}{|\\mathbf {s}|}.$$ (Eq. 7) ",
"The probability is trained with maximum likelihood estimation (MLE) of the gold extractions. This formulation lacks an explicit concept of cross-sentence comparison, and thus incorrect extractions of one sentence could have higher confidence than correct extractions of another sentence."
],
[
"In this section, we describe our proposed binary classification loss and iterative learning procedure."
],
[
"To alleviate the problem of incomparable confidences across sentences, we propose a simple binary classification loss to calibrate confidences to be globally comparable. Given a model $\\theta ^\\prime $ trained with MLE, beam search is performed to generate assertions with the highest probabilities for each predicate. Assertions are annotated as either positive or negative with respect to the gold standard, and are used as training samples to minimize the hinge loss: ",
"$$\\hspace{-2.84526pt}\\hat{\\theta } = \\underset{\\theta }{\\operatornamewithlimits{arg\\,min}}\\hspace{-8.53581pt}\\underset{\\begin{array}{c}\\mathbf {s} \\in \\mathcal {D}\\\\ v, \\hat{\\mathbf {y}} \\in g_{\\theta ^\\prime }(\\mathbf {s})\\end{array}}{\\operatorname{\\mathbb {E}}}\\hspace{-11.38109pt}\\max {(0,1-t \\cdot c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}}))},$$ (Eq. 9) ",
"where $\\mathcal {D}$ is the training sentence collection, $g_{\\theta ^\\prime }$ represents the candidate generation process, and $t \\in \\lbrace 1,-1\\rbrace $ is the binary annotation. $c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}})$ is the confidence score calculated by average log probability of the label sequence.",
"The binary classification loss distinguishes positive extractions from negative ones generated across different sentences, potentially leading to a more reliable confidence measure and better ranking performance."
],
[
"Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration",
" $\\mathcal {E} \\leftarrow \\emptyset $ # generated extractions",
"not converge $\\mathcal {E} \\leftarrow \\mathcal {E} \\cup \\lbrace (\\mathbf {s}, v, \\hat{\\mathbf {y}})|v,\\hat{\\mathbf {y}} \\in g_{\\theta ^{(t)}}(\\mathbf {s}), \\forall \\mathbf {s} \\in \\mathcal {D}\\rbrace $ ",
" $\\theta ^{(t+1)} \\leftarrow \\underset{\\theta }{\\operatornamewithlimits{arg\\,min}}\\hspace{-8.53581pt}\\underset{(\\mathbf {s}, v, \\hat{\\mathbf {y}})\\in \\mathcal {E}}{\\operatorname{\\mathbb {E}}}\\hspace{-8.53581pt}\\max {(0,1-t \\cdot c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}}))}$ ",
" $t \\leftarrow t+1$ Iterative learning. "
],
[
"We use the OIE2016 dataset BIBREF8 to evaluate our method, which only contains verbal predicates. OIE2016 is automatically generated from the QA-SRL dataset BIBREF13 , and to remove noise, we remove extractions without predicates, with less than two arguments, and with multiple instances of an argument. The statistics of the resulting dataset are summarized in tab:data.",
"We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts.",
"We compare our method with both competitive neural and non-neural models, including RnnOIE BIBREF3 , OpenIE4, ClausIE BIBREF2 , and PropS BIBREF14 .",
"Our implementation is based on AllenNLP BIBREF15 by adding binary classification loss function on the implementation of RnnOIE. The network consists of 4 BiLSTM layers (2 forward and 2 backward) with 64-dimensional hidden units. ELMo BIBREF16 is used to map words into contextualized embeddings, which are concatenated with a 100-dimensional predicate indicator embedding. The recurrent dropout probability is set to 0.1. Adadelta BIBREF17 with $\\epsilon =10^{-6}$ and $\\rho =0.95$ and mini-batches of size 80 are used to optimize the parameters. Beam search size is 5."
],
[
"tab:expmain lists the evaluation results. Our base model (RnnOIE, sec:oie) performs better than non-neural systems, confirming the advantage of supervised training under the sequence labeling setting. To test if the binary classification loss (E.q. 9 , sec:ours) could yield better-calibrated confidence, we perform one round of fine-tuning of the base model with the hinge loss ( $+$ Binary loss in tab:expmain). We show both the results of using the confidence (E.q. 7 ) of the fine-tuned model to rerank the extractions of the base model (Rerank Only), and the end-to-end performance of the fine-tuned model in assertion generation (Generate). We found both settings lead to improved performance compared to the base model, which demonstrates that calibrating confidence using binary classification loss can improve the performance of both reranking and assertion generation. Finally, our proposed iterative learning approach (alg:iter, sec:ours) significantly outperforms non-iterative settings.",
"We also investigate the performance of our iterative learning algorithm with respect to the number of iterations in fig:iter. The model obtained at each iteration is used to both rerank the extractions generated by the previous model and generate new extractions. We also report results of using only positive samples for optimization. We observe the AUC and F1 of both reranking and generation increases simultaneously for the first 6 iterations and converges after that, which demonstrates the effectiveness of iterative training. The best performing iteration achieves AUC of 0.125 and F1 of 0.315, outperforming all the baselines by a large margin. Meanwhile, using both positive and negative samples consistently outperforms only using positive samples, which indicates the necessity of exposure to the errors made by the system.",
"tab:casererank compares extractions from RnnOIE before and after reranking. We can see the order is consistent with the annotation after reranking, showing the additional loss function's efficacy in calibrating the confidences; this is particularly common in extractions with long arguments. tab:casegen shows a positive extraction discovered after iterative training (first example), and a wrong extraction that disappears (second example), which shows that the model also becomes better at assertion generation.",
"Why is the performance still relatively low? We randomly sample 50 extractions generated at the best performing iteration and conduct an error analysis to answer this question. To count as a correct extraction, the number and order of the arguments should be exactly the same as the ground truth and syntactic heads must be included, which is challenging considering that the OIE2016 dataset has complex syntactic structures and multiple arguments per predicate.",
"We classify the errors into three categories and summarize their proportions in tab:err. “Overgenerated predicate” is where predicates not included in ground truth are overgenerated, because all the verbs are used as candidate predicates. An effective mechanism should be designed to reject useless candidates. “Wrong argument” is where extracted arguments do not coincide with ground truth, which is mainly caused by merging multiple arguments in ground truth into one. “Missing argument” is where the model fails to recognize arguments. These two errors usually happen when the structure of the sentence is complicated and coreference is involved. More linguistic information should be introduced to solve these problems."
],
[
"We propose a binary classification loss function to calibrate confidences in open IE. Iteratively optimizing the loss function enables the model to incrementally learn from trial and error, yielding substantial improvement. An error analysis is performed to shed light on possible future directions."
],
[
"This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute."
]
],
"section_name": [
"Introduction",
"Neural Models for Open IE",
"Problem Formulation",
"Model Architecture and Decoding",
"Iterative Rank-Aware Learning",
"Binary Classification Loss",
"Iterative Learning",
"Experimental Settings",
"Evaluation Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"23c5a7ddd1f154488e822601198303f3e02cc4f7"
],
"answer": [
{
"evidence": [
"Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration",
"A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.",
"We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts."
],
"extractive_spans": [],
"free_form_answer": "No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods.",
"highlighted_evidence": [
"Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision.",
"For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 ",
"We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"annotation_id": [
"250e402e903ac21b69fd0cc88469064e3efc5d04"
],
"answer": [
{
"evidence": [
"Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $"
],
"extractive_spans": [],
"free_form_answer": "word embeddings",
"highlighted_evidence": [
"Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"How does this compare to traditional calibration methods like Platt Scaling?",
"What's the input representation of OpenIE tuples into the model?"
],
"question_id": [
"ca7e71131219252d1fab69865804b8f89a2c0a8f",
"d77c9ede2727c28e0b5a240b2521fd49a19442e0"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"information extraction",
"information extraction"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Iterative rank-aware learning.",
"Table 1: Dataset statistics.",
"Table 2: Case study of reranking effectiveness. Red for predicate and blue for arguments.",
"Figure 2: AUC and F1 at different iterations.",
"Table 4: AUC and F1 on OIE2016.",
"Table 5: Proportions of three errors."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Figure2-1.png",
"4-Table4-1.png",
"5-Table5-1.png"
]
} | [
"How does this compare to traditional calibration methods like Platt Scaling?"
] | [
[
"1905.13413-Introduction-1",
"1905.13413-Iterative Learning-0",
"1905.13413-Experimental Settings-1"
]
] | [
"No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods."
] | 207 |
2003.05995 | CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues | Large corpora of task-based and open-domain conversational dialogues are hugely valuable in the field of data-driven dialogue systems. Crowdsourcing platforms, such as Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data. However, difficulties arise when task-based dialogues require expert domain knowledge or rapid access to domain-relevant information, such as databases for tourism. This will become even more prevalent as dialogue systems become increasingly ambitious, expanding into tasks with high levels of complexity that require collaboration and forward planning, such as in our domain of emergency response. In this paper, we propose CRWIZ: a framework for collecting real-time Wizard of Oz dialogues through crowdsourcing for collaborative, complex tasks. This framework uses semi-guided dialogue to avoid interactions that breach procedures and processes only known to experts, while enabling the capture of a wide variety of interactions. The framework is available at https://github.com/JChiyah/crwiz | {
"paragraphs": [
[
"Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues.",
"Where this crowdsourcing method has its limitations is when specific domain expert knowledge is required, rather than general conversation. These tasks include, for example, call centre agents BIBREF3 or clerks with access to a database, as is required for tourism information and booking BIBREF2. In the near future, there will be a demand to extend this to workplace-specific tasks and procedures. Therefore, a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system.",
"Wizard-of-Oz data collections in the past have provided such a mechanism. However, these have traditionally not been scalable because of the scarcity of Wizard experts or the expense to train up workers. This was the situation with an initial study reported in BIBREF4, which was conducted in a traditional lab setting and where the Wizard (an academic researcher) had to learn, through training and reading manuals, how best to perform operations in our domain of emergency response.",
"We present the CRWIZ Intelligent Wizard Interface that enables a crowdsourced Wizard to make intelligent, relevant choices without such intensive training by providing a restricted list of valid and relevant dialogue task actions, which changes dynamically based on the context, as the interaction evolves.",
"Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset BIBREF2. However, this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context.",
"Our scenario is such a complex task. Specifically, our scenario relates to using robotics and autonomous systems on an offshore energy platform to resolve an emergency and is part of the EPSRC ORCA Hub project BIBREF5. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. An important part of this is ensuring safety of robots in complex, dynamic and cluttered environments, co-operating with remote operators. With this data collection method reported here, we aim to automate a conversational Intelligent Assistant (Fred), who acts as an intermediary between the operator and the multiple robotic systems BIBREF6, BIBREF7. Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment. Therefore, in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success.",
"In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:",
"The release of a platform for the CRWIZ Intelligent Wizard Interface to allow for the collection of dialogue data for longer complex tasks, by providing a dynamic selection of relevant dialogue acts.",
"A survey of existing datasets and data collection platforms, with a comparison to the CRWIZ data collection for Wizarded crowdsourced data in task-based interactions."
],
[
"Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a new and novel approach.",
"Collecting large amounts of dialogue data can be very challenging as two interlocutors are required to create a conversation. If one of the partners in the conversation is a machine as in BIBREF0, the challenge becomes slightly easier since only one partner is lacking. However, in most cases these datasets are aimed at creating resources to train the conversational system itself. Self-authoring the dialogues BIBREF16 or artificially creating data BIBREF1 could be a solution to rapidly collect data, but this solution has been shown to produce low quality unnatural data BIBREF17.",
"One way to mitigate the necessity of pairing two users simultaneously is to allow several participants to contribute to the dialogue, one turn at the time. This approach has been used both in task-oriented BIBREF10, BIBREF2, BIBREF9 and chitchat BIBREF17. This means that the same dialogue can be authored by several participants. However, this raises issues in terms of coherence and forward-planning. These can be addressed by carefully designing the data collection to provide the maximum amount of information to the participants (e.g. providing the task, personality traits of the bot, goals, etc.) but then this adds to cognitive load, time, cost and participant fatigue.",
"Pairing is a valid option, which has been used in a number of recent data collections in various domains, such as navigating in a city BIBREF13, playing a negotiation game BIBREF14, talking about a person BIBREF18, playing an image game BIBREF8 or having a chat about a particular image that is shown to both participants BIBREF21, BIBREF22. Pairing frameworks exist such as Slurk BIBREF23. Besides its pairing management feature, Slurk is designed in order to allow researchers to modify it and implement their own data collection rapidly.",
"The scenarios for the above-mentioned data collections are mostly intuitive tasks that humans do quite regularly, unlike our use-case scenario of emergency response. Role playing is one option. For example, recent work has tried to create datasets for non-collaborative scenarios BIBREF24, BIBREF25, requesting participants to incarnate a particular role during the data collection. This is particularly challenging when the recruitment is done via a crowdsourcing platform. In BIBREF25, the motivation for the workers to play the role is intrinsic to the scenario. In this data collection, one of the participants tries to persuade their partner to contribute to a charity with a certain amount of money. As a result of their dialogue, the money that the persuadee committed to donate was actually donated to a charity organising. However, for scenarios such as ours, the role playing requires a certain expertise and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text.",
"Therefore, in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour. For example, in BIBREF15, the data collection was done with a limited number of subjects who performed the task several days in a row, behaving both as the Wizard and the customer of a travel agency. The same idea was followed in BIBREF12, where a number of participants took part in the data collection over a period of 6 months and, in BIBREF3, BIBREF19 where a limited number of subjects were trained to be the Wizard. This quality control, however, naturally comes with the cost of recruiting and paying these subjects accordingly.",
"The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:",
"A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.",
"Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios."
],
[
"The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions.",
"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"The CRWIZ framework is domain-agnostic, but the data collected with it corresponds to the emergency response domain.",
"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:",
"Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.",
"Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.",
"Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface.",
"The advantage of the CRWIZ framework is that it can easily be adapted to different domains and procedures by simply modifying the dialogue states loaded at initialisation. These files are in YAML format and have a simple structure that defines their NLG templates (the FSM will pick one template at random if there is more than one) and the states that it can transition to. Note, that some further modifications may be necessary if the scenario is a slot-filling dialogue requiring specific information at various stages.",
"Once the dialogue between the participants finishes, they receive a code in the chat, which can then be submitted to the crowdsourcing platform for payment. The CRWIZ framework generates a JSON file in its log folder with all the information regarding the dialogue, including messages sent, FSM transitions, world state at each action, etc. Automatic evaluation metrics and annotations are also appended such as number of turns per participant, time taken or if one of the participants disconnected. Paying the crowdworkers can be done by just checking that there is a dialogue file with the token that they entered."
],
[
"We set up a crowdsourced data collection through Amazon Mechanical Turk, in which two participants chatted with each other in a setting involving an emergency at an offshore facility. As mentioned above, participants had different roles during the interaction: one of them was an Operator of the offshore facility whereas the other one acted as an Intelligent Emergency Assistant. Both of them had the same goal of resolving the emergency and avoiding evacuation at all costs, but they had different functions in the task:",
"The Operator was responsible for the facility and had to give instructions to the Emergency Assistant to perform certain actions, such as deploying emergency robots. Participants in the role of Operator were able to chat freely with no restrictions and were additionally given a map of the facility and a list of available robots (see Figure FIGREF8).",
"The Emergency Assistant had to help the Operator handle the emergency by providing guidance and executing actions. Participants in the role of Emergency Assistant had predefined messages depending on the task progress. They had to choose between one of the options available, depending on which made sense at the time, but they also had the option to write their own message if necessary. The Emergency Assistant role mimics that of the Wizard in a Wizard-of-Oz experiment (see Figure FIGREF11).",
"The participants had a limited time of 6 minutes to resolve the emergency, which consisted of the following sub-tasks: 1) identify and locate the emergency; 2) resolve the emergency; and 3) assess the damage caused. They had four robots available to use with different capabilities: two ground robots with wheels (Husky) and two Quadcopter UAVs (Unmanned Aerial Vehicles). For images of these robots, see Figure FIGREF8. Some robots could inspect areas whereas others were capable of activating hoses, sprinklers or opening valves. Both participants, regardless of their role, had a list with the robots available and their capabilities, but only the Emergency Assistant could control them. This control was through high-level actions (e.g. moving a robot to an area, or ordering the robot to inspect it) that the Emergency Assistant had available as buttons in their interface, as shown in Figure FIGREF11. For safety reasons that might occur in the real world, only one robot could be active doing an action at any time. The combinations of robots and capabilities meant that there was not a robot that could do all three steps of the task mentioned earlier (inspect, resolve and assess damage), but the robots could be used in any order allowing for a variety of ways to resolve the emergency.",
"Participants would progress through the task when certain events were triggered by the Emergency Assistant. For instance, inspecting the area affected by an alarm would trigger the detection of the emergency. After locating the emergency, other dialogue options and commands would open up for the Emergency Assistant. In order to give importance to the milestones in the dialogue, these events were also signalled by GIFs (short animated video snippets) in the chat that both participants could see (e.g. a robot finding a fire), as in Figure FIGREF12. The GIFs were added for several reasons: to increase participant engagement and situation awareness, to aid in the game and to show progress visually. Note that there was no visual stimuli in the original WoZ study BIBREF4 but they were deemed necessary here to help the remote participants contextualise the scenario. These GIFs were produced using a Digital Twin simulation of the offshore facility with the various types of robots. See BIBREF26 for details on the Digital Twin."
],
[
"The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.",
"The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available.",
"The Emergency Assistant interface contains a button to get a hint if they get stuck at any point of the conversation. This hint mechanism, when activated, highlights one of the possible dialogue options or robot buttons. This highlighted transition was based on the observed probability distribution of transitions from BIBREF4 to encourage more collaborative interaction than a single straight answer.",
"As in the real world, robot actions during the task were simulated to take a certain period of time, depending on the robot executing it and the action. The Emergency Assistant had the option to give status updates and progress reports during this period. Several dialogue options were available for the Emergency Assistant whilst waiting. The time that robots would take to perform actions was based on simulations run on a Digital Twin of the offshore facility implemented in Gazebo BIBREF26. Specifically, we pre-simulated typical robot actions, with the robot's progress and position reflected in the Wizard interface with up-to-date dialogue options for the Emergency Assistant. Once the robot signals the end of their action, additional updated dialogue options and actions are available for the Emergency Assistant. This simulation allowed us to collect dialogues with a realistic embedded world state."
],
[
"We used Amazon Mechanical Turk (AMT) for the data collection. We framed the task as a game to encourage engagement and interaction. The whole task, (a Human Intelligence Task (HIT) in AMT) consisted of the following:",
"Reading an initial brief set of instructions for the overall task.",
"Waiting for a partner for a few seconds before being able to start the dialogue.",
"When a partner was found, they were shown the instructions for their assigned role. As these were different, we ensured that they both took around the same time. The instructions had both a text component and a video explaining how to play, select dialogues, robots, etc.",
"Playing the game to resolve the emergency. This part was limited to 6 minutes.",
"Filling a post-task questionnaire about partner collaboration and task ease.",
"The participants received a game token after finishing the game that would allow them to complete the questionnaire and submit the task. This token helped us link their dialogue to the responses from the questionnaire.",
"Several initial pilots helped to define the total time required as 10 minutes for all the steps above. We set the HIT in AMT to last 20 minutes to allow additional time should any issues arise. The pilots also helped setting the payment for the workers. Initially, participants were paid a flat amount of $1.4 per dialogue. However, we found that offering a tiered payment tied to the length of the dialogue and bonus for completing the task was the most successful and cost-effective method to foster engagement and conversation:",
"$0.5 as base for attempting the HIT, reading the instructions and completing the questionnaire.",
"$0.15 per minute during the game, for a maximum of $0.9 for the 6 minutes.",
"$0.2 additional bonus if the participants were able to successfully avoid the evacuation of the offshore facility.",
"The pay per worker was therefore $1.4 for completing a whole dialogue and $1.6 for those who resolved the emergency for a 10-minute HIT. This pay is above the Federal minimum wage in the US ($7.25/hr or $0.12/min) at the time of the experiment.",
"The post-task questionnaire had four questions rated in 7-point rating scales that are loosely based on the PARADISE BIBREF27 questions for spoken dialogue systems:",
"Partner collaboration: “How helpful was your partner?” on a scale of 1 (not helpful at all) to 7 (very helpful).",
"Information ease: “In this conversation, was it easy to get the information that I needed?” on a scale of 1 (no, not at all) to 7 (yes, completely).",
"Task ease: “How easy was the task?” on a scale of 1 (very easy) to 7 (very difficult).",
"User expertise: “In this conversation, did you know what you could say or do at each point of the dialog?” on a scale of 1 (no, not at all) to 7 (yes, completely).",
"At the end, there was also an optional entry to give free text feedback about the task and/or their partner."
],
[
"For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4."
],
[
"Table TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.",
"Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.",
"Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“."
],
[
"In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.",
"Perhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.",
"The task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate."
],
[
"It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use."
],
[
"In future work, we want to expand and improve the platform. Dialogue system development can greatly benefit from better ways of obtaining data for rich task-oriented domains such as ours. Part of fully exploiting the potential of crowdsourcing services lies in having readily available tools that help in the generation and gathering of data. One such tool would be a method to take a set of rules, procedures or business processes and automatically convert to a FSM, in a similar way to BIBREF28, ready to be uploaded to the Wizard interface.",
"Regarding quality and coherence, dialogues are particularly challenging to automatically rate. In our data collection, there was not a correct or wrong dialogue option for the messages that the Emergency Assistant sent during the conversation, but some were better than others depending on the context with the Operator. This context is not easily measurable for complex tasks that depend on a dynamic world state. Therefore, we leave to future work automatically measuring dialogue quality through the use of context.",
"The introduction of Instructional Manipulation Checks BIBREF29 before the game to filter out inattentive participants could improve the quality of the data (Crowdworkers are known for performing multiple tasks at once). Goodman2013 also recommend including screening questions that check both attention and language comprehension for AMT participants. Here, there is a balance that needs to be investigated between experience and quality of crowdworkers and the need for large numbers of participants in order to be quickly paired.",
"We are currently exploring using the data collected to train dialogue models for the emergency response domain using Hybrid Code Networks BIBREF30."
],
[
"In conclusion, this paper described a new, freely available tool to collect crowdsourced dialogues in rich task-oriented settings. By exploiting the advantages of both the Wizard-of-Oz technique and crowdsourcing services, we can effortlessly obtain dialogues for complex scenarios. The predefined dialogue options available to the Wizard intuitively guide the conversation and allow the domain to be deeply explored without the need for expert training. These predefined options also reinforce the feeling of a true Wizard-of-Oz experiment, where the participant who is not the Wizard thinks that they are interacting with a non-human agent.",
"As the applications for task-based dialogue systems keep growing, we will see the need for systematic ways of generating dialogue corpora in varied, richer scenarios. This platform aims to be the first step towards the simplification of crowdsourcing data collections for task-oriented collaborative dialogues where the participants are working towards a shared common goal. The code for the platform and the data are also released with this publication."
],
[
"This work was supported by the EPSRC funded ORCA Hub (EP/R026173/1, 2017-2021). Chiyah Garcia's PhD is funded under the EPSRC iCase EP/T517471/1 with Siemens."
]
],
"section_name": [
"Introduction",
"Related Work",
"System Overview",
"Data Collection",
"Data Collection ::: Implementation",
"Data Collection ::: Deployment",
"Data Analysis",
"Data Analysis ::: Subjective Data",
"Data Analysis ::: Single vs Multiple Wizards",
"Data Analysis ::: Limitations",
"Data Analysis ::: Future Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"67953a768253175e8b82edaf51cba6604a936010"
],
"answer": [
{
"evidence": [
"In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions."
],
"extractive_spans": [
"pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"f0e709e5450f68728ceb216c496d69a43f916281"
],
"answer": [
{
"evidence": [
"The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:",
"A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.",
"Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.",
"The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available."
],
"extractive_spans": [
"The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard."
],
"free_form_answer": "",
"highlighted_evidence": [
"By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:\n\nA guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.\n\nProviding several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.",
"The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"37067c20bb2afc29e9dbc7ddf9e82c1fb7f7f4ad"
],
"answer": [
{
"evidence": [
"For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.",
"Data Analysis ::: Subjective Data",
"Table TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.",
"Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.",
"Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.",
"Data Analysis ::: Single vs Multiple Wizards",
"In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.",
"Perhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.",
"The task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.",
"Data Analysis ::: Limitations",
"It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use."
],
"extractive_spans": [],
"free_form_answer": "Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant.",
"highlighted_evidence": [
"For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). ",
"The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.\n\nData Analysis ::: Subjective Data\nTable TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.\n\nMann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.\n\nRegarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.\n\nData Analysis ::: Single vs Multiple Wizards\nIn Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.\n\nPerhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. 
Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.\n\nThe task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.\n\nData Analysis ::: Limitations\nIt is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"41e378720c8fbac9cf7c973a8dca6c412c11d07a"
],
"answer": [
{
"evidence": [
"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:",
"Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.",
"Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.",
"Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface."
],
"extractive_spans": [
"The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions."
],
"free_form_answer": "",
"highlighted_evidence": [
"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. ",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:\n\nVerbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.\n\nNon-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.\n\nSubmitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How is dialogue guided to avoid interactions that breach procedures and processes only known to experts?",
"What is meant by semiguided dialogue, what part of dialogue is guided?",
"Is CRWIZ already used for data collection, what are the results?",
"How does framework made sure that dialogue will not breach procedures?"
],
"question_id": [
"bc26eee4ef1c8eff2ab8114a319901695d044edb",
"9c94ff8c99d3e51c256f2db78c34b2361f26b9c2",
"8e9de181fa7d96df9686d0eb2a5c43841e6400fa",
"ff1595a388769c6429423a75b6e1734ef88d3e46"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Comparison of relevant recent works. In order, the columns refer to: the dataset and reference; if the dataset was generated using Wizard-of-Oz techniques; if there was a unique participant per role for the whole dialogue; if the dataset was crowdsourced; the type of interaction modality used; and finally, the type of task or domain that the dataset covers. † The participants were aware that the dialogue was authored by humans. ‡ The participants were volunteers without getting paid.",
"Figure 1: Interface shown to those in the Operator role running on the Slurk interaction server. It has a similar layout to other chat applications with the chat window on the left and a field to send messages at the bottom. The right side is used to display additional information.",
"Figure 2: Interface shown to those in the Emergency Assistant Wizard role running on the Slurk interaction server. The chat window is on the left, with the dialogue options and buttons to control the robots on the right. The chat here shows GIFs that appear to increase engagement and show game progress visually.",
"Figure 3: Some of the GIFs shown during the game. A and B are Husky robots assessing damages and inspecting a fire respectively. C and D show Quadcopter UAVs moving and inspecting an area.",
"Figure 4: Frequency of the top-10 Emergency Assistant dialogue acts in the data collected. There were 40 unique dialogue acts, each with two or more distinct formulations on average. Most of them also had slots to fill with contextual information, such as the name of the robot. Dialogue acts are colour-coded based on 3 main types.",
"Figure 5: Frequency of the top-10 Emergency Assistant dialogue acts in (Lopes et al., 2019).",
"Table 2: Interaction features of the dialogues collected. We compare it with the results of the Wizard-of-Oz experiment in a controlled setting from (Lopes et al., 2019).",
"Table 3: Distribution of the types of dialogue acts in the data collected with CRWIZ, compared with (Lopes et al., 2019).",
"Table 4: Subjective ratings for the post-task survey reporting Mean, Median, Mode and Standard Deviation (SD). Scales were on a 7-point rating scale. “Dialogues Collected” refers to all the dialogues collected after filtering, whereas the other columns are for the dialogues that did not resolved the emergency (“Emergency Not Resolved Dialogues”) and those that did (“Emergency Resolved Dialogues”). Higher is better (Q3 reversed for this table). Highest numbers are bold. * indicates significant differences (p < 0.05, Mann-Whitney-U) between Emergency Resolved and Emergency Not Resolved dialogues.",
"Table 5: Interaction between participants from one of the dialogues collected."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png"
]
} | [
"Is CRWIZ already used for data collection, what are the results?"
] | [
[
"2003.05995-Data Analysis ::: Subjective Data-0",
"2003.05995-Data Analysis-0",
"2003.05995-Data Analysis ::: Single vs Multiple Wizards-0",
"2003.05995-Data Analysis ::: Single vs Multiple Wizards-1",
"2003.05995-Data Analysis ::: Subjective Data-2",
"2003.05995-Data Analysis ::: Limitations-0",
"2003.05995-Data Analysis ::: Single vs Multiple Wizards-2",
"2003.05995-Data Analysis ::: Subjective Data-1"
]
] | [
"Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant."
] | 211 |
1907.02636 | Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling | Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage. Thus, they exert an important role in the field of cybersecurity. However, state-of-the-art IOCs detection systems rely heavily on hand-crafted features with expert knowledge of cybersecurity, and require large-scale manually annotated corpora to train an IOC classifier. In this paper, we propose using an end-to-end neural-based sequence labelling model to identify IOCs automatically from cybersecurity articles without expert knowledge of cybersecurity. By using a multi-head self-attention module and contextual features, we find that the proposed model is capable of gathering contextual information from texts of cybersecurity articles and performs better in the task of IOC identification. Experiments show that the proposed model outperforms other sequence labelling models, achieving the average F1-score of 89.0% on English cybersecurity article test set, and approximately the average F1-score of 81.8% on Chinese test set. | {
"paragraphs": [
[
"Indicators of Compromise (IOCs) are forensic artifacts that are used as signs when a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are composed of some combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text, describing attack tactics, technique and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . From the text , token “INST.exe” is the name of an executable file of a malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, these kinds of IOCs can be then utilized for early detection of future attack attempts by using intrusion detection systems and antivirus software, and thus, they exert an important role in the field of cybersecurity. However, with the rapid evolvement of cyber threats, the IOC data are produced at a high volume and velocity every day, which makes it increasingly hard for human to gather and manage them.",
"A number of systems are proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . However, most of those systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and they often have to be pre-defined by experts in the field of the cybersecurity. Furthermore, they need a large amount of annotated data used as the training data to train an IOC classifier. Those training data are frequently difficult to be crowed-sourced, because non-experts can hardly distinguish IOCs from those non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.",
"In this work, we consider the task of collecting IOCs from cybersecurity articles as a task of sequence labelling of natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned with a label, and tokens assigned with IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve a remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.",
"Among the previous studies of the neural sequence labelling task, Zhou el al. BIBREF12 firstly propose using an end-to-end neural sequence labelling model to fully automate the process of IOCs identification. Their model is on the basis of an artificial neural networks (ANN) with bidirectional LSTM and CRF. However, their newly introduced spelling features bring a more extraction of false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features to the ANN model so that the proposed model can perform better in gathering the contextual information from the unstructured text for the task of IOCs identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and the recall of 85.2% on English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on Chinese test set. We further evaluate the proposed model by training the model using both the English dataset and Chinese dataset, which even achieves better performance."
],
[
"Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture."
],
[
"The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder."
],
[
"The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .",
"Different from the previous work of sequence labelling in news articles or patient notes BIBREF9 , BIBREF10 , sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12 , we propose sequence representation layer that consists of 3 modules, i.e., attention-based Bi-LSTM module, multi-head self-attention module and token feature module.",
"Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce attention mechanism to Bi-LSTM to extract such tokens that are crucial to the meaning of the sentence. Then, we aggregate the representation of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13 , which is defined as follows: DISPLAYFORM0 ",
"That is to say, we first compute the INLINEFORM0 as a hidden representation of the hidden states of Bi-LSTM INLINEFORM1 for INLINEFORM2 input token, where INLINEFORM3 is obtained by concatenating the INLINEFORM4 hidden states of forward and backward LSTM, i.e., INLINEFORM5 . Then, we measure the importance of the INLINEFORM6 token with a trainable vector INLINEFORM7 and get a normalized importance weight INLINEFORM8 through a softmax function. After that, the sentence vector INLINEFORM9 is computed as a weighted sum of INLINEFORM10 ( INLINEFORM11 ). Here, weight matrix INLINEFORM12 , bias INLINEFORM13 and vector INLINEFORM14 are randomly initialized and jointly learned during the training process. Note that each input sentence merely has one sentence vector INLINEFORM15 as its weighted representation, and INLINEFORM16 is then used as a part of the INLINEFORM17 output of attention-based Bi-LSTM module, where INLINEFORM18 ( INLINEFORM19 ).",
"Motivated by the successful application of self-attention in many NLP tasks BIBREF14 , BIBREF15 , we add a multi-head self-attention module to enhance the embedding of each word with the information of other words in a text adaptively. By means of this, the local text regions where convolution performs carry the global information of text. Following the encoder part of Vaswani et al. BIBREF14 , multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings INLINEFORM0 as input, and the output is defined as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are parameter matrices for the projections of queries INLINEFORM3 , keys INLINEFORM4 and values INLINEFORM5 in the INLINEFORM6 head, respectively. Here, INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are set as the input sequence INLINEFORM10 ( INLINEFORM11 ). The INLINEFORM12 is then given to the two convolutions and the output of multi-head self-attention INLINEFORM13 ( INLINEFORM14 ) is obtained.",
"Furthermore, we introduce some features to defined IOCs to improve the performance of the proposed model on a very small amount of training data. Here, we define two types of features, i.e., spelling features and contextual features, and map each token INLINEFORM0 ( INLINEFORM1 ) to a feature vector INLINEFORM2 , where INLINEFORM3 is the spelling feature vector and INLINEFORM4 is the contextual feature vector. Note that the values of features are then jointly learned during the process of training. In Section SECREF3 , we will explain the features in more detail.",
"As shown in Fig. FIGREF2 , the vector INLINEFORM0 ( INLINEFORM1 ) is a concatenation of the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . Each vector INLINEFORM5 is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector INLINEFORM6 ."
],
[
"We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence INLINEFORM0 is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities: DISPLAYFORM0 ",
"where INLINEFORM0 is a matrix that contains the transition probabilities of two subsequent labels. Vector INLINEFORM1 is the output of the token LSTM layer, and INLINEFORM2 is the probability of label INLINEFORM3 in INLINEFORM4 . INLINEFORM5 is the probability that a token with label INLINEFORM6 is followed by a token with the label INLINEFORM7 . Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences."
],
[
"We extract a vector of features for each tokens of input sequences. In this section, we present each feature category in detail."
],
[
"Since the IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify IOCs. For example, to identify a URL, we defined a regular expression INLINEFORM0 and set the value of the URL feature to 1 when the input token matches the regular expression. However, such expressions and spelling rules could introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we further introduce the contextual features as described next."
],
[
"IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.",
"Taking the above into account, we introduce the contextual feature vector INLINEFORM0 for a given input token INLINEFORM1 , where the INLINEFORM2 element of INLINEFORM3 is defined as follows: DISPLAYFORM0 ",
" INLINEFORM0 is the frequency of token INLINEFORM1 in the whole corpus, while INLINEFORM2 is the frequency of contextual keyword INLINEFORM3 from the windowed portions of the texts centering on the token INLINEFORM4 in the whole corpus and INLINEFORM5 is the size of window. The set of contextual keywords INLINEFORM6 are automatically extracted from the annotated texts, where each contextual keyword INLINEFORM7 ( INLINEFORM8 ) satisfies the following conditions:",
" INLINEFORM0 , where INLINEFORM1 is the set of manually annotated IOCs and INLINEFORM2 is a the lower bound of the frequency.",
" INLINEFORM0 is not a punctuation or stopword.",
"Note that we extract contextual keywords only from manually annotated data (e.g., training set), while we compute the contextual feature vector in all of the unlabeled data. According to this definition, it is obvious that the dimension of the contextual feature vector is as the same as the number of extracted contextual keywords. The size of window INLINEFORM0 and the lower bound of frequency INLINEFORM1 are then tuned by the validation set."
],
[
"The feature vector for an input token is the concatenation of the token spelling feature vector and the contextaul feature vector. Here, to elucidate the best usage of the feature vector, we evaluate the feature vector by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer ( INLINEFORM0 ), the hidden state of the token LSTM ( INLINEFORM1 ), and the output of token LSTM ( INLINEFORM2 ). Among them, to concatenate the feature vector with the LSTM hidden state vector and the sentence vector of attention in the token LSTM layer, as shown in Section SECREF4 , achieved the best performance. We speculate that the features played an important role in the task of IOCs identification and feature vectors near the output layer were able to improve the performance more significantly than those at other locations."
],
[
"For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.",
"For Chinese dataset, we crawl 5,427 cybersecurity articles online from 35 cybersecurity blogs which are published from 2001 to 2018. All of these cybersecurity articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.",
"TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme."
],
[
"For pre-trained token embedding, we apply word2vec BIBREF17 to all crawled 687 English APT reports and 5,427 Chinese cybersecurity articles described in Section SECREF21 respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.",
"The ANN model is trained with the stochastic gradient descent to update all parameters, i.e., token embedding, character embedding, parameters of Bi-LSTM, weights of sentence attention, weights of multi-head self-attention, token features, and transition probabilities of CRF layers at each gradient step. For regularization, the dropout is applied to the output of each sub layer of the ANN model. Further training details are given below: (a) For attention-based Bi-LSTM module, dimensions of character embedding, hidden states of character-based token embedding LSTM, hidden states of Bi-LSTM, and sentence attention are set to 25, 25, 100 and 100, respectively. For multi-head self-attention module, we employ a stack of 6 multi-head self attention layer, each of which has 4 head and dimension of each head is set to 64. (b) All of the ANN’s parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of epochs for training is set as 30. After the first 30 epochs had been trained, we compute the average F1-score of the validation set by the use of the currently produced model after every epoch had been trained, and stop the training process when the average F1-score of validation set fails to increase during the last ten epochs. We train our model for, if we do not early stop the training process, 100 epochs as the maximum number. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5."
],
[
"As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331.",
"Furthermore, we quantitatively compare our study with other typical works of sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation was concatenated with word embedding and Rei et al. BIBREF18 improved the model by introducing an attention mechanism to the character-level representations. We train these models by employing the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains the highest precision, recall and F1-score than other models in the task of IOCs extraction. Compared with the second-best model of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% of precision and 10.0% of recall. The performance gain of the proposed model on the Chinese dataset is approximately 4.2% of precision and 9.0% of recall.",
"We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.",
"TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with one by the work of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”, and thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL, where the token is defined as a URL by spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify token “cr.sh” of the input Chinese text as a malicious file name, while the token is assigned with a correct label by the proposed model. It is mainly because that the token “cr.sh” is defined as a token of file information by spelling features and tends to co-occur with words, “”(download) and “”(mining software). These two words often appear nearby malicious file information and are then extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features."
],
[
"The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them during the process of training with the whole ANN model. To prove the effectiveness of the contextual features, we visualize the learned weights martix of each contextual keyword of contextual feature and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of contextual keywords for the given tokens. From this we see which contextual keyword were considered more important to represent the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphshing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, contextual keywords “droppper” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For non-IOC token “socket”, contextual keywords “gateway” and “port” yield larger weights than other keywords because \"socket\" tends to co-occur with “gateway” and “port”.",
"We further calculate the average weight of each contextual keyword and show the top 10 and bottom 10 largest weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as, “hash” and “filename”, which tends to co-occur with malicious filenames, have the largest weights for IOCs, while the contextual keywords such as “ascii”, “password” have the largest weights for non-IOCs. Here, it is interesting to find that contextual keyword “dropped” and “droppper”, which tend to co-occur with malicious file information and malwares, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences of contextual information between IOCs and non-IOCs that is represented by the contextual features, and thus, achieves better performance than the previous works."
],
[
"Even though security articles are written in different languages, most of the IOCs are written in English, and are described in a similar pattern. Therefore, using multilingual corpora could be a solution for addressing the lack of annotated data, and the performance of the proposed model is expected to be improved by extending the training set. To examine the hypothesis, we ran a number of additional experiments using both the English dataset and Chinese dataset, both of which are described in Section SECREF21 and are not parallel data or comparable data.",
"As pre-trained word embeddings for the bilingual training dataset, we applied a cross-lingual word embedding obtained by the work of Duong el al BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words from English dataset to Chinese and Chinese words from Chinese dataset to English using Google translation. As contextual feature vector, we concatenate the contextual feature vector obtained from English dataset with the contextual feature vector obtained from Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set. TABLE TABREF31 shows that the proposed model trained with the English training set and Chinese training set achieves a small improvement of F1-score on English test set when compared with the model trained with only English training set, and a great improvement of F1-score on Chinese test set when compared with the model trained with only Chinese training set.",
"We compare scores of each label when the proposed model is trained with different training sets in TABLE TABREF32 . When using the English test set, the F1-scores of labels “attack method”, “attack target” and “malware” by the model trained with the English training set and Chinese training set are lower than those scores by the model trained with only the English training set. It is mainly because that tokens of these labels can be written in different languages, which harms the model trained with the bilingual training data set. In contrast, benefiting from the extension of training set, for types of labels that are often written in English, e.g., “domain ”, “file imformation”, “IPv4” and “vlunerability”, the proposed model trained with the English training set and the Chinese training set achieves higher scores than the model trained with only the English training set. When using the Chinese test set, the proposed model trained with the English training set and the Chinese training set obtained a obviously higher F1-scores than the model trained with only the Chinese training set for almost all the types of labels. It is interesting to find that types of labels “e-mail address”, “attack method”, “attacker”, which lack of instances in Chinese training set, show the biggest improvement by using the model trained with the bilingual training set."
],
[
"To conclude, in this paper, we newly introduce a multi-head self-attention module and contextual features to the neural based sequence labelling model, which significantly improved the performance in the task of IOC identification. Based on the evaluation results of our experiments, our proposed model is proved effective on both the English test set and the Chinese test set. We further evaluated the proposed model by training the proposed model using both the English training set and the Chinese training set and compared it with models that are trained with only one training set, where the model trained with the merged bilngual training set performs better.",
"One of our future works is to integrate the contextual embeddings from the bidirectional language model into our proposed model. The pretrained neural language models are proved effective in the sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . It is expected to improve the performance of the proposed model by integrating both the contextual features and contextual embeddings into the neural sequence labelling model."
]
],
"section_name": [
"Introduction",
"Model",
"Token Embedding Layer",
"Sequence Representation Layer",
"CRF Layer",
"Features",
"Spelling Features",
"Contextual Features",
"Usage of Features",
"Datasets",
"Training Details",
"Results",
"Analysis of Contextual Features",
"Training the Proposed Model with Bilingual Data",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"102b5f1010602ad1ea20ccdc52d330557bfc7433"
],
"answer": [
{
"evidence": [
"As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331."
],
"extractive_spans": [
"As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"aef28565f179d4c9f16d43c8a36ed736718157fc"
],
"answer": [
{
"evidence": [
"IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords."
],
"extractive_spans": [],
"free_form_answer": "The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords.",
"highlighted_evidence": [
" In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"e9486a8eb7bfa181261aef55adfe2acf4a011664"
],
"answer": [
{
"evidence": [
"For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training."
],
"extractive_spans": [
" from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018"
],
"free_form_answer": "",
"highlighted_evidence": [
"For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"c3aaa905861aab52233d0a80bb71b8c517cc2e94"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What is used a baseline?",
"What contextual features are used?",
"Where are the cybersecurity articles used in the model sourced from?",
"What type of hand-crafted features are used in state of the art IOC detection systems?"
],
"question_id": [
"08b87a90139968095433f27fc88f571d939cd433",
"ef872807cb0c9974d18bbb886a7836e793727c3d",
"4db3c2ca6ddc87209c31b20763b7a3c1c33387bc",
"63337fd803f6fdd060ebd0f53f9de79d451810cd"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 2. ANN model of sequence labeling for IOCs automatic identification",
"TABLE I STATISTICS OF DATASETS (NUMBERS OF TRAINING / VALIDATION / TEST SET)",
"TABLE II EVALUATION RESULTS (MICRO AVERAGE FOR 11 LABELS)",
"TABLE III EXAMPLES OF CORRECT IDENTIFICATION BY THE PROPOSED MODEL",
"Fig. 3. Heatmap of part of contextual features martix in the English dataset",
"TABLE IV TOP 10 AND BOTTOM 10 LARGEST WEIGHTED CONTEXTUAL KEYWORDS OF CONTEXTUAL FEATURE IN THE ENGLISH DATASET",
"TABLE V COMPARISON OF EVALUATION RESULTS WHEN TRAINING THE PROPOSED MODEL WITH DIFFERENT TRAINING SETS (MICRO AVERAGE PRECISION / RECALL / F1-SCORE FOR 11 LABELS)",
"TABLE VI EVALUATION RESULTS FOR EACH LABEL WHEN TRAINING THE PROPOSED MODEL WITH DIFFERENT TRAINING SETS (PRECISION / RECALL / F1-SCORE)"
],
"file": [
"3-Figure2-1.png",
"4-TableI-1.png",
"5-TableII-1.png",
"6-TableIII-1.png",
"6-Figure3-1.png",
"6-TableIV-1.png",
"7-TableV-1.png",
"8-TableVI-1.png"
]
} | [
"What contextual features are used?"
] | [
[
"1907.02636-Contextual Features-0"
]
] | [
"The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords."
] | 214 |
1605.08675 | Boosting Question Answering by Deep Entity Recognition | In this paper an open-domain factoid question answering system for Polish, RAFAEL, is presented. The system goes beyond finding an answering sentence; it also extracts a single string, corresponding to the required entity. Herein the focus is placed on different approaches to entity recognition, essential for retrieving information matching question constraints. Apart from traditional approach, including named entity recognition (NER) solutions, a novel technique, called Deep Entity Recognition (DeepER), is introduced and implemented. It allows a comprehensive search of all forms of entity references matching a given WordNet synset (e.g. an impressionist), based on a previously assembled entity library. It has been created by analysing the first sentences of encyclopaedia entries and disambiguation and redirect pages. DeepER also provides automatic evaluation, which makes possible numerous experiments, including over a thousand questions from a quiz TV show answered on the grounds of Polish Wikipedia. The final results of a manual evaluation on a separate question set show that the strength of DeepER approach lies in its ability to answer questions that demand answers beyond the traditional categories of named entities. | {
"paragraphs": [
[
"A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. So broadly defined task seems very hard; BIBREF0 describes it as AI-Complete, i.e. equivalent to building a general artificial intelligence. Nonetheless, the field has attracted a lot of attention in Natural Language Processing (NLP) community as it provides a way to employ numerous NLP tools in an exploitable end-user system. It has resulted in valuable contributions within TREC competitions BIBREF1 and, quite recently, in a system called IBM Watson BIBREF2 , successfully competing with humans in the task.",
"However, the problem remains far from solved. Firstly, solutions designed for English are not always easily transferable to other languages with more complex syntax rules and less resources available, such as Slavonic. Secondly, vast complexity and formidable hardware requirements of IBM Watson suggest that there is still a room for improvements, making QA systems smaller and smarter.",
"This work attempts to contribute in both of the above areas. It introduces RAFAEL (RApid Factoid Answer Extraction aLgorithm), a complete QA system for Polish language. It is the first QA system designed to use an open-domain plain-text knowledge base in Polish to address factoid questions not only by providing the most relevant sentence, but also an entity, representing the answer itself. The Polish language, as other Slavonic, features complex inflection and relatively free word order, which poses additional challenges in QA. Chapter SECREF2 contains a detailed description of the system architecture and its constituents.",
"In the majority of such systems, designers' attention focus on different aspects of a sentence selection procedure. Herein, a different idea is incorporated, concentrating on an entity picking procedure. It allows to compare fewer sentences, likely to contain an answer. To do that, classical Named Entity Recognition (NER) gets replaced by Deep Entity Recognition. DeepER, introduced in this work, is a generalisation of NER which, instead of assigning each entity to one of several predefined NE categories, assigns it to a WordNet synset.",
"For example, let us consider a question: Which exiled European monarch returned to his country as a prime minister of a republic?. In the classical approach, we recognise the question as concerning a person and treat all persons found in texts as potential answers. Using DeepER, it is possible to limit the search to persons being monarchs, which results in more accurate answers. In particular, we could utilise information that Simeon II (our answer) is a tsar; thanks to WordNet relations we know that it implies being a monarch. DeepER is a generalisation of NER also from another point of view – it goes beyond the classical named entity categories and treats all entities equally. For example, we could answer a question Which bird migrates from the Arctic to the Antarctic and back every year?, although arctic tern is not recognized as NE by NER systems. Using DeepER, we may mark it as a seabird (hence a bird) and include among possible answers. Chapter SECREF3 outlines this approach.",
"The entity recognition process requires an entities library, containing known entities, their text representations (different ways of textual notation) and WordNet synsets, to which they belong. To obtain this information, the program analyses definitions of entries found in encyclopaedia (in this case the Polish Wikipedia). In previous example, it would use a Wikipedia definition: The Arctic Tern (Sterna paradisaea) is a seabird of the tern family Sternidae. This process, involving also redirect and disambiguation pages, is described in section SECREF40 . Next, having all the entities and their names, it suffices to locate their mentions in a text. The task (section SECREF73 ) is far from trivial because of a complicated named entity inflection in Polish (typical for Slavonic languages, see BIBREF3 ).",
"DeepER framework provides also another useful service, i.e. automatic evaluation. Usually QA systems are evaluated by verifying accordance between obtained and actual answer based on a human judgement. Plain string-to-string equality is not enough, as many entities have different text representations, e.g. John F. Kennedy is as good as John Fitzgerald Kennedy and John Kennedy, or JFK (again, the nominal inflection in Polish complicates the problem even more). However, with DeepER, a candidate answer can undergo the same recognition process and be compared to the actual expected entity, not string.",
"Thanks to automatic evaluation vast experiments requiring numerous evaluations may be performed swiftly; saving massive amount of time and human resources. As a test set, authentic questions from a popular Polish quiz TV show are used. Results of experiments, testing (among others) the optimal context length, a number of retrieved documents, a type of entity recognition solution, appear in section SECREF88 .",
"To avoid overfitting, the final system evaluation is executed on a separate test set, previously unused in development, and is checked manually. The results are shown in section SECREF93 and discussed in chapter SECREF6 . Finally, chapter SECREF7 concludes the paper."
],
[
"As stated in previous chapter, RAFAEL is a computer system solving a task of Polish text-based, open-domain, factoid question answering. It means that provided questions, knowledge base and returned answers are expressed in Polish and may belong to any domain. The system analyses the knowledge base, consisting of a set of plain text documents, and returns answers (as concise as possible, e.g. a person name), supplied with information about supporting sentences and documents.",
"What are the kinds of requests that fall into the category of factoid questions? For the purpose of this study, it is understood to include the following types:",
"Although the above list rules out many challenging types of questions, demanding more elaborate answers (e.g. Why was JFK killed?, What is a global warming?, How to build a fence?), it still involves very distinct problems. Although RAFAEL can recognize factoid questions from any of these types and find documents relevant to them (see more in section SECREF18 and BIBREF4 ), its answering capabilities are limited to those requesting single unnamed entities and named entities. In this document, they are called entity questions.",
"The task description here is similar to the TREC competitions and, completed with test data described in section SECREF80 , could play a similar role for Polish QA, i.e. provide a possibility to compare different solutions of the same problem. More information about the task, including its motivation, difficulties and a feasibility study for Polish could be found in BIBREF5 ."
],
[
"The problem of Question Answering is not new to the Polish NLP community (nor working on other morphologically rich languages), but none of studies presented so far coincides with the notion of plain text-based QA presented above.",
"First Polish QA attempts date back to 1985, when BIBREF6 presented a Polish interface to ORBIS database, containing information about the solar system. The database consisted of a set of PROLOG rules and the role of the system (called POLINT) was to translate Polish questions to appropriate queries. Another early solution, presented by BIBREF7 , could only work in a restricted domain (business information).",
"A system dealing with a subset of the TREC tasks was created for Bulgarian by BIBREF8 . His solution answers only three types of questions: Definition, Where-Is and Temporal. He was able to achieve good results with 100 translated TREC questions, using several manually created answer patterns, without NER or any semantic information. Another system for Bulgarian BIBREF9 participated in the CLEF 2005 competition. Its answer extraction module bases on partial grammars, playing a role of patterns for different types of questions. They could answer correctly 37 of 200 questions, of which only 16 belong to the factoid type. Previously the same team BIBREF10 took part in a Bulgarian-English track of the CLEF 2004, in which Bulgarian questions were answered using English texts.",
"A QA solution was also created for Slovene BIBREF11 . The task there is to answer students' questions using databases, spreadsheet files and a web service. Therefore, it differs from the problem discussed above by limited domain (issues related to a particular faculty) and the non-textual knowledge base. Unfortunately, no quantitative results are provided in this work.",
"More recently, several elements of a Polish QA system called Hipisek were presented by BIBREF12 . It bases on a fairly common scheme of transforming a question into a search query and finding the most appropriate sentence, satisfying question constrains. Unfortunately, a very small evaluation set (65 question) and an unspecified knowledge base (gathered by a web crawler) make it difficult to compare the results. In their later works BIBREF13 , BIBREF14 , the team concentrated on spatial reasoning using a knowledge base encoded as a set of predicates.",
"The approach presented by BIBREF15 is the closest to the scope of this work, as it includes analysis of Polish Wikipedia content and evaluation is based on questions translated from a TREC competition. Unfortunately, it heavily relies on a structure of Wikipedia entries, making it impossible to use with an arbitrary textual corpus.",
"A non-standard approach to answer patterns has been proposed by BIBREF16 . In their Czech open-domain QA system they used a set of templates associated with question types, but also presented a method to learn them semi-automatically from search results. BIBREF17 in their Bulgarian QA system concentrated on semantic matching between between a question and a possible answer checked using dependency parsing. However, they provide no data regarding an answering precision of the whole system.",
"The last Polish system worth mentioning has been created by BIBREF18 . Generally, their task, called Open Domain Question Answering (ODQA), resembles what is treated here, but with one major difference. A document is considered an answer; therefore they focus on improving ranking in a document retrieval stage. They have found out that it could benefit from taking nearness of query terms occurrences into account.",
"As some of Slavonic languages lack necessary linguistic tools and resources, only partial solutions of QA problems exist for them, e.g. document retrieval for Macedonian BIBREF19 , question classification for Croatian BIBREF20 or answer validation for Russian BIBREF21 .",
"The idea of DeepER in a nutshell is to improve QA by annotating a text with WordNet synsets using an entity base created by understanding definitions found in encyclopaedia. Parts of this concept have already appeared in the NLP community.",
"A technique of coordinating synsets assigned to a question and a possible answer emerged in a study by BIBREF45 . While a question analysis there seems very similar to this work, entity library (called proper noun ontology) generation differs a lot. The author analysed 1 GB of newswire text and extracted certain expressions, e.g. \"X, such as Y\" implies that Y is an instance of X. Albeit precision of resulting base was not very good (47 per cent for non-people proper names), it led to a substantial improvement of QA performance.",
"The idea of analysing encyclopaedic definitions to obtain this type of information already appeared, but was employed for different applications. For example, BIBREF46 described a method of building a gazetteer by analysing hyperonymy branches of nouns of first sentences in Wikipedia definitions. Unlike in this work, an original synset was replaced by a coarse-grained NER category. Another example of application is a NE recognizer BIBREF47 using words from a definition as additional features for a standard CRF classifier. In their definition analysis only the last word of the first nominal group was used.",
"Other researchers dealt with a task explicitly defined as classifying Wikipedia entries to NER categories. For example BIBREF48 addressed the problem by combining traditional text classification techniques (bag of words) with contexts of entity mentions. Others BIBREF49 thoroughly examined article categories as a potential source of is-a relations in a taxonomy (99 per cent of entries have at least one category). Inhomogeneity of categories turned out as the main problem, dealt with by a heuristic classifier, assigning is-a and not-is-a labels. Categories were also used as features in a NER task BIBREF50 , but it required a set of manually designed patterns to differentiate between categories of different nature.",
"Exploring a correspondence between Wikipedia entries and WordNet synsets found an application in automatic enriching ontologies with encyclopaedic descriptions BIBREF51 . However, only NEs already appearing in the WordNet were considered. The task (solved by bag-of-words similarity) is non-trivial only in case of polysemous words, e.g. which of the meanings of Jupiter corresponds to which Wikipedia article? Others BIBREF52 concentrated on the opposite, i.e. extending the WordNet by NEs that are not there yet by adding titles of entries as instances of synsets corresponding to their common category.",
"Also, some see Wikipedia as an excellent source of high-quality NER training data. Again, it requires to project entries to NE categories. A thorough study of this problem, presented by BIBREF53 , utilizes features extracted from article content (bag of words), categories, keywords, inter-article and inter-language links. A final annotated corpus turns out as good for NER training as a manually annotated gold standard.",
"Finally, some researchers try to generalise NER to other categories, but keep the same machine-learning-based approach. For example, BIBREF54 developed a tagger, assigning words in a text to one of 41 supersenses. Supersenses include NE categories, but also other labels, such as plant, animal or shape. The authors projected word-sense annotations of publicly available corpora to supersenses and applied perceptron-trained Hidden Markov Model for sequence classification, obtaining precision and recall around 77 per cent."
],
[
"A general architectural scheme of RAFAEL (figure FIGREF11 ) has been inspired by similar systems developed for English; for examples see works by BIBREF22 and BIBREF23 .",
"Two of the steps in the diagram concern offline processing of a knowledge base. Firstly, it is indexed by a search engine to ensure efficient searching in further stages (INDEXING). Secondly, it may be annotated using a set of tools (NLP), but this could also happen at an answering stage for selected documents only.",
"After the system receives a question, it gets analysed (QUESTION ANALYSIS) and transformed into a data structure, called question model. One of its constituents, a search query, is used to find a set of documents, which are probably appropriate for the current problem (SEARCH). For each of the documents, all entity mentions compatible with an obtained question type (e.g. monarchs), are extracted (ENTITY RECOGNITION). For each of them, a context is generated (CONTEXT GENERATION). Finally, a distance between a question content and the entity context is computed to asses its relevance (DISTANCE MEASURE). All the mentions and their distance scores are stored and, after no more documents are left, used to select the best match (BEST ENTITY SELECTION). The system returns the entity, supplied with information about a supporting sentence and a document, as an answer."
],
[
"Knowledge base (KB) processing consists of two elements: indexing and annotating. The objective of the first is to create an index for efficient searching using a search engine. In the system, Lucene 3.6 is used to build two separate full-text indices: regular and stemmed using a built-in stemmer for Polish, Stempel BIBREF24 .",
"Secondly, texts go through a cascade of annotation tools, enriching it with the following information:",
"Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,",
"Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,",
"Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,",
"Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .",
"All the annotations are stored in a variant of TEI P5 standard, designed for the National Corpus of Polish BIBREF31 . As noted previously, the process of annotating is not indispensable at the stage of offline KB processing; it could be as well executed only on documents returned from the search engine (for example see Webclopedia by BIBREF22 or LASSO by BIBREF23 ). However, since during evaluation experiments the same documents undergo the process hundreds of times, it seems reasonable to process the whole KB only once."
],
[
"The goal of question analysis is to examine a question and extract all the information that suffices for answer finding. A resulting data structure, called question model, contains the following elements:",
"Question type – a description of expected answer type, instructing the system, what type of data could be returned as an answer. It has three levels of specificity:",
"General question type – one of the types of factoid questions, enumerated at the beginning of this chapter,",
"Named entity type – applicable only in case general type equals named entity. Possible values are the following: place, continent, river, lake, mountain, mountain range, island, archipelago, sea, celestial body, country, state, city, nationality, person, first name, last name, band, dynasty, organisation, company, event, date, century, year, period, number, quantity, vehicle, animal, title.",
"Focus synset – applicable in case of entity questions; a WordNet synset, to which a question focus belongs; necessary for DeepER.",
"Search query – used to find possibly relevant documents,",
"Question content – the words from question which are supposed to appear also in context of an answer.",
"The task presented above, called question classification, is an example of text classification with very short texts. It could be tackled by a general-purpose classifier; for example, BIBREF11 used SVMs (Support Vector Machines) for closed-domain Slovene QA system; BIBREF32 employed SNoW (Sparse Network of Winnows) for hierarchical classification of TREC questions. For Polish results are not satisfactory BIBREF4 because of data sparsity.",
"However, sometimes a solution seems quite evident, as part of the question types enforce its structure. For example, when it begins with Who or When, it belongs to person and date question types, respectively. That is why a set of 176 regular expressions (in case of RAFAEL) suffices to deal with them. They match only a subset of questions (36.15 per cent of the training set), but are highly unambiguous (precision of classification equals 95.37 per cent). Nevertheless, some BIBREF33 use solely such patterns, but need a great number of them (1,273).",
"Unfortunately, most of entity questions are ambiguous, i.e. it is not enough to inspect an interrogative pronoun to find an answer type. They may begin with what or which, followed by a question focus. For example, let us consider a question Which russian submarine sank in 2000 with its whole crew?. Its focus (russian submarine) carries information that the question could be answered by a named entity of type vehicle. The whole process of focus analysis is shown in figure FIGREF25 . The first nominal group after a pronoun serves as a possible lexeme name in plWordNet 2.1 BIBREF34 . As long as there are no results, it gets replaced by its semantic head. When a matching lexeme exists in WordNet, a set of all its hypernyms is extracted. If any of the elements in the set correspond to one of the named entity types, this type is recorded in the question model. Otherwise the general question type takes the value unnamed entity. A WordNet-assisted focus analysis was also implemented in one of solutions participating in a TREC competition BIBREF35 .",
"Search query generation is described in the next chapter. The last element of a question model, called question content, contains segments, which are to be compared with texts to find the best answer. It includes all the words of the interrogative sentence except those included in the matched pattern (Which, ?) and the focus (submarine). In our example the following are left: russian, sank, in, 2000, with, its, whole, crew. An entity mention, which context resembles this set, will be selected as an answer (see details in section SECREF33 ).",
"The question analysis stage explained above follows a design presented in previous works BIBREF4 , BIBREF36 , where more details could be found. The major difference lies in result processing – an original synset is not only projected to one of the named entity types, but also recorded as a focus synset in question type, utilised in DeepER to match entity types. In our example, it would only consider submarines as candidate answers."
],
[
"The use of search engines in QA systems is motivated mainly by performance reasons. Theoretically, we could analyse every document in a text base and find the most relevant to our query. However, it would take excessive amount of time to process the documents, majority of which belong to irrelevant domains (839,269 articles in the test set). A search engine is used to speed up the process by selecting a set of documents and limiting any further analysis to them.",
"As described in section SECREF12 , a knowledge base is indexed by Lucene offline. Given a question, we need to create a search query. The problem is that an answer in the knowledge base is probably expressed differently than the question. Hence, a query created directly from words of the question would not yield results, unless using a highly-redundant KB, such as the WWW (for this type of solution see BIBREF37 ). Therefore, some of the query terms should be dropped – based on their low IDF BIBREF38 or more complex heuristics BIBREF23 . On the other hand, the query may be expanded with synonyms BIBREF22 or derived morphological forms BIBREF38 .",
"Finally, we need to address term matching issue – how to compare a query keyword and a text word in a morphologically-rich language, such as Polish? Apart from exact match, it also is possible to use a stemmer or fuzzy queries, available in Lucene (accepting a predefined Levenshtein distance between matching strings).",
"Previous experiments BIBREF36 led to the following query generation procedure:",
"Remove all words matched by a regular expression at the classification stage (What, Which, etc.),",
"Keep a question focus,",
"Connect all the remaining words by OR operator,",
"Use fuzzy term matching strategy with absolute distance equal 3 characters and fixed prefix.",
"Lucene handles a query and yields a ranked document list, of which N first get transferred to further analysis. The influence of value of N on answering performance is evaluated in section SECREF88 ."
],
[
"Having a set of proposed documents and a question type, the next step is to scan them and find all mentions of entities with appropriate types. RAFAEL includes two approaches to the problem: classical Named Entity Recognition (NER) and novel Deep Entity Recognition.",
"Three NERs for Polish are employed: NERF, Liner2 and Quant. NERF BIBREF29 is a tool designed within the project of the National Corpus of Polish and bases on linear-chain conditional random fields (CRF). It recognizes 13 types of NEs, possibly nested (e.g. Warsaw in University of Warsaw). Liner2 BIBREF30 also employs CRFs, but differentiates NEs of 56 types (which could be reduced to 5 for higher precision). Annotation using both of the tools happens offline within the KB preprocessing, so in the currently described stage it suffices to browse the annotations and find matching entities. As the above tools lack recognition of quantitative expressions, a new one has been developed especially for RAFAEL and called Quant. It is able to handle both numbers and quantities (using WordNet) in a variety of notations.",
"Appendix A contains details of implementation of named entity recognition in RAFAEL, including a description of Quant and a mapping between question types and named entity types available in NERF and Liner2. An alternative being in focus of this work, i.e. DeepER approach, is thorougly discussed in chapter SECREF3 .",
"RAFAEL may use any of the two approaches to entity recognition: NER (via NERF, Liner2 and Quant) or novel DeepER; this choice affects its overall performance. Experiments showing precision and recall of the whole system with respect to applied entity recognition technique are demonstrated in section SECREF88 .",
"An entity recognition step is performed within the question answering process and aims at selecting all entity mentions in a given annotated document. Before it begins, the entity library is read into a PATRICIA trie, a very efficient prefix tree. In this structure, every entity name becomes a key for storing a corresponding list of entities.",
"When a document is ready for analysis, it is searched for strings that match any of the keys in the trie. The candidate chunks (sequences of segments) come from three sources:",
"lemmata of words and syntactic groups,",
"sequences of words in surface forms (as they appear in text),",
"sequences of words in base forms (lemmata).",
"The last two techniques are necessary, because a nominal group lemmatisation often fails, especially in case of proper names. Their rich inflection in Polish BIBREF3 means that a nominal suffix of an entity may be hard to predict. Therefore, a chunk is considered to match an entity name if:",
"they share a common prefix,",
"an unmatched suffix in neither of them is longer than 3 characters,",
"the common prefix is longer than the unmatched chunk suffix.",
"Given a list of entity mentions, RAFAEL checks their compatibility with a question model. Two of its constituents are taken into account: a general question type and a synset. An entity mention agrees with NAMED_ENTITY type if its first segment starts with a capital letter and always agrees with UNNAMED_ENTITY. To pass a semantic agreement test, the synset of the question model needs to be a (direct or indirect) hypernym of one of the synsets assigned to the entity. For example, list of synsets assigned to entity Jan III Sobieski contains <król.1> (king), so it matches a question focus <władca.1, panujący.1, hierarcha.2, pan.1> (ruler) through a hypernymy path <władca.1, panujący.1, hierarcha.2, pan.1> INLINEFORM0 <monarcha.1, koronowana głowa.1> (monarch) INLINEFORM1 <król.1>. All the mentions of entities satisfying these conditions are returned for further processing."
],
[
"When a list of entity mentions in a given document is available, we need to decide which of them most likely answers the question. The obvious way to do that is to compare surroundings of every mention with the content of the question. The procedure consists of two steps: context generation and similarity measurement.",
"The aim of a context generation step is to create a set of segments surrounding an entity, to which they are assigned. Without capabilities of full text understanding, two approximate approaches seem legitimate:",
"Sentence-based – for a given entity mention, a sentence in which it appears, serves as a context,",
"Segment-based – for a given entity mention, every segment sequence of length M, containing the entity, is a context.",
"Both of them have some advantages: relying on a single sentence ensures relation between an entity and a context, whereas the latter provides possibility of modifying context length. Obviously, the value of M should be proportional to question (precisely, its content) length.",
"The method of treating sentences as a context has gained most popularity (see work of BIBREF39 ), but a window of fixed size also appears in the literature; for example BIBREF38 used one with M=140 bytes.",
"The context generation is also related to another issue, i.e. anaphoric expressions. Some segments (e.g. this, him, they) may refer to entities that occurred earlier in a text and therefore harm a similarity estimation. It could be tackled by applying anaphora resolution, but a solution for Polish BIBREF40 remains in an early stage. Observations show that the majority of anaphora refer to an entity in a document title, so the problem is partially bypassed by adding a title to a context.",
"An influence of the context generation techniques on final results is shown in section SECREF88 .",
"To measure a similarity between a question content (explained in section SECREF18 ) and an entity context (generated by the procedures in previous section), a Jaccard similarity index BIBREF41 is computed. However, not all word co-occurrences matter equally (e.g. compare this and Honolulu), so word weights are used: INLINEFORM0 ",
"The sets INLINEFORM0 and INLINEFORM1 contain segments in base forms, whereas INLINEFORM2 denotes a weight of an INLINEFORM3 -th base form, equal to its scaled IDF computed on a document set INLINEFORM4 : INLINEFORM5 ",
"The Jaccard index is a popular solution for sentence similarity measurement in QA (for example see a system by BIBREF42 ). In case of selecting relevant documents, cosine measure is also applied. BIBREF18 compared it to Minimal Span Weighting (MSW) and observed that the latter performs better, as it takes into account a distance between matched words. A study of different techniques for sentence similarity assessment could be found in BIBREF39 .",
"At this stage, a large set of pairs of entity mention and its contexts with scores assigned, is available. Which of them answers the question? Choosing the one with the highest score seems an obvious solution, but we could also aggregate scores of different mentions corresponding to the same answer (entity), e.g. compute their sum or mean. However, such experiments did not yield improvement, so RAFAEL returns only a single answer with the highest score.",
"An answer consists of the following elements: an answer string, a supporting sentence, a supporting document and a confidence value (the score). A sentence and a document, in which the best mention appeared, are assumed to support the answer. Thanks to properties of Jaccard similarity, the mention score ranges between 0 for completely unrelated sentences to 1 for practically (ignoring inflection and a word order) the same. Therefore, it may serve as an answer confidence.",
"When no entity mentions satisfying constraints of a question are found, no answer is returned. This type of result could also be used when the best confidence score is below a predefined value; performance of such technique are shown in section SECREF88 . The refusal to answer in case of insufficient confidence plays an important role in Jeopardy!, hence in IBM Watson BIBREF2 , but it was also used to improve precision in other QA systems BIBREF43 ."
],
[
"Deep Entity Recognition procedure is an alternative to applying Named Entity Recognition in QA to find entities matching question constraints. It scans a text and finds words and multi-word expressions, corresponding to entities. However, it does not assign them to one of several NE categories; instead, WordNet synsets are used. Therefore, named entities are differentiated more precisely (e.g. monarchs and athletes) and entities beyond the classical NE categories (e.g. species, events, devices) could also be recognised in a text.",
"It does not seem possible to perform this task relying solely on features extracted from words and surrounding text (as in NER), so it is essential to build an entity library. Such libraries already exist (Freebase, BabelNet, DBpedia or YAGO) and could provide an alternative for DeepER, but they concentrate on English. The task of adaptation of such a base to another language is far from trivial, especially for Slavonic languages with complex NE inflection BIBREF3 . An ontology taking into account Polish inflection (Prolexbase) has been created by BIBREF44 , but it contains only 40,000 names, grouped into 34 types."
],
[
"An entity library for DeepER contains knowledge about entities that is necessary for deep entity recognition. Each of them consists of the following elements (entity #9751, describing the Polish president, Bronisław Komorowski):",
"Main name: Bronisław Komorowski,",
"Other names (aliases): Bronisław Maria Komorowski, Komorowski,",
"Description URL: http://pl.wikipedia.org/wiki/?curid=121267,",
"plWordNet synsets:",
"<podsekretarz1, podsekretarz stanu1, wiceminister1> (vice-minister, undersecretary),",
"<wicemarszałek1> (vice-speaker of the Sejm, the Polish parliament),",
"<polityk1> (politician),",
"<wysłannik1, poseł1, posłaniec2, wysłaniec1, posłannik1> (member of a parliament),",
"<marszałek1> (speaker of the Sejm),",
"<historyk1> (historian),",
"<minister1> (minister),",
"<prezydent1, prezydent miasta1> (president of a city, mayor).",
"A process of entity library extraction is performed offline, before question answering. The library built for deep entity recognition in RAFAEL, based on the Polish Wikipedia (857,952 articles, 51,866 disambiguation pages and 304,823 redirections), contains 809,786 entities with 1,169,452 names (972,592 unique). The algorithm does not depend on any particular feature of Wikipedia, so any corpus containing entity definitions could be used.",
"Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account.",
"The whole process is more complicated than the simple example shows. Generally, it consists of the following steps:",
"Prepare a corpus – data format and annotation process is the same as for a knowledge base, used in question answering, see section SECREF12 . It differs in scope of page categories, including not only articles, but also disambiguation and redirection pages.",
"For each of article pages, extract the first paragraph and apply readDefinition function. If a resulting entity has a non-empty synset list, add it to the library. If some of the redirection pages point to the entity name, add their names as entity aliases.",
"For each of disambiguation pages, extract all items and apply readDefinition function. If an item refers to an existing entity, extend it with extracted synsets and disambiguation page name. Create a new entity otherwise. Add redirection names as previously.",
"Save the obtained base for future use.",
"Function readDefinition( INLINEFORM0 ) – interprets a definition to assign synsets to an entity. INLINEFORM1 - annotated first paragraph of an encyclopaedic entry INLINEFORM2 - synsets describing an entity INLINEFORM3 := {} INLINEFORM4 := removeInBrackets( INLINEFORM5 ) INLINEFORM6 := removeInQuotes( INLINEFORM7 ) INLINEFORM8 in INLINEFORM9 INLINEFORM10 matches INLINEFORM11 INLINEFORM12 := match( INLINEFORM13 , INLINEFORM14 ).group(2) break INLINEFORM15 := removeDefinitionPrefixes( INLINEFORM16 ) INLINEFORM17 := split( INLINEFORM18 , INLINEFORM19 ) INLINEFORM20 in INLINEFORM21 INLINEFORM22 := firstGroupOrWord( INLINEFORM23 ) isNominal( INLINEFORM24 ) INLINEFORM25 := INLINEFORM26 INLINEFORM27 extractSynsets( INLINEFORM28 ) break INLINEFORM29 ",
"The readDefinition function (shown as algorithm SECREF40 ) analyses a given paragraph of text and extracts a set of synsets, describing an entity, to which it corresponds, as exemplified by figure FIGREF54 . Simplifying, it is done by removing all unnecessary text (in brackets or quotes), splitting it on predefined separators (commas, full stops, semicolons) and applying extractSynsets function with an appropriate stop criterion. The readDefinition makes use of the following elements:",
"removes everything that is between brackets ([], () or {}) from the text (step (1) in figure FIGREF54 ).",
"removes everything between single or double quotes from the text (step (1) in the example).",
"contains patterns of strings separating a defined concept from a definition, e.g. hyphens or dashes (used in step (2) of the example) or jest to (is a).",
"removes expressions commonly prefixing a nominal group, such as jeden z (one of), typ (a type of) or klasa (a class of), not present in the example.",
"a set of three characters that separate parts of a definition: \".\", \",\" and \";\".",
"returns the longest syntactic element (syntactic group or word) starting at the beginning of a chunk (step (4) in the example).",
"decides, whether a chunk is a noun in nominative, a nominal group or a coordination of nominal groups.",
"Function extractSynsets( INLINEFORM0 ) – recursively extracts synsets from a nominal chunk. INLINEFORM1 - a nominal chunk (a syntactic group or a single noun) INLINEFORM2 - WordNet synsets corresponding to INLINEFORM3 INLINEFORM4 := lemmatise( INLINEFORM5 ) inWordNet( INLINEFORM6 ) getLexemes( INLINEFORM7 ).synset(0) isCoordination( INLINEFORM8 ) INLINEFORM9 := {} INLINEFORM10 in INLINEFORM11 INLINEFORM12 := INLINEFORM13 INLINEFORM14 extractSynsets( INLINEFORM15 ) INLINEFORM16 isGroup( INLINEFORM17 ) extractSynsets( INLINEFORM18 .semanticHead) {}",
"The extractSynsets function (shown as algorithm SECREF40 ) accepts a nominal chunk and extracts WordNet synsets, corresponding to it. It operates recursively to dispose any unnecessary chunk elements and find the longest subgroup, having a counterpart in WordNet. It corresponds to step (5) in figure FIGREF54 and uses the following elements:",
"returns a lemma of a nominal group.",
"checks whether a given text corresponds to a lexeme in WordNet.",
"return a list of WordNet lexemes corresponding to a given text.",
"return a synset including a lexeme in a given word sense number.",
"return TRUE iff a given chunk is a coordination group.",
"return TRUE iff a given chunk is a group.",
"is an element of a syntactic group, denoted as a semantic head.",
"A few of design decisions reflected in these procedures require further comment. First of all, they differ a lot from the studies that involve a definition represented with a bag of words BIBREF48 , BIBREF51 , BIBREF53 . Here, a certain definition structure is assumed, i.e. a series of nominal groups divided by separators. What is more, as the full stop belongs to them, the series may continue beyond a single sentence, which has improved recall in preliminary experiments. Availability of a shallow parsing layer and group lemmatisation allows to query WordNet by syntactic groups instead of single nouns, as in work of BIBREF46 . As word order is relatively free in Polish, a nominal group cannot be assumed to end with a noun, like BIBREF47 did. Instead, a semantic head of a group is used.",
"Finally, the problem of lack of word sense disambiguation remains – the line getLexemes( INLINEFORM0 ).synset(0) means that always a synset connected to the first meaning of a lexeme is selected. We assume that it corresponds to the most common meaning, but that is not always the case – in our example at figure FIGREF54 <prezydent.1, prezydent miasta.1> (president of a city, i.e. mayor) precedes <prezydent.2> (president of a country, the obvious meaning). However, it does not have to harm QA performance as far as the question analysis module (section SECREF18 ) functions analogously, e.g. in case of a question beginning with który prezydent... (which president...). Therefore, the decision has been motivated by relatively good performance of this solution in previously performed experiments on question analysis BIBREF36 . It also works in other applications, e.g. gazetteers generation BIBREF46 .",
"To assess quality of the entity library, its content has been compared with synsets manually extracted from randomly selected 100 Wikipedia articles. 95 of them contain a description of an entity in the first paragraph. Among those, DeepER entity library includes 88 (per-entity recall 92.63 per cent). 135 synsets have been manually assigned to those entities, while the corresponding set in library contains 133 items. 106 of them are equal (per-synset precision 79,70 per cent), while 13 differ only by word sense. 16 of manually extracted synsets hove no counterpart in the entity library (per-synset recall 88.15 per cent), which instead includes 14 false synsets."
],
[
"Evaluation of RAFAEL is typical for factoid QA systems: given a knowledge base and and questions, its responses are compared to the expected ones, prepared in advance. Section SECREF80 describes data used in this procedure, whereas section SECREF87 explains how an automatic evaluation is possible without human labour."
],
[
"The Polish Wikipedia serves as a knowledge base. It has been downloaded from a project site as a single database dump at 03.03.2013, from which plain text files have been extracted using Wikipedia Extractor 2.2 script. It means that only plain text is taken into account – without lists, infoboxes, tables, etc. This procedure leads to a corpus with 895,486 documents, containing 168,982,550 segments, which undergo the annotation process, described in section SECREF12 .",
"The questions that are to be answered with the knowledge base come from two separate sets:",
"Development set bases on 1500 (1130 after filtering) questions from a Polish quiz TV show, called Jeden z dziesięciu BIBREF55 . It was involved in previous experiments BIBREF4 , BIBREF36 .",
"Evaluation set bases on an open dataset for Polish QA systems, published by BIBREF56 . It has been gathered from Did you know... column, appearing in the main page of the Polish Wikipedia. It contains 4721 questions, from which 1000 have been analysed, which resulted in 576 satisfying the task constrains, given in chapter SECREF2 .",
"Table TABREF85 shows a distribution of different question types and named entity types in the sets.",
"To each of the questions from both sets some information has been assigned manually. It includes an identification number, an expected answer string, a general question type, a named entity type (if applicable) and an expected source document. Table TABREF86 contains several exemplary questions from the development set.",
"The additional information (question types and expected documents) makes it possible to evaluate only selected modules of the whole QA system. For example, we could test question classification by comparing results against given question types or entity selection by analysing only the relevant document."
],
[
"Thanks to availability of the DeepER entity library, it is possible to automatically perform answer evaluation for all the question types that are recognised by this technique (UNNAMED_ENTITY and NAMED_ENTITY excluding dates, numbers and quantities).",
"Both an expected and obtained answer are represented as short strings, e.g. Bronisław Komorowski. However, it does not suffice to check their exact equality. That is caused by existence of different names for one entity (Bronisław Maria Komorowski or Komorowski), but also rich nominal inflection (Komorowskiego, Komorowskiemu, ...).",
"In fact, we want to compare entities, not names. Hence, deep entity recognition is a natural solution here. To check correctness of an answer, we use it as an input for the recognition process, described in section SECREF73 . Then, it is enough to check whether the expected answer appears in any of lists of names, assigned to the recognized entities. For example, let us consider a question: Kto jest obecnie prezydentem Polski? (Who is the current president of Poland?) with expected answer Bronisław Komorowski and a system answer Komorowski. The DeepER process finds many entities in the string (all the persons bearing this popular surname). One of them is the question goal, hence, has Bronisław Komorowski in its list of names.",
"As the process of entity recognition is imperfect, so is the automatic evaluation. However, it still lets us to notice general trends in answering performance with respect to several factors. Of course, the final evaluation needs to be checked manually."
],
[
"As mentioned in previous section, the results consist of two groups: experiments, showing an influence of some aspects of algorithm on performance, and a final assessment. Both use the Polish Wikipedia as a knowledge base, whereas the questions asked belong to development and evaluation sets, respectively. In this section, recall measures percentage of questions, to which RAFAEL gave any answer, whereas precision denotes percentage of question answered correctly.",
"When analysing results of different entity recognition techniques, we need to remember that they strongly rely on output of the question analysis, which is not perfect. In particular, tests show that 15.65 per cent of questions is assigned to wrong type and 17.81 per cent search results do not include the expected document BIBREF36 . The entity recognition (ER) stage, a focus of this work, is very unlikely to deliver valid answers in these cases. However, as the expected question type and source document are available in question metadata, it is possible to correct results of question analysis by artificially replacing a wrong type and/or adding the expected document to the retrieved set. In that way the ER modules could be evaluated, as if question analysis worked perfectly. Note that this approach slightly favours NER-based solutions as the question metadata contains general types and named entity types but lack focus synsets, used by DeepER."
],
[
"The goal of the first experiment is to test how number a of documents retrieved from the search engine and analysed by the entity recognition techniques, influences the performance. Question classification errors have been bypassed as described in the previous paragraph. Additionally, two versions have been evaluated: with and without corrections of a retrieved set of documents. Figure FIGREF89 demonstrates results for different entity recognition techniques.",
"As we can see, if a retrieved set contains the desired article, adding new documents slightly increases recall, while precision drops observably. That is because additional irrelevant documents usually introduce noise. However, in some cases they are useful, as increasing recall indicates. On the other hand, if we have no guarantee of presence of the expected document in a list, it seems more desirable to extend it, especially for small sizes. For sets bigger than 50 elements, the noise factor again dominates our results. Judging by F1 measure, the optimal value is 20 documents.",
"When it comes to the comparison, it should be noted that DeepER performs noticeably better than traditional NER. The gain in precision is small, but recall is almost twice as big. It could be easily explained by the fact that the NER solutions are unable to handle UNNAMED_ENTITY type, which accounts for 36 per cent of the entity questions.",
"It is also worthwhile to check how the system performs while using different values of minimal confidence rate (Jaccard similarity), as described in section UID38 . It could become useful when we demand higher precision and approve lower recall ratio. The plot in figure FIGREF90 shows answering performance using DeepER with corrected question analysis with respect to the minimal confidence rate. Generally, the system behaves as expected, but the exact values disappoint. The precision remain at a level of 25-40 per cent up to confidence 0.75, where in turn recall drops to 0.35 per cent only. Values of F1 measure suggest that 0.2 is the highest sensible confidence rate.",
"One more parameter worth testing, explained in section UID34 , is the context generation strategy. To find the entity with a context most similar to a question content, we could analyse a single sentence, where it appears, or a sequence of words of a predefined length. For both of these solutions, we could also add a document title, as it is likely to be referred to by anaphoric expressions. Figure FIGREF91 shows the value of precision (recall does not depend on context) for these four solutions.",
"We can see that inclusion of a title in a context helps to achieve a better precision. The impact of anaphoric reference to title emerges clearly in case of flexible context – the difference grows with context size. Quite surprisingly, for the optimal context length (1.5 * question size), it is on the contrary. However, because of the small difference between the techniques including title, for the sake of simplicity, the single sentence is used in the final evaluation."
],
[
"To impose a realistic challenge to the system, the evaluation set, used at this stage, substantially differs from the one used during the development (see section SECREF80 ). A configuration for the final evaluation has been prepared based on results of the experiments. All of the tested versions share the following features:",
"no question analysis corrections,",
"question classification and query generation solutions which proved best in the previous experiments (see section SECREF18 ),",
"a retrieved set of documents including 20 articles,",
"no minimal confidence,",
"singe sentence context with title.",
"Tested solutions differ with respect to entity recognition only; RAFEL variants based on the following options are considered:",
"quantities recognizer (Quant),",
"traditional NER solutions: Nerf and Liner2,",
"deep entity recognition (DeepER),",
"hybrid approach, where entity mentions were gathered from all the above sources.",
"Table TABREF103 shows results of the final evaluation, expressed by recall, precision, F1 measure and Mean Reciprocal Rank (MRR). Standard deviations of these values have been obtained by bootstrap resampling of the test set. Additionally, precision obtained by automatic evaluation has been added, where applicable. As we can see, only a small percentage of questions is handled by the quantitative entities recognition. NER-based solutions deal with slightly more (Nerf) or less (Liner2) than a half of the questions. When using DeepER, the recall ratio rises to 73 per cent while the precision does not differ significantly. That is because UNNAMED_ENTITY questions (unreachable for traditional NER) account for a substantial part of the test set. The maximum recall is obtained by the hybrid solution (90 per cent) but it comes at a cost of lower precision (33 per cent). On the other hand, when we take the whole ranking lists into account, traditional NERs seem to perform better (in terms of MRR).",
"As expected, the automatic evaluation underestimates precision, but the difference remains below 5 per cent. Judging by F1 measure, the hybrid solution seems to beat the others."
],
[
"The main strength of DeepER compared to NER, according to results shown in figure TABREF103 , is much higher recall. Table TABREF106 shows examples of questions, to which only DeepER provides a correct answer. As we can see (notice question foci in the table), they could not be assigned to any of the traditional NE categories.",
"The other striking fact in the results is low precision. A part of the wrong answers was inspected and most of the errors seem to result from the following phenomena:",
"The entity recognizers also introduce errors typical for them:",
"The last remark applies also to other techniques. For example, consider a word kot, which means a cat. However, it is also a name of a journal, a lake, a village, a badge (KOT), a surname of 10 persons in the Polish Wikipedia and much more. A human would usually assume the most common meaning (a cat), but the system treats them as equally probable. It introduces noise in the process, as such an entity matches many types of questions.",
"Another thing that demands explanation is a difference in precision of answers found using Liner2 and DeepER: in evaluation set the latter does not maintain its advantage from development set. It could be explained by different compositions of the question sets (table TABREF85 ) – the development one contains much more questions beginning with ambiguous pronouns, followed by a question focus, e.g. Który poeta... (which poet), thus providing a precise synset (a poet) for deep entity recognition. Members of the evaluation set much more frequently begin with pronouns like Kto ...(who), where a synset corresponds to a general NE type (a person).",
"As RAFAEL is the first Polish QA system, able to answer by entities instead of documents, we can not compare it directly to any other solution. However, the evaluation set has been created based on questions published by BIBREF56 and used for evaluation of a document retrieval system BIBREF18 . Their baseline configuration achieved a@1 (percentage of questions answered by the first document, corresponds to precision in table TABREF103 ) equal 26.09 per cent. By taking into account proximity of keyword matches (MCSW method), they improved the result to 38.63 per cent. We can see that RAFAEL, despite solving much more challenging problem, in all configurations obtains better precision than baseline; using Liner2 it beats even the best method tested on this set (MCSW).",
"The results suggest two possible directions of future work to improve performance of RAFAEL. Firstly, involving semantics in sentence matching could solve some of the problems mentioned above. There are a lot of techniques in that area, also in QA systems (see a variety of them used by BIBREF39 ), but their implementation in a morphologically rich language would require a thorough study. For example, there exist techniques computing a semantic similarity based on a WordNet graph BIBREF57 , which is available for Polish and proved very useful in this study. Secondly, the relatively good performance of hybrid ER indicates that it may be good to apply different entity recognizer to different questions. For example, we could evaluate them for each question type separately and select the one that performs best for a given one. However, it would require much more training data to have a substantial number of questions of each type, including the scarce ones (observe sparsity of table TABREF85 ).",
"When it comes to DeepER, word ambiguity seem to be the main issue for future efforts. Of course, a full-lexicon precise word-sense disambiguation tool would solve the problem, but we can't expect it in near future. Instead, we could select a synset somewhere in a path between a focus synset and a named entity type. In the example from figure FIGREF54 rather than choosing between <prezydent.1, prezydent miasta.1> (president of a city) and <prezydent.2> (president of a country) we could use <urzędnik.1, biuralista.1> (official), which covers both meanings."
],
[
"This paper introduces RAFAEL, a complete open-domain question answering system for Polish. It is capable of analysing a given question, scanning a large corpus and extracting an answer, represented as a short string of text.",
"In its design, the focus has been on entity recognition techniques, used to extract all the entities compatible with a question from a given text. Apart from the traditional named entity recognition, differentiating between several broad categories of NEs, a novel technique, called Deep Entity Recognition (DeepER), has been proposed and implemented. It is able to find entities belonging to a given WordNet synset, using an entity library, gathered by interpreting definitions from encyclopaedia.",
"Automatic evaluation, provided by DeepER approach, has let to perform several experiments, showing answering accuracy with respect to different parameters. Their conclusions have been used to prepare final evaluation, which results have been checked manually. They suggest that the DeepER-based solution yields similar precision to NER, but is able to answer much more questions, including those beyond the traditional categories of named entities."
],
[
"As mentioned in section SECREF32 , apart from DeepER, RAFAEL employs also traditional NER-based solutions for entity recognition: NERF, Liner2 and Quant. Each of them uses its own typology of named entities, which covers only a part of the types, enumerated in section SECREF18 . Table TABREF118 shows a correspondence between these types. As we can see, there are a few problems:",
"The problems 3 and 4 are solved by an additional postprocessing code, extracting CENTURY from date and NAME and SURNAME from person_nam entities. In case of multi-segment person entities it assumes that the first and last word correspond to first and last name, respectively.",
"While NERF and Liner2 are standalone NER tools and details of their design are available in previously mentioned publications, Quant has been created specifically for RAFAEL. To find numbers, it annotates all chains of segments according to a predefined pattern, which accepts the following types of segments:",
"The pattern is matched in greedy mode, i.e. it adds as many new segments as possible. It could recognise expressions like 10 tysięcy (10 thousand), kilka milionów (several million), 10 000 or 1.698,88 (1,698.88).",
"Quantity is a sequence of segments, recognised as a number, followed by a unit of measurement. To check whether a word denotes a unit of measurement, the plWordNet is searched for lexemes equal to its base. Then it suffices to check whether it belongs to a synset, having <jednostka miary 1> (unit of measurement) as one of (direct or indirect) hypernyms, e.g. piętnaście kilogramów (fifteen kilograms) or 5 000 watów (5 000 watts)."
],
[
"Study was supported by research fellowship within \"Information technologies: research and their interdisciplinary applications\" agreement number POKL.04.01.01-00-051/10-00. Critical reading of the manuscript by Agnieszka Mykowiecka and Aleksandra Brzezińska is gratefully acknowledged."
]
],
"section_name": [
"Introduction",
"RAFAEL",
"Related work",
"System Architecture",
"Knowledge Base Processing",
"Question Analysis",
"Document Retrieval",
"Entity Recognition",
"Mention selection",
"Deep Entity Recognition",
"Entity Library",
"Evaluation",
"Data",
"Automatic Evaluation",
"Results",
"Experiments",
"Final System Evaluation",
"Discussion",
"Conclusions",
"Appendix A: Named Entity Recognition in RAFAEL",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3dd14ec7c6c2a4fa560f7cff98479063dda0e1c9"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1075c87b188f9958978397a9f9589fc0136d8fca"
],
"answer": [
{
"evidence": [
"Secondly, texts go through a cascade of annotation tools, enriching it with the following information:",
"Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,",
"Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,",
"Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,",
"Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 ."
],
"extractive_spans": [],
"free_form_answer": "Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner",
"highlighted_evidence": [
"Secondly, texts go through a cascade of annotation tools, enriching it with the following information:\n\nMorphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,\n\nTagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,\n\nSyntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,\n\nNamed entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3ed4ab7fb1ef561174c750eaf67ea3cc23b8d73b"
],
"answer": [
{
"evidence": [
"Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account."
],
"extractive_spans": [
"only the first word sense (usually the most common) is taken into account"
],
"free_form_answer": "",
"highlighted_evidence": [
"In case of polysemous words, only the first word sense (usually the most common) is taken into account."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they compare DeepER against other approaches?",
"How is the data in RAFAEL labelled?",
"How do they handle polysemous words in their entity library?"
],
"question_id": [
"63496705fff20c55d4b3d8cdf4786f93e742dd3d",
"7b44bee49b7cb39cb7d5eec79af5773178c27d4d",
"6d54bad91b6ccd1108d1ddbff1d217c6806e0842"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Overall architecture of the QA system – RAFAEL. See descriptions of elements in text.",
"Fig. 2. Outline of a question focus analysis procedure used to determine an entity type in case of ambiguous interrogative pronouns.",
"Fig. 3. Example of the entity extraction process in DeepER, transforming a Wikipedia entry of Lech Wałęsa into a list of synsets.",
"Table 1. A distribution of different general types and named entity types in development (1130 questions) and final evaluation (576 questions) sets.",
"Table 2. Exemplary questions with their types (general and named entity), expected source articles and answers.",
"Fig. 4. Question answering performance with respect to size of a retrieved set of documents, undergoing a full analysis. Two versions are considered – with and without guaranteed presence of an article, containing the desired information, in a set. The results for different entity recognition techniques– traditional NER (Nerf, Liner2) and DeepER.",
"Fig. 5. RAFAEL performance with respect to minimal confidence rate. Results computed using DeepER with corrected question type and corrected list of 50 documents.",
"Fig. 6. Question answering performance for different context generation strategies: single sentence and sequence of segments of certain length. Both types considered with and without an article title added.",
"Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid).",
"Table 4. Examples of questions which have been handled and answered correctly only with the DeepER approach. Their foci lie beyond areas covered by the NE categories.",
"Table 5. Correspondence between named entity types from question analysis and supported by different NER solutions."
],
"file": [
"4-Figure1-1.png",
"6-Figure2-1.png",
"12-Figure3-1.png",
"15-Table1-1.png",
"16-Table2-1.png",
"18-Figure4-1.png",
"19-Figure5-1.png",
"19-Figure6-1.png",
"20-Table3-1.png",
"21-Table4-1.png",
"23-Table5-1.png"
]
} | [
"How is the data in RAFAEL labelled?"
] | [
[
"1605.08675-Knowledge Base Processing-4",
"1605.08675-Knowledge Base Processing-5",
"1605.08675-Knowledge Base Processing-3",
"1605.08675-Knowledge Base Processing-2",
"1605.08675-Knowledge Base Processing-1"
]
] | [
"Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner"
] | 215 |
1709.08858 | Polysemy Detection in Distributed Representation of Word Sense | In this paper, we propose a statistical test to determine whether a given word is used as a polysemic word or not. The statistic of the word in this test roughly corresponds to the fluctuation in the senses of the neighboring words and the word itself. Even though the sense of a word corresponds to a single vector, we discuss how polysemy of the words affects the position of vectors. Finally, we also explain the method to detect this effect. | {
"paragraphs": [
[
"Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.",
"To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy."
],
[
"The distributed word representation can be computed as weight vectors of neurons, which learn language modeling BIBREF0 . We can obtain a distributed representation of a word using the Word2Vec software BIBREF1 which enable us to perform vector addition/subtraction on a word's meaning. The theoretical background is analyzed by BIBREF2 , where the operation is to factorize a word-context matrix, where the elements in the matrix are some function of the given word and its context pairs. This analysis gives us insight into how the vector is affected by multiple senses or multiple context sets. If a word has two senses, the obtained representation for the word will be a linearly interpolated point between the two points of their senses.",
"The importance of multiple senses is well recognized in word sense detection in distributed representation. The usual approach is to compute the corresponding vectors for each sense of a word BIBREF3 , BIBREF4 . In this approach, first, the context is clustered. Then, the vector for each cluster is computed. However, the major problem faced by this approach is that all target words need to be assumed as polysemic words first, and their contexts are always required to be clustered. Another approach is to use external language resources for word sense, and to classify the context BIBREF5 . The problem with this approach is that it requires language resources of meanings to obtain the meaning of a polysemic word. If we know whether a given word is polysemic or monosemic thorough a relatively simple method, we can concentrate our attention on polysemic words."
],
[
"In this paper, we assume that the sense of a word is determined by the distribution of contexts in which the word appears in a given corpus. If a word comes to be used in new contexts, the word comes to have a new sense. If we could have an infinitely sizes corpus, this sense might converge into the sense in the dictionary. In reality, the size of the corpus in hand is limited, and some senses indicated in a dictionary may not appear in the corpus. The distinction between the senses in a dictionary and the senses in the corpus is important in this paper, because it is crucial for discussing polysemy. All discussions in this paper depend on the corpus in hand. We now use the FIL9 corpus (http://mattmahoney.net/dc/textdata), which primarily consists of a description of believed facts, rather than conversations. We can expect that the senses that are mainly used in conversation would not appear in this corpus.",
"In this paper, we analyze auxiliary verbs, which are polysemic words from a dictionary. If the corpus is limited to a description of believed facts, we may regard auxiliary verbs as monosemic words, since their contexts are limited. In addition, we particularly analyze the relationship between the auxiliary verb “may”, and name of the month “May”. In the dictionary, these two are regarded as two different words, rather than as two different senses of one word. By ignoring the upper/lower case characters, these two words have same character sequence and the word “may” becomes a polysemic word, which has two types of context in the given corpus."
],
[
"Our proposed method is based on the following measures. Let $\\vec{w}$ be the vector corresponding to the given word. Let $N$ be the size of the neighbor, such as 4. First, we choose $N$ neighboring words whose angle with the given word is the smallest. This operation is already implemented in the Word2Vec software. Let $\\vec{a_i}$ ( $\\vec{w}$ ) be the vectors corresponding to $i$ th vector of the neighbor of the word.",
"We choose the uniformity of vectors, which can be regarded as general case of triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the size of the vector of the vector addition of the vectors divided by the scalar sum of the sizes of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$ ",
"where $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$ ",
"When computing SU, we consider the set of words whose vectors are reliable. We choose these words as the most frequently appearing words in corpus. The size of words is denoted as $limit$ . If a word is not in this set, or the word does not have sufficient number of neighbors in this set, we consider that the value of SU is undefined, and that the word does not have this value.",
"Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:",
"This is a basic statistical test BIBREF6 to detect outliers.",
"Note that we cannot compute the variance if some $a_i$ does not have the value of SU. Further, it may be also possible that all $a_i$ may have the same SU, sharing identical neighbors. In this case, the variance becomes an extreme value, that is, 0. In these cases, we consider that we cannot perform the statistical test."
],
[
"We used FIL9, which is freely available as the test corpus for Word2Vec and is derived from Wikipedia. We compute 200-dimensional distributed vector representations with default parameter. In this situation, all-uppercase are converted into lower case. This is why all proper nouns are in lower case in this example. First we selected stable words as the 1000 words that appear most frequently in the text. We compute surrounding uniformity of these words. We define the given word $w$ and its neighboring word $a_i$ are limited to stable words. We then determine the search scope for stable neighboring words and set $N$ , which is the number of neighbors used to compute the surrounding uniformity, to 4. For example, if there are 7 stable words in the search scope, we use only the top 4 words to compute the surrounding uniformity.",
"Table 1 shows the uniformity of auxiliary verbs in this setting. We were able to compute the surrounding uniformity for 160 words; for the remaining 840 words, there were fewer than the required 4 stable neighboring words in the search scope and the surrounding uniformity could not be determined.",
"For the case of the word “may”, neighbor words are “can”, “should”, “might”, and “will”. Their surrounding uniformities are, 0.9252 (“can”), 0.9232 (“should”), 0.9179 (“might”), and 0.9266 (“will”). Then $m$ is equal to 0.9232, and $\\sigma $ is equal to 0.0038. Therefore, $m-3\\sigma $ is 0.9118, which is greater than 0.8917 (“may”). Since the surrounding uniformity of the word “may” is regarded as an outlier, we think of “may” as polysemic. In this setting, the word “may” is polysemic because the program works in a case-insensitive mode, and the word “may” could be both an auxiliary verb and the name of a month.",
"The next example is the word “might”, whose surrounding uniformity is smaller than every neighbor word. For the word “might”, neighbor words are “would”, “could”, “should”, and “cannot”. Their surrounding uniformities are 0.9266 (“would”), 0.9290 (“could”), 0.9232 (“should”), and 0.9224 (“cannot”). Hence, $m$ is equal to 0.9253, and $\\sigma $ is equal to 0.0032. Therefore, $m-3\\sigma $ is 0.9157, which is less than 0.9179 (“might”). We cannot say 0.9179 is an outlier, and thus we cannot say the word “might” is polysemic.",
"Figure 1 shows the distribution of vectors.",
"The vector of “may” is placed in the interpolated position between “may” as an auxiliary verb and “may” as the name of a month. Since the word “may” is more frequently used as auxiliary verb, the vector is placed near other auxiliary verbs. However, the position of “may” could be an outlier for other auxiliary verbs.",
"In addition, we should show the results of names of months because these names will have the same contexts when the word is used as the name of a month. The word “may” has other contexts as auxiliary verbs. The word “august” has the sense of an adjective in the dictionary. The word “march” has a sense of a verb. Other names are monosemic words in the dictionary. Table 2 shows the surrounding uniformity for all the names of the months.",
"If we apply the test, only the word “may” passes the test. The example that fails the test is the word “august”, whose surrounding uniformity is also smaller than every neighbor word. For the case of the word “august”, $m$ is equal to 0.9808, and $\\sigma $ is equal to 0.0005. Therefore, $m-3\\sigma $ becomes 0.9793, which is less than 0.9802 (“august”). We cannot say the word “august” is polysemic, but the value of uniformity is very close to the lower bound. Other names have a greater uniformity than the corresponding lower bound. In summary, the proposed method can detect the polysemic “may”, but cannot detect the polysemicity of “august” and “march”.",
"Although we can claim nothing if the statistical test fails, even the negatives have a practical value for this test. For the case of the word “august”, it can be used as an adjective. Although we cannot say the word “august” is polysemic from the proposed procedure, we cannot claim that the word “august” is monosemic. We think this failure is caused by a few, if any, contexts of “august” as an adjective. In that case, the clustering context will be difficult in practice. Therefore, the proposed test will be meaningful even for a negative result, when the result is used to judge whether further analysis of the context is worthwhile. This discussion should be also true for the word “march”, which may be used as a verb.",
"There are other interesting words for which the proposed method detects polysemicity. These words are “james”, “mark”, and “bill”. The neighboring words are names of persons, such as “john”, “richard”, “robert”, “william”, “david”, “charles”, “henry”, “thomas”, “michael”, and “edward”. “mark” and “bill” have the same spell of the regular noun. The word “james” does not have such words and is subject to error analysis."
],
[
"First, we set the value of $limit$ to 1000, and $N$ to 4. We then performed the statistical test of these 1000 words. From these, 33 words passed test, and we assume that these words belong to the set POLY. Further, we are unable to performs the statistical test for 127 words. We say that the remaining 840 words belong to the set MONO.",
"As evaluation, we attempted to measure the agreement of human judgment for the all words of POLY and MONO. However, during the valuation, we found that many of the errors come from the problem of Word2Vec. For example, the vector of “sir” and the vector of “william” are very close because “sir william” should be very close to “william”. This is similar for “w” and “george\".",
"Therefore, we first selected words whose 10 neighboring words seem reasonable neighbors for human judgments, and performed human judgments of polysemicity. We also focused the words that have bigger SU than 0.75. This is because the statistical test will be reliable when SU is large. Table 3 shows that list of words that passed the test, and have higher SU than 0.75.",
"Table 3 shows all the words in POLY that are judged by human. Similarly Table 4 shows all the words in MONO that are judged by human.",
"We have sampled words from MONO because there are many words in MONO. In these tables, the SU of surrounding words are also presented.",
"Table 5 shows the confusion matrix for computer human judgment.",
"As there exists a case for which the number is less than or equal to 5, we need Yate's continuity correction. It achieves statistical significance with level of $\\alpha =0.05$ . The disagreement in POLY in Table 5 for the word “james” attracted our attention."
],
[
"The disagreement in MONO could be because we chose $3\\sigma $ , which can detect polysemicity in extremely apparent cases. Even so, the word “james” passes the proposed statistical test. Therefore, the word “james” is worth investing in.",
"After examining the context of “james”, we found that it can be used as the name of river and a person. Table 6 shows the various names and how many times the name is used with the word “river”.",
"The word “james” is most frequently used with “river”. This may make the word pass the statistical test."
],
[
"The majority of the polysemicity presented in this paper exists due to the Word2Vec compute the distributed representation after ignoring cases. This polysemicity might not be regarded as polysemicity with more careful preprocessing.",
"The behavior of proposed method depends on the Word2Vec options and the size of the corpus. If Word2Vec does not have a reasonable neighbor that consists of words of similar usage, the proposed method cannot work effectively. In addition, a problem arising due the use of Word2Vec for our application is the placement of the vector “sir” and the vector “william” in similar position. Therefore, we may need to utilize another method to compute the distributed representation of words. We use the FIL9 corpus for the experiment. Though this corpus is freely available to everyone, the size may not be sufficient. Although we can detect the polysemicity of “may”, we cannot detect the polysemicity of “august” and “march”. The statistical test cannot detect the right answer if we do not have sufficient data; therefore, this failure may be interpreted as insufficient usage of “march” as verb, and “august” as adverb, owing to its origin from Wikipedia, which is in essence a description of facts.",
"We believe we need to find a way to select the number of neighbors to improve the accuracy of the test. To make the statistical test more accurate, we need more samples from the neighbors. At the same time, since we assume that we can measure the statistical fluctuation from the neighbors, we need to exclude words of a different nature from the neighbors. It is natural that the right number for a neighbor may be different according to the word. The number that we choose is the minimum value for the statistical test, and has room to adjust for improvement.",
"We computed the neighbor and surrounding uniformity of the 1000 most frequently used words in FIL9. We observed that proper nouns tend to have a large surrounding uniformity, whereas prepositions tend to have a small surrounding uniformity. It is an interesting observation that the surrounding uniformity reflects the part of speech information, although it is difficult to determine the class of a word from the value of the surrounding uniformity alone. For the ease of confirming this observation, the obtained table can be downloaded from the reference (http://www.ss.cs.tut.ac.jp/FIL9SU/)."
],
[
"In this paper, we proposed a method to detect polysemy based on the distributed representation by Word2Vec. We computed the surrounding uniformity of word vector and formed a statistical test. We illustrated several examples to this measure, and explained the statistical test for detecting polysemy. In addition, we have also discussed the feasibility of this test."
]
],
"section_name": [
"Introduction",
"Related Work",
"Senses and Contexts",
"Proposed Method",
"Experimental Settings and Examples of Calculation",
"Evaluation",
"Error analysis",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"107800957bb3f9cc126bc15bd4413355fdfe15dc"
],
"answer": [
{
"evidence": [
"Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.",
"To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy.",
"We choose the uniformity of vectors, which can be regarded as general case of triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the size of the vector of the vector addition of the vectors divided by the scalar sum of the sizes of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$",
"where $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$"
],
"extractive_spans": [],
"free_form_answer": "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.\n4) Computing the mean m and the sample variance σ for the uniformities of ai .\n5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word.",
"highlighted_evidence": [
"One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word.",
"We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word.",
" Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.",
"To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity.",
"The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor",
"Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$\n\nwhere $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
""
],
"paper_read": [
"no"
],
"question": [
"How is the fluctuation in the sense of the word and its neighbors measured?"
],
"question_id": [
"238ec3c1e1093ce2f5122ee60209b969f7669fae"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"TABLE I AUXILIARY VERBS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. THE NEIGHBORING WORDS OF AN AUXILIARY VERB CONSIST OF OTHER AUXILIARY VERBS. THE WORD “MAY” HAS A SMALL SURROUNDING UNIFORMITY, ALTHOUGH ITS NEIGHBORING WORDS CONSIST OF AUXILIARY VERBS.",
"TABLE II NAMES OF THE MONTHS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. ONLY “MAY”, WHICH HAS THE SMALLEST SURROUNDING UNIFORMITY, PASS THE STATISTICAL TEST. ALTHOUGH THE WORD “MAY” MIGHT BE USED AS THE NAME OF A MONTH, THE CORRESPONDING VECTOR IS NEAR THE AUXILIARY VERBS.",
"TABLE III EVALUATED WORDS AND ITS NEIGHBOR THAT PASSES THE STATISTICAL TEST.",
"TABLE IV EVALUATED WORDS THAT DOES NOT PASS THE STATISTICAL TEST.",
"TABLE V CONFUSION MATRIX OF THE AGREEMENT BETWEEN COMPUTER AND HUMAN JUDGMENTS. IT SHOWS STATISTICAL SIGNIFICANCE BY USING X2 TEST.",
"TABLE VI FREQUENCIES OF A PERSON’S NAME AND THE NAME FOLLOWED BY THE WORD “RIVER”. THE NAME“JAMES” IS THE MOST FREQUENTLY USED NAME WITH THE WORD “RIVER”."
],
"file": [
"3-TableI-1.png",
"4-TableII-1.png",
"5-TableIII-1.png",
"5-TableIV-1.png",
"5-TableV-1.png",
"5-TableVI-1.png"
]
} | [
"How is the fluctuation in the sense of the word and its neighbors measured?"
] | [
[
"1709.08858-Introduction-1",
"1709.08858-Introduction-0"
]
] | [
"Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.\n4) Computing the mean m and the sample variance σ for the uniformities of ai .\n5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word."
] | 216 |
1910.00825 | Abstractive Dialog Summarization with Semantic Scaffolds | The demand for abstractive dialog summary is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics. | {
"paragraphs": [
[
"Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.",
"There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary.",
"In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics."
],
[
"BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.",
"Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.",
"Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation\". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work."
],
[
"As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain."
],
[
"We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:",
"where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.",
"With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:",
"Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch\" to choose from copy and generation:",
"where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.",
"The ability to select from copy and generation corresponds to a dynamic vocabulary. Pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary(OOV) words appeared in the source text. The final probability distribution $P(w)$ on extended vocabulary is computed as follows:",
"where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\\prime }$, $V$, $b$ and $b^{\\prime }$ are learnable parameters used to calculate such distribution."
],
[
"Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold."
],
[
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:",
"The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:"
],
[
"We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.",
"We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:",
"Note that $w_{slot}$ specifies the tokens that represents the slot name (e.g. [hotel_place], [time]). Decoder directly copies lexicalized value $value(w_i)$ conditioned on attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as Equation DISPLAY_FORM5."
],
[
"We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:",
"where $U$, $U^{\\prime }$, $b_{d}$ and $b_{d}^{\\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:",
"The domain classification task is a multi-label binary classification problem. We use binary cross entropy loss between the $i^{th}$ domain label $\\hat{d_i}$ and predict probability $d_i$ for this task:",
"where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\\lambda $ and the objective function is:"
],
[
"We validate SPNet on MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and a information center clerk on varies booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning over seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instruction is provided for crowd workers to perform the task. We use the instructions as the dialog summary, and an example data is shown in Table TABREF25. Dialog domain label is extracted from existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing."
],
[
"ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:",
"Reference: You are going to [restaurant_name] at [time].",
"Summary: You are going to [restaurant_name] at.",
"In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:",
"where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.",
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain."
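A small sketch of CIC is given below. Matching is approximated here by substring containment of the slot value in the candidate summary; the exact matching procedure and the per-domain averaging interface are our assumptions.

```python
def cic(candidate_summary, reference_slot_values):
    """Critical Information Completeness: recall of reference slot values in the candidate.

    reference_slot_values: list of slot values (possibly with repeats) taken from the
    reference summary of one dialog domain.
    """
    if not reference_slot_values:
        return 1.0
    matched = sum(1 for v in reference_slot_values if v in candidate_summary)
    return matched / len(reference_slot_values)

def cic_overall(candidate_summary, slot_values_by_domain):
    """Arithmetic mean of CIC over all dialog domains, as described in the paper."""
    scores = [cic(candidate_summary, vals) for vals in slot_values_by_domain.values()]
    return sum(scores) / len(scores) if scores else 1.0
```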
],
[
"We implemented our baselines with OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\\beta _1=0.9$, $\\beta _2=0.999$. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter $\\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively."
],
[
"To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.",
"We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics."
],
[
"We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).",
"We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary."
],
[
"Table TABREF25 shows an example summary from all models along with ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi\" several times and has conflicting requirements on wifi. This is because dialogs has information redundancy, but single-speaker model ignores such dialog property.",
"Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred in the source. It occurs because the ground truth summary doe not cover it in the training data. As a supervised method, SPNet is hard to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize the content not covered in the reference summary (see Table TABREF31 in Appendix).",
"Furthermore, although our SPNet achieves a much-improved performance, the application of SPNet still needs extra annotations for semantic scaffolds. For a dialog dataset, speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpus has the domain annotation. While for texts, for example news, its topic categorization such as sports or entertainment can be used as domain annotation. We find that semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team name in sports news or professional terminology in a technical meeting."
],
[
"We adapt a dialog generation dataset, MultiWOZ to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as the semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric CIC that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. It suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog scene.",
"Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Method",
"Proposed Method ::: Background",
"Proposed Method ::: Scaffold Pointer Network (SPNet)",
"Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold",
"Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold",
"Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold",
"Experimental Settings ::: Dataset",
"Experimental Settings ::: Evaluation Metrics",
"Experimental Settings ::: Implementation Details",
"Results and Discussions ::: Automatic Evaluation Results",
"Results and Discussions ::: Human Evaluation Results",
"Results and Discussions ::: Case study",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"d214c4bc382c51d8f0cd08b640a46c76afbbbd86"
],
"answer": [
{
"evidence": [
"We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.",
"FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds."
],
"extractive_spans": [],
"free_form_answer": "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25",
"highlighted_evidence": [
"We show all the models' results in Table TABREF24",
"FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"87489cb800ee2bd74ed869331e049f50df8490cd"
],
"answer": [
{
"evidence": [
"We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).",
"We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics."
],
"extractive_spans": [
"ROUGE and CIC",
"relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair"
],
"free_form_answer": "",
"highlighted_evidence": [
"The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).",
"We observe that SPNet reaches the highest score in both ROUGE and CIC"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dd2e932f857b22b80622c71fdff3724951a7b2ef"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"658e80b812db9c136734de7fac04f01050ba7696"
],
"answer": [
{
"evidence": [
"Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries."
],
"extractive_spans": [],
"free_form_answer": "Not at the moment, but summaries can be additionaly extended with this annotations.",
"highlighted_evidence": [
"We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8c16d083a2893633aec9f3bcfddc03ede96237de"
],
"answer": [
{
"evidence": [
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:",
"We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.",
"We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:"
],
"extractive_spans": [
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog.",
"We integrate semantic slot scaffold by performing delexicalization on original dialogs.",
"We integrate dialog domain scaffold through a multi-task framework."
],
"free_form_answer": "",
"highlighted_evidence": [
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ .",
"We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling.",
"We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5274d125124da018bd4cea634e16b14af46f9fe4"
],
"answer": [
{
"evidence": [
"To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation."
],
"extractive_spans": [
"Pointer-Generator",
"Transformer"
],
"free_form_answer": "",
"highlighted_evidence": [
"To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1162bf54756068e0894e0ec3e15af76802321f63"
],
"answer": [
{
"evidence": [
"In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:",
"where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.",
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities",
"highlighted_evidence": [
"To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:\n\nwhere $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.\n\nCIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5fa3ee21cd7d33a6a7d8bad663cc0b8a8cc5bab4"
],
"answer": [
{
"evidence": [
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?",
"What automatic and human evaluation metrics are used to compare SPNet to its counterparts?",
"Is proposed abstractive dialog summarization dataset open source?",
"Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?",
"How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?",
"What are previous state-of-the-art document summarization methods used?",
"How does new evaluation metric considers critical informative entities?",
"Is new evaluation metric extension of ROGUE?"
],
"question_id": [
"f398587b9a0008628278a5ea858e01d3f5559f65",
"d5f8707ddc21741d52b3c2a9ab1af2871dc6c90b",
"58f3bfbd01ba9768172be45a819faaa0de2ddfa4",
"73633afbefa191b36cca594977204c6511f9dad4",
"db39a71080e323ba2ddf958f93778e2b875dcd24",
"6da2cb3187d3f28b75ac0a61f6562a8adf716109",
"c47e87efab11f661993a14cf2d7506be641375e4",
"14684ad200915ff1e3fc2a89cb614e472a1a2854"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: SPNet overview. The blue and yellow box is the user and system encoder respectively. The encoders take the delexicalized conversation as input. The slots values are aligned with their slots position. Pointing mechanism merges attention distribution and vocabulary distribution to obtain the final distribution. We then fill the slots values into the slot tokens to convert the template to a complete summary. SPNet also performs domain classification to improve encoder representation.",
"Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds.",
"Table 2: An example dialog and Pointer-Generator, SPNet and ground truth summaries. We underline semantic slots in the conversation. Red denotes incorrect slot values and green denotes the correct ones.",
"Table 3: The upper is the scoring part and the lower is the the ranking part. SPNet outperforms Pointer-Generator in all three human evaluation metrics and the differences are significant, with the confidence over 99.5% in student t test. In the ranking part, the percentage of each choice is shown in decimal. Win, lose and tie refer to the state of the former summary in ranking."
],
"file": [
"4-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png"
]
} | [
"By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?",
"Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?",
"How does new evaluation metric considers critical informative entities?"
] | [
[
"1910.00825-6-Table1-1.png",
"1910.00825-Results and Discussions ::: Automatic Evaluation Results-1"
],
[
"1910.00825-Conclusion and Future Work-1"
],
[
"1910.00825-Experimental Settings ::: Evaluation Metrics-3",
"1910.00825-Experimental Settings ::: Evaluation Metrics-4",
"1910.00825-Experimental Settings ::: Evaluation Metrics-5"
]
] | [
"SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25",
"Not at the moment, but summaries can be additionaly extended with this annotations.",
"Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities"
] | 223 |
1910.00458 | MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension | Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets. | {
"paragraphs": [
[
"Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5.",
"In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model.",
"Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.",
"We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset)."
],
[
"In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$."
],
[
"Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \\in \\mathbb {R}^{d\\times l}$, which is then projected into a single value $p=C(H)$ ($p\\in \\mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. The top-level classifier will be detailed in the next subsection."
],
[
"For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer full-connected neural network (FCNN), which consist of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for the down-streaming classification tasks and performs very well BIBREF8. Inspired from the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via the multi-step reasoning.",
"The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\\in \\mathbb {R}^{d\\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\\in \\mathbb {R}^{d\\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.",
"We then perform $K$-step reasoning over the memory to output the final prediction. Initially, the initial state $\\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\\mathbf {s}^0=\\sum _i \\alpha _i H_i^P$, where $\\alpha _i=\\frac{exp(w_1^TH_i^P)}{\\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \\in {1,2,...,K-1}$, the state is calculated by:",
"where $\\mathbf {x}^k=\\sum _i\\beta _iH_i^{QO}$ and $\\beta _i=\\frac{exp(w_2^T[\\mathbf {s}^{k-1};H_i^{QO}])}{\\sum _j exp(w_2^T[\\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:",
"Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage against (question, option) pair."
],
[
"We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10."
],
[
"We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details."
],
[
"After corase-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning. We share all model parameters including the sentence encoder as well as the top-level classifier for these two datasets."
],
[
"We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits."
],
[
"Passages in DREAM dataset are dialogues between two persons or more. Every utterance in a dialogue starts with the speaker name. For example, in utterance “m: How would he know?”, “m” is the abbreviation of “man” indicating that this utterance is from a man. More than 90% utterances have the speaker names as “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model to learn which speaker the question is asking about, we used a speaker normalization strategy by replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy is quite effective, providing us with 1% improvement. We will always use this strategy for the DREAM dataset for our method unless explicitly mentioned."
],
[
"For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17."
],
[
"We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material.",
"More than 90% of passages have more than 512 words in the TOEFL dataset, which exceed the maximum sequence length that BERT supports, thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets and each snippet from the same passage will be assigned with the same label. In training phase, all snippets will be used for training, and in inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with highest logit value as the prediction. In experiments, we found the overlap of 256 words is the optimal, which can improve the BERT-Base model from accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset."
],
[
"We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.",
"We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method.",
"To better understand why MMM can be successful, we conducted an ablation study be removing one feature at a time on the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second stage multi-task learning part hurts our method most significantly, indicating that the majority of improvement is coming from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, which provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we have 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\\sim $1% improvement."
],
[
"As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in the bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment to the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the pair of (question, answer) as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, the part of MCQA task can be deemed as a NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and it can help support other tasks that require higher level of language processing abilities BIBREF21. We provided several more examples that require language inference reading skills in the Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data with the coarse-tuning stage."
],
[
"By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks such as sentiment classification, paraphrasing also help with MCQA problems?",
"To answer this question, we select several representative datasets for five categories as the up-stream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target dataset: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For the span-based QA, we use the SQuAD 1.1, SQuAD 2.0 , and MRQA which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets. But the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance of the target dataset. Interestingly, although MRQA is much larger than other QA datasets (at least six times larger), it makes the performance worst. This suggests that span-based QA might not the appropriate source tasks for transfer learning for MCQA. We hypothesis this could due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.",
"For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and combining all five datasets together, denoted as “GLUE-NLI”. As the results shown in Table TABREF23, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin.",
"Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on RACE dataset, can help boost the performance, most. This result agrees with the intuition that the in-domain dataset can be the most ideal data for transfer learning.",
"In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful."
],
[
"The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models that have much larger amount of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets , convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning can make the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all at the first several epochs, which can be completely resolved by the help of NLI data."
],
[
"In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset.",
"Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27, show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datesets."
],
[
"Previous results show that the MAN classifier shows improvement compared with the FCNN classifier, but we are also interested in how the performance change while varying the number of reasoning steps $K$ as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe that there is a gradual improvement as we increase $K=1$ to $K=5$, but after 5 steps the improvements have saturated. This verifies that an appropriate number of steps of reasoning is important for the memory network to reflect its benefits."
],
[
"So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can also benefit the source dataset itself. Table TABREF31 summarizes the results of BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques can bring in improvements over the baseline model for the source dataset RACE, among which NLI coarse-tuning stage can help elevate the scores most.",
"Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder."
],
[
"In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples that had wrong predictions by the BERT-Base baseline model from the development set of DREAM dataset. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in the Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy of each question type in the last column of Table TABREF34. We find that our best model can improve upon every question type significantly especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability on solving the arithmetic problems, achieving the accuracy of 73.7%.",
"However, could our model really do math? To investigate this question, we sampled some arithmetic questions that are correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model can still make correct choices. We found our model is very fragile to these minor alterations, implicating that the model is actually not that good at arithmetic problems. We provided one interesting example in the Section 3 of the Supplementary Material."
],
[
"There are increasing interests in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowd sourcing, or collected from examinations designed by educational experts BIBREF7. In this type of QA datasets, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.",
"Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, the attention mechanisms between the context and the query can empower the neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can be also helpful.",
"Transfer learning has been widely proved to be effective across many domain in NLP. In the QA domain, the most well-known example of transfer learning would be fine-tuning the pre-trained language model such as BERT to the down-streaming QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed as a type of transfer learning, since during the training of multiple datasets from different domains for different tasks, knowledge will be shared and transferred from each task to others, which has been used to build a generalized QA model BIBREF30. However, no previous works have investigated that the knowledge from the NLI datasets can also be transferred to improve the MCQA task."
],
[
"We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains."
]
],
"section_name": [
"Introduction",
"Methods",
"Methods ::: Model Architecture",
"Methods ::: Multi-step Attention Network",
"Methods ::: Two Stage Training",
"Methods ::: Two Stage Training ::: Coarse-tuning Stage",
"Methods ::: Two Stage Training ::: Multi-task Learning Stage",
"Experimental Setup ::: Datasets",
"Experimental Setup ::: Speaker Normalization",
"Experimental Setup ::: Multi-task Learning",
"Experimental Setup ::: Training Details",
"Results",
"Discussion ::: Why does natural language inference help?",
"Discussion ::: Can other tasks help with MCQA?",
"Discussion ::: NLI dataset helps with convergence",
"Discussion ::: Multi-stage or Multi-task",
"Discussion ::: Multi-steps reasoning is important",
"Discussion ::: Could the source dataset be benefited?",
"Discussion ::: Error Analysis",
"Related Work",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"11e9dc8da152c948ba3f0ed165402dffad6fae49"
],
"answer": [
{
"evidence": [
"We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%."
],
"extractive_spans": [
"test accuracy of 88.9%, which exceeds the previous best by 16.9%"
],
"free_form_answer": "",
"highlighted_evidence": [
"Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"fb11cb05fe3d851cc4d17da20a5b958dad0af096"
],
"answer": [
{
"evidence": [
"We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits."
],
"extractive_spans": [
"MultiNLI BIBREF15 and SNLI BIBREF16 "
],
"free_form_answer": "",
"highlighted_evidence": [
"For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6f65f4be18453d162510778c0b8c582ffc5f27f7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines."
],
"extractive_spans": [],
"free_form_answer": "FTLM++, BERT-large, XLNet",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"605df693493ead557174f3a1ebb05efb09517f15"
],
"answer": [
{
"evidence": [
"Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11."
],
"extractive_spans": [
"DREAM, MCTest, TOEFL, and SemEval-2018 Task 11"
],
"free_form_answer": "",
"highlighted_evidence": [
"To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How big are improvements of MMM over state of the art?",
"What out of domain datasets authors used for coarse-tuning stage?",
"What are state of the art methods MMM is compared to?",
"What four representative datasets are used for bechmark?"
],
"question_id": [
"53d6cbee3606dd106494e2e98aa93fdd95920375",
"9dc844f82f520daf986e83466de0c84d93953754",
"9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11",
"36d892460eb863220cd0881d5823d73bbfda172c"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Data samples of DREAM dataset. ( √ : the correct answer)",
"Figure 1: Model architecture. “Encoder”is a pre-trained sentence encoder such as BERT. “Classifier” is a top-level classifier.",
"Figure 2: Multi-stage and multi-task fine-tuning strategy.",
"Table 2: Statistics of MCQA datasets. (crowd.: crowd-sourcing; ?: answer options are not text snippets from reference documents.)",
"Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines.",
"Table 4: Performance in accuracy (%) on test sets of other datasets: MCTest (MC160 and MC500), TOEFL, and SemEval. Performance marked by ? is reported by (Richardson, Burges, and Renshaw 2013) and that marked by † is from (Ostermann et al. 2018). Numbers in the parentheses indicate the accuracy increased by MMM. “-B” means the base model and “-L” means the large model.",
"Table 5: Ablation study on the DREAM and MCTest-MC160 (MC160) datasets. Accuracy (%) is on the development set.",
"Table 6: Transfer learning results for DREAM and MC500. The BERT-Base model is first fine-tuned on each source dataset and then further fine-tuned on the target dataset. Accuracy is on the the development set. A two-layer FCNN is used as the classifier.",
"Table 7: Comparison between multi-task learning and sequential fine-tuning. BERT-Base model is used and the accuracy is on the development set. Target refers to the target dataset in transfer learning. A two-layer FCNN instead of MAN is used as the classifier.",
"Figure 3: Train loss curve with respect to optimization steps. With prior coarse-tuning on NLI data, convergence becomes much faster and easier.",
"Figure 4: Effects of the number of reasoning steps for the MAN classifier. 0 steps means using FCNN instead of MAN. The BERTBase model and DREAM dataset are used.",
"Table 8: Ablation study for the RACE dataset. The accuracy is on the development set. All parts of MMM improve this source dataset.",
"Table 9: Comparison of the test accuracy of the RACE dataset between our approach MMM and the official reports that are from the dataset leaderboard.",
"Table 10: Error analysis on DREAM. The column of “Percent” reports the percentage of question types among 150 samples that are from the development set of DREAM dataset that are wrongly predicted by the BERT-Base baseline model. The column of “Accuracy” reports the accuracy of our best model (RoBERTa-Large+MMM) on these samples."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"5-Table6-1.png",
"6-Table7-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"7-Table8-1.png",
"7-Table9-1.png",
"7-Table10-1.png"
]
} | [
"What are state of the art methods MMM is compared to?"
] | [
[
"1910.00458-4-Table3-1.png"
]
] | [
"FTLM++, BERT-large, XLNet"
] | 225 |
2001.11268 | Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks | This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, as well as highlighting how annotations for training named entity recognition systems are used to train a high-performing, but nevertheless flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient amounts of training annotations for PICO entity extraction is tackled by augmentation. All models in this paper were created with the aim to support systematic review (semi)automation. They achieve high F1 scores, and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature. | {
"paragraphs": [
[
"Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.",
"A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3.",
"The number of RCTs is increasing, and with it increases the potential number of reviews and the amount of workload that is implied for each. Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly in the last ten years BIBREF4, which is why acceleration of the systematic reviewing process is of interest in order to decrease working hours of highly trained researchers and to make the process more efficient.",
"",
"In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. Focus points for the investigation are the problems of ambiguity in labelled PIO data, integration of training data from different tasks and sources and assessing our model's capacity for transfer learning and domain adaptation.",
"Recent advances in natural language processing (NLP) offer the potential to be able to automate or semi-automate the process of identifying information to be included in a SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstract or full text article fits the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic, but static representation of words as in Word2Vec BIBREF5, hence providing a richer and more flexible contextualized representation of input features within sentences or long sequences of text.",
"The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data, and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in section 3, and section 4 includes a critical evaluation and implications for practice."
],
[
"The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.",
"In the following we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models."
],
[
"In the context of systematic review (semi)automation, sentence classification can be used in the screening process, by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of class abbreviations and their meaning.In the following we refer to it as the PubMed data.",
"The LSTM itself yields impressive results with F1 scores for annotation of up to 0.85 for PIO elements, it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model."
],
[
"The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions and answers and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which can not be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leader board to compare model performances BIBREF14."
],
[
"In the PICO domain, the potential of NER was shown by Nye and colleagues in using transformers, as well as LSTM and conditional random fields. In the following, we refer to these data as the ebm-nlp corpus. BIBREF15. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotation in this corpus include PIO classes, as well as more detailed information such as age, gender or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system."
],
[
"In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words. Any word that is not present in the initial vocabulary is split into a sub-word vocabulary. Especially in the biomedical domain this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, hence enabling the following model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words.",
"BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes prediction of randomly masked words. Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can be carried out on the basis of comparably small amounts of labelled data, by changing the upper layers of the neural network to classification layers for different tasks.",
"SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and Bert multilingual (cased, base architecture) were included in the comparison BIBREF16."
],
[
"In the following, we discuss weaknesses in the PubMed data, and LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation.",
"In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place because manual gold standard annotations for a project on the scale of a LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks.",
"We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks.",
"A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing reader's attention to passages of interest. However, the data obtained through this method are not fine-grained enough for usage in data extraction, or for the use in pipelines for automated evidence synthesis. Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences."
],
[
"In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.",
"In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities."
],
[
"A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed length vector of 768 dimensions in each layer respectively, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer-embedding into two-dimensional space, and plotted the resulting values. Additionally, we computed adjusted rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm in different layers alone led to correct grouping of the input sentences."
],
[
"We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. For more information about the original dataset we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review semi(automation) commonly focus on P, I, and O detection. A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and they occur in a vast majority of published trial text. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment we summarized A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to PIO classes. This resulted in high class imbalance, inferior classification scores and a loss of ability to predict these classes when supporting systematic reviewers during the screening process.",
"In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training, and 14,344 for testing (90:10 split)."
],
[
"We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and on SCIBERT. We changed the classification layer on top of the original BERT model. It remains as linear, fully connected layer but now employs the sigmoid cross-entropy loss with logits function for optimization. During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model then predicts class labels from Table 1 for each sentence. After each training step, backpropagation then adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, batch size 32, learning rate of $2\\times 10^{-5}$, a warm-up proportion of 0.1 and two epochs for training were used."
],
[
"In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset."
],
[
"Both the training and testing subsets from the ebm-nlp data were adapted to fit the SQuAD format. We merged both datasets in order to train a model which firstly correctly answers PICO questions on the basis of being trained with labelled ebm-nlp data, and secondly retains the flexibility of general-purpose question answering on the basis of SQuAD. We created sets of general, differently phrased P, I, and O questions for the purpose of training a broad representation of each PICO element question.",
"In this section we describe the process of adapting the ebm-nlp data to the second version of the SQuAD format, and then augmenting the training data with some of the original SQuAD data. Figure FIGREF19 shows an example of the converted data, together with a high-level software architecture description for our QA-BERT model. We created a conversion script to automate this task. To reduce context length, it first split each ebm-nlp abstract into sentences. For each P, I, and O class it checked the presence of annotated entity spans in the ebm-nlp source files. Then, a question was randomly drawn from our set of general questions for this class, to complete a context and a span-answer pair in forming a new SQuAD-like question element. In cases where a sentence did not contain a span, a question was still chosen, but the answer was marked as impossible, with the plausible answer span set to begin at character 0. In the absence of impossible answers, the model would always return some part of the context as answer, and hence be of no use for rarer entities such as P, which only occurs in only 30% of all context sentences.",
"For the training data, each context can contain one possible answer, whereas for testing multiple question-answer pairs are permitted. An abstract is represented as a domain, subsuming its sentences and question answer-text pairs. In this format, our adapted data are compatible with the original SQuAD v.2 dataset, so we chose varying numbers of original SQuAD items and shuffled them into the training data. This augmentation of the training data aims to reduce the dependency on large labelled corpora for PICO entity extraction. Testing data can optionally be enriched in the same way, but for the presentation of our results we aimed to be comparable with previously published models and therefore chose to evaluate only on the subset of expert-annotated ebm-nlp testing data."
],
[
"The python Huggingface Transformers library was used for fine-tuning the question-answering models. This classification works by adding a span-classification head on top of a pre-trained transformer model. The span-classification mechanism learns to predict the most probable start and end positions of potential answers within a given context BIBREF22.",
"The Transformers library offers classes for tokenizers, BERT and other transformer models and provides methods for feature representation and optimization. We used BertForQuestionAnswering. Training was carried out on Google's Colab, using the GPU runtime option. We used a batch size of 18 per GPU and a learning rate of $3^{-5}$. Training lasted for 2 epochs, context length was limited to 150. To reduce the time needed to train, we only used BERT-base (uncased) weights as starting points, and used a maximum of 200 out of the 442 SQuAD domains.",
"To date, the Transformers library includes several BERT, XLM, XLNet, DistilBERT and ALBERT question answering models that can be fine-tuned with the scripts and data that we describe in this paper."
],
[
"Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels.",
"UTF8bsmi",
"Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.",
"Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base."
],
[
"Precision, recall, and F1 scores, including a comparison with the LSTM, are summarized in Table TABREF22. Underlined scores represent the top score across all models, and scores in bold are the best results for single- and multi-label cases respectively. The LSTM assigns one label only and was outperformed in all classes of main interest (P, I, and O).",
"A potential pitfall of turning this task into multi-label classification is an increase of false-positive predictions, as more labels are assigned than given in the single-labelled testing data in the first place. However, the fine-tuned BERT models achieved high F1 scores, and large improvements in terms of recall and precision. In its last row, Table TABREF22 shows different probability thresholds for class assignment when using the PubMed dataset and our fine-tuned SCIBERT model for multi-label prediction. After obtaining the model's predictions, a simple threshold parameter can be used to obtain the final class labels. On our labelled testing data, we tested 50 evenly spaced thresholds between 0 and 1 in order to obtain these graphs. Here, recall and precision scores in ranges between 0.92 and 0.97 are possible with F1 scores not dropping below 0.84 for the main classes of interest. In practice, the detachment between model predictions and assignment of labels means that a reviewer who wishes to switch between high recall and high precision results can do so very quickly, without obtaining new predictions from the model itself.",
"More visualizations can be found in this project's GitHub repository , including true class labels and a detailed breakdown of true and false predictions for each class. The highest proportion of false classification appears between the results and conclusion classes.",
"The fine-tuned multilingual model showed marginally inferior classification scores on the exclusively English testing data. However, this model's contribution is not limited to the English language because its interior weights embed a shared vocabulary of 100 languages, including German and Chinese. Our evaluation of the multilingual model's capacity for language transfer is of a qualitative nature, as there were no labelled Chinese or German data available. Table TABREF24 shows examples of two abstracts, as predicted by the model. Additionally, this table demonstrates how a sentence prediction model can be used to highlight text. With the current infrastructure it is possible to highlight PICOs selectively, to highlight all classes simultaneously, and to adjust thresholds for class assignment in order to increase or decrease the amount of highlighted sentences. When applied to full texts of RCTs and cohort studies, we found that the model retained its ability to identify and highlight key sentences correctly for each class.",
"",
"We tested various report types, as well as recent and old publications, but remain cautious that large scale testing on labelled data is needed to draw solid conclusions on these model's abilities for transfer learning. For further examples in the English language, we refer to our GitHub repository."
],
[
"We trained and evaluated a model for each P, I, and O class. Table TABREF29 shows our results, indicated as QA-BERT, compared with the currently published leader board for the ebm-nlp data BIBREF25 and results reported by the authors of SCIBERT BIBREF18. For the P and I classes, our models outperformed the results on this leader board. The index in our model names indicates the amount of additional SQuAD domains added to the training data. We never used the full SQuAD data in order to reduce time for training but observed increased performance when adding additional data. For classifying I entities, an increase from 20 to 200 additional SQuAD domains resulted in an increase of 8% for the F1 score, whereas the increase for the O domain was less than 1%. After training a model with 200 additional SQuAD domains, we also evaluated it on the original SQuAD development set and obtained a F1 score of 0.72 for this general reading comprehension task.",
"In this evaluation, the F1 scores represent the overlap of labelled and predicted answer spans on token level. We also obtained scores for the subgroups of sentences that did not contain an answer versus the ones that actually included PICO elements. These results are shown in Table TABREF30.",
"For the P class, only 30% of all sentences included an entity, whereas its sub-classes age, gender, condition and size averaged 10% each. In the remaining classes, these percentages were higher. F1 scores for correctly detecting that a sentence includes no PICO element exceeded 0.92 in all classes. This indicates that the addition of impossible answer elements was successful, and that the model learned a representation of how to discriminate PICO contexts. The scores for correctly predicting PICOs in positive scenarios are lower. These results are presented in Table TABREF30. Here, two factors could influence this score in a negative way. First, labelled spans can be noisy. Training spans were annotated by crowd workers and the authors of the original dataset noted inter-annotator disagreement. Often, these spans include full stops, other punctuation or different levels of detail describing a PICO. The F1 score decreases if the model predicts a PICO, but the predicted span includes marginal differences that were not marked up by the experts who annotated the testing set. Second, some spans include multiple PICOs, sometimes across sentence boundaries. Other spans mark up single PICOS in succession. In these cases the model might find multiple PICOs in a row, and annotate them as one or vice versa."
],
[
"In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs.",
"For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks.",
"However, tagging whole sentences with respect to populations, interventions and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned.",
"Our implementation of the question answering task has shown that a substantial amount of PICO entities can be identified in abstracts on a token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be switched with more advanced transformer architectures, including XLM, XLNet, DistilBERT and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26 pre-processing and predicting more than one PICO per sentence are reserved for future work."
],
[
"Limitations in the automatically annotated PubMed training data mostly consist of incomplete detection or noise P, I, and O entities due to the single labelling. We did not have access to multilingual annotated PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group.",
"For the question answering, we limited the use of original SQuAD domains to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings. Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time."
],
[
"With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse.",
"In conclusion we wish to emphasize our argument that for future applications, interoperability is important. Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards.",
"The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data."
],
[
"We would like to thank Clive Adams for providing testing data and feedback for this project. We thank Vincent Cheng for the Chinese translation. Furthermore, we thank the BERT team at Google Research and Allenai for making their pre-trained model weights available. Finally, we acknowledge the Huggingface team and thank them for implementing the SQuAD classes for Transformers."
],
[
"LS was funded by the National Institute for Health Research (NIHR Systematic Review Fellowship, RM-SR-2017-09-028). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care."
],
[
"Scripts and supplementary material, as well as further illustrations are available from https://github.com/L-ENA/HealthINF2020. Training data for sentence classification and question answering are freely available from the cited sources.",
"Additionally, the Cochrane Schizophrenia Group extracted, annotated and made available data from studies included in over 200 systematic reviews. This aims at supporting the development of methods for reviewing tasks, and to increase the re-use of their data. These data include risk-of-bias assessment, results including all clean and published outcome data extracted by reviewers, data on PICOs, methods, and identifiers such as PubMed ID and a link to their study-based register. Additionally, a senior reviewer recently carried out a manual analysis of all 33,000 outcome names in these reviews, parsed and allocated to 15,000 unique outcomes in eight main categories BIBREF27."
]
],
"section_name": [
"INTRODUCTION",
"INTRODUCTION ::: Tools for SR automation and PICO classification",
"INTRODUCTION ::: Sentence classification data",
"INTRODUCTION ::: Question answering data ::: SQuAD",
"INTRODUCTION ::: Question answering data ::: Ebm-nlp",
"INTRODUCTION ::: Introduction to transformers",
"INTRODUCTION ::: Weaknesses in the previous sentence classification approach",
"INTRODUCTION ::: Contributions of this research",
"METHODOLOGY ::: Feature representation and advantages of contextualization",
"METHODOLOGY ::: Sentence classification ::: Preparation of the data",
"METHODOLOGY ::: Sentence classification ::: Fine-tuning",
"METHODOLOGY ::: Sentence classification ::: Post-training assignment of classes",
"METHODOLOGY ::: Question answering ::: Preparation of the data",
"METHODOLOGY ::: Question answering ::: Fine-tuning",
"RESULTS ::: Feature representation and contextualization",
"RESULTS ::: Sentence classification",
"RESULTS ::: Question answering",
"DISCUSSION",
"DISCUSSION ::: Limitations",
"CONCLUSION",
"ACKNOWLEDGEMENTS",
"FUNDING",
"Availability of the code and data"
]
} | {
"answers": [
{
"annotation_id": [
"11ea0b3864122600cc8ab3c6e1d34caea0d87c8c"
],
"answer": [
{
"evidence": [
"In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.",
"Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base."
],
"extractive_spans": [
"LSTM",
"SCIBERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13.",
"SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"7c2e7cb2253cdf2c28dc3ebda63e2141052f4290"
],
"answer": [
{
"evidence": [
"Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.",
"FLOAT SELECTED: Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions."
],
"extractive_spans": [],
"free_form_answer": "Some sentences are associated to ambiguous dimensions in the hidden state output",
"highlighted_evidence": [
"Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. ",
"FLOAT SELECTED: Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What baselines did they consider?",
"What are the problems related to ambiguity in PICO sentence prediction tasks?"
],
"question_id": [
"4cbc56d0d53c4c03e459ac43e3c374b75fd48efe",
"e5a965e7a109ae17a42dd22eddbf167be47fca75"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"transformers",
"transformers"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Classes for the sentence classification task.",
"Figure 1: Colour coded example for a population entity annotation, converted to SQuAD v.2 format. Combined data are used to train and evaluate the system.",
"Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions.",
"Figure 3: Visualisation of training sentences using SCIBERT. The x and y-axes represent the two most dominant t-SNE reduced dimensions for each concatenation of layers",
"Table 2: Summary of results for the sentence classification. task",
"Table 3: Predicting PICOs in Chinese and German. Classes were assigned based on foreign language inputs only. For reference, translations were provided by native speakers.",
"Table 4: Question Answering versus entity recognition results.",
"Table 5: Subgroups of possible sentences versus impossible sentences.",
"Table 6: This table shows two examples for intervention span predictions in QA-BERT200. On the official SQuAD development set, the same model achieved a good score, an exemplary question and prediction for this is given in the bottom row."
],
"file": [
"2-Table1-1.png",
"5-Figure1-1.png",
"6-Figure2-1.png",
"6-Figure3-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"10-Table6-1.png"
]
} | [
"What are the problems related to ambiguity in PICO sentence prediction tasks?"
] | [
[
"2001.11268-6-Figure2-1.png",
"2001.11268-RESULTS ::: Feature representation and contextualization-2"
]
] | [
"Some sentences are associated to ambiguous dimensions in the hidden state output"
] | 226 |
1706.07179 | RelNet: End-to-End Modeling of Entities & Relations | We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities and relations present in a document which can then be used to answer questions about the document. It is trained end-to-end: only supervision to the model is in the form of correct answers to the questions. We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of the 20 tasks. | {
"paragraphs": [
[
"Reasoning about entities and their relations is an important problem for achieving general artificial intelligence. Often such problems are formulated as reasoning over graph-structured representation of knowledge. Knowledge graphs, for example, consist of entities and relations between them BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Representation learning BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and reasoning BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 with such structured representations is an important and active area of research.",
"Most previous work on knowledge representation and reasoning relies on a pipeline of natural language processing systems, often consisting of named entity extraction BIBREF12 , entity resolution and coreference BIBREF13 , relationship extraction BIBREF4 , and knowledge graph inference BIBREF14 . While this cascaded approach of using NLP systems can be effective at reasoning with knowledge bases at scale, it also leads to a problem of compounding of the error from each component sub-system. The importance of each of these sub-component on a particular downstream application is also not clear.",
"For the task of question-answering, we instead make an attempt at an end-to-end approach which directly models the entities and relations in the text as memory slots. While incorporating existing knowledge (from curated knowledge bases) for the purpose of question-answering BIBREF11 , BIBREF8 , BIBREF15 is an important area of research, we consider the simpler setting where all the information is contained within the text itself – which is the approach taken by many recent memory based neural network models BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .",
"Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text.",
"We demonstrate the utility of the model through experiments on the bAbI tasks BIBREF18 and find that the model achieves smaller mean error across the tasks than the best previously published result BIBREF17 in the 10k examples regime and achieves 0% error on 11 of the 20 tasks."
],
[
"We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory.",
"There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question."
],
[
"There is a long line of work in textual question-answering systems BIBREF21 , BIBREF22 . Recent successful approaches use memory based neural networks for question answering, for example BIBREF23 , BIBREF18 , BIBREF24 , BIBREF19 , BIBREF17 . Our model is also a memory network based model and is also related to the neural turing machine BIBREF25 . As described previously, the model is closely related to the Recurrent Entity Networks model BIBREF17 which describes an end-to-end approach to model entities in text but does not directly model relations. Other approaches to question answering use external knowledge, for instance external knowledge bases BIBREF26 , BIBREF11 , BIBREF27 , BIBREF28 , BIBREF9 or external text like Wikipedia BIBREF29 , BIBREF30 .",
"Very recently, and in parallel to this work, a method for relational reasoning called relation networks BIBREF31 was proposed. They demonstrated that simple neural network modules are not as effective at relational reasoning and their proposed module is similar to our model. However, relation network is not a memory-based model and there is no mechanism to read and write relevant information for each pair. Moreover, while their approach scales as the square of the number of sentences, our approach scales as the square of the number of memory slots used per QA pair. The output module in our model can be seen as a type of relation network.",
"Representation learning and reasoning over graph structured data is also relevant to this work. Graph based neural network models BIBREF32 , BIBREF33 , BIBREF34 have been proposed which take graph data as an input. The relational memory however does not rely on a specified graph structure and such models can potentially be used for multi-hop reasoning over the relational memory. BIBREF35 proposed a method for learning a graphical representation of the text data for question answering, however the model requires explicit supervision for the graph at every step whereas RelNet does not require explicit supervision for the graph."
],
[
"We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.",
"Training Details: We used Adam and did a grid search for the learning rate in {0.01, 0.005, 0.001} and choose a fixed learning rate of 0.005 based on performance on the validation set, and clip the gradient norm at 2. We keep all other details similar to BIBREF17 for a fair comparison. embedding dimensions were fixed to be 100, models were trained for a maximum of 250 epochs with mini-batches size of 32 for all tasks except 3 for which the batch size was 16. The document sizes were limited to most recent 70 sentences for all tasks, except for task 3 for which it was limited to 130. The RelNet models were run for 5 times with random seed on each task and the model with best validation performance was chosen as the final model. The baseline EntNet model was run for 10 times for each task BIBREF17 .",
"The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
[
"We demonstrated an end-to-end trained neural network augmented with a structured memory representation which can reason about entities and relations for question answering. Future work will investigate the performance of these models on more real world datasets, interpreting what the models learn, and scaling these models to answer questions about entities and relations from reading massive text corpora."
]
],
"section_name": [
"Introduction",
"RelNet Model",
"Related Work",
"Experiments",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"a5d0953d56d8cd11ea834da09e2416aee83102ea"
],
"answer": [
{
"evidence": [
"Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text."
],
"extractive_spans": [
"the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector."
],
"free_form_answer": "",
"highlighted_evidence": [
"Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"48d2fcec8e2a7967bf3f1ab2c12b0e95c778fd7e"
],
"answer": [
{
"evidence": [
"There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question."
],
"extractive_spans": [],
"free_form_answer": "entity memory and relational memory.",
"highlighted_evidence": [
"There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"7090d01d80d3d73861302db34a0bea96bcc9af89"
],
"answer": [
{
"evidence": [
"The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"extractive_spans": [
"The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"free_form_answer": "",
"highlighted_evidence": [
" The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks.",
"The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"121f0702a2eab76c1ad0119ac520adc61edd716c"
],
"answer": [
{
"evidence": [
"We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory."
],
"extractive_spans": [
"extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. ",
"The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory."
],
"free_form_answer": "",
"highlighted_evidence": [
"The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"bd36e3e626f515050572af1723aa2049868fe1ec"
],
"answer": [
{
"evidence": [
"We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.",
"The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"extractive_spans": [
"We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 .",
" The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How is knowledge retrieved in the memory?",
"How is knowledge stored in the memory?",
"What are the relative improvements observed over existing methods?",
"What is the architecture of the neural network?",
"What methods is RelNet compared to?"
],
"question_id": [
"082c88e132b4f1bf68abdc3a21ac4af180de1113",
"74091e10f596428135b0ab06008608e09c051565",
"43b4f7eade7a9bcfaf9cc0edba921a41d6036e9c",
"a75861e6dd72d69fdf77ebd81c78d26c6f7d0864",
"60fd7ef7986a5752b31d3bd12bbc7da6843547a4"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: RelNet Model: The model represents the state of the world as a neural turing machine with relational memory. At each time step, the model reads the sentence into an encoding vector and updates both entity memories and all edges between them representing the relations.",
"Table 1: Mean % Error on the 20 Babi tasks."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"How is knowledge stored in the memory?"
] | [
[
"1706.07179-RelNet Model-1"
]
] | [
"entity memory and relational memory."
] | 227 |
1909.08824 | Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder | Understanding event and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for human to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, a If-Then commonsense reasoning dataset Atomic is proposed, together with an RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental problems still need to be addressed: first, the intents of an event may be multiple, while the generations of RNN-based Seq2Seq models are always semantically close; second, external knowledge of the event background may be necessary for understanding events and conducting the If-Then reasoning. To address these issues, we propose a novel context-aware variational autoencoder effectively learning event background information to guide the If-Then reasoning. Experimental results show that our approach improves the accuracy and diversity of inferences compared with state-of-the-art baseline methods. | {
"paragraphs": [
[
"Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because of understanding events is an important component of NLP. Given a daily-life event, human can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly due to most of them are trained for task-specific datasets or objectives, which results in models that are adapt at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.",
"To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristic about events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.",
"However, there still remains two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feeling of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7.",
"Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feeling of PersonX upon the event “PersonX finds a new job” could be multiple. However, after given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.",
"To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generate diversified inferences BIBREF8, BIBREF9.",
"In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.).",
"Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE."
],
[
"Before specifically describing two dataset —- Event2Mind and Atomic used in this paper as well as the If-Then reasoning task, for clarity, we define the following terminologies:",
"Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.",
"Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.",
"Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.",
"Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.",
"Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic.",
"Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequence of words: $x=\\lbrace x_1,\\dots , x_{m}\\rbrace $, and $y=\\lbrace y_1,\\dots , y_{n}\\rbrace $, where $m$ and $n$ denotes the length of $x$ and $y$, respectively.",
"Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\\theta $) $p_{\\theta }(y|x,z)$ and $p_{\\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$.",
"CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\\phi $) $q_{\\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:",
"Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\\theta }(z|x)$ as a prior network, $q_{\\phi }(z|x,y)$ as a recognition network, and $p_{\\theta }(y|x,z)$ as a neural decoder."
],
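To make the CVAE setup described above easier to follow, here is a minimal, hypothetical PyTorch sketch of the ELBO computation with diagonal-Gaussian prior and recognition networks. The module names, signatures, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class GaussianHead(nn.Module):
    """Maps an input vector to a diagonal Gaussian (mean and std)."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        return Normal(self.mu(h), torch.exp(0.5 * self.logvar(h)))

def cvae_elbo(h_x, h_y, prior_head, recog_head, decoder_nll):
    """ELBO = E_q[log p(y|x,z)] - KL(q(z|x,y) || p(z|x)).

    h_x, h_y    : encoder summaries of event x and target y, shape (batch, dim)
    prior_head  : GaussianHead over h_x          -> p(z|x)
    recog_head  : GaussianHead over [h_x; h_y]   -> q(z|x,y)
    decoder_nll : callable(z, h_x) -> per-example negative log-likelihood of y
    """
    p_z = prior_head(h_x)
    q_z = recog_head(torch.cat([h_x, h_y], dim=-1))
    z = q_z.rsample()                          # reparameterization trick
    recon_nll = decoder_nll(z, h_x)            # estimate of -E_q[log p(y|x,z)]
    kl = kl_divergence(q_z, p_z).sum(-1)
    return (-(recon_nll + kl)).mean()          # average ELBO over the batch
```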
[
"Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. While in this paper we model the If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets.",
"To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.",
"Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\\prime }}$.",
"Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ is generated based on $x$ and samples of $z_{c^{\\prime }}$, where $z_{c^{\\prime }}$ contains rich event background knowledge helpful for If-Then reasoning."
],
[
"As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\\phi }(z|x,y)$, $q_{\\phi }(z_c|x,c)$ and $q_{\\phi }(z|z_{c^{\\prime }}, x)$, a prior network for modeling $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\\prime }}$ to generate targets.",
"Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\\lbrace h_1^c,\\dots ,h_{l_c}^c\\rbrace $, $h^x=\\lbrace h_1^x,\\dots ,h_{l_x}^x\\rbrace $ and $h^y=\\lbrace h_1^y,\\dots ,h_{l_y}^y\\rbrace $, where $l_c$, $l_x$ and $l_y$ is the length of $c$, $x$ and $y$, respectively.",
"Recognition Network The recognition network models $q_{\\phi }(z|x,y)$, $q_{\\phi }(z_c|x,c)$, $q_{\\phi }(z|z_{c^{\\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$.",
"Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distribution with a diagonal covariance structure:",
"where $\\mu $ denotes the mean of the distribution, $\\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix.",
"Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\\phi }(z_{c}|x,c)$, $q_{\\phi }(z_{c^{\\prime }}|x,y)$ and $q_{\\phi }(z|x,y)$:",
"Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI in below.",
"Prior Network Prior Network models $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$ based on $h^x$. The distribution of $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$ are still assumed to be multivariate Gaussian, whereas the parameters are different:",
"where $\\mu ^{^{\\prime }}$ denotes the mean of the distribution, $\\sigma ^{^{\\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix.",
"Then the attention-based inferer module is still employed to estimate parameters of distributions:",
"Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\\prime }}$, the neural decoder defines the generation probability of $y$ as following:",
"where $p(y_j|y<j, z, z_{c^{\\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\\cdot )$ is an attention-based feed forward model, $e_j=\\sum _i \\alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\\cdot )$ and $e_j$ the same way as BIBREF12 (BIBREF12). Whereas our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\\prime }}$ and semantic latent variable $z$ in the computation of $s_j=\\mathrm {GRU}([E_{yj};s_{j-1},z,z_{j-1}])$, where $E_{yj}$ is the word embeddings of target words.",
"Note that through concatenating $z$ and $z_{c^{\\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by context-aware latent variable $z_{c^{\\prime }}$ and semantic latent variable $z$. This allows model to directly access to the event background knowledge from $z_{c^{\\prime }}$. In addition, the randomness of $z$ and $z_{c^{\\prime }}$ would increase the diversity of model generation.",
"Attention-based Inferer Attention mechanism has shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belongs to $p_{\\theta }(\\cdot )$ or $q_{\\phi }(\\cdot )$ by capturing semantic interactions of input sequences.",
"Specifically, given two input sequences (e.g., representations of contexts and events) $a=\\lbrace a_1,\\dots ,a_{l_a}\\rbrace $ and $b=\\lbrace b_1,\\dots ,b_{l_b}\\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through:",
"where $W_a \\in \\mathbb {R}^{d\\times d_a}$ and $W_b \\in \\mathbb {R}^{d\\times d_b}$ are parameter weights.",
"With these attention scores, the context vectors of both sequences are given by:",
"Then we perform a mean pooling operation on context vectors of both sequences:",
"To obtain the mean and standard deviation, the pooled context vectors $\\bar{c^a}$ and $\\bar{c^b}$ which carry semantic interaction between two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation:",
"Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$:"
],
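As a rough illustration of the attention-based inferer (ABI) described above, the sketch below uses a simple bilinear co-attention followed by mean pooling and a tanh projection; the exact score function and nonlinearities are not spelled out in the text, so these choices, along with all names and dimensions, are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBasedInferer(nn.Module):
    """Co-attention between two sequences, mean-pooled context vectors,
    then a head producing the mean and standard deviation of a Gaussian."""
    def __init__(self, d_a, d_b, d=100, z_dim=40):
        super().__init__()
        self.W_a = nn.Linear(d_a, d, bias=False)   # plays the role of W_a in R^{d x d_a}
        self.W_b = nn.Linear(d_b, d, bias=False)   # plays the role of W_b in R^{d x d_b}
        self.proj = nn.Linear(d_a + d_b, z_dim)
        self.mu = nn.Linear(z_dim, z_dim)
        self.log_sigma = nn.Linear(z_dim, z_dim)

    def forward(self, a, b):
        # a: (l_a, d_a), b: (l_b, d_b)
        scores = self.W_a(a) @ self.W_b(b).t()            # (l_a, l_b) attention scores
        ctx_a = F.softmax(scores, dim=1) @ b              # context vector for each a_i
        ctx_b = F.softmax(scores, dim=0).t() @ a          # context vector for each b_j
        pooled = torch.cat([ctx_a.mean(0), ctx_b.mean(0)], dim=-1)  # mean pooling
        h_z = torch.tanh(self.proj(pooled))               # nonlinear projection
        return self.mu(h_z), torch.exp(self.log_sigma(h_z))  # mean and std
```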
[
"With the incorporation of $z_{c^{\\prime }}$, the original loglikelihood could be decomposed as:",
"Then following traditional CVAE, the ELBO of CWVAE is defined as follows:",
"which is the objective function at the finetune stage.",
"While in the pretrain stage, as we aim to learn background knowledge through minimizing the distance between $z_c$ and $z_{c^{\\prime }}$, in addition to $L^{ELBO}$, a context-aware regulation term is introduced:",
"where the context aware regularization term is the KL distance between $z$ and $z_{c^{\\prime }}$. Through minimizing the context aware regularization term, we aim to pass event context knowledge from $z_c$ to the context aware latent variable $z_{c^{\\prime }}$."
],
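The pretraining objective described above can be sketched as the negative ELBO plus a context-aware regularizer weighted by a coefficient lambda. The exact form and direction of the KL term are assumptions made for illustration; this is not the authors' code.

```python
from torch.distributions import Normal, kl_divergence

def pretrain_loss(neg_elbo, mu_c, sigma_c, mu_cp, sigma_cp, lam=0.1):
    """neg_elbo          : scalar tensor, the negative ELBO term
    (mu_c, sigma_c)   : Gaussian parameters for z_c  (inferred from the context c)
    (mu_cp, sigma_cp) : Gaussian parameters for z_c' (inferred from the event x)
    lam               : regularization coefficient (0.1 in the reported setup)
    """
    reg = kl_divergence(Normal(mu_c, sigma_c),
                        Normal(mu_cp, sigma_cp)).sum(-1).mean()
    return neg_elbo + lam * reg
```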
[
"To test the performance of CWVAE, we split the Event2Mind and Atomic dataset into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be biGRU with 300 hidden units. For the ABI module, size of $W_a$ and $W_b$ is set to be $100 \\times d_a$ and $100 \\times d_b$ respectively. The dimension of $z_c$, $z_{c^{\\prime }}$ and $z$ is all set as 40. The neural decoder is set to be GRU with 300d hidden state. Regulation coefficient $\\lambda $ of context-aware regulation term is set to be 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001."
],
[
"The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs.",
"For each five-sentence-paragraph, we define the first three sentences as contexts of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. For example, as shown in Table TABREF25, the first three sentences describe a context that Jason was unsatisfied about his job and applied for a new job. Hence, after happening the event “he got the job”, a plausible react about the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples."
],
[
"We compared our proposed model with the following four baseline methods:",
"RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.",
"Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.",
"VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.",
"CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.",
"Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively."
],
[
"We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens."
],
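For concreteness, here is a small sketch of the distinct-n diversity metric as it is described above (unique n-grams divided by the total number of generated tokens); the function name and the toy example are illustrative.

```python
from collections import Counter

def distinct_n(generations, n):
    """`generations` is a list of token lists (e.g., the top-10 outputs per event).
    Returns the number of unique n-grams divided by the total token count."""
    ngrams = Counter()
    total_tokens = 0
    for tokens in generations:
        total_tokens += len(tokens)
        ngrams.update(zip(*(tokens[i:] for i in range(n))))
    return len(ngrams) / max(total_tokens, 1)

# Example: distinct_n([["to", "be", "productive"], ["to", "earn", "money"]], 1) == 5 / 6
```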
[
"Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations on the model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote for if a generation is fluent or coherent for each generated target, and give a 1-5 score for the diversity of generations. For both Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, top 10 generated targets of each base event are used for evaluation. Finally we report three overall averaged scores of coherence, diversity and fluency on both datasets, respectively."
],
[
"We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 score on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that:",
"(1) As shown in Table TABREF32 and Table TABREF34, comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE shows that, variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning.",
"(2) Comparing CWVAE-unpretrained with other baseline methods shows that, in general CWVAE improves the accuracy and diversity on both dataset. These results indicate the efficiency of CWVAE in capturing the latent semantic distribution of targets, and generate more reasonable inferential results.",
"(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage could enhance the performance of CWVAE in both the accuracy and diversity. This is mainly because event knowledge could offer the guidance for If-Then reasoning. In the pretrain stage, CWVAE could capture the event background knowledge through context-aware latent variable, and such knowledge could be be adapted to our task through the fintune stage.",
"To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistent better coherence, diversity and fluency performances. While comparing with CWVAE-Unpretrained, the pretrain procedure could improves the performance on coherence and fluency. The main reasons are twofold: first, the CWVAE has advantage in capturing the semantic distribution of targets; second, event background learned from the pretrain stage is helpful for the If-Then reasoning."
],
[
"Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. While the semantics of generations using baseline RNN-based Seq2Seq model is relatively limited. Furthermore, the first three kinds of semantic overlap the three ground truth targets, and the fourth kind of semantic is in accordance with daily-life commonsense. Compared to RNN-based Seq2Seq model, our approach can increase the diversity and rationality of generations, meanwhile keep the accuracy."
],
[
"Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently a growing number of studies focus on event-centered commonsense reasoning, which mainly concentrates on two areas, script event prediction and story ending generation/choosing.",
"Script event prediction concerns with the temporal relationships between script events BIBREF25, which requires models to choose a correct subsequent triple-organized event among the candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graph BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context, and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical orders of events, whereas the If-Then reasoning task focuses on inferring the mental state of event participants."
],
[
"VAE BIBREF10 has been widely applied in various of text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts VAE with encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations. For the task of machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentence, and regard the latent variable as a supplementation of attention mechanism. While BIBREF29 (BIBREF29) use the latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning on the guidance of it."
],
[
"In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge, and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge, then in the finetune stage CWVAE adapts such knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations."
],
[
"We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (SQ2018AAA010010), the National Key Research and Development Program of China (2018YFB1005103), the National Natural Science Foundation of China (NSFC) via Grant 61702137."
]
],
"section_name": [
"Introduction",
"Background",
"Context-aware Variational Autoencoder",
"Context-aware Variational Autoencoder ::: Architecture of CWVAE",
"Context-aware Variational Autoencoder ::: Optimizing",
"Context-aware Variational Autoencoder ::: Training Details",
"Experiments ::: Auxiliary Dataset",
"Experiments ::: Baselines",
"Experiments ::: Evaluation Metrics ::: Automatic Evaluation",
"Experiments ::: Evaluation Metrics ::: Human Evaluation",
"Experiments ::: Overall Results",
"Experiments ::: Case Study",
"Related Work ::: Event-Centered Commonsense Reasoning",
"Related Work ::: Variational AutoEncoder-Decoder Based Natural Language Generation",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7f7d9a78c51f1de52959ee1634d8d01fc56c9efd"
],
"answer": [
{
"evidence": [
"We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens."
],
"extractive_spans": [],
"free_form_answer": "by number of distinct n-grams",
"highlighted_evidence": [
"Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"5f5d24e05be705e9487a2032e7c9a8e3c69d41d7"
],
"answer": [
{
"evidence": [
"We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens.",
"FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
],
"extractive_spans": [],
"free_form_answer": "ON Event2Mind, the accuracy of proposed method is improved by absolute BLUE 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn Atomic dataset, the accuracy of proposed method is improved by absolute BLUE 3.95. 4.11, 4.49 for xIntent, xReact and oReact.respectively.",
"highlighted_evidence": [
"Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. ",
"FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"667d47b73133321cfe695db94c2418e8b8c4d9bb"
],
"answer": [
{
"evidence": [
"We compared our proposed model with the following four baseline methods:",
"RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.",
"Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.",
"VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.",
"CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.",
"Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
],
"extractive_spans": [
"RNN-based Seq2Seq",
"Variational Seq2Seq",
"VRNMT ",
"CWVAE-Unpretrained"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our proposed model with the following four baseline methods:\n\nRNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.\n\nVariational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.\n\nVRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.\n\nCWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.\n\nNote that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"d01baf34ae2b5ff6b706bad6ad645c4da7d42d1b"
],
"answer": [
{
"evidence": [
"In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)."
],
"extractive_spans": [],
"free_form_answer": " CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. Then, in finetute stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target.",
"highlighted_evidence": [
"In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"122017054a7e7b46d0ad276b7a3e5abd76b463ba"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How do they measure the diversity of inferences?",
"By how much do they improve the accuracy of inferences over state-of-the-art methods?",
"Which models do they use as baselines on the Atomic dataset?",
"How does the context-aware variational autoencoder learn event background information?",
"What is the size of the Atomic dataset?"
],
"question_id": [
"7d59374d9301a0c09ea5d023a22ceb6ce07fb490",
"8e2b125426d1220691cceaeaf1875f76a6049cbd",
"42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1",
"fb76e994e2e3fa129f1e94f1b043b274af8fb84c",
"99ef97336c0112d9f60df108f58c8b04b519a854"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
" ",
" ",
" ",
" ",
" "
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A illustration of two challenging problems in IfThen reasoning. (a) Given an observed event, the feelings about this event could be multiple. (b) Background knowledge is need for generating reasonable inferences, which is absent in the dataset (marked by dashed lines).",
"Table 1: Hierarchical structure of Event2Mind dataset. For specific inference dimensions, “x” and “o” refers to PersonX and others respectively.",
"Table 2: Hierarchical structure of Atomic dataset. For specific inference dimensions, “x” and “o” refers to PersonX and others respectively.",
"Figure 2: Illustration of inference and generation process of CVAE in a directed graph. Dashed lines represent the inference of z. Solid lines represent the generation process.",
"Figure 3: Illustration of pretrain, finetune and generation process of CWVAE in a directed graph. Dashed lines represent the inference of z, zc and zc′ . Solid lines represent the generation process. Red circle denotes the context-aware latent variable.",
"Figure 4: Architecture of CWVAE. We mark Neural encoder in green, prior network in blue, recognition network in brown and neural decoder in orange, respectively.",
"Table 3: An example for the construction of auxiliary dataset. For a five-sentence-paragraph, the first three sentences are taken as event context, while the fourth and fifth sentence is taken as base event and target respectively.",
"Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"Table 5: Distinct-1 and distinct-2 scores for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened.",
"Table 7: Distinct-1 and distinct-2 scores for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened.",
"Table 9: Human evaluation results on Atomic.",
"Table 8: Human evaluation results on Event2Mind.",
"Table 10: An example of inferences made by CWVAE and RNN-based Seq2Seq model under inference dimension “xIntent”."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png",
"7-Table9-1.png",
"7-Table8-1.png",
"8-Table10-1.png"
]
} | [
"How do they measure the diversity of inferences?",
"By how much do they improve the accuracy of inferences over state-of-the-art methods?",
"How does the context-aware variational autoencoder learn event background information?"
] | [
[
"1909.08824-Experiments ::: Evaluation Metrics ::: Automatic Evaluation-0"
],
[
"1909.08824-Experiments ::: Evaluation Metrics ::: Automatic Evaluation-0",
"1909.08824-7-Table6-1.png",
"1909.08824-6-Table4-1.png"
],
[
"1909.08824-Introduction-5"
]
] | [
"by number of distinct n-grams",
"ON Event2Mind, the accuracy of proposed method is improved by absolute BLUE 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn Atomic dataset, the accuracy of proposed method is improved by absolute BLUE 3.95. 4.11, 4.49 for xIntent, xReact and oReact.respectively.",
" CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. Then, in finetute stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target."
] | 228 |
1701.03214 | An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation | In this paper, we propose a novel domain adaptation method named"mixed fine tuning"for neural machine translation (NMT). We combine two existing approaches namely fine tuning and multi domain NMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix of the in-domain and out-of-domain corpora. All corpora are augmented with artificial tags to indicate specific domains. We empirically compare our proposed method against fine tuning and multi domain methods and discuss its benefits and shortcomings. | {
"paragraphs": [
[
"One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .",
"Domain adaptation has been shown to be effective for low resource NMT. The conventional domain adaptation method is fine tuning, in which an out-of-domain model is further trained on in-domain data BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . However, fine tuning tends to overfit quickly due to the small size of the in-domain data. On the other hand, multi domain NMT BIBREF8 involves training a single NMT model for multiple domains. This method adds tags “<2domain>\" by modifying the parallel corpora to indicate domains without any modifications to the NMT system architecture. However, this method has not been studied for domain adaptation in particular.",
"Motivated by these two lines of studies, we propose a new domain adaptation method called “mixed fine tuning,\" where we first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus that is a mix of the in-domain and out-of-domain corpora. Fine tuning on the mixed corpus instead of the in-domain corpus can address the overfitting problem. All corpora are augmented with artificial tags to indicate specific domains. We tried two different corpora settings:",
"We observed that “mixed fine tuning\" works significantly better than methods that use fine tuning and domain tag based approaches separately. Our contributions are twofold:"
],
[
"Besides fine tuning and multi domian NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural language (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 have been studied."
],
[
"All the methods that we compare are simple and do not need any modifications to the NMT system."
],
[
"Fine tuning is the conventional way for domain adaptation, and thus serves as a baseline in this study. In this method, we first train an NMT system on a resource rich out-of-domain corpus till convergence, and then fine tune its parameters on a resource poor in-domain corpus (Figure 1 )."
],
[
"The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain.",
"We can further fine tune the multi domain model on the in-domain data, which is named as “multi domain + fine tuning.”"
],
[
"The proposed mixed fine tuning method is a combination of the above methods (shown in Figure 2 ). The training procedure is as follows:",
"Train an NMT model on out-of-domain data till convergence.",
"Resume training the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) till convergence.",
"By default, we utilize domain tags, but we also consider settings where we do not use them (i.e., “w/o tags”). We can further fine tune the model from step 2 on the in-domain data, which is named as “mixed fine tuning + fine tuning.”",
"Note that in the “fine tuning” method, the vocabulary obtained from the out-of-domain data is used for the in-domain data; while for the “multi domain” and “mixed fine tuning” methods, we use a vocabulary obtained from the mixed in-domain and out-of-domain data for all the training stages."
],
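To make the tag-and-oversample corpus preparation for mixed fine tuning concrete, here is a minimal Python sketch; the tag strings and the simple duplication-based oversampling ratio are placeholders for illustration, not the authors' exact pipeline.

```python
import random

def build_mixed_corpus(out_domain, in_domain, out_tag="<2out>", in_tag="<2in>"):
    """`out_domain` and `in_domain` are lists of (source, target) sentence pairs.
    Prepends a domain tag to every source sentence and oversamples the smaller
    in-domain corpus so both domains contribute a similar number of examples."""
    ratio = max(1, round(len(out_domain) / max(len(in_domain), 1)))
    mixed = [(f"{out_tag} {src}", tgt) for src, tgt in out_domain]
    mixed += [(f"{in_tag} {src}", tgt) for src, tgt in in_domain] * ratio
    random.shuffle(mixed)
    return mixed
```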
[
"We conducted NMT domain adaptation experiments in two different settings as follows:"
],
[
"Chinese-to-English translation was the focus of the high quality in-domain corpus setting. We utilized the resource rich patent out-of-domain data to augment the resource poor spoken language in-domain data. The patent domain MT was conducted on the Chinese-English subtask (NTCIR-CE) of the patent MT task at the NTCIR-10 workshop BIBREF9 . The NTCIR-CE task uses 1000000, 2000, and 2000 sentences for training, development, and testing, respectively. The spoken domain MT was conducted on the Chinese-English subtask (IWSLT-CE) of the TED talk MT task at the IWSLT 2015 workshop BIBREF10 . The IWSLT-CE task contains 209,491 sentences for training. We used the dev 2010 set for development, containing 887 sentences. We evaluated all methods on the 2010, 2011, 2012, and 2013 test sets, containing 1570, 1245, 1397, and 1261 sentences, respectively."
],
[
"Chinese-to-Japanese translation was the focus of the low quality in-domain corpus setting. We utilized the resource rich scientific out-of-domain data to augment the resource poor Wikipedia (essentially open) in-domain data. The scientific domain MT was conducted on the Chinese-Japanese paper excerpt corpus (ASPEC-CJ) BIBREF11 , which is one subtask of the workshop on Asian translation (WAT) BIBREF15 . The ASPEC-CJ task uses 672315, 2090, and 2107 sentences for training, development, and testing, respectively. The Wikipedia domain task was conducted on a Chinese-Japanese corpus automatically extracted from Wikipedia (WIKI-CJ) BIBREF12 using the ASPEC-CJ corpus as a seed. The WIKI-CJ task contains 136013, 198, and 198 sentences for training, development, and testing, respectively."
],
[
"For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100.",
"For performance comparison, we also conducted experiments on phrase based SMT (PBSMT). We used the Moses PBSMT system BIBREF17 for all of our MT experiments. For the respective tasks, we trained 5-gram language models on the target side of the training data using the KenLM toolkit with interpolated Kneser-Ney discounting, respectively. In all of our experiments, we used the GIZA++ toolkit for word alignment; tuning was performed by minimum error rate training BIBREF18 , and it was re-run for every experiment.",
"For both MT systems, we preprocessed the data as follows. For Chinese, we used KyotoMorph for segmentation, which was trained on the CTB version 5 (CTB5) and SCTB BIBREF19 . For English, we tokenized and lowercased the sentences using the tokenizer.perl script in Moses. Japanese was segmented using JUMAN BIBREF20 .",
"For NMT, we further split the words into sub-words using byte pair encoding (BPE) BIBREF21 , which has been shown to be effective for the rare word problem in NMT. Another motivation of using sub-words is making the different domains share more vocabulary, which is important especially for the resource poor domain. For the Chinese-to-English tasks, we trained two BPE models on the Chinese and English vocabularies, respectively. For the Chinese-to-Japanese tasks, we trained a joint BPE model on both of the Chinese and Japanese vocabularies, because Chinese and Japanese could share some vocabularies of Chinese characters. The number of merge operations was set to 30,000 for all the tasks."
],
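As an illustration of the BPE sub-word splitting mentioned above, the sketch below shows the standard merge-learning loop on a toy vocabulary; in practice a toolkit would be used with 30,000 merge operations as stated, and the toy vocabulary here is purely illustrative.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent-symbol pair frequencies in a {word-as-spaced-symbols: freq} vocab."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every occurrence of `pair` into a single symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Toy example with a handful of merges.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(10):
    stats = get_pair_stats(vocab)
    if not stats:
        break
    vocab = merge_pair(stats.most_common(1)[0][0], vocab)
```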
[
"Tables 1 and 2 show the translation results on the Chinese-to-English and Chinese-to-Japanese tasks, respectively. The entries with SMT and NMT are the PBSMT and NMT systems, respectively; others are the different methods described in Section \"Methods for Comparison\" . In both tables, the numbers in bold indicate the best system and all systems that were not significantly different from the best system. The significance tests were performed using the bootstrap resampling method BIBREF22 at $p < 0.05$ .",
"We can see that without domain adaptation, the SMT systems perform significantly better than the NMT system on the resource poor domains, i.e., IWSLT-CE and WIKI-CJ; while on the resource rich domains, i.e., NTCIR-CE and ASPEC-CJ, NMT outperforms SMT. Directly using the SMT/NMT models trained on the out-of-domain data to translate the in-domain data shows bad performance. With our proposed “Mixed fine tuning\" domain adaptation method, NMT significantly outperforms SMT on the in-domain tasks.",
"Comparing different domain adaptation methods, “Mixed fine tuning” shows the best performance. We believe the reason for this is that “Mixed fine tuning” can address the over-fitting problem of “Fine tuning.” We observed that while “Fine tuning” overfits quickly after only 1 epoch of training, “Mixed fine tuning” only slightly overfits until covergence. In addition, “Mixed fine tuning” does not worsen the quality of out-of-domain translations, while “Fine tuning” and “Multi domain” do. One shortcoming of “Mixed fine tuning” is that compared to “fine tuning,” it took a longer time for the fine tuning process, as the time until convergence is essentially proportional to the size of the data used for fine tuning.",
"“Multi domain” performs either as well as (IWSLT-CE) or worse than (WIKI-CJ) “Fine tuning,” but “Mixed fine tuning” performs either significantly better than (IWSLT-CE) or is comparable to (WIKI-CJ) “Fine tuning.” We believe the performance difference between the two tasks is due to their unique characteristics. As WIKI-CJ data is of relatively poorer quality, mixing it with out-of-domain data does not have the same level of positive effects as those obtained by the IWSLT-CE data.",
"The domain tags are helpful for both “Multi domain” and “Mixed fine tuning.” Essentially, further fine tuning on in-domain data does not help for both “Multi domain” and “Mixed fine tuning.” We believe the reason for this is that the “Multi domain” and “Mixed fine tuning” methods already utilize the in-domain data used for fine tuning."
],
[
"In this paper, we proposed a novel domain adaptation method named “mixed fine tuning” for NMT. We empirically compared our proposed method against fine tuning and multi domain methods, and have shown that it is effective but is sensitive to the quality of the in-domain data used.",
"In the future, we plan to incorporate an RNN model into our current architecture to leverage abundant in-domain monolingual corpora. We also plan on exploring the effects of synthetic data by back translating large in-domain monolingual corpora. "
]
],
"section_name": [
"Introduction",
"Related Work",
"Methods for Comparison",
"Fine Tuning",
"Multi Domain",
"Mixed Fine Tuning",
"Experimental Settings",
"High Quality In-domain Corpus Setting",
"Low Quality In-domain Corpus Setting",
"MT Systems",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"f92d4930c3a5af4cac3ed3b914ec9a554dfeade4"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE."
],
"extractive_spans": [],
"free_form_answer": "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"12335d0c788b511cd38f82941b7e5bba2fe24e21"
],
"answer": [
{
"evidence": [
"For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100."
],
"extractive_spans": [
"LSTMs"
],
"free_form_answer": "",
"highlighted_evidence": [
"For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"65f0a6719b495621b5ad95e39f4305074795673f"
],
"answer": [
{
"evidence": [
"The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain."
],
"extractive_spans": [
"Appending the domain tag “<2domain>\" to the source sentences of the respective corpora"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much improvement does their method get over the fine tuning baseline?",
"What kinds of neural networks did they use in this paper?",
"How did they use the domain tags?"
],
"question_id": [
"a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4",
"46ee1cbbfbf0067747b28bdf4c8c2f7dc8955650",
"4f12b41bd3bb2610abf7d7835291496aa69fb78c"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"domain adaptation",
"domain adaptation",
"domain adaptation"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Fine tuning for domain adaptation",
"Figure 2: Tag based multi domain NMT",
"Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE.",
"Table 2: Domain adaptation results (BLEU-4 scores) for WIKI-CJ using ASPEC-CJ."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"3-Table2-1.png"
]
} | [
"How much improvement does their method get over the fine tuning baseline?"
] | [
[
"1701.03214-3-Table1-1.png"
]
] | [
"0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE."
] | 230 |
1611.02550 | Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches | Acoustic word embeddings --- fixed-dimensional vector representations of variable-length spoken word segments --- have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to the same word, while being dissimilar for segments corresponding to different words. Recent work has found that acoustic word embeddings can outperform dynamic time warping on query-by-example search and related word discrimination tasks. However, the space of embedding models and training approaches is still relatively unexplored. In this paper we present new discriminative embedding models based on recurrent neural networks (RNNs). We consider training losses that have been successful in prior work, in particular a cross entropy loss for word classification and a contrastive loss that explicitly aims to separate same-word and different-word pairs in a"Siamese network"training setting. We find that both classifier-based and Siamese RNN embeddings improve over previously reported results on a word discrimination task, with Siamese RNNs outperforming classification models. In addition, we present analyses of the learned embeddings and the effects of variables such as dimensionality and network structure. | {
"paragraphs": [
[
"Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.",
"Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.",
"An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.",
"There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision."
],
[
"We next briefly describe the most closely related prior work.",
"Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.",
"Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.",
"Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.",
"The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.",
""
],
[
"",
"An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 .",
"The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to \"remember\" all of the needed intermediate information. Some of that information may not be needed in the final embedding. In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.",
"Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .",
"In an LSTM RNN, at each time frame both the hidden state INLINEFORM0 and an associated “cell memory\" vector INLINEFORM1 , are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:",
" INLINEFORM0 ",
"where INLINEFORM0 , and INLINEFORM1 are all vectors of the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate sizes, INLINEFORM4 and INLINEFORM5 are learned bias vectors, INLINEFORM6 is a componentwise logistic activation, and INLINEFORM7 refers to the Hadamard (componentwise) product.",
"Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate INLINEFORM0 and an update gate INLINEFORM1 as described below for a single-layer network: INLINEFORM2 ",
"where INLINEFORM0 , and INLINEFORM1 are all the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate size, and INLINEFORM4 , INLINEFORM5 and INLINEFORM6 are learned bias vectors.",
"All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors INLINEFORM0 and so on for layer INLINEFORM1 . For all but the first layer, the input INLINEFORM2 is replaced by the hidden state vector from the previous layer INLINEFORM3 .",
"For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.",
""
],
[
"We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.",
"The second training approach, based on earlier work of Kamper et al. BIBREF13 , is to train \"Siamese\" networks BIBREF30 . In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before—an RNN followed by a set of fully connected layers—but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an “anchor\", INLINEFORM0 , the second is another segment with the same word label, INLINEFORM1 , and the third is a segment corresponding to a different word label, INLINEFORM2 . Then, the network is trained using a “cos-hinge\" loss:",
" DISPLAYFORM0 ",
"where INLINEFORM0 is the cosine distance between INLINEFORM1 . Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.",
""
],
[
"",
"Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship. and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20 , which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine similarity between their acoustic word embeddings and declaring them to be the same if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.",
"The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.",
"When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the INLINEFORM0 loss.",
""
],
[
"Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.",
"The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33 . The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: If 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the the best dev set AP is chosen. Several other optimizers—Adagrad BIBREF34 , Adadelta BIBREF35 , and Adam BIBREF36 —were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.",
""
],
[
"For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.",
"In training the Siamese networks, each training mini-batch consists of INLINEFORM0 triplets. INLINEFORM1 triplets are of the form INLINEFORM2 where INLINEFORM3 and INLINEFORM4 are examples of the same class (a pair from the 100k same-word pair set) and INLINEFORM5 is a randomly sampled example from a different class. Then, for each of these INLINEFORM6 triplets INLINEFORM7 , an additional triplet INLINEFORM8 is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13 , which we found to improve stability in training and performance on the development set.",
"In preliminary experiments, we compared two methods for choosing the negative examples INLINEFORM0 during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample INLINEFORM1 uniformly at random from the full set of training examples with labels different from INLINEFORM2 . This sampling method requires only word-pair supervision. In the case of non-uniform sampling, INLINEFORM3 is sampled in two steps. First, we construct a distribution INLINEFORM4 over word labels INLINEFORM5 and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF INLINEFORM6 , we maintain an INLINEFORM7 matrix INLINEFORM8 , where INLINEFORM9 is the number of unique word labels in training. Each word label corresponds to an integer INLINEFORM10 INLINEFORM11 [1, INLINEFORM12 ] and therefore a row in INLINEFORM13 . The values in a row of INLINEFORM14 are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.",
"At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :",
" INLINEFORM0 ",
"The PMFs INLINEFORM0 are updated after the forward pass of an entire mini-batch. The constant INLINEFORM1 enforces a potentially stronger constraint than is used in the INLINEFORM2 loss, in order to promote diverse sampling. In all experiments, we set INLINEFORM3 . This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.",
"We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .",
""
],
[
" Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.",
"We next analyze the effects of model design choices, as well as the learned embeddings themselves.",
""
],
[
"Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.",
"Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10 , we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding fixed the number of fully-connected layers at INLINEFORM0 . There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.",
"After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.",
""
],
[
"For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .",
""
],
[
"We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least INLINEFORM0 times in the classifier training set, then it occurs at least INLINEFORM1 times in the Siamese paired training data.",
""
],
[
"In order to gain a better qualitative understanding of the differences between clasiffier and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12 . For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.",
""
],
[
"",
"Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.",
"These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks."
]
],
"section_name": [
"Introduction",
"Related work",
"Approach",
"Training",
"EXPERIMENTS",
"Classification network details",
"Siamese network details",
"Results",
"Effect of model structure",
"Effect of embedding dimensionality",
"Effect of training vocabulary",
"Visualization of embeddings",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1fd4f3fbe7b6046c29581d726d5cfe3e080fd7c8"
],
"answer": [
{
"evidence": [
"An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 ."
],
"extractive_spans": [
"a vector of frame-level acoustic features"
],
"free_form_answer": "",
"highlighted_evidence": [
"An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1296db0535d800668b7dfc49d903edf11643d543"
],
"answer": [
{
"evidence": [
"Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set."
],
"extractive_spans": [
"1061"
],
"free_form_answer": "",
"highlighted_evidence": [
"The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2aa70ad856356c985fd3ab88b850c08da935d830"
],
"answer": [
{
"evidence": [
"The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments."
],
"extractive_spans": [
"Switchboard conversational English corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"e29d3437584259c203f003372b6df706a73753c3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations."
],
"extractive_spans": [],
"free_form_answer": "Their best average precision tops previous best result by 0.202",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How do they represent input features of their model to train embeddings?",
"Which dimensionality do they use for their embeddings?",
"Which dataset do they use?",
"By how much do they outpeform previous results on the word discrimination task?"
],
"question_id": [
"d40662236eed26f17dd2a3a9052a4cee1482d7d6",
"1d791713d1aa77358f11501f05c108045f53c8aa",
"6b6360fab2edc836901195c0aba973eae4891975",
"b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: LSTM-based acoustic word embedding model. For GRUbased models, the structure is the same, but the LSTM cells are replaced with GRU cells, and there is no cell activation vector; the recurrent connections only carry the hidden state vector hlt.",
"Fig. 2: Effect of embedding dimensionality (left) and occurrences in training set (right).",
"Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations.",
"Table 2: Average precision on the dev set, using classifier-based embeddings. S = # stacked layers, F = # fully connected layers.",
"Fig. 3: t-SNE visualization of word embeddings from the dev set produced by the classifier (top) vs. Siamese (bottom) models. Word labels seen at training time are denoted by triangles and word labels unseen at training time are denoted by circles."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Figure3-1.png"
]
} | [
"By how much do they outpeform previous results on the word discrimination task?"
] | [
[
"1611.02550-5-Table1-1.png"
]
] | [
"Their best average precision tops previous best result by 0.202"
] | 233 |
1601.06068 | Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing | One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer. In this paper we propose to bridge this gap by generating paraphrases of the input question with the goal that at least one of them will be correctly mapped to a knowledge-base query. We introduce a novel grammar model for paraphrase generation that does not require any sentence-aligned paraphrase corpus. Our key idea is to leverage the flexibility and scalability of latent-variable probabilistic context-free grammars to sample paraphrases. We do an extrinsic evaluation of our paraphrases by plugging them into a semantic parser for Freebase. Our evaluation experiments on the WebQuestions benchmark dataset show that the performance of the semantic parser significantly improves over strong baselines. | {
"paragraphs": [
[
"Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer people in Czech Republic refers to Czech Republic and What refers to the language and speak refers to the predicate official language.",
"We address the above problems by using paraphrases of the original question. Paraphrasing has shown to be promising for semantic parsing BIBREF9 , BIBREF10 , BIBREF11 . We propose a novel framework for paraphrasing using latent-variable PCFGs (L-PCFGs). Earlier approaches to paraphrasing used phrase-based machine translation for text-based QA BIBREF12 , BIBREF13 , or hand annotated grammars for KB-based QA BIBREF10 . We find that phrase-based statistical machine translation (MT) approaches mainly produce lexical paraphrases without much syntactic diversity, whereas our grammar-based approach is capable of producing both lexically and syntactically diverse paraphrases. Unlike MT based approaches, our system does not require aligned parallel paraphrase corpora. In addition we do not require hand annotated grammars for paraphrase generation but instead learn the grammar directly from a large scale question corpus.",
"The main contributions of this paper are two fold. First, we present an algorithm (§ \"Paraphrase Generation Using Grammars\" ) to generate paraphrases using latent-variable PCFGs. We use the spectral method of narayan-15 to estimate L-PCFGs on a large scale question treebank. Our grammar model leads to a robust and an efficient system for paraphrase generation in open-domain question answering. While CFGs have been explored for paraphrasing using bilingual parallel corpus BIBREF14 , ours is the first implementation of CFG that uses only monolingual data. Second, we show that generated paraphrases can be used to improve semantic parsing of questions into Freebase logical forms (§ \"Semantic Parsing using Paraphrasing\" ). We build on a strong baseline of reddylargescale2014 and show that our grammar model competes with MT baseline even without using any parallel paraphrase resources."
],
[
"Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .",
"In our estimation of L-PCFGs, we use the spectral method of narayan-15, instead of using EM, as has been used in the past by matsuzaki-2005 and petrov-2006. The spectral method we use enables the choice of a set of feature functions that indicate the latent states, which proves to be useful in our case. It also leads to sparse grammar estimates and compact models.",
"The spectral method works by identifying feature functions for “inside” and “outside” trees, and then clusters them into latent states. Then it follows with a maximum likelihood estimation step, that assumes the latent states are represented by clusters obtained through the feature function clustering. For more details about these constructions, we refer the reader to cohen-13 and narayan-15.",
"The rest of this section describes our paraphrase generation algorithm."
],
[
"We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.",
"We first build a word lattice $W_q$ for the input question $q$ . We use the lattice to constrain our paraphrases to a specific choice of words and phrases that can be used. Once this lattice is created, a grammar $G_{\\mathrm {syn}}^{\\prime }$ is then extracted from $G_{\\mathrm {syn}}$ . This grammar is constrained to the lattice.",
"We experiment with three ways of constructing word lattices: naïve word lattices representing the words from the input question only, word lattices constructed with the Paraphrase Database BIBREF14 and word lattices constructed with a bi-layered L-PCFG, described in § \"Bi-Layered L-PCFGs\" . For example, Figure 1 shows an example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.",
"Once $G_{\\mathrm {syn}}^{\\prime }$ is generated, we sample paraphrases of the input question $q$ . These paraphrases are further filtered with a classifier to improve the precision of the generated paraphrases.",
"We train the L-PCFG $G_{\\mathrm {syn}}$ on the Paralex corpus BIBREF9 . Paralex is a large monolingual parallel corpus, containing 18 million pairs of question paraphrases with 2.4M distinct questions in the corpus. It is suitable for our task of generating paraphrases since its large scale makes our model robust for open-domain questions. We construct a treebank by parsing 2.4M distinct questions from Paralex using the BLLIP parser BIBREF25 .",
"Given the treebank, we use the spectral algorithm of narayan-15 to learn an L-PCFG for constituency parsing to learn $G_{\\mathrm {syn}}$ . We follow narayan-15 and use the same feature functions for the inside and outside trees as they use, capturing contextual syntactic information about nonterminals. We refer the reader to narayan-15 for more detailed description of these features. In our experiments, we set the number of latent states to 24.",
"Once we estimate $G_{\\mathrm {syn}}$ from the Paralex corpus, we restrict it for each question to a grammar $G_{\\mathrm {syn}}^{\\prime }$ by keeping only the rules that could lead to a derivation over the lattice. This step is similar to lexical pruning in standard grammar-based generation process to avoid an intermediate derivation which can never lead to a successful derivation BIBREF26 , BIBREF27 .",
"Sampling a question from the grammar $G_{\\mathrm {syn}}^{\\prime }$ is done by recursively sampling nodes in the derivation tree, together with their latent states, in a top-down breadth-first fashion. Sampling from the pruned grammar $G_{\\mathrm {syn}}^{\\prime }$ raises an issue of oversampling words that are more frequent in the training data. To lessen this problem, we follow a controlled sampling approach where sampling is guided by the word lattice $W_q$ . Once a word $w$ from a path $e$ in $W_q$ is sampled, all other parallel or conflicting paths to $e$ are removed from $W_q$ . For example, generating for the word lattice in Figure 1 , when we sample the word citizens, we drop out the paths “human beings”, “people's”, “the population”, “people” and “members of the public” from $W_q$ and accordingly update the grammar. The controlled sampling ensures that each sampled question uses words from a single start-to-end path in $W_q$ . For example, we could sample a question what is Czech Republic 's language? by sampling words from the path (what, language, do, people 's, in, Czech, Republic, is speaking, ?) in Figure 1 . We repeat this sampling process to generate multiple potential paraphrases.",
"The resulting generation algorithm has multiple advantages over existing grammar generation methods. First, the sampling from an L-PCFG grammar lessens the lexical ambiguity problem evident in lexicalized grammars such as tree adjoining grammars BIBREF27 and combinatory categorial grammars BIBREF28 . Our grammar is not lexicalized, only unary context-free rules are lexicalized. Second, the top-down sampling restricts the combinatorics inherent to bottom-up search BIBREF29 . Third, we do not restrict the generation by the order information in the input. The lack of order information in the input often raises the high combinatorics in lexicalist approaches BIBREF30 . In our case, however, we use sampling to reduce this problem, and it allows us to produce syntactically diverse questions. And fourth, we impose no constraints on the grammar thereby making it easier to maintain bi-directional (recursive) grammars that can be used both for parsing and for generation BIBREF31 ."
],
[
"As mentioned earlier, one of our lattice types is based on bi-layered PCFGs introduced here.",
"In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.",
"To create the bi-layered L-PCFG, we again use the spectral algorithm of narayan-15 to estimate a grammar $G_{\\mathrm {par}}$ from the Paralex corpus. We use the word alignment of paraphrase question pairs in Paralex to map inside and outside trees of each nonterminals in the treebank to bag of word features. The number of latent states we use is 1,000.",
"Once the two feature functions (syntactic in $G_{\\mathrm {syn}}$ and semantic in $G_{\\mathrm {par}}$ ) are created, each nonterminal in the training treebank is assigned two latent states (cluster identifiers). Figure 2 shows an example annotation of trees for three paraphrase questions from the Paralex corpus. We compute the parameters of the bi-layered L-PCFG $G_{\\mathrm {layered}}$ with a simple frequency count maximum likelihood estimate over this annotated treebank. As such, $G_{\\mathrm {layered}}$ is a combination of $G_{\\mathrm {syn}}$ and $G_{\\mathrm {par}}$ , resulting in 24,000 latent states (24 syntactic x 1000 semantic).",
"Consider an example where we want to generate paraphrases for the question what day is nochebuena. Parsing it with $G_{\\mathrm {layered}}$ will lead to the leftmost hybrid structure as shown in Figure 2 . The assignment of the first latent states for each nonterminals ensures that we retrieve the correct syntactic representation of the sentence. Here, however, we are more interested in the second latent states assigned to each nonterminals which capture the paraphrase information of the sentence at various levels. For example, we have a unary lexical rule (NN-*-142 day) indicating that we observe day with NN of the paraphrase type 142. We could use this information to extract unary rules of the form (NN-*-142 $w$ ) in the treebank that will generate words $w$ which are paraphrases to day. Similarly, any node WHNP-*-291 in the treebank will generate paraphrases for what day, SBARQ-*-403, for what day is nochebuena. This way we will be able to generate paraphrases when is nochebuena and when is nochebuena celebrated as they both have SBARQ-*-403 as their roots.",
"To generate a word lattice $W_q$ for a given question $q$ , we parse $q$ with the bi-layered grammar $G_{\\mathrm {layered}}$ . For each rule of the form $X$ - $m_1$ - $m_2 \\rightarrow w$ in the bi-layered tree with $X \\in {\\cal P}$ , $m_1 \\in \\lbrace 1, \\ldots , 24 \\rbrace $ , $m_2 \\in \\lbrace 1, \\ldots , 1000 \\rbrace $ and $q$0 a word in $q$1 , we extract rules of the form $q$2 - $q$3 - $q$4 from $q$5 such that $q$6 . For each such $q$7 , we add a path $q$8 parallel to $q$9 in the word lattice."
],
[
"Our sampling algorithm overgenerates paraphrases which are incorrect. To improve its precision, we build a binary classifier to filter the generated paraphrases. We randomly select 100 distinct questions from the Paralex corpus and generate paraphrases using our generation algorithm with various lattice settings. We randomly select 1,000 pairs of input-sampled sentences and manually annotate them as “correct” or “incorrect” paraphrases. We train our classifier on this manually created training data. We follow madnani2012, who used MT metrics for paraphrase identification, and experiment with 8 MT metrics as features for our binary classifier. In addition, we experiment with a binary feature which checks if the sampled paraphrase preserves named entities from the input sentence. We use WEKA BIBREF32 to replicate the classifier of madnani2012 with our new feature. We tune the feature set for our classifier on the development data."
],
[
"In this section we describe how the paraphrase algorithm is used for converting natural language to Freebase queries. Following reddylargescale2014, we formalize the semantic parsing problem as a graph matching problem, i.e., finding the Freebase subgraph (grounded graph) that is isomorphic to the input question semantic structure (ungrounded graph).",
"This formulation has a major limitation that can be alleviated by using our paraphrase generation algorithm. Consider the question What language do people in Czech Republic speak?. The ungrounded graph corresponding to this question is shown in Figure 3 . The Freebase grounded graph which results in correct answer is shown in Figure 3 . Note that these two graphs are non-isomorphic making it impossible to derive the correct grounding from the ungrounded graph. In fact, at least 15% of the examples in our development set fail to satisfy isomorphic assumption. In order to address this problem, we use paraphrases of the input question to generate additional ungrounded graphs, with the aim that one of those paraphrases will have a structure isomorphic to the correct grounding. Figure 3 and Figure 3 are two such paraphrases which can be converted to Figure 3 as described in sec:groundedGraphs.",
"For a given input question, first we build ungrounded graphs from its paraphrases. We convert these graphs to Freebase graphs. To learn this mapping, we rely on manually assembled question-answer pairs. For each training question, we first find the set of oracle grounded graphs—Freebase subgraphs which when executed yield the correct answer—derivable from the question's ungrounded graphs. These oracle graphs are then used to train a structured perceptron model. These steps are discussed in detail below."
],
[
"We use GraphParser BIBREF7 to convert paraphrases to ungrounded graphs. This conversion involves three steps: 1) parsing the paraphrase using a CCG parser to extract syntactic derivations BIBREF33 , 2) extracting logical forms from the CCG derivations BIBREF34 , and 3) converting the logical forms to an ungrounded graph. The ungrounded graph for the example question and its paraphrases are shown in Figure 3 , Figure 3 and Figure 3 , respectively."
],
[
"The ungrounded graphs are grounded to Freebase subgraphs by mapping entity nodes, entity-entity edges and entity type nodes in the ungrounded graph to Freebase entities, relations and types, respectively. For example, the graph in Figure 3 can be converted to a Freebase graph in Figure 3 by replacing the entity node Czech Republic with the Freebase entity CzechRepublic, the edge (speak.arg $_2$ , speak.in) between $x$ and Czech Republic with the Freebase relation (location.country.official_language.2, location.country.official_language.1), the type node language with the Freebase type language.human_language, and the target node remains intact. The rest of the nodes, edges and types are grounded to null. In a similar fashion, Figure 3 can be grounded to Figure 3 , but not Figure 3 to Figure 3 . If no paraphrase is isomorphic to the target grounded grounded graph, our grounding fails."
],
[
"We use a linear model to map ungrounded graphs to grounded ones. The parameters of the model are learned from question-answer pairs. For example, the question What language do people in Czech Republic speak? paired with its answer $\\lbrace \\textsc {CzechLanguage}\\rbrace $ . In line with most work on question answering against Freebase, we do not rely on annotated logical forms associated with the question for training and treat the mapping of a question to its grounded graph as latent.",
"Let $q$ be a question, let $p$ be a paraphrase, let $u$ be an ungrounded graph for $p$ , and let $g$ be a grounded graph formed by grounding the nodes and edges of $u$ to the knowledge base $\\mathcal {K}$ (throughout we use Freebase as the knowledge base). Following reddylargescale2014, we use beam search to find the highest scoring tuple of paraphrase, ungrounded and grounded graphs $(\\hat{p}, \\hat{u}, \\hat{g})$ under the model $\\theta \\in \\mathbb {R}^n$ : $\n({\\hat{p},\\hat{u},\\hat{g}}) = \\operatornamewithlimits{arg\\,max}_{(p,u,g)} \\theta \\cdot \\Phi (p,u,g,q,\\mathcal {K})\\,,\n$ ",
"where $\\Phi (p, u, g, q, \\mathcal {K}) \\in \\mathbb {R}^n$ denotes the features for the tuple of paraphrase, ungrounded and grounded graphs. The feature function has access to the paraphrase, ungrounded and grounded graphs, the original question, as well as to the content of the knowledge base and the denotation $|g|_\\mathcal {K}$ (the denotation of a grounded graph is defined as the set of entities or attributes reachable at its target node). See sec:details for the features employed. The model parameters are estimated with the averaged structured perceptron BIBREF35 . Given a training question-answer pair $(q,\\mathcal {A})$ , the update is: $\n\\theta ^{t+1} \\leftarrow \\theta ^{t} + \\Phi (p^+, u^+, g^+, q,\n\\mathcal {K}) - \\Phi (\\hat{p}, \\hat{u}, \\hat{g}, q, \\mathcal {K})\\,,\n$ ",
"where $({p^+,u^+,g^+})$ denotes the tuple of gold paraphrase, gold ungrounded and grounded graphs for $q$ . Since we do not have direct access to the gold paraphrase and graphs, we instead rely on the set of oracle tuples, $\\mathcal {O}_{\\mathcal {K}, \\mathcal {A}}(q)$ , as a proxy: $\n(p^{+},u^{+},{g^{+}}) = \\operatornamewithlimits{arg\\,max}_{(p,u,g) \\in \\mathcal {O}_{\\mathcal {K},\\mathcal {A}}(q)} \\theta \\cdot \\Phi ({p,u,g,q,\\mathcal {K}})\\,,\n$ ",
"where $\\mathcal {O}_{\\mathcal {K}, \\mathcal {A}}(q)$ is defined as the set of tuples ( $p$ , $u$ , $g$ ) derivable from the question $q$ , whose denotation $|g|_\\mathcal {K}$ has minimal $F_1$ -loss against the gold answer $\\mathcal {A}$ . We find the oracle graphs for each question a priori by performing beam-search with a very large beam."
],
[
"Below, we give details on the evaluation dataset and baselines used for comparison. We also describe the model features and provide implementation details."
],
[
"We evaluate our approach on the WebQuestions dataset BIBREF5 . WebQuestions consists of 5,810 question-answer pairs where questions represents real Google search queries. We use the standard train/test splits, with 3,778 train and 2,032 test questions. For our development experiments we tune the models on held-out data consisting of 30% training questions, while for final testing we use the complete training data. We use average precision (avg P.), average recall (avg R.) and average F $_1$ (avg F $_1$ ) proposed by berantsemantic2013 as evaluation metrics."
],
[
"We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.",
"We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions."
],
[
"For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem.",
"We use the features from reddylargescale2014. These include edge alignments and stem overlaps between ungrounded and grounded graphs, and contextual features such as word and grounded relation pairs. In addition to these features, we add two new real-valued features – the paraphrase classifier's score and the entity disambiguation lattice score.",
"We use beam search to infer the highest scoring graph pair for a question. The search operates over entity-entity edges and entity type nodes of each ungrounded graph. For an entity-entity edge, there are two operations: ground the edge to a Freebase relation, or skip the edge. Similarly, for an entity type node, there are two operations: ground the node to a Freebase type, or skip the node. We use a beam size of 100 in all our experiments."
],
[
"In this section, we present results from five different systems for our question-answering experiments: original, mt, naive, ppdb and bilayered. First two are baseline systems. Other three systems use paraphrases generated from an L-PCFG grammar. naive uses a word lattice with a single start-to-end path representing the input question itself, ppdb uses a word lattice constructed using the PPDB rules, and bilayered uses bi-layered L-PCFG to build word lattices. Note that naive does not require any parallel resource to train, ppdb requires an external paraphrase database, and bilayered, like mt, needs a parallel corpus with paraphrase pairs. We tune our classifier features and GraphParser features on the development data. We use the best setting from tuning for evaluation on the test data."
],
[
"We described a grammar method to generate paraphrases for questions, and applied it to a question answering system based on semantic parsing. We showed that using paraphrases for a question answering system is a useful way to improve its performance. Our method is rather generic and can be applied to any question answering system."
],
[
"The authors would like to thank Nitin Madnani for his help with the implementation of the paraphrase classifier. We would like to thank our anonymous reviewers for their insightful comments. This research was supported by an EPSRC grant (EP/L02411X/1), the H2020 project SUMMA (under grant agreement 688139), and a Google PhD Fellowship for the second author."
]
],
"section_name": [
"Introduction",
"Paraphrase Generation Using Grammars",
"Paraphrases Generation Algorithm",
"Bi-Layered L-PCFGs",
"Paraphrase Classification",
"Semantic Parsing using Paraphrasing",
"Ungrounded Graphs from Paraphrases",
"Grounded Graphs from Ungrounded Graphs",
"Learning",
"Experimental Setup",
"Evaluation Data and Metric",
"Baselines",
"Implementation Details",
"Results and Discussion",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"208951f0d5f93c878368122d70fd94c337104a5e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"12f2e670e6d94fab6636a8ef24121fc2f2100eeb"
],
"answer": [
{
"evidence": [
"For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem."
],
"extractive_spans": [],
"free_form_answer": "10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans",
"highlighted_evidence": [
"For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"727ec6309fb3d7beb4d8cf4455fe5c4778bb660e"
],
"answer": [
{
"evidence": [
"In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions."
],
"extractive_spans": [
"syntactic information",
"semantic and topical information"
],
"free_form_answer": "",
"highlighted_evidence": [
"We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"32749f613e7b20e5fde56cfe720b1ecddf2646ff"
],
"answer": [
{
"evidence": [
"We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.",
"We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions."
],
"extractive_spans": [
"GraphParser without paraphrases",
"monolingual machine translation based model for paraphrase generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases",
"We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ab1027fb3232572ed0261cb9521d6d9f472e86e2"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate the quality of the paraphrasing model?",
"How many paraphrases are generated per question?",
"What latent variables are modeled in the PCFG?",
"What are the baselines?"
],
"question_id": [
"117aa7811ed60e84d40cd8f9cb3ca78781935a98",
"c359ab8ebef6f60c5a38f5244e8c18d85e92761d",
"ad362365656b0b218ba324ae60701eb25fe664c1",
"423bb905e404e88a168e7e807950e24ca166306c"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semantic parsing",
"semantic parsing",
"semantic parsing",
"semantic parsing"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.",
"Figure 2: Trees used for bi-layered L-PCFG training. The questions what day is nochebuena, when is nochebuena and when is nochebuena celebrated are paraphrases from the Paralex corpus. Each nonterminal is decorated with a syntactic label and two identifiers, e.g., for WP-7-254, WP is the syntactic label assigned by the BLLIP parser, 7 is the syntactic latent state, and 254 is the semantic latent state.",
"Figure 3: Ungrounded graphs for an input question and its paraphrases along with its correct grounded graph. The green squares indicate NL or Freebase entities, the yellow rectangles indicate unary NL predicates or Freebase types, the circles indicate NL or Freebase events, the edge labels indicate binary NL predicates or Freebase relations, and the red diamonds attach to the entity of interest (the answer to the question).",
"Table 1: Oracle statistics and results on the WebQuestions development set.",
"Table 2: Results on WebQuestions test dataset."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"7-Figure3-1.png",
"9-Table1-1.png",
"9-Table2-1.png"
]
} | [
"How many paraphrases are generated per question?"
] | [
[
"1601.06068-Implementation Details-0"
]
] | [
"10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans"
] | 235 |
1709.07916 | Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter | Social media provide a platform for users to express their opinions and share information. Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO), however, collecting and analyzing a large scale conversational public health data set is a challenging research task. The goal of this research is to analyze the characteristics of the general public's opinions in regard to diabetes, diet, exercise and obesity (DDEO) as expressed on Twitter. A multi-component semantic and linguistic framework was developed to collect Twitter data, discover topics of interest about DDEO, and analyze the topics. From the extracted 4.5 million tweets, 8% of tweets discussed diabetes, 23.7% diet, 16.6% exercise, and 51.7% obesity. The strongest correlation among the topics was determined between exercise and obesity. Other notable correlations were: diabetes and obesity, and diet and obesity DDEO terms were also identified as subtopics of each of the DDEO topics. The frequent subtopics discussed along with Diabetes, excluding the DDEO terms themselves, were blood pressure, heart attack, yoga, and Alzheimer. The non-DDEO subtopics for Diet included vegetarian, pregnancy, celebrities, weight loss, religious, and mental health, while subtopics for Exercise included computer games, brain, fitness, and daily plan. Non-DDEO subtopics for Obesity included Alzheimer, cancer, and children. With 2.67 billion social media users in 2016, publicly available data such as Twitter posts can be utilized to support clinical providers, public health experts, and social scientists in better understanding common public opinions in regard to diabetes, diet, exercise, and obesity. | {
"paragraphs": [
[
"The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered as overweight and over 600 million adults considered as obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent affecting 25 percent of the U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 . Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .",
"Obesity can be reduced through modifiable lifestyle behaviors such as diet and exercise BIBREF4 . There are several comorbidities associated with being overweight or obese, such as diabetes BIBREF5 . The prevalence of diabetes in adults has risen globally from 4.7% in 1980 to 8.5% in 2014. Current projections estimate that by 2050, 29 million Americans will be diagnosed with type 2 diabetes, which is a 165% increase from the 11 million diagnosed in 2002 BIBREF6 . Studies show that there are strong relations among diabetes, diet, exercise, and obesity (DDEO) BIBREF7 , BIBREF4 , BIBREF8 , BIBREF9 ; however, the general public's perception of DDEO remains limited to survey-based studies BIBREF10 .",
"The growth of social media has provided a research opportunity to track public behaviors, information, and opinions about common health issues. It is estimated that the number of social media users will increase from 2.34 billion in 2016 to 2.95 billion in 2020 BIBREF11 . Twitter has 316 million users worldwide BIBREF12 providing a unique opportunity to understand users' opinions with respect to the most common health issues BIBREF13 . Publicly available Twitter posts have facilitated data collection and leveraged the research at the intersection of public health and data science; thus, informing the research community of major opinions and topics of interest among the general population BIBREF14 , BIBREF15 , BIBREF16 that cannot otherwise be collected through traditional means of research (e.g., surveys, interviews, focus groups) BIBREF17 , BIBREF18 . Furthermore, analyzing Twitter data can help health organizations such as state health departments and large healthcare systems to provide health advice and track health opinions of their populations and provide effective health advice when needed BIBREF13 .",
"Among computational methods to analyze tweets, computational linguistics is a well-known developed approach to gain insight into a population, track health issues, and discover new knowledge BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Twitter data has been used for a wide range of health and non-health related applications, such as stock market BIBREF23 and election analysis BIBREF24 . Some examples of Twitter data analysis for health-related topics include: flu BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , mental health BIBREF31 , Ebola BIBREF32 , BIBREF33 , Zika BIBREF34 , medication use BIBREF35 , BIBREF36 , BIBREF37 , diabetes BIBREF38 , and weight loss and obesity BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF21 .",
"The previous Twitter studies have dealt with extracting common topics of one health issue discussed by the users to better understand common themes; however, this study utilizes an innovative approach to computationally analyze unstructured health related text data exchanged via Twitter to characterize health opinions regarding four common health issues, including diabetes, diet, exercise and obesity (DDEO) on a population level. This study identifies the characteristics of the most common health opinions with respect to DDEO and discloses public perception of the relationship among diabetes, diet, exercise and obesity. These common public opinions/topics and perceptions can be used by providers and public health agencies to better understand the common opinions of their population denominators in regard to DDEO, and reflect upon those opinions accordingly."
],
[
"Our approach uses semantic and linguistics analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis."
],
[
"This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provides both historic and real-time data collections. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available in the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research."
],
[
"To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .",
"Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .",
"Twitter users can post their opinions or share information about a subject to the public. Identifying the main topics of users' tweets provides an interesting point of reference, but conceptualizing larger subtopics of millions of tweets can reveal valuable insight to users' opinions. The topic discovery component of the study approach uses LDA to find main topics, themes, and opinions in the collected tweets.",
"We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list for removing stop words, that do not have semantic value for analysis (such as “the\"); and, (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used to find the highest log-likelihood, as it is the optimum number of topics BIBREF57 . The highest log-likelihood was determined 425 topics."
],
[
"The topic content analysis component used an objective interpretation approach with a lexicon-based approach to analyze the content of topics. The lexicon-based approach uses dictionaries to disclose the semantic orientation of words in a topic. Linguistic Inquiry and Word Count (LIWC) is a linguistics analysis tool that reveals thoughts, feelings, personality, and motivations in a corpus BIBREF58 , BIBREF59 , BIBREF60 . LIWC has accepted rate of sensitivity, specificity, and English proficiency measures BIBREF61 . LIWC has a health related dictionary that can help to find whether a topic contains words associated with health. In this analysis, we used LIWC to find health related topics."
],
[
"Obesity and Diabetes showed the highest and the lowest number of tweets (51.7% and 8.0%). Diet and Exercise formed 23.7% and 16.6% of the tweets (Table TABREF6 ).",
"Out of all 4.5 million DDEO-related tweets returned by Tweeter's API, the LDA found 425 topics. We used LIWC to filter the detected 425 topics and found 222 health-related topics. Additionally, we labeled topics based on the availability of DDEO words. For example, if a topic had “diet\", we labeled it as a diet-related topic. As expected and driven by the initial Tweeter API's query, common topics were Diabetes, Diet, Exercise, and Obesity (DDEO). (Table TABREF7 ) shows that the highest and the lowest number of topics were related to exercise and diabetes (80 and 21 out of 222). Diet and Obesity had almost similar rates (58 and 63 out of 222).",
"Each of the DDEO topics included several common subtopics including both DDEO and non-DDEO terms discovered by the LDA algorithm (Table TABREF7 ). Common subtopics for “Diabetes\", in order of frequency, included type 2 diabetes, obesity, diet, exercise, blood pressure, heart attack, yoga, and Alzheimer. Common subtopics for “Diet\" included obesity, exercise, weight loss [medicine], celebrities, vegetarian, diabetes, religious diet, pregnancy, and mental health. Frequent subtopics for “Exercise\" included fitness, obesity, daily plan, diet, brain, diabetes, and computer games. And finally, the most common subtopics for “Obesity\" included diet, exercise, children, diabetes, Alzheimer, and cancer (Table TABREF7 ). Table TABREF8 provides illustrative examples for each of the topics and subtopics.",
"Further exploration of the subtopics revealed additional patterns of interest (Tables TABREF7 and TABREF8 ). We found 21 diabetes-related topics with 8 subtopics. While type 2 diabetes was the most frequent of the sub-topics, heart attack, Yoga, and Alzheimer are the least frequent subtopics for diabetes. Diet had a wide variety of emerging themes ranging from celebrity diet (e.g., Beyonce) to religious diet (e.g., Ramadan). Diet was detected in 63 topics with 10 subtopics; obesity, and pregnancy and mental health were the most and the least discussed obesity-related topics, respectively. Exploring the themes for Exercise subtopics revealed subjects such as computer games (e.g., Pokemon-Go) and brain exercises (e.g., memory improvement). Exercise had 7 subtopics with fitness as the most discussed subtopic and computer games as the least discussed subtopic. Finally, Obesity themes showed topics such as Alzheimer (e.g., research studies) and cancer (e.g., breast cancer). Obesity had the lowest diversity of subtopics: six with diet as the most discussed subtopic, and Alzheimer and cancer as the least discussed subtopics.",
"Diabetes subtopics show the relation between diabetes and exercise, diet, and obesity. Subtopics of diabetes revealed that users post about the relationship between diabetes and other diseases such as heart attack (Tables TABREF7 and TABREF8 ). The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature.",
"The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 )."
],
[
"Diabetes, diet, exercise, and obesity are common public health related opinions. Analyzing individual- level opinions by automated algorithmic techniques can be a useful approach to better characterize health opinions of a population. Traditional public health polls and surveys are limited by a small sample size; however, Twitter provides a platform to capture an array of opinions and shared information a expressed in the words of the tweeter. Studies show that Twitter data can be used to discover trending topics, and that there is a strong correlation between Twitter health conversations and Centers for Disease Control and Prevention (CDC) statistics BIBREF62 .",
"This research provides a computational content analysis approach to conduct a deep analysis using a large data set of tweets. Our framework decodes public health opinions in DDEO related tweets, which can be applied to other public health issues. Among health-related subtopics, there are a wide range of topics from diseases to personal experiences such as participating in religious activities or vegetarian diets.",
"Diabetes subtopics showed the relationship between diabetes and exercise, diet, and obesity (Tables TABREF7 and TABREF8 ). Subtopics of diabetes revealed that users posted about the relation between diabetes and other diseases such as heart attack. The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic that was also expressed by users and scientifically documented in the literature. The inclusion of Yoga in posts about diabetes is interesting. While yoga would certainly be labeled as a form of fitness, when considering the post, it was insightful to see discussion on the mental health benefits that yoga offers to those living with diabetes BIBREF63 .",
"Diet had the highest number of subtopics. For example, religious diet activities such as fasting during the month of Ramadan for Muslims incorporated two subtopics categorized under the diet topic (Tables TABREF7 and TABREF8 ). This information has implications for the type of diets that are being practiced in the religious community, but may help inform religious scholars who focus on health and psychological conditions during fasting. Other religions such as Judaism, Christianity, and Taoism have periods of fasting that were not captured in our data collection, which may have been due to lack of posts or the timeframe in which we collected data. The diet plans of celebrities were also considered influential to explaining and informing diet opinions of Twitter users BIBREF64 .",
"Exercise themes show the Twitter users' association of exercise with “brain\" benefits such as increased memory and cognitive performance (Tables TABREF7 and TABREF8 ) BIBREF65 . The topics also confirm that exercising is associated with controlling diabetes and assisting with meal planning BIBREF66 , BIBREF9 , and obesity BIBREF67 . Additionally, we found the Twitter users mentioned exercise topics about the use of computer games that assist with exercising. The recent mobile gaming phenomenon Pokeman-Go game BIBREF68 was highly associated with the exercise topic. Pokemon-Go allows users to operate in a virtual environment while simultaneously functioning in the real word. Capturing Pokemons, battling characters, and finding physical locations for meeting other users required physically activity to reach predefined locations. These themes reflect on the potential of augmented reality in increasing patients' physical activity levels BIBREF69 .",
"Obesity had the lowest number of subtopics in our study. Three of the subtopics were related to other diseases such as diabetes (Tables TABREF7 and TABREF8 ). The scholarly literature has well documented the possible linkages between obesity and chronic diseases such as diabetes BIBREF1 as supported by the study results. The topic of children is another prominent subtopic associated with obesity. There has been an increasing number of opinions in regard to child obesity and national health campaigns that have been developed to encourage physical activity among children BIBREF70 . Alzheimer was also identified as a topic under obesity. Although considered a perplexing finding, recent studies have been conducted to identify possible correlation between obesity and Alzheimer's disease BIBREF71 , BIBREF72 , BIBREF73 . Indeed, Twitter users have expressed opinions about the study of Alzheimer's disease and the linkage between these two topics.",
"This paper addresses a need for clinical providers, public health experts, and social scientists to utilize a large conversational dataset to collect and utilize population level opinions and information needs. Although our framework is applied to Twitter, the applications from this study can be used in patient communication devices monitored by physicians or weight management interventions with social media accounts, and support large scale population-wide initiatives to promote healthy behaviors and preventative measures for diabetes, diet, exercise, and obesity.",
"This research has some limitations. First, our DDEO analysis does not take geographical location of the Twitter users into consideration and thus does not reveal if certain geographical differences exists. Second, we used a limited number of queries to select the initial pool of tweets, thus perhaps missing tweets that may have been relevant to DDEO but have used unusual terms referenced. Third, our analysis only included tweets generated in one month; however, as our previous work has demonstrated BIBREF42 , public opinion can change during a year. Additionally, we did not track individuals across time to detect changes in common themes discussed. Our future research plans includes introducing a dynamic framework to collect and analyze DDEO related tweets during extended time periods (multiple months) and incorporating spatial analysis of DDEO-related tweets."
],
[
"This study represents the first step in developing routine processes to collect, analyze, and interpret DDEO-related posts to social media around health-related topics and presents a transdisciplinary approach to analyzing public discussions around health topics. With 2.34 billion social media users in 2016, the ability to collect and synthesize social media data will continue to grow. Developing methods to make this process more streamlined and robust will allow for more rapid identification of public health trends in real time.",
"Note: Amir Karami will handle correspondence at all stages of refereeing and publication."
],
[
"The authors state that they have no conflict of interest."
],
[
"This research was partially supported by the first author's startup research funding provided by the University of South Carolina, School of Library and Information Science. We thank Jill Chappell-Fail and Jeff Salter at the University of South Carolina College of Information and Communications for assistance with technical support.",
"References"
]
],
"section_name": [
"Introduction",
"Methods",
"Data Collection",
"Topic Discovery",
"Topic Content Analysis",
"Results",
"Discussion",
"Conclusion",
"Conflict of interest",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"13493df9ec75ae877c9904e23729ff119814671f"
],
"answer": [
{
"evidence": [
"This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provides both historic and real-time data collections. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available in the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ea7f28bf7cf3afc36dfd4eade6a0235621cd2869"
],
"answer": [
{
"evidence": [
"The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).",
"FLOAT SELECTED: Figure 2: DDEO Correlation P-Value"
],
"extractive_spans": [],
"free_form_answer": "weak correlation with p-value of 0.08",
"highlighted_evidence": [
"The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics.",
"Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ).",
"FLOAT SELECTED: Figure 2: DDEO Correlation P-Value"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"33c66527e46da56cb4033d4a47173f9aa136265d"
],
"answer": [
{
"evidence": [
"To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .",
"Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .",
"We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list for removing stop words, that do not have semantic value for analysis (such as “the\"); and, (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used to find the highest log-likelihood, as it is the optimum number of topics BIBREF57 . The highest log-likelihood was determined 425 topics."
],
"extractive_spans": [],
"free_form_answer": "using topic modeling model Latent Dirichlet Allocation (LDA)",
"highlighted_evidence": [
"To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 .",
"Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 .",
"We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English data?",
"How strong was the correlation between exercise and diabetes?",
"How were topics of interest about DDEO identified?"
],
"question_id": [
"e5ae8ac51946db7475bb20b96e0a22083b366a6d",
"18288c7b0f8bd7839ae92f9c293e7fb85c7e146a",
"b5e883b15e63029eb07d6ff42df703a64613a18a"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A Sample of Tweets",
"Table 1: DDEO Queries",
"Table 2: DDEO Topics and Subtopics - Diabetes, Diet, Exercise, and Obesity are shown with italic and underline styles in subtopics",
"Figure 2: DDEO Correlation P-Value",
"Table 3: Topics Examples"
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure2-1.png",
"7-Table3-1.png"
]
} | [
"How strong was the correlation between exercise and diabetes?",
"How were topics of interest about DDEO identified?"
] | [
[
"1709.07916-6-Figure2-1.png",
"1709.07916-Results-5"
],
[
"1709.07916-Topic Discovery-1",
"1709.07916-Topic Discovery-3",
"1709.07916-Topic Discovery-0"
]
] | [
"weak correlation with p-value of 0.08",
"using topic modeling model Latent Dirichlet Allocation (LDA)"
] | 236 |
1909.00154 | Rethinking travel behavior modeling representations through embeddings | This paper introduces the concept of travel behavior embeddings, a method for re-representing discrete variables that are typically used in travel demand modeling, such as mode, trip purpose, education level, family type or occupation. This re-representation process essentially maps those variables into a latent space called the \emph{embedding space}. The benefit of this is that such spaces allow for richer nuances than the typical transformations used in categorical variables (e.g. dummy encoding, contrasted encoding, principal components analysis). While the usage of latent variable representations is not new per se in travel demand modeling, the idea presented here brings several innovations: it is an entirely data driven algorithm; it is informative and consistent, since the latent space can be visualized and interpreted based on distances between different categories; it preserves interpretability of coefficients, despite being based on Neural Network principles; and it is transferrable, in that embeddings learned from one dataset can be reused for other ones, as long as travel behavior keeps consistent between the datasets. ::: The idea is strongly inspired on natural language processing techniques, namely the word2vec algorithm. Such algorithm is behind recent developments such as in automatic translation or next word prediction. Our method is demonstrated using a model choice model, and shows improvements of up to 60\% with respect to initial likelihood, and up to 20% with respect to likelihood of the corresponding traditional model (i.e. using dummy variables) in out-of-sample evaluation. We provide a new Python package, called PyTre (PYthon TRavel Embeddings), that others can straightforwardly use to replicate our results or improve their own models. Our experiments are themselves based on an open dataset (swissmetro). | {
"paragraphs": [
[
"Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear efects (e.g. using log). Numeric variables that are not “quantities\" per se, such as age or even geographic coordinates tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied\". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.",
"There are however phenomena that are hard to represent, and modelers end up struggling to find the right representation. For example, influence of social interactions between different persons, hierarchical decision making, autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, personality traits and so on. The point here, is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the “meaning\" of a certain information for the decision making process) and its realisation in practice. And that further research should be done to find new representation paradigms.",
"Historically speaking, the natural language processing (NLP) field has had similar dilemmas for decades, and for a while two general trends were competing: the statistical modeling approaches, and the linguistic theory based approaches. The former relied on simple representations, such as vector frequencies, or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them to a new area, where the two are approaching each other and achieving hitherto results considered extremely hard, such as question answering, translation, next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.",
"Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as “one-hot-encoding\" in machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced ortogonality. The former happens because we assign one new “dummy\" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself. Since one doesn't want to end up with too many categories, we might as well give less options in a survey, or decrease the resolution of a sensor. The problem of enforced ortogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between “student\" and “employed\" is the same as between “student\" and “retired\", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrasted encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is a non-supervised approach. The distance between “student\" and “employed\" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider car ownership versus departure time choice models for example.",
"The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables, and is dependent on the problem at hand. We will focus on mode choice, and test on a well-known dataset, by comparing with both dummy and PCA encoding. All the dataset and code are made openly available, and the reader can follow and generate results him/herself using an iPython notebook included. Our ultimate goal is certainly that the reader reuses our PyTre package for own purposes.",
"This paper presents some results and conclusions, after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here for interest of clarity and replicability. While we show these concepts to be promising and innovative in this paper, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in order of tens of thousands, while our categorical variables rarely go beyond a few dozens. This means that, for example, it becomes clear later that the least number of original categories, the less the benefit of embeddings (in the limit, a binary variable like gender, is useless to do embeddings with), and also that if we do get a significantly large and statistically representative dataset, a dummy variables representation is sufficient. We will quickly see, however, that complexity can grow quick enough to justify an embeddings based method even if without the shockingly better performance observed in NLP applications."
],
[
"We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. We tend to enforce a limited set of treatments such as:",
"Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the “dummies\"). At each input vector $x_n$, with categorical value $v=d$, the value “1\" will be assigned to the corresponding dummy, while “0\" to all others. If $v$ corresponds to the “default\" category, all dummies are “0\".",
"Contrast encoding BIBREF3 - same as dummy encoding, but instead of “1\" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of subtracting the mean of the target variable, for a given category, with a general stastic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).",
"Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.",
"Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.",
"In datasets where behavior heterogeneity is high, and number of observations is significantly smaller than population size, increasing dimensionality by adding a variable per each category is very risky because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here is by considering that, for a dummy variable that is only “1\" for a few observations in the dataset, its coefficient will be “activated\" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.",
"The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy\" space and analyse the individual coefficients, as will be shown in our experiments."
],
[
"The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector, quickly becomes overwhelming. Think for example next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300000 words, and if we have about 5 words before for context, the number of independent variables of the model would become 1.5 million!",
"The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the euclidean distance of semantically related words (e.g. “dog\" and “cat\") in this new space should be smaller than unrelated words (e.g. “dog\" and “optimize\"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.",
"The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.",
"Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).",
"The output layer thus consists simply of a softmax function. In other words, exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an “alternative\".",
"The concept of embeddings is directly associated to the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs; then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot-encoding vector, with one “1\" and the rest with “0\"), each hidden layer neuron just propagates the (single) weight that links to that input neuron. If we have enough data for training this model, we will eventually land on a situation where, for each input word, there is a fixed vector of weights that are directly used in the output (softmax) function, to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called embedding size.",
"Formally, we have a dataset $\\mathcal {D}=\\lbrace x_n, y_n\\rbrace , n=1\\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:",
"where $W$ is the embeddings matrix of size $K\\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\\times K$) for the softmax layer, so $B_c$ is simply the coefficients (row) vector for output class (alternative) $c$, and $\\alpha _c$ is the corresponding intercept. The typical loss function used in such models is called the categorical cross entropy:",
"Where $\\delta _{i}$ is the kronecker delta ($\\delta _{true}=1; \\delta _{false}=0$), and $\\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously, and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.",
"So these so called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping between each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word), therefore it is a supervised process, as opposed for example to PCA. The third and more interesting aspect relates with semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of english words. In such a visualization, data is projected in 2D space by maintaining the same vector-to-vector distances as in the original ($K$ order space). Therefore the X and Y axes have no specific meaning, only distances between every pair of points are relevant.",
"We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of “next words\", are placed closer. Another intriguing consequence is that, since the words are now in the $K$ dimensional, embeddings space, we can also do some linear algebra on them. A well known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of “crowning\" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 show also an alternative interpretation of “man-female\", as well as examples with cities and verb tense.",
"Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values) we only need to follow the well known rules of normal random variables.",
"There are open databases available (e.g. GLoVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (Glove provides several embedding tables, up to embedding size between 100 and 300). In our next word application example, we now talk about models with 500-1500 variables, which is very manageable for our machines today.",
"Summarizing, the general idea of word embeddings is to re-represent a categorical variable into a lower dimensional representation with continuous values . Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction in special events BIBREF10, where we collected event textual descriptions, and used Glove embedding vectors to incorporate such information in a neural network model.",
"Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (Glove, approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, less than 1000 short event descriptions, with at most few hundred words each). Instead of creating ourselves a new embeddings model using the events dataset, we reused the pre-trained GloVe dataset. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, much beyond the limited vocabulary that we obtained in our 1000 short texts. In practice we have used a very small percentage of the english dictionary. When, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well."
],
[
"Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair.",
"Our hypothesis is that, given the limitations of dummy variables that are commonly used and the unsupervised nature of PCA, using instead an embeddings mechanism should improve significantly the quality of our models, both in terms of loglikelihood but also in terms of allowing for lower complexity (i.e. less variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings\" database, incrementally built from travel surveys from around the world. Such database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges for opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business\" trip purpose will be closer to “work\" than “leisure\", in a departure time choice model; “student\" will be closer to “unemployed\" than to “retired\" in a car ownership model)."
],
[
"We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.",
"The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings\", because that is the name of the object in our proposed Python “Travel Embeddings\" package.",
"From an experimental design and application perspective, the approach followed in this paper is the following:",
"Create list of categorical variables to encode (the encoding set)",
"Split dataset into train, development and test sets",
"For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).",
"Encode choice models for train, development and test sets using the learned embeddings",
"Estimate choice model accordingly using its train set",
"Evaluate the new model using the test set",
"Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set."
],
[
"Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.",
"The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right)."
],
[
"The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work from us or others, more elaborate derivations can take advantage of embeddings such as nested, mixed logit or latent class choice models (LCCM), for example.",
"We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space.",
"All experiment code is available as a jupyter notebook in a package we created for this work (to which we called PyTre). For estimating the multinomial logit model (MNL) we used the PyLogit BIBREF11 package."
],
[
"The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.",
"We split the dataset into 3 different parts:",
"Embeddings train set: 60% of the dataset (6373 vectors)",
"Development set: 20% of the dataset (2003 vectors)",
"Test set: 20% of the dataset (2003 vectors)"
],
[
"The PyLogit package BIBREF11 also uses Swissmetro as an example. Therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated with testset. The results are shown in tables TABREF31 and TABREF32. Since we are comparing the models at the test set, the key indicators should be pseudo R-square and log-likelihood. Indicators that consider model complexity (robust r-square and AIC) are less important on the test set in our view because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting if test set performance is considerably inferior to the training set."
]
],
"section_name": [
"Introduction",
"Representing categorical variables",
"The concept of text embeddings",
"Travel behaviour embeddings",
"Travel behaviour embeddings ::: The general idea",
"Travel behaviour embeddings ::: Methodology",
"An experiment with mode choice",
"An experiment with mode choice ::: The Swissmetro dataset",
"An experiment with mode choice ::: Principles for the model specification"
]
} | {
"answers": [
{
"annotation_id": [
"5ac34eb67f1f8386ca9654d0d56e6e970c8f6cde"
],
"answer": [
{
"evidence": [
"The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments."
],
"extractive_spans": [
"Swissmetro dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"e7fa4a9302fccb534138aec8e7fcdff69791ab63"
],
"answer": [
{
"evidence": [
"For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).",
"Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set."
],
"extractive_spans": [],
"free_form_answer": "The embeddings are learned several times using the training set, then the average is taken.",
"highlighted_evidence": [
"For each variable in encoding set, learn the new embeddings using the embeddings train set .",
"Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"135e6e05c3d4c16db9e073bdeb856ed2f91820a2"
],
"answer": [
{
"evidence": [
"Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair."
],
"extractive_spans": [],
"free_form_answer": "The data from collected travel surveys is used to model travel behavior.",
"highlighted_evidence": [
"Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"cefa81dfd716c6568a263ac073777e97fc32f783"
],
"answer": [
{
"evidence": [
"We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space."
],
"extractive_spans": [],
"free_form_answer": "The coefficients are projected back to the dummy variable space.",
"highlighted_evidence": [
"For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What datasets are used for evaluation?",
"How do their train their embeddings?",
"How do they model travel behavior?",
"How do their interpret the coefficients?"
],
"question_id": [
"c45a160d31ca8eddbfea79907ec8e59f543aab86",
"7358a1ce2eae380af423d4feeaa67d2bd23ae9dd",
"1165fb0b400ec1c521c1aef7a4e590f76fee1279",
"f2c5da398e601e53f9f545947f61de5f40ede1ee"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The skip gram architecture [7]",
"Figure 2: Visualization of a subset of words from FastText word embeddings database [8]",
"Figure 3: Some classical examples of embeddings algebra [9]",
"Figure 4: The general idea",
"Figure 5: Travel embeddings model",
"Figure 6: Travel embeddings model with regularization (left); Complete model, combining multiple travel embeddings layers (right).",
"Table 1: Multinomial Logit Model Regression Results - original model",
"Table 2: Multinomial Logit Model Regression coefficients - original model (**= p<0.05)",
"Table 3: New dimensionality (K) of encoding set variables",
"Figure 7: Embeddings model training performance",
"Figure 8: MDS visualizations of embeddings results",
"Figure 9: Switzerland’s cantons",
"Table 4: Testset results for embeddings model",
"Table 5: Multinomial Logit Model Regression Results - embeddings model (* = p<0.1; ** = p<0.05)",
"Table 6: Multinomial Logit Model Regression Results - embeddings model projected into dummy variable space (* = p<0.1; ** = p<0.05)",
"Table 7: Multinomial Logit Model Regression Results for dummy variable model with OD variables",
"Table 8: Multinomial Logit Model Regression Results for dummy variable model without OD variables",
"Table 9: Multinomial Logit Model Regression coefficients for dummy variable model without OD variables",
"Table 10: Results for PCA model",
"Table 11: Multinomial Logit Model Regression Results for PCA model",
"Table 12: Summary of results",
"Figure 10: R-square performance with percentage of “expensive\" survey. Left: light+detailed survey; Right: Big Data+detailed survey Note: Absence of data points means either negative R-squared, or model not possible to estimate (e.g. due to singular matrix)"
],
"file": [
"6-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"10-Figure4-1.png",
"11-Figure5-1.png",
"12-Figure6-1.png",
"13-Table1-1.png",
"13-Table2-1.png",
"14-Table3-1.png",
"15-Figure7-1.png",
"16-Figure8-1.png",
"17-Figure9-1.png",
"17-Table4-1.png",
"18-Table5-1.png",
"19-Table6-1.png",
"21-Table7-1.png",
"21-Table8-1.png",
"22-Table9-1.png",
"23-Table10-1.png",
"24-Table11-1.png",
"24-Table12-1.png",
"25-Figure10-1.png"
]
} | [
"How do their train their embeddings?",
"How do they model travel behavior?",
"How do their interpret the coefficients?"
] | [
[
"1909.00154-Travel behaviour embeddings ::: The general idea-5",
"1909.00154-Travel behaviour embeddings ::: The general idea-9"
],
[
"1909.00154-Travel behaviour embeddings-0"
],
[
"1909.00154-An experiment with mode choice-1"
]
] | [
"The embeddings are learned several times using the training set, then the average is taken.",
"The data from collected travel surveys is used to model travel behavior.",
"The coefficients are projected back to the dummy variable space."
] | 237 |
1908.05434 | Sex Trafficking Detection with Ordinal Regression Neural Networks | Sex trafficking is a global epidemic. Escort websites are a primary vehicle for selling the services of such trafficking victims and thus a major driver of trafficker revenue. Many law enforcement agencies do not have the resources to manually identify leads from the millions of escort ads posted across dozens of public websites. We propose an ordinal regression neural network to identify escort ads that are likely linked to sex trafficking. Our model uses a modified cost function to mitigate inconsistencies in predictions often associated with nonparametric ordinal regression and leverages recent advancements in deep learning to improve prediction accuracy. The proposed method significantly improves on the previous state-of-the-art on Trafficking-10K, an expert-annotated dataset of escort ads. Additionally, because traffickers use acronyms, deliberate typographical errors, and emojis to replace explicit keywords, we demonstrate how to expand the lexicon of trafficking flags through word embeddings and t-SNE. | {
"paragraphs": [
[
"Globally, human trafficking is one of the fastest growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recent estimates suggest that nearly 4 million adults and 1 million children are being victimized globally on any given day; furthermore, it is estimated that 99 percent of victims are female BIBREF1 . Escort websites are an increasingly popular vehicle for selling the services of trafficking victims. According to a recent survivor survey BIBREF2 , 38% of underage trafficking victims who were enslaved prior to 2004 were advertised online, and that number rose to 75% for those enslaved after 2004. Prior to its shutdown in April 2018, the website Backpage was the most frequently used online advertising platform; other popular escort websites include Craigslist, Redbook, SugarDaddy, and Facebook BIBREF2 . Despite the seizure of Backpage, there were nearly 150,000 new online sex advertisements posted per day in the U.S. alone in late 2018 BIBREF3 ; even with many of these new ads being re-posts of existing ads and traffickers often posting multiple ads for the same victims BIBREF2 , this volume is staggering.",
"Because of their ubiquity and public access, escort websites are a rich resource for anti-trafficking operations. However, many law enforcement agencies do not have the resources to sift through the volume of escort ads to identify those coming from potential traffickers. One scalable and efficient solution is to build a statistical model to predict the likelihood of an ad coming from a trafficker using a dataset annotated by anti-trafficking experts. We propose an ordinal regression neural network tailored for text input. This model comprises three components: (i) a Word2Vec model BIBREF4 that maps each word from the text input to a numeric vector, (ii) a gated-feedback recurrent neural network BIBREF5 that sequentially processes the word vectors, and (iii) an ordinal regression layer BIBREF6 that produces a predicted ordinal label. We use a modified cost function to mitigate inconsistencies in predictions associated with nonparametric ordinal regression. We also leverage several regularization techniques for deep neural networks to further improve model performance, such as residual connection BIBREF7 and batch normalization BIBREF8 . We conduct our experiments on Trafficking-10k BIBREF9 , a dataset of escort ads for which anti-trafficking experts assigned each sample one of seven ordered labels ranging from “1: Very Unlikely (to come from traffickers)” to “7: Very Likely”. Our proposed model significantly outperforms previously published models BIBREF9 on Trafficking-10k as well as a variety of baseline ordinal regression models. In addition, we analyze the emojis used in escort ads with Word2Vec and t-SNE BIBREF10 , and we show that the lexicon of trafficking-related emojis can be subsequently expanded.",
"In Section SECREF2 , we discuss related work on human trafficking detection and ordinal regression. In Section SECREF3 , we present our proposed model and detail its components. In Section SECREF4 , we present the experimental results, including the Trafficking-10K benchmark, a qualitative analysis of the predictions on raw data, and the emoji analysis. In Section SECREF5 , we summarize our findings and discuss future work."
],
[
"Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon.",
"Ordinal regression: We briefly review ordinal regression before introducing the proposed methodology. We assume that the training data are INLINEFORM0 , where INLINEFORM1 are the features and INLINEFORM2 is the response; INLINEFORM3 is the set of INLINEFORM4 ordered labels INLINEFORM5 with INLINEFORM6 . Many ordinal regression methods learn a composite map INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 have the interpretation that INLINEFORM10 is a latent “score” which is subsequently discretized into a category by INLINEFORM11 . INLINEFORM12 is often estimated by empirical risk minimization, i.e., by minimizing a loss function INLINEFORM13 averaged over the training data. Standard choices of INLINEFORM14 and INLINEFORM15 are reviewed by J. Rennie & N. Srebro ( BIBREF11 ).",
"Another common approach to ordinal regression, which we adopt in our proposed method, is to transform the label prediction into a series of INLINEFORM0 binary classification sub-problems, wherein the INLINEFORM1 th sub-problem is to predict whether or not the true label exceeds INLINEFORM2 BIBREF12 , BIBREF13 . For example, one might use a series of logistic regression models to estimate the conditional probabilities INLINEFORM3 for each INLINEFORM4 . J. Cheng et al. ( BIBREF6 ) estimated these probabilities jointly using a neural network; this was later extended to image data BIBREF14 as well as text data BIBREF15 , BIBREF16 . However, as acknowledged by J. Cheng et al. ( BIBREF6 ), the estimated probabilities need not respect the ordering INLINEFORM5 for all INLINEFORM6 and INLINEFORM7 . We force our estimator to respect this ordering through a penalty on its violation."
],
[
"Our proposed ordinal regression model consists of the following three components: Word embeddings pre-trained by a Skip-gram model, a gated-feedback recurrent neural network that constructs summary features from sentences, and a multi-labeled logistic regression layer tailored for ordinal regression. See Figure SECREF3 for a schematic. The details of its components and their respective alternatives are discussed below.",
" figure Overview of the ordinal regression neural network for text input. INLINEFORM0 represents a hidden state in a gated-feedback recurrent neural network."
],
[
"Vector representations of words, also known as word embeddings, can be obtained through unsupervised learning on a large text corpus so that certain linguistic regularities and patterns are encoded. Compared to Latent Semantic Analysis BIBREF17 , embedding algorithms using neural networks are particularly good at preserving linear regularities among words in addition to grouping similar words together BIBREF18 . Such embeddings can in turn help other algorithms achieve better performances in various natural language processing tasks BIBREF4 .",
"Unfortunately, the escort ads contain a plethora of emojis, acronyms, and (sometimes deliberate) typographical errors that are not encountered in more standard text data, which suggests that it is likely better to learn word embeddings from scratch on a large collection of escort ads instead of using previously published embeddings BIBREF9 . We use 168,337 ads scraped from Backpage as our training corpus and the Skip-gram model with Negative sampling BIBREF4 as our model."
],
[
"To process entire sentences and paragraphs after mapping the words to embeddings, we need a model to handle sequential data. Recurrent neural networks (RNNs) have recently seen great success at modeling sequential data, especially in natural language processing tasks BIBREF19 . On a high level, an RNN is a neural network that processes a sequence of inputs one at a time, taking the summary of the sequence seen so far from the previous time point as an additional input and producing a summary for the next time point. One of the most widely used variations of RNNs, a Long short-term memory network (LSTM), uses various gates to control the information flow and is able to better preserve long-term dependencies in the running summary compared to a basic RNN BIBREF20 . In our implementation, we use a further refinement of multi-layed LSTMs, Gated-feedback recurrent neural networks (GF-RNNs), which tend to capture dependencies across different timescales more easily BIBREF5 .",
"Regularization techniques for neural networks including Dropout BIBREF21 , Residual connection BIBREF7 , and Batch normalization BIBREF8 are added to GF-RNN for further improvements.",
"After GF-RNN processes an entire escort ad, the average of the hidden states of the last layer becomes the input for the multi-labeled logistic regression layer which we discuss next."
],
[
"As noted previously, the ordinal regression problem can be cast into a series of binary classification problems and thereby utilize the large repository of available classification algorithms BIBREF12 , BIBREF13 , BIBREF14 . One formulation is as follows. Given INLINEFORM0 total ranks, the INLINEFORM1 -th binary classifier is trained to predict the probability that a sample INLINEFORM2 has rank larger than INLINEFORM3 . Then the predicted rank is INLINEFORM4 ",
"In a classification task, the final layer of a deep neural network is typically a softmax layer with dimension equal to the number of classes BIBREF20 . Using the ordinal-regression-to-binary-classifications formulation described above, J. Cheng et al. ( BIBREF6 ) replaced the softmax layer in their neural network with a INLINEFORM0 -dimensional sigmoid layer, where each neuron serves as a binary classifier (see Figure SECREF7 but without the order penalty to be discussed later).",
"With the sigmoid activation function, the output of the INLINEFORM0 th neuron can be viewed as the predicted probability that the sample has rank greater than INLINEFORM5 . Alternatively, the entire sigmoid layer can be viewed as performing multi-labeled logistic regression, where the INLINEFORM6 th label is the indicator of the sample's rank being greater than INLINEFORM7 . The training data are thus re-formatted accordingly so that response variable for a sample with rank INLINEFORM8 becomes INLINEFORM9 k-1 INLINEFORM10 Y Y INLINEFORM11 Y - Y INLINEFORM12 J. Cheng et al.'s ( BIBREF6 ) final layer was preceded by a simple feed-forward network. In our case, word embeddings and GF-RNN allow us to construct a feature vector of fixed length from text input, so we can simply attach the multi-labeled logistic regression layer to the output of GF-RNN to complete an ordinal regression neural network for text input.",
"The violation of the monotonicity in the estimated probabilities (e.g., INLINEFORM0 for some INLINEFORM1 and INLINEFORM2 ) has remained an open issue since the original ordinal regression neural network proposal of J. Cheng et al ( BIBREF6 ). This is perhaps owed in part to the belief that correcting this issue would significantly increase training complexity BIBREF14 . We propose an effective and computationally efficient solution to avoid the conflicting predictions as follows: penalize such conflicts in the training phase by adding INLINEFORM3 ",
"to the loss function for a sample INLINEFORM0 , where INLINEFORM1 is a penalty parameter (Figure SECREF7 ). For sufficiently large INLINEFORM2 the estimated probabilities will respect the monotonicity condition; respecting this condition improves the interpretability of the predictions, which is vital in applications like the one we consider here as stakeholders are given the estimated probabilities. We also hypothesize that the order penalty may serve as a regularizer to improve each binary classifier (see the ablation test in Section SECREF15 ).",
" figure Ordinal regression layer with order penalty.",
"All three components of our model (word embeddings, GF-RNN, and multi-labeled logistic regression layer) can be trained jointly, with word embeddings optionally held fixed or given a smaller learning rate for fine-tuning. The hyperparameters for all components are given in the Appendix. They are selected according to either literature or grid-search."
],
[
"We first describe the datasets we use to train and evaluate our models. Then we present a detailed comparison of our proposed model with commonly used ordinal regression models as well as the previous state-of-the-art classification model by E. Tong et al. ( BIBREF9 ). To assess the effect of each component in our model, we perform an ablation test where the components are swapped by their more standard alternatives one at a time. Next, we perform a qualitative analysis on the model predictions on the raw data, which are scraped from a different escort website than the one that provides the labeled training data. Finally, we conduct an emoji analysis using the word embeddings trained on raw escort ads."
],
[
"We use raw texts scraped from Backpage and TNABoard to pre-train the word embeddings, and use the same labeled texts E. Tong et al. ( BIBREF9 ) used to conduct model comparisons. The raw text dataset consists of 44,105 ads from TNABoard and 124,220 ads from Backpage. Data cleaning/preprocessing includes joining the title and the body of an ad; adding white spaces around every emoji so that it can be tokenized properly; stripping tabs, line breaks, punctuations, and extra white spaces; removing phone numbers; and converting all letters to lower case. We have ensured that the raw dataset has no overlap with the labeled dataset to avoid bias in test accuracy. While it is possible to scrape more raw data, we did not observe significant improvements in model performances when the size of raw data increased from INLINEFORM0 70,000 to INLINEFORM1 170,000, hence we assume that the current raw dataset is sufficiently large.",
"The labeled dataset is called Trafficking-10k. It consists of 12,350 ads from Backpage labeled by experts in human trafficking detection BIBREF9 . Each label is one of seven ordered levels of likelihood that the corresponding ad comes from a human trafficker. Descriptions and sample proportions of the labels are in Table TABREF11 . The original Trafficking-10K includes both texts and images, but as mentioned in Section SECREF1 , only the texts are used in our case. We apply the same preprocessing to Trafficking-10k as we do to raw data."
],
[
"We compare our proposed ordinal regression neural network (ORNN) to Immediate-Threshold ordinal logistic regression (IT) BIBREF11 , All-Threshold ordinal logistic regression (AT) BIBREF11 , Least Absolute Deviation (LAD) BIBREF22 , BIBREF23 , and multi-class logistic regression (MC) which ignores the ordering. The primary evaluation metrics are Mean Absolute Error (MAE) and macro-averaged Mean Absolute Error ( INLINEFORM0 ) BIBREF24 . To compare our model with the previous state-of-the-art classification model for escort ads, the Human Trafficking Deep Network (HTDN) BIBREF9 , we also polarize the true and predicted labels into two classes, “1-4: Unlikely” and “5-7: Likely”; then we compute the binary classification accuracy (Acc.) as well as the weighted binary classification accuracy (Wt. Acc.) given by INLINEFORM1 ",
"Note that for applications in human trafficking detection, MAE and Acc. are of primary interest. Whereas for a more general comparison among the models, the class imbalance robust metrics, INLINEFORM0 and Wt. Acc., might be more suitable. Bootstrapping or increasing the weight of samples in smaller classes can improve INLINEFORM1 and Wt. Acc. at the cost of MAE and Acc..",
"The text data need to be vectorized before they can be fed into the baseline models (whereas vectorization is built into ORNN). The standard practice is to tokenize the texts using n-grams and then create weighted term frequency vectors using the term frequency (TF)-inverse document frequency (IDF) scheme BIBREF25 , BIBREF26 . The specific variation we use is the recommended unigram + sublinear TF + smooth IDF BIBREF26 , BIBREF27 . Dimension reduction techniques such as Latent Semantic Analysis BIBREF17 can be optionally applied to the frequency vectors, but B. Schuller et al. ( BIBREF28 ) concluded from their experiments that dimension reduction on frequency vectors actually hurts model performance, which our preliminary experiments agree with.",
"All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.",
"We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss."
],
[
"To ensure that we do not unnecessarily complicate our ORNN model, and to assess the impact of each component on the final model performance, we perform an ablation test. Using the same CV and evaluation metrics, we make the following replacements separately and re-evaluate the model: 1. Replace word embeddings pre-trained from skip-gram model with randomly initialized word embeddings; 2. replace gated-feedback recurrent neural network with long short-term memory network (LSTM); 3. disable batch normalization; 4. disable residual connection; 5. replace the multi-labeled logistic regression layer with a softmax layer (i.e., let the model perform classification, treating the ordinal response variable as a categorical variable with INLINEFORM0 classes); 6. replace the multi-labeled logistic regression layer with a 1-dimensional linear layer (i.e., let the model perform regression, treating the ordinal response variable as a continuous variable) and round the prediction to the nearest integer during testing; 7. set the order penalty to 0. The results are shown in Table TABREF16 .",
"The proposed ORNN once again has all the best metrics except for Wt. Acc. which is the 2nd best. This suggests that each component indeed makes a contribution. Note that if we disregard the ordinal labels and perform classification or regression, MAE falls off by a large margin. Setting order penalty to 0 does not deteriorate the performance by much, however, the percent of conflicting binary predictions (see Section SECREF7 ) rises from 1.4% to 5.2%. So adding an order penalty helps produce more interpretable results."
],
[
"To qualitatively evaluate how well our model predicts on raw data and observe potential patterns in the flagged samples, we obtain predictions on the 44,105 unlabelled ads from TNABoard with the ORNN model trained on Trafficking-10k, then we examine the samples with high predicted likelihood to come from traffickers. Below are the top three samples that the model considers likely:",
"[itemsep=0pt]",
"“amazing reviewed crystal only here till fri book now please check our site for the services the girls provide all updates specials photos rates reviews njfantasygirls ...look who s back amazing reviewed model samantha...brand new spinner jessica special rate today 250 hr 21 5 4 120 34b total gfe total anything goes no limits...”",
"“2 hot toght 18y o spinners 4 amazing providers today specials...”",
"“asian college girl is visiting bellevue service type escort hair color brown eyes brown age 23 height 5 4 body type slim cup size c cup ethnicity asian service type escort i am here for you settle men i am a tiny asian girl who is waiting for a gentlemen...”",
"Some interesting patterns in the samples with high predicted likelihood (here we only showed three) include: mentioning of multiple names or INLINEFORM0 providers in a single ad; possibly intentional typos and abbreviations for the sensitive words such as “tight” INLINEFORM1 “toght” and “18 year old” INLINEFORM2 “18y o”; keywords that indicate traveling of the providers such as “till fri”, “look who s back”, and “visiting”; keywords that hint on the providers potentially being underage such as “18y o”, “college girl”, and “tiny”; and switching between third person and first person narratives."
],
[
"The fight against human traffickers is adversarial and dynamic. Traffickers often avoid using explicit keywords when advertising victims, but instead use acronyms, intentional typos, and emojis BIBREF9 . Law enforcement maintains a lexicon of trafficking flags mapping certain emojis to their potential true meanings (e.g., the cherry emoji can indicate an underaged victim), but compiling such a lexicon manually is expensive, requires frequent updating, and relies on domain expertise that is hard to obtain (e.g., insider information from traffickers or their victims). To make matters worse, traffickers change their dictionaries over time and regularly switch to new emojis to replace certain keywords BIBREF9 . In such a dynamic and adversarial environment, the need for a data-driven approach in updating the existing lexicon is evident.",
"As mentioned in Section SECREF5 , training a skip-gram model on a text corpus can map words (including emojis) used in similar contexts to similar numeric vectors. Besides using the vectors learned from the raw escort ads to train ORNN, we can directly visualize the vectors for the emojis to help identify their relationships, by mapping the vectors to a 2-dimensional space using t-SNE BIBREF10 (Figure FIGREF24 ).",
"We can first empirically assess the quality of the emoji map by noting that similar emojis do seem clustered together: the smileys near the coordinate (2, 3), the flowers near (-6, -1), the heart shapes near (-8, 1), the phones near (-2, 4) and so on. It is worth emphasizing that the skip-gram model learns the vectors of these emojis based on their contexts in escort ads and not their visual representations, so the fact that the visually similar emojis are close to one another in the map suggests that the vectors have been learned as desired.",
"The emoji map can assist anti-trafficking experts in expanding the existing lexicon of trafficking flags. For example, according to the lexicon we obtained from Global Emancipation Network, the cherry emoji and the lollipop emoji are both flags for underaged victims. Near (-3, -4) in the map, right next to these two emojis are the porcelain dolls emoji, the grapes emoji, the strawberry emoji, the candy emoji, the ice cream emojis, and maybe the 18-slash emoji, indicating that they are all used in similar contexts and perhaps should all be flags for underaged victims in the updated lexicon.",
"If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos."
],
[
"Human trafficking is a form of modern day slavery that victimizes millions of people. It has become the norm for sex traffickers to use escort websites to openly advertise their victims. We designed an ordinal regression neural network (ORNN) to predict the likelihood that an escort ad comes from a trafficker, which can drastically narrow down the set of possible leads for law enforcement. Our ORNN achieved the state-of-the-art performance on Trafficking-10K BIBREF9 , outperforming all baseline ordinal regression models as well as improving the classification accuracy over the Human Trafficking Deep Network BIBREF9 . We also conducted an emoji analysis and showed how to use word embeddings learned from raw text data to help expand the lexicon of trafficking flags.",
"Since our experiments, there have been considerable advancements in language representation models, such as BERT BIBREF30 . The new language representation models can be combined with our ordinal regression layer, replacing the skip-gram model and GF-RNN, to potentially further improve our results. However, our contributions of improving the cost function for ordinal regression neural networks, qualitatively analyzing patterns in the predicted samples, and expanding the trafficking lexicon through a data-driven approach are not dependent on a particular choice of language representation model.",
"As for future work in trafficking detection, we can design multi-modal ordinal regression networks that utilize both image and text data. But given the time and resources required to label escort ads, we may explore more unsupervised learning or transfer learning algorithms, such as using object detection BIBREF31 and matching algorithms to match hotel rooms in the images."
],
[
"We thank Cara Jones and Marinus Analytics LLC for sharing the Trafficking-10K dataset. We thank Praveen Bodigutla for his suggestions on Natural Language Processing literature."
],
[
"Word Embeddings: pretraining model type: Skip-gram; speedup method: negative sampling; number of negative samples: 100; noise distribution: unigram distribution raised to 3/4rd; batch size: 16; window size: 5; minimum word count: 5; number of epochs: 50; embedding size: 128; pretraining learning rate: 0.2; fine-tuning learning rate scale: 1.0.",
"GF-RNN: hidden size: 128; dropout: 0.2; number of layers: 3; gradient clipping norm: 0.25; L2 penalty: 0.00001; learning rate decay factor: 2.0; learning rate decay patience: 3; early stop patience: 9; batch size: 200; batch normalization: true; residual connection: true; output layer type: mean-pooling; minimum word count: 5; maximum input length: 120.",
"Multi-labeled logistic regression layer: task weight scheme: uniform; conflict penalty: 0.5."
],
[
"The fight against human trafficking is adversarial, hence the access to the source materials in anti-trafficking research is typically not available to the general public by choice, but granted to researchers and law enforcement individually upon request.",
"Source code:",
"https://gitlab.com/BlazingBlade/TrafficKill",
"Trafficking-10k: Contact",
"[email protected]",
"Trafficking lexicon: Contact",
"[email protected]"
]
],
"section_name": [
"Introduction",
"Related Work",
"Method",
"Word Embeddings",
"Gated-Feedback Recurrent Neural Network",
"Multi-Labeled Logistic Regression Layer",
"Experiments",
"Datasets",
"Comparison with Baselines",
"Ablation Test",
"Qualitative Analysis of Predictions",
"Emoji Analysis",
"Discussion",
"Acknowledgments",
"Hyperparameters of the proposed ordinal regression neural network",
"Access to the source materials"
]
} | {
"answers": [
{
"annotation_id": [
"1384b1e2ddc8d8417896cb3664c4586037474138"
],
"answer": [
{
"evidence": [
"All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.",
"We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted."
],
"extractive_spans": [],
"free_form_answer": "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)",
"highlighted_evidence": [
"We report the mean metrics from the CV in Table TABREF14 .",
"We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7a121e16f4f5def4e5700dfc4d6f588f03ac00a1"
],
"answer": [
{
"evidence": [
"Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"26f9aea7a6585b16f09cf6f41dfbf0a3f9f8db81"
],
"answer": [
{
"evidence": [
"If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos."
],
"extractive_spans": [
"re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones"
],
"free_form_answer": "",
"highlighted_evidence": [
"If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"By how much do they outperform previous state-of-the-art models?",
"Do they use pretrained word embeddings?",
"How is the lexicon of trafficking flags expanded?"
],
"question_id": [
"2d4d0735c50749aa8087d1502ab7499faa2f0dd8",
"43761478c26ad65bec4f0fd511ec3181a100681c",
"01866fe392d9196dda1d0b472290edbd48a99f66"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Overview of the ordinal regression neural network for text input. H represents a hidden state in a gated-feedback recurrent neural network.",
"Figure 2: Ordinal regression layer with order penalty.",
"Table 1: Description and distribution of labels in Trafficking-10K.",
"Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.",
"Table 3: Ablation test. Except for models everything is the same as Table 2.",
"Figure 3: Emoji map produced by applying t-SNE to the emojis’ vectors learned from escort ads using skip-gram model. For visual clarity, only the emojis that appeared most frequently in the escort ads we scraped are shown out of the total 968 emojis that appeared."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure3-1.png"
]
} | [
"By how much do they outperform previous state-of-the-art models?"
] | [
[
"1908.05434-Comparison with Baselines-3",
"1908.05434-7-Table2-1.png",
"1908.05434-Comparison with Baselines-4"
]
] | [
"Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)"
] | 238 |
1909.02480 | FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow | Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length. | {
"paragraphs": [
[
"Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\\mathbf {y} = \\lbrace y_1, \\ldots , y_T\\rbrace $ given an input sequence $\\mathbf {x} = \\lbrace x_1, \\ldots , x_{T^{\\prime }}\\rbrace $ using conditional probabilities $P_\\theta (\\mathbf {y}|\\mathbf {x})$ predicted by neural networks (parameterized by $\\theta $).",
"Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\\theta (\\mathbf {y}|\\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:",
"Each factor, $P_\\theta (y_{t} | y_{<t}, \\mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large output space of outputs $\\mathbf {y}$, and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation of outputs must be done through a linear left-to-right pass through the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.",
"Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\\theta (\\mathbf {y}|\\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:",
"Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\\mathbf {z}$ to model these conditional dependencies:",
"where $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ is the prior distribution over latent $\\mathbf {z}$ and $P_{\\theta }(\\mathbf {y}|\\mathbf {z}, \\mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:",
"BIBREF6 proposed a $\\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\\textbf {y}$.",
"In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models expressive prior distribution $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has potential to introduce more meaningful latent variables $\\mathbf {z}$ in the non-autoregressive generation in Eq. (DISPLAY_FORM5).",
"FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear."
],
[
"As noted above, incorporating expressive latent variables $\\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3."
],
[
"Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\\mathbf {z}$ that we want to model) through a chain of invertible transformations.",
"Formally, a set of latent variables $\\mathbf {\\upsilon } \\in \\Upsilon $ are introduced with a simple prior distribution $p_{\\Upsilon }(\\upsilon )$. We then define a bijection function $f: \\mathcal {Z} \\rightarrow \\Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\\mathbf {z}$:",
"An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\\mathbf {z}\\in \\mathcal {Z}$ by:",
"Here $\\frac{\\partial f_{\\theta }(\\mathbf {z})}{\\partial \\mathbf {z}}$ is the Jacobian matrix of $f_{\\theta }$ at $\\mathbf {z}$.",
"Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\\mathbf {z}$ by calculating the (simple) density of $\\upsilon $ and the Jacobian of the transformation from $\\mathbf {z}$ to $\\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\\theta }$ where both the inverse functions $g_{\\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:",
"where $f = f_1 \\circ f_2 \\circ \\cdots \\circ f_K$ is a flow of $K$ transformations (omitting $\\theta $s for brevity)."
],
[
"In the context of maximal likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:",
"where $D=\\lbrace (\\mathbf {x}^i, \\mathbf {y}^i)\\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\\theta }(\\mathbf {y}| \\mathbf {x})$ after marginalizing out latent variables $\\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\\phi }(\\mathbf {z}|\\mathbf {y}, \\mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\\log P_\\theta (\\mathbf {y}|\\mathbf {z},\\mathbf {x})$ and KL-divergence between the posterior and the prior:",
"Both inference model $\\phi $ and decoder $\\theta $ parameters are optimized according to this objective."
],
[
"We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.",
"At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\\mathbf {z}$ from the current posterior $q_{\\phi } (\\mathbf {z}|\\mathbf {y}, \\mathbf {x})$. Next, we feed $\\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\\theta }(\\mathbf {y}|\\mathbf {z}, \\mathbf {x})$ and $p_{\\theta }(\\mathbf {z}|\\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).",
"At test time, generation is performed by first sampling a latent code $\\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\\mathbf {z}$ and the source encoder outputs to generate the target sequence $\\mathbf {y}$ from $P_{\\theta }(\\mathbf {y}|\\mathbf {z}, \\mathbf {x})$."
],
[
"The source encoder encodes the source sequences into hidden representations, which are used in computing attention when generating latent variables in the posterior network and prior network as well as the cross-attention with decoder. Any standard neural sequence model can be used as its encoder, including RNNs BIBREF0 or Transformers BIBREF3."
],
[
"The latent variables $\\mathbf {z}$ are represented as a sequence of continuous random vectors $\\mathbf {z}=\\lbrace \\mathbf {z}_1, \\ldots , \\mathbf {z}_T\\rbrace $ with the same length as the target sequence $\\mathbf {y}$. Each $\\mathbf {z}_t$ is a $d_{\\mathrm {z}}$-dimensional vector, where $d_{\\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\\phi } (\\mathbf {z}|\\mathbf {y}, \\mathbf {x})$ models each $\\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:",
"where $\\mu _{t}(\\cdot )$ and $\\sigma _{t}(\\cdot )$ are neural networks such as RNNs or Transformers."
],
[
"While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\\mu $ and $\\log \\sigma ^2$ values with zeros. This ensures that the posterior distribution as a simple normal distribution, which we found helps train very deep generative flows more stably."
],
[
"The motivation of introducing the latent variable $\\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\\mathbf {z}$ capture contextual interdependence between tokens in $\\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.",
""
],
[
"As the decoder, we take the latent sequence $\\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive."
],
[
"The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13.) Each step of flow consists three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to calculation of log determinants (details in Appendix SECREF6)."
],
[
"The activation normalization layer (actnorm; BIBREF11) is an alternative for batch normalization BIBREF18, that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:",
"Both $\\mathbf {z}$ and $\\mathbf {z}^{\\prime }$ are tensors of shape $[T\\times d_{\\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\\mathrm {z}}$. The parameters are initialized such that over each feature $\\mathbf {z}_{t}^{\\prime }$ has zero mean and unit variance given an initial mini-batch of data."
],
[
"To incorporate general permutations of variables along the feature dimension to ensure that each dimension can affect every other ones after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data:",
"",
"where $\\mathbf {W}$ is the weight matrix of shape $[d_{\\mathrm {z}} \\times d_{\\mathrm {z}}]$. The log-determinant of this transformation is:",
"",
"The cost of computing $\\mathrm {det}(\\mathbf {W})$ is $O(d_{\\mathrm {z}}^3)$.",
"Unfortunately, $d_{\\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\\mathrm {det}(\\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\\times d_h$ weight matrix $\\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern."
],
[
"To model interdependence across time steps, we use affine coupling layers BIBREF19:",
"where $\\mathrm {s}(\\mathbf {z}_a, \\mathbf {x})$ and $\\mathrm {b}(\\mathbf {z}_a, \\mathbf {x})$ are outputs of two neural networks with $\\mathbf {z}_a$ and $\\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\\mathrm {s}(\\cdot )$ and $\\mathrm {b}(\\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\\mathbf {z}_a$, followed by multi-head inter-attention over $\\mathbf {x}$, followed by a position-wise feed-forward network. The input $\\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.",
"As in BIBREF19, the $\\mathrm {split}()$ function splits $\\mathbf {z}$ the input tensor into two halves, while the $\\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits perform on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\\mathbf {z}_a$ and $\\mathbf {z}_b$ to increase the flexibility of the split function. Different types of affine coupling layers alternate in the flow, similar to the linear layers."
],
[
"We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting the tensor with shape $[T \\times \\frac{d}{2}]$. Then the squeezing operation transforms the $T \\times \\frac{d}{2}$ tensor into an $\\frac{T}{2} \\times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture."
],
[
"In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.",
""
],
[
"At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.",
""
],
[
"Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\\mathbf {z}$:",
"where identifying $\\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6)."
],
[
"A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\\times r$ candidates."
],
[
"The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:",
"IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45."
],
[
"Different from the architecture proposed in BIBREF9, the architecture of FlowSeq is not using any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that the FlowSeq remains non-autoregressive even if we use an RNN in the architecture because RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models."
],
[
"We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 for WMT dataset and 60 for IWSLT, respectively."
],
[
"We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For WMT datasets, the encoder consists of 6 layers, and the decoder and posterior are composed of 4 layers, and 8 attention heads. and for IWSLT, the encoder has 5 layers, and decoder and posterior have 3 layers, and 4 attention heads. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7."
],
[
"Parameter optimization is performed with the Adam optimizer BIBREF24 with $\\beta =(0.9, 0.999)$ and $\\epsilon =1e-6$. Each mini-batch consist of 2048 sentences. The learning rate is initialized to $5e-4$, and exponentially decays with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all the FlowSeq models, we apply $0.1$ label smoothing and averaged the 5 best checkpoints to create the final model.",
"At the beginning of training, the posterior network is randomly initialized, producing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\\mathrm {KL}$ term in ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. Then the $\\mathrm {KL}$ weight linearly increases to one for another 10,000 updates, which we found essential to accelerate training and achieve stable performance."
],
[
"Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45."
],
[
"We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.",
"Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block are results using knowledge distillation. Without using knowledge distillation, FlowSeq base model achieves significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. It demonstrates the effectiveness of FlowSeq on modeling the complex interdependence in target languages.",
"Towards the effect of knowledge distillation, we can mainly obtain two observations: i) Similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq. ii) Compared to previous models, the benefit of knowledge distillation on FlowSeq is less significant, yielding less than 3 BLEU improvement on WMT2014 DE-EN corpus, and even no improvement on WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely much on knowledge distillation to alleviate the multi-modality problem.",
"Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work."
],
[
"In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU."
],
[
"First, we investigate how different decoding batch size can affect the decoding speed. We vary the decoding batch size within $\\lbrace 1, 4, 8, 32, 64, 128\\rbrace $. Figure. FIGREF44 shows that for both FlowSeq and Transformer decoding is faster when using a larger batch size. However, FlowSeq has much larger gains in the decoding speed w.r.t. the increase in batch size, gaining a speed up of 594% of base model and 403% of large model when using a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more friendly to batching while the Transformer model with beam search at test time is less efficient in benefiting from batching."
],
[
"Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq."
],
[
"In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used."
],
[
"Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses. A lower pairwise-BLEU score implies a more diverse hypothesis set. And a higher BLEU score implies a better translation quality. We experiment on a subset of test set of WMT14-ENDE with ten references each sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses each sentence) to analyze how well the generation outputs of FlowSeq are in terms of diversity and quality. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, by increasing the sampling temperature it provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in the Appendix SECREF9 and SECREF10.",
""
],
[
"We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation by using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to, theoretically and empirically, investigate the latent space in FlowSeq, hence providing deep insights of the model, even enhancing controllable text generation.",
""
],
[
"This work was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program and grant HR0011-15-C-0114 funded under the LORELEI program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors thank Amazon for their gift of AWS cloud credits and anonymous reviewers for their helpful suggestions.",
"Appendix: FlowSeq"
],
[
"Log-determinant:"
],
[
"Log-determinant:",
"where $h$ is the number of heads."
],
[
"Log-determinant:"
],
[
"In Fig. FIGREF57, we plot the train and dev loss together with dev BLEU scores for the first 50 epochs. We can see that the reconstruction loss is increasing at the initial stage of training, then start to decrease when training with full KL loss. In addition, we observed that FlowSeq does not suffer the KL collapse problem BIBREF30, BIBREF31. This is because the decoder of FlowSeq is non-autogressive, with latent variable $\\mathbf {z}$ as the only input."
],
[
"In Tab. TABREF58, we present randomly picked translation outputs from the test set of WMT14-DEEN. For each German input sentence, we pick three hypotheses from 30 samples. We have the following observations: First, in most cases, it can accurately express the meaning of the source sentence, sometimes in a different way from the reference sentence, which cannot be precisely reflected by the BLEU score. Second, by controlling the sampling hyper-parameters such as the length candidates $l$, the sampling temperature $\\tau $ and the number of samples $r$ under each length, FlowSeq is able to generate diverse translations expressing the same meaning. Third, repetition and broken translations also exist in some cases due to the lack of language model dependencies in the decoder."
],
[
"Table TABREF59 shows the detailed results of translation deversity."
]
],
"section_name": [
"Introduction",
"Background",
"Background ::: Flow-based Generative Models",
"Background ::: Variational Inference and Training",
"FlowSeq",
"FlowSeq ::: Source Encoder",
"FlowSeq ::: Posterior ::: Generation of Latent Variables.",
"FlowSeq ::: Posterior ::: Zero initialization.",
"FlowSeq ::: Posterior ::: Token Dropout.",
"FlowSeq ::: Decoder",
"FlowSeq ::: Flow Architecture for Prior",
"FlowSeq ::: Flow Architecture for Prior ::: Actnorm.",
"FlowSeq ::: Flow Architecture for Prior ::: Invertible Multi-head Linear Layers.",
"FlowSeq ::: Flow Architecture for Prior ::: Affine Coupling Layers.",
"FlowSeq ::: Flow Architecture for Prior ::: Multi-scale Architecture.",
"FlowSeq ::: Predicting Target Sequence Length",
"FlowSeq ::: Decoding Process",
"FlowSeq ::: Decoding Process ::: Argmax Decoding.",
"FlowSeq ::: Decoding Process ::: Noisy Parallel Decoding (NPD).",
"FlowSeq ::: Decoding Process ::: Importance Weighted Decoding (IWD)",
"FlowSeq ::: Discussion",
"Experiments ::: Experimental Setups ::: Translation Datasets",
"Experiments ::: Experimental Setups ::: Modules and Hyperparameters",
"Experiments ::: Experimental Setups ::: Optimization",
"Experiments ::: Experimental Setups ::: Knowledge Distillation",
"Experiments ::: Main Results",
"Experiments ::: Analysis on Decoding Speed",
"Experiments ::: Analysis on Decoding Speed ::: How does batch size affect the decoding speed?",
"Experiments ::: Analysis on Decoding Speed ::: How does sentence length affect the decoding speed?",
"Experiments ::: Analysis of Rescoring Candidates",
"Experiments ::: Analysis of Translation Diversity",
"Conclusion",
"Acknowledgments",
"Flow Layers ::: ActNorm",
"Flow Layers ::: Invertible Linear",
"Flow Layers ::: Affine Coupling",
"Analysis of training dynamics",
"Analysis of Translation Results",
"Results of Translation Diversity"
]
} | {
"answers": [
{
"annotation_id": [
"e452412e9567ff9c42bc5c5df5aa2294ce83ef7a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6438cbf42d18946a235a5140bfe434a96e788572"
],
"answer": [
{
"evidence": [
"Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work.",
"FLOAT SELECTED: Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l × r is the total number of candidates for rescoring."
],
"extractive_spans": [],
"free_form_answer": "Difference is around 1 BLEU score lower on average than state of the art methods.",
"highlighted_evidence": [
"Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring.",
"FLOAT SELECTED: Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l × r is the total number of candidates for rescoring."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"14b4ca92daf3064f129800c1500a3de17129d73a"
],
"answer": [
{
"evidence": [
"We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8."
],
"extractive_spans": [
"NAT w/ Fertility",
"NAT-IR",
"NAT-REG",
"LV NAR",
"CTC Loss",
"CMLM"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dd4d47430c50b42e096f62ab94e8ba98175a1935"
],
"answer": [
{
"evidence": [
"FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear."
],
"extractive_spans": [
"WMT2014, WMT2016 and IWSLT-2014"
],
"free_form_answer": "",
"highlighted_evidence": [
" Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Does this model train faster than state of the art models?",
"What is the performance difference between proposed method and state-of-the-arts on these datasets?",
"What non autoregressive NMT models are used for comparison?",
"What are three neural machine translation (NMT) benchmark datasets used for evaluation?"
],
"question_id": [
"b14f13f2a3a316e5a5de9e707e1e6ed55e235f6f",
"ba6422e22297c7eb0baa381225a2f146b9621791",
"65e72ad72a9cbfc379f126b10b0ce80cfe44579b",
"cf8edc6e8c4d578e2bd9965579f0ee81f4bf35a9"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: (a) Autoregressive (b) non-autoregressive and (c) our proposed sequence generation models. x is the source, y is the target, and z are latent variables.",
"Figure 2: Neural architecture of FlowSeq, including the encoder, the decoder and the posterior networks, together with the multi-scale architecture of the prior flow. The architecture of each flow step is in Figure 3.",
"Figure 3: (a) The architecture of one step of our flow. (b) The visualization of three split pattern for coupling layers, where the red color denotes za and the blue color denotes zvb. (c) The attention-based architecture of the NN function in coupling layers.",
"Table 1: BLEU scores on three MT benchmark datasets for FlowSeq with argmax decoding and baselines with purely non-autoregressive decoding method. The first and second block are results of models trained w/w.o. knowledge distillation, respectively.",
"Table 2: BLEU scores on two WMT datasets of models using advanced decoding methods. The first block are Transformer-base (Vaswani et al., 2017). The second and the third block are results of models trained w/w.o. knowledge distillation, respectively. n = l × r is the total number of candidates for rescoring.",
"Figure 4: The decoding speed of Transformer (batched, beam size 5) and FlowSeq on WMT14 EN-DE test set (a) w.r.t different batch sizes (b) bucketed by different target sentence lengths (batch size 32).",
"Figure 5: Impact of sampling hyperparameters on the rescoring BLEU on the dev set of WMT14 DE-EN. Experiments are performed with FlowSeq-base trained with distillation data. l is the number of length candidates. r is the number of samples for each length.",
"Figure 6: Comparisons of FlowSeq with human translations, beam search and sampling results of Transformer-base, and mixture-of-experts model (Hard MoE (Shen et al., 2019)) on the averaged leave-one-out BLEU score v.s pairwise-BLEU in descending order.",
"Table 3: Comparison of model size in our experiments.",
"Figure 7: Training dynamics.",
"Table 4: Examples of translation outputs from FlowSeq-base with sampling hyperparameters l = 3, r = 10, τ = 0.4 on WMT14-DEEN.",
"Table 5: Translation diversity results of FlowSeq-large model on WMT14 EN-DE with knowledge distillation."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Figure4-1.png",
"9-Figure5-1.png",
"9-Figure6-1.png",
"12-Table3-1.png",
"13-Figure7-1.png",
"14-Table4-1.png",
"15-Table5-1.png"
]
} | [
"What is the performance difference between proposed method and state-of-the-arts on these datasets?"
] | [
[
"1909.02480-7-Table2-1.png",
"1909.02480-Experiments ::: Main Results-3"
]
] | [
"Difference is around 1 BLEU score lower on average than state of the art methods."
] | 242 |
2004.02393 | Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games | We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that cooperate to select the most confident chains from a large set of candidates (from distant supervision). For evaluation, we created benchmarks based on two multi-hop QA datasets, HotpotQA and MedHop; and hand-labeled reasoning chains for the latter. The experimental results demonstrate the effectiveness of our proposed approach. | {
"paragraphs": [
[
"NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators.",
"Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7.",
"Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models.",
"Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available.",
"We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy.",
"We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach."
],
[
"Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \\rightarrow e_{1,2} \\rightarrow p_2 \\rightarrow e_{2,3} \\rightarrow \\cdots \\rightarrow e_{n-1,n} \\rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities.",
"Our Task Given a QA pair $(q,a)$ and all its candidate passages $\\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\\mathcal {P}$ as inputs.",
"Related Work Although there are recent interests on predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, they all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distant supervised signals. From this perspective, the most relevant studies in the NLP field includes BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery."
],
[
"The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker."
],
[
"The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages."
],
[
"For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\\mathbf {Q} = \\lbrace \\vec{\\mathbf {q}_0}, \\vec{\\mathbf {q}_1}, ..., \\vec{\\mathbf {q}_N}\\rbrace $ and $\\mathbf {H}_i = \\lbrace \\vec{\\mathbf {h}_{i,0}}, \\vec{\\mathbf {h}_{i,1}}, ..., \\vec{\\mathbf {h}_{i,M_i}}\\rbrace $ for each passage $p_i \\in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\\mathbf {Q}$ and each $\\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\\textrm {MatchLSTM}(\\mathbf {H}_i, \\mathbf {Q})$ for simplicity."
],
[
"To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\\tau }$ according to the predicted selection probability.",
"",
"The first step starts with the original question $\\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\\tilde{\\mathbf {m}}^t_{p_{\\tau }}$ back to the query space, and the new query $\\mathbf {Q}^{t+1}$ is used to select the next passage."
],
[
"We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\\mathcal {C}$. The model receives immediate reward at each step of selection.",
"In this paper we only consider chains consist of $\\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\\mathcal {C}$ in the form of $p_h\\rightarrow e \\rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\\mathcal {P}_{T}/\\mathcal {P}_{H}$ denote the set of all tail/head passages from $\\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections:",
"For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\\mathcal {C}$ that starts with $p_h$ and ends with $p_t$:"
],
[
"To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:",
"Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss:",
"",
"Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\\textrm {nd}}$ step is defined as:",
"",
"",
"The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$."
],
[
"We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers.",
"For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances.",
"For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question.",
"During training we select chains based on the full passage set $\\mathcal {P}$; at inference time we extract the chains from the candidate set $\\mathcal {C}$ (see Section SECREF2).",
""
],
[
"We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still output the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embedding for HotpotQA, and BIBREF25 for MedHop.",
""
],
[
"We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing additional inductive bias of orders, the conditional selection model further improves with a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner has the ability of ignoring entity links that are irrelevant to the reasoning chain.",
"Table TABREF22 demonstrates the effect of selecting directions, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting tail first performs better. The cooperative game mainly improves the head selection."
],
[
"Results in table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\\mathcal {C}$ introduce too much noise so the Distant Supervised Ranker improves only 3%; second, the dependent model leads to no improvement because $\\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement."
],
[
"In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts an cooperative game approach where a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach."
],
[
"Given the embeddings $\\mathbf {Q} = \\lbrace \\vec{\\mathbf {q}_0}, \\vec{\\mathbf {q}_1}, ..., \\vec{\\mathbf {q}_N}\\rbrace $ of the question $q$, and $\\mathbf {H}_i = \\lbrace \\vec{\\mathbf {h}_{i,0}}, \\vec{\\mathbf {h}_{i,1}}, ..., \\vec{\\mathbf {h}_{i,M_i}}\\rbrace $ of each passage $p_i \\in P$, we use the MatchLSTM BIBREF20 to match $\\mathbf {Q}$ and $\\mathbf {H}_i$ as follows:",
"The final vector $\\tilde{\\mathbf {m}}_i$ represents the matching state between $q$ and $p_i$. All the $\\tilde{\\mathbf {m}}_i$s are then passed to a linear layer that outputs the ranking score of each passage. We apply softmax over the scores to get the probability of passage selection $P(p_i|q)$. We denote the above computation as $P(p_i|q)=\\textrm {MatchLSTM}(\\mathbf {H}_i, \\mathbf {Q})$ for simplicity."
],
[
"Given the question embedding $\\mathbf {Q}^r = \\lbrace \\vec{\\mathbf {q}^r_0}, \\vec{\\mathbf {q}^r_1}, ..., \\vec{\\mathbf {q}^r_N}\\rbrace $ and the input passage embedding $\\mathbf {H}^r = \\lbrace \\vec{\\mathbf {h}^r_{0}}, \\vec{\\mathbf {h}^r_{1}}, ..., \\vec{\\mathbf {h}^r_{M}}\\rbrace $ of $p$, the Reasoner predicts the probability of each entity in the passage being the linking entity of the next passage in the chain. We use a reader model similar to BIBREF3 as our Reasoner network.",
"We first describe an attention sub-module. Given input sequence embedding $\\mathbf {A} = \\lbrace \\vec{\\mathbf {a}_0}, \\vec{\\mathbf {a}_1}, ..., \\vec{\\mathbf {a}_N}\\rbrace $ and $\\mathbf {B} = \\lbrace \\vec{\\mathbf {b}_{0}}, \\vec{\\mathbf {b}_{1}}, ..., \\vec{\\mathbf {b}_{M}}\\rbrace $, we define $\\tilde{\\mathcal {M}} = \\text{Attention}(\\mathbf {A}, \\mathbf {B})$:",
"where FFN denotes a feed forward layer which projects the concatenated embedding back to the original space.",
"The Reasoner network consists of multiple attention layers, together with a bidirectional GRU encoder and skip connection.",
"For each token $e_k, k = 0, 1,..., M$ represented by $h^r_{p,k}$ at the corresponding location, we have:",
"where $g$ is the classification layer, softmax is applied across all entities to get the probability. We denote the computation above as $P^r(e_k| \\mathbf {p}) = \\textrm {MatchLSTM.Reader}(e_k, \\mathbf {p})$ for simplicity."
],
[
"In HotpotQA, on average we can find 6 candidate chains (2-hop) in a instance, and the human labeled true reasoning chain is unique. A predicted chain is correct if the chain only contains all supporting passages (exact match of passages).",
"In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that human labeled as correct.",
"The accuracy is defined as the ratio:",
""
]
],
"section_name": [
"Introduction",
"Task Definition",
"Method",
"Method ::: Passage Ranking Model",
"Method ::: Passage Ranking Model ::: Passage Scoring",
"Method ::: Passage Ranking Model ::: Conditional Selection",
"Method ::: Passage Ranking Model ::: Reward via Distant Supervision",
"Method ::: Cooperative Reasoner",
"Experiments ::: Settings ::: Datasets",
"Experiments ::: Settings ::: Baselines and Evaluation Metric",
"Experiments ::: Results ::: HotpotQA",
"Experiments ::: Results ::: MedHop",
"Conclusions",
"Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Passage Scoring",
"Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Reasoner",
"Definition of Chain Accuracy"
]
} | {
"answers": [
{
"annotation_id": [
"8eefbea2f3cfcf402f9d072e674b0300e54adc66"
],
"answer": [
{
"evidence": [
"Method ::: Passage Ranking Model",
"The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages.",
"Method ::: Cooperative Reasoner",
"To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:"
],
"extractive_spans": [
"Reasoner model, also implemented with the MatchLSTM architecture",
"Ranker model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Method ::: Passage Ranking Model\nThe key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\\mathcal {P} = \\lbrace p_1, p_2 ... p_K\\rbrace $ from a pool of candidates, and outputs a chain of selected passages.",
"Method ::: Cooperative Reasoner\nTo alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"af6e29e48d2faba6721794b69df129ff67314a89"
],
"answer": [
{
"evidence": [
"To alleviate the noise in the distant supervision signal $\\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game:"
],
"extractive_spans": [
"Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards"
],
"free_form_answer": "",
"highlighted_evidence": [
"Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"14dac62604a816e476874958f9232db308ef029e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f0ac256d61835f95f747206c359e03b9e4acd2e3"
],
"answer": [
{
"evidence": [
"In HotpotQA, on average we can find 6 candidate chains (2-hop) in a instance, and the human labeled true reasoning chain is unique. A predicted chain is correct if the chain only contains all supporting passages (exact match of passages).",
"In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that human labeled as correct.",
"The accuracy is defined as the ratio:"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples",
"highlighted_evidence": [
"In HotpotQA, on average we can find 6 candidate chains (2-hop) in a instance, and the human labeled true reasoning chain is unique. A predicted chain is correct if the chain only contains all supporting passages (exact match of passages).\n\nIn MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that human labeled as correct.\n\nThe accuracy is defined as the ratio:",
"The accuracy is defined as the ratio:"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are two models' architectures in proposed solution?",
"How do two models cooperate to select the most confident chains?",
"How many hand-labeled reasoning chains have been created?",
"What benchmarks are created?"
],
"question_id": [
"bd7039f81a5417474efa36f703ebddcf51835254",
"022e5c996a72aeab890401a7fdb925ecd0570529",
"2a950ede24b26a45613169348d5db9176fda4f82",
"34af2c512ec38483754e94e1ea814aa76552d60a"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An example of reasoning chains in HotpotQA (2- hop) and MedHop (3-hop). HotpotQA provides only supporting passages {P3, P9}, without order and linking information.",
"Figure 2: Model overview. The cooperative Ranker and Reasoner are trained alternatively. The Ranker selects a passage p at each step conditioned on the question q and history selection, and receives reward r1 if p is evidence. Conditioned on q, the Reasoner predicts which entity from p links to the next evidence passage. The Ranker receives extra reward r2 if its next selection is connected by the entity predicted by the Reasoner. Both q and answer a are model inputs. While q is fed to the Ranker/Reasoner as input, empirically the best way of using a is for constructing the candidate set thus computing the reward r1. We omit the flow from q/a for simplicity.",
"Table 1: Reasoning Chain selection results.",
"Table 2: Ablation test on HotpotQA."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What benchmarks are created?"
] | [
[
"2004.02393-Definition of Chain Accuracy-0",
"2004.02393-Definition of Chain Accuracy-2",
"2004.02393-Definition of Chain Accuracy-1"
]
] | [
"Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples"
] | 244 |
2004.01694 | A Set of Recommendations for Assessing Human-Machine Parity in Language Translation | The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese to English news translation, showing that the finding of human-machine parity was owed to weaknesses in the evaluation design - which is currently considered best practice in the field. We show that the professional human translations contained significantly fewer errors, and that perceived quality in human evaluation depends on the choice of raters, the availability of linguistic context, and the creation of reference translations. Our results call for revisiting current best practices to assess strong machine translation systems in general and human-machine parity in particular, for which we offer a set of recommendations based on our empirical findings. | {
"paragraphs": [
[
"Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation.",
"This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5.",
"Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation.",
"Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis."
],
[
"We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators."
],
[
"The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans.",
"Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively.",
"As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B.",
"This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12."
],
[
"BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation.",
"In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations."
],
[
"The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations."
],
[
"MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents."
],
[
"The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality.",
"We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6)."
],
[
"We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:",
"[labelwidth=1cm, leftmargin=1.25cm]",
"The professional human translations in the dataset of BIBREF3.[1]",
"Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35.",
"The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$.",
"The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1]",
"Statistical significance is denoted by * ($p\\le .05$), ** ($p\\le .01$), and *** ($p\\le .001$) throughout this article, unless otherwise stated."
],
[
"Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type\" BIBREF8.",
"Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it."
],
[
"We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts.",
"We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context.",
"Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country.",
"The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\\alpha =0.05$)."
],
[
"Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\\kappa =0.13$ for non-experts versus $\\kappa =0.254$ for experts).",
"It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24."
],
[
"Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence.",
"While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16."
],
[
"We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation.",
"We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$).",
"Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised.",
"We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully.",
"We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters."
],
[
"",
"Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents.",
"We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section.",
"Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy.",
"Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21)."
],
[
"Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation.",
"Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节\", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34."
],
[
"Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality."
],
[
"Because the translations are created by humans, a number of factors could lead to compromises in quality:",
"If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news.",
"If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology.",
"Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator.",
"In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33.",
"In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level.",
"The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency.",
"To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32.",
"From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view\" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬).",
"Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking.",
"However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories."
],
[
"Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English.",
"According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33.",
"We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on.",
"Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input).",
"We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$).",
"Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text."
],
[
"Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general."
],
[
"In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs."
],
[
"When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4)."
],
[
"Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality."
],
[
"In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30)."
],
[
"Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.",
"Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation.",
"We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity."
],
[
"We compared professional human Chinese to English translations to the output of a strong MT system. In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors.",
"Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves.",
"Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost."
]
],
"section_name": [
"Introduction",
"Background",
"Background ::: Human Evaluation of Machine Translation",
"Background ::: Assessing Human–Machine Parity",
"Background ::: Assessing Human–Machine Parity ::: Choice of Raters",
"Background ::: Assessing Human–Machine Parity ::: Linguistic Context",
"Background ::: Assessing Human–Machine Parity ::: Reference Translations",
"Background ::: Translations",
"Choice of Raters",
"Choice of Raters ::: Evaluation Protocol",
"Choice of Raters ::: Results",
"Linguistic Context",
"Linguistic Context ::: Evaluation Protocol",
"Linguistic Context ::: Results",
"Linguistic Context ::: Discussion",
"Reference Translations",
"Reference Translations ::: Quality",
"Reference Translations ::: Directionality",
"Recommendations",
"Recommendations ::: (R1) Choose professional translators as raters.",
"Recommendations ::: (R2) Evaluate documents, not sentences.",
"Recommendations ::: (R3) Evaluate fluency in addition to adequacy.",
"Recommendations ::: (R4) Do not heavily edit reference translations for fluency.",
"Recommendations ::: (R5) Use original source texts.",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1ddd2172cbc25dc21125633fb2e28aec5c10e7d3"
],
"answer": [
{
"evidence": [
"Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis.",
"In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.",
"We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6)."
],
"extractive_spans": [
"empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation.",
"In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.",
"We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"28aa8fcfcab07884996f3a2b9fa3172dd6d2d6ce"
],
"answer": [
{
"evidence": [
"We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:"
],
"extractive_spans": [
"English ",
"Chinese "
],
"free_form_answer": "",
"highlighted_evidence": [
"We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"14ecb78fcacae0b5f0d6142a9a411d3529f85f49"
],
"answer": [
{
"evidence": [
"Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.",
"Recommendations ::: (R1) Choose professional translators as raters.",
"In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.",
"Recommendations ::: (R2) Evaluate documents, not sentences.",
"When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).",
"Recommendations ::: (R3) Evaluate fluency in addition to adequacy.",
"Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.",
"Recommendations ::: (R4) Do not heavily edit reference translations for fluency.",
"In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).",
"Recommendations ::: (R5) Use original source texts.",
"Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT."
],
"extractive_spans": [
" Choose professional translators as raters",
" Evaluate documents, not sentences",
"Evaluate fluency in addition to adequacy",
"Do not heavily edit reference translations for fluency",
"Use original source texts"
],
"free_form_answer": "",
"highlighted_evidence": [
" In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.\n\nRecommendations ::: (R1) Choose professional translators as raters.\nIn our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.\n\nRecommendations ::: (R2) Evaluate documents, not sentences.\nWhen evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).\n\nRecommendations ::: (R3) Evaluate fluency in addition to adequacy.\nRaters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.\n\nRecommendations ::: (R4) Do not heavily edit reference translations for fluency.\nIn professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).\n\nRecommendations ::: (R5) Use original source texts.\nRaters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"79ae7089b4dbabf590fb2d5377cf0d39c650ea2c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.",
"To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32."
],
"extractive_spans": [],
"free_form_answer": "36%",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.",
"To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3",
" The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"77a93df6767ecde02e609c91ecef4f61735297e4"
],
"answer": [
{
"evidence": [
"The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations.",
"MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.",
"The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality."
],
"extractive_spans": [],
"free_form_answer": "MT developers to which crowd workers were compared are usually not professional translators, evaluation of sentences in isolation prevents raters from detecting translation errors, used not originally written Chinese test set\n",
"highlighted_evidence": [
" BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. ",
"We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.",
"BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What empricial investigations do they reference?",
"What languages do they investigate for machine translation?",
"What recommendations do they offer?",
"What percentage fewer errors did professional translations make?",
"What was the weakness in Hassan et al's evaluation design?"
],
"question_id": [
"c1429f7fed5a4dda11ac7d9643f97af87a83508b",
"a93d4aa89ac3abbd08d725f3765c4f1bed35c889",
"bc473c5bd0e1a8be9b2037aa7006fd68217c3f47",
"cc5d8e12f6aecf6a5f305e2f8b3a0c67f49801a9",
"9299fe72f19c1974564ea60278e03a423eb335dc"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"professional machine translation",
"professional machine translation",
"professional machine translation",
"professional machine translation",
"professional machine translation"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Ranks and TrueSkill scores (the higher the better) of one human (HA) and two machine translations (MT1, MT2) for evaluations carried out by expert and non-expert translators. An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05.",
"Table 2: Pairwise ranking results for machine (MT1) against professional human translation (HA) as obtained from blind evaluation by professional translators. Preference for MT1 is lower when document-level context is available.",
"Table 4: Pairwise ranking results for one machine (MT1) and two professional human translations (HA, HB) as obtained from blind evaluation by professional translators.",
"Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.",
"Table 6: (Continued from previous page.)",
"Table 7: Ranks of the translations given the original language of the source side of the test set shown with their TrueSkill score (the higher the better). An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05."
],
"file": [
"6-Table1-1.png",
"8-Table2-1.png",
"11-Table4-1.png",
"12-Table5-1.png",
"14-Table6-1.png",
"15-Table7-1.png"
]
} | [
"What percentage fewer errors did professional translations make?",
"What was the weakness in Hassan et al's evaluation design?"
] | [
[
"2004.01694-12-Table5-1.png",
"2004.01694-Reference Translations ::: Quality-7"
],
[
"2004.01694-Background ::: Assessing Human–Machine Parity ::: Reference Translations-0",
"2004.01694-Background ::: Assessing Human–Machine Parity ::: Choice of Raters-0",
"2004.01694-Background ::: Assessing Human–Machine Parity ::: Linguistic Context-0"
]
] | [
"36%",
"MT developers to which crowd workers were compared are usually not professional translators, evaluation of sentences in isolation prevents raters from detecting translation errors, used not originally written Chinese test set\n"
] | 245 |
1909.02635 | Effective Use of Transformer Networks for Entity Tracking | Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, their ability to handle the nuances of procedural texts is still untested. In this paper, we explore the use of pre-trained transformer networks for entity tracking tasks in procedural text. First, we test standard lightweight approaches for prediction with pre-trained transformers, and find that these approaches underperform even simple baselines. We show that much stronger results can be attained by restructuring the input to guide the transformer model to focus on a particular entity. Second, we assess the degree to which transformer networks capture the process dynamics, investigating such factors as merged entities and oblique entity references. On two different tasks, ingredient detection in recipes and QA over scientific processes, we achieve state-of-the-art results, but our models still largely attend to shallow context clues and do not form complex representations of intermediate entity or process state. | {
"paragraphs": [
[
"Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15.",
"This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines.",
"Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition.",
"We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both the datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions."
],
[
"Procedural text is a domain of text involved with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts.",
"BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entites undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19 Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process.",
"BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes whereby ingredients mix together and are aliased as intermediate compositions.",
"We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond to either the presence (1) or absence (0) or the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O).",
"State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective.",
"With such tasks in place, a strong model will ideally learn to form robust entity-centric representation at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem."
],
[
"The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage.",
"Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\\lbrace s_1, s_2, \\dots , s_t\\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\\textstyle g_{e}\\!=\\!\\!\\!\\sum \\limits _{\\text{ent toks}}\\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction:"
],
[
"We append a $\\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\\texttt {[CLS]}$ token denoted by $h_{ \\texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities:",
"The aim of the [CLS] token is to encode information related to general entity related semantics participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$."
],
[
"Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. Finally, using a feed-forward network followed by softmax layer gives us the class probabilities:",
"",
"The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$."
],
[
"We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. This includes (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct a LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT.",
"Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively.",
"As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation.",
"For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures."
],
[
"The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers.",
"Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for."
],
[
"As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \\times m$ input sequences for fine tuning our classification task."
],
[
"In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning."
],
[
"For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus."
],
[
"After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For fine-tuning for the task, we have a labelled dataset which we denote by $\\mathcal {C}$, the set of labelled pairs $(\\lbrace s_1, s_2, \\dots , s_t\\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network.",
"In addition, we observed that adding the language model loss during task specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus,"
],
[
"We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants."
],
[
"We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$."
],
[
"The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1 $ while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively."
],
[
"Table TABREF20 compares the overall performances of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Comparing to the baselines (Majority through First) and post-conditioned models, we see that the early entity conditioning is critical to achieve high performance.",
"Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increaesd compared to these baseline models. Interestingly, the ELMo-based model under-performs the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long term contexts.",
"Comparing the four variants of structuring input in proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process."
],
[
"We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations."
],
[
"In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence."
],
[
"We compare with a “no context” model (w/o context) which ignore the previous context and only use the current recipe step in determining the ingredient's presence. Table TABREF23 shows that the such model is able to perform surprisingly well, nearly as well as the first occurrence baseline.",
"This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task."
],
[
"Next, we now focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task.",
"Figure FIGREF2b shows an example of a short instance from the ProPara dataset. The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity can not be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence.",
"We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created/destroyed/moved from/to)?"
],
[
"We compare our proposed models to the previous work on the ProPara dataset. This includes the entity specific MRC models, EntNet BIBREF23, QRN BIBREF24, and KG-MRC BIBREF17. Also, BIBREF14 proposed two task specific models, ProLocal and ProGlobal, as baselines for the dataset. Finally, we compare against our past neural CRF entity tracking model (NCET) BIBREF19 which uses ELMo embeddings in a neural CRF architecture.",
"For the proposed GPT architecture, we use the task specific [CLS] token to generate tag potentials instead of class probabilities as we did previously. For BERT, we perform a similar modification as described in the previous task to utilize the pre-trained [CLS] token to generate tag potentials. Finally, we perform a Viterbi decoding at inference time to infer the most likely valid tag sequence."
],
[
"Table TABREF28 compares the performance of the proposed entity tracking models on the sentence level task. Since, we are considering the classification aspect of the task, we compare our model performance for Cat-1 and Cat-2. As shown, the structured document level, entity first ET$_{GPT}$ and ET$_{BERT}$ models achieve state-of-the-art results. We observe that the major source of performance gain is attributed to the improvement in identifying the exact step(s) for the state changes (Cat-2). This shows that the model are able to better track the entities by identifying the exact step of state change (Cat-2) accurately rather than just detecting the presence of such state changes (Cat-1).",
"This task is more highly structured and in some ways more non-local than ingredient prediction; the high performance here shows that the ET$_{GPT}$ model is able to capture document level structural information effectively. Further, the structural constraints from the CRF also aid in making better predictions. For example, in the process “higher pressure causes the sediment to heat up. the heat causes chemical processes. the material becomes a liquid. is known as oil.”, the material is a by-product of the chemical process but there's no direct mention of it. However, the material ceases to exist in the next step, and because the model is able to predict this correctly, maintaining consistency results in the model finally predicting the entire state change correctly as well."
],
[
"Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed as part of combination of entities leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the affects."
],
[
"For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but in a combined nature with other ingredients and henceforth no explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplifies this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred by some other name."
],
[
"Formally, we pick the set of examples where the ground truth is a transition from $0 \\rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\\rightarrow 1$ pattern indicative of the First Occ baseline."
],
[
"We observe the model is able to capture ingredients based on their hypernyms (nuts $\\rightarrow $ pecans, salad $\\rightarrow $ lettuce) and rough synonymy (bourbon $\\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients."
],
[
"One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data. We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and in turn the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains."
],
[
"For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require a deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone” the weak acid is also considered to move to the limestone."
],
[
"The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we perform analysis of the model's behavior with respect to the input to understand what cues it is picking up on."
],
[
"One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics.",
"In an ideal scenario, we would want the model to track constituent entities by translating the “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this."
],
[
"We can study which inputs are important more directly by explicitly removing specific certain words from the input process paragraph and evaluating the performance of the resulting input under the current model setup. We mainly did experiments to examine the importance of: (i) verbs, and (ii) other ingredients.",
"Table TABREF40 presents these ablation studies. We only observe a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs dropped the performance to $79.08$ and further omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients."
],
[
"In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using the transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encoding of the process paragraph, guiding the self-attention in a target entity oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics."
],
[
"This work was partially supported by NSF Grant IIS-1814522 and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Thanks as well to the anonymous reviewers for their helpful comments."
]
],
"section_name": [
"Introduction",
"Background: Process Understanding",
"Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models",
"Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Task Specific Input Token",
"Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Entity Based Attention",
"Studying Basic Transformer Representations for Entity Tracking ::: Results and Observations",
"Entity-Conditioned Models",
"Entity-Conditioned Models ::: Sentence Level vs. Document Level",
"Entity-Conditioned Models ::: Training Details",
"Entity-Conditioned Models ::: Training Details ::: Domain Specific LM fine-tuning",
"Entity-Conditioned Models ::: Training Details ::: Supervised Task Fine-Tuning",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare ::: Neural Process Networks",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Results",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Ingredient Specificity",
"Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Context Importance",
"Entity-Conditioned Models ::: State Change Detection (ProPara)",
"Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Systems to Compare",
"Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Results",
"Challenging Task Phenomena",
"Challenging Task Phenomena ::: Ingredient Detection",
"Challenging Task Phenomena ::: Ingredient Detection ::: Intermediate Compositions",
"Challenging Task Phenomena ::: Ingredient Detection ::: Hypernymy and Synonymy",
"Challenging Task Phenomena ::: Ingredient Detection ::: Impact of external data",
"Challenging Task Phenomena ::: State Change Detection",
"Analysis",
"Analysis ::: Gradient based Analysis",
"Analysis ::: Input Ablations",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1516c86c36ecb2bb8a543465d6ac12220ed1a226"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"e43a469126ec868403db8a7b388c56e5276b943d"
],
"answer": [
{
"evidence": [
"One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics."
],
"extractive_spans": [],
"free_form_answer": "Using model gradients with respect to input features they presented that the most important model inputs are verbs associated with entities which shows that the model attends to shallow context clues",
"highlighted_evidence": [
"One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2ca5d3901d40c6f75a521812fe5ba4706f954ed8"
],
"answer": [
{
"evidence": [
"Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for.",
"As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \\times m$ input sequences for fine tuning our classification task."
],
"extractive_spans": [],
"free_form_answer": "In four entity-centric ways - entity-first, entity-last, document-level and sentence-level",
"highlighted_evidence": [
"Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. ",
"We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. ",
"As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"",
"",
""
],
"question": [
"Do they report results only on English?",
"What evidence do they present that the model attends to shallow context clues?",
"In what way is the input restructured?"
],
"question_id": [
"0e45aae0e97a6895543e88705e153f084ce9c136",
"c515269b37cc186f6f82ab9ada5d9ca176335ded",
"43f86cd8aafe930ebb35ca919ada33b74b36c7dd"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Process Examples from (a) RECIPES as a binary classification task of ingredient detection, and (b) PROPARA as a structured prediction task of identifying state change sequences. Both require cross-sentence reasoning, such as knowing what components are in a mixture and understanding verb semantics like combine.",
"Figure 2: Post-conditioning entity tracking models. Bottom: the process paragraph is encoded in an entity-independent manner with transformer network and a separate entity representation g[water] for postconditioning. Top: the two variants for the conditioning: (i) GPTattn, and (ii) GPTindep.",
"Table 1: Templates for different proposed entity-centric modes of structuring input to the transformer networks.",
"Table 2: Performance of the rule-based baselines and the post conditioned models on the ingredient detection task of the RECIPES dataset. These models all underperform First Occ.",
"Figure 3: Entity conditioning model for guiding selfattention: the entity-first, sentence-level input variant fed into a left-to-right unidirectional transformer architecture. Task predictions are made at [CLS] tokens about the entity’s state after the prior sentence.",
"Table 4: Top: we compare how much the model degrades when it conditions on no ingredient at all (w/o ing.), instead making a generic prediction. Bottom: we compare how much using previous context beyond a single sentence impacts the model.",
"Table 3: Performances of different baseline models discussed in Section 3, the ELMo baselines, and the proposed entity-centric approaches with the (D)ocument v (S)entence level variants formulated with both entity (F)irst v. (L)ater. Our ETGPT variants all substantially outperform the baselines.",
"Table 5: Performance of the proposed models on the PROPARA dataset. Our models outperform strong approaches from prior work across all metrics.",
"Table 7: Performance for using unsupervised data for LM training.",
"Table 8: Results for each state change type. Performance on predicting creation and destruction are highest, partially due to the model’s ability to use verb semantics for these tasks.",
"Table 6: Model predictions from the document level entity first GPT model in 1049 cases of intermediate compositions. The model achieves only 51% accuracy in these cases.",
"Figure 4: Gradient of the classification loss of the gold class with respect to inputs when predicting the status of butter in the last sentence. We follow a similar approach as Jain and Wallace (2019) to compute associations. Exact matches of the entity receive high weight, as does a seemingly unrelated verb dredge, which often indicates that the butter has already been used and is therefore present.",
"Table 9: Model’s performance degradation with input ablations. We see that the model’s major source of performance is from verbs than compared to other ingredient’s explicit mentions."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure3-1.png",
"6-Table4-1.png",
"6-Table3-1.png",
"7-Table5-1.png",
"8-Table7-1.png",
"8-Table8-1.png",
"8-Table6-1.png",
"9-Figure4-1.png",
"9-Table9-1.png"
]
} | [
"What evidence do they present that the model attends to shallow context clues?",
"In what way is the input restructured?"
] | [
[
"1909.02635-Analysis ::: Gradient based Analysis-0"
],
[
"1909.02635-Entity-Conditioned Models-1",
"1909.02635-Entity-Conditioned Models ::: Sentence Level vs. Document Level-0"
]
] | [
"Using model gradients with respect to input features they presented that the most important model inputs are verbs associated with entities which shows that the model attends to shallow context clues",
"In four entity-centric ways - entity-first, entity-last, document-level and sentence-level"
] | 247 |
1904.00648 | Recognizing Musical Entities in User-generated Content | Recognizing Musical Entities is important for Music Information Retrieval (MIR) since it can improve the performance of several tasks such as music recommendation, genre classification or artist similarity. However, most entity recognition systems in the music domain have concentrated on formal texts (e.g. artists' biographies, encyclopedic articles, etc.), ignoring rich and noisy user-generated content. In this work, we present a novel method to recognize musical entities in Twitter content generated by users following a classical music radio channel. Our approach takes advantage of both formal radio schedule and users' tweets to improve entity recognition. We instantiate several machine learning algorithms to perform entity recognition combining task-specific and corpus-based features. We also show how to improve recognition results by jointly considering formal and user-generated content | {
"paragraphs": [
[
"The increasing use of social media and microblogging services has broken new ground in the field of Information Extraction (IE) from user-generated content (UGC). Understanding the information contained in users' content has become one of the main goal for many applications, due to the uniqueness and the variety of this data BIBREF0 . However, the highly informal and noisy status of these sources makes it difficult to apply techniques proposed by the NLP community for dealing with formal and structured content BIBREF1 .",
"In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work.",
"The method proposed makes use of the information extracted from the radio schedule for creating links between users' tweets and tracks broadcasted. Thanks to this linking, we aim to detect when users refer to entities included into the schedule. Apart from that, we consider a series of linguistic features, partly taken from the NLP literature and partly specifically designed for this task, for building statistical models able to recognize the musical entities. To that aim, we perform several experiments with a supervised learning model, Support Vector Machine (SVM), and a recurrent neural network architecture, a bidirectional LSTM with a CRF layer (biLSTM-CRF).",
"The contributions in this work are summarized as follows:",
"The paper is structured as follows. In Section 2, we present a review of the previous works related to Named Entity Recognition, focusing on its application on UGC and MIR. Afterwards, in Section 3 it is presented the methodology of this work, describing the dataset and the method proposed. In Section 4, the results obtained are shown. Finally, in Section 5 conclusions are discussed."
],
[
"Named Entity Recognition (NER), or alternatively Named Entity Recognition and Classification (NERC), is the task of detecting entities in an input text and to assign them to a specific class. It starts to be defined in the early '80, and over the years several approaches have been proposed BIBREF2 . Early systems were based on handcrafted rule-based algorithms, while recently several contributions by Machine Learning scientists have helped in integrating probabilistic models into NER systems.",
"In particular, new developments in neural architectures have become an important resource for this task. Their main advantages are that they do not need language-specific knowledge resources BIBREF3 , and they are robust to the noisy and short nature of social media messages BIBREF4 . Indeed, according to a performance analysis of several Named Entity Recognition and Linking systems presented in BIBREF5 , it has been found that poor capitalization is one of the main issues when dealing with microblog content. Apart from that, typographic errors and the ubiquitous occurrence of out-of-vocabulary (OOV) words also cause drops in NER recall and precision, together with shortenings and slang, particularly pronounced in tweets.",
"Music Information Retrieval (MIR) is an interdisciplinary field which borrows tools of several disciplines, such as signal processing, musicology, machine learning, psychology and many others, for extracting knowledge from musical objects (be them audio, texts, etc.) BIBREF6 . In the last decade, several MIR tasks have benefited from NLP, such as sound and music recommendation BIBREF7 , automatic summary of song review BIBREF8 , artist similarity BIBREF9 and genre classification BIBREF10 .",
"In the field of IE, a first approach for detecting musical named entities from raw text, based on Hidden Markov Models, has been proposed in BIBREF11 . In BIBREF12 , the authors combine state-of-the-art Entity Linking (EL) systems to tackle the problem of detecting musical entities from raw texts. The method proposed relies on the argumentum ad populum intuition, so if two or more different EL systems perform the same prediction in linking a named entity mention, the more likely this prediction is to be correct. In detail, the off-the-shelf systems used are: DBpedia Spotlight BIBREF13 , TagMe BIBREF14 , Babelfy BIBREF15 . Moreover, a first Musical Entity Linking, MEL has been presented in BIBREF16 which combines different state-of-the-art NLP libraries and SimpleBrainz, an RDF knowledge base created from MusicBrainz after a simplification process.",
"Furthermore, Twitter has also been at the center of many studies done by the MIR community. As example, for building a music recommender system BIBREF17 analyzes tweets containing keywords like nowplaying or listeningto. In BIBREF9 , a similar dataset it is used for discovering cultural listening patterns. Publicly available Twitter corpora built for MIR investigations have been created, among others the Million Musical Tweets dataset BIBREF18 and the #nowplaying dataset BIBREF19 ."
],
[
"We propose a hybrid method which recognizes musical entities in UGC using both contextual and linguistic information. We focus on detecting two types of entities: Contributor: person who is related to a musical work (composer, performer, conductor, etc). Musical Work: musical composition or recording (symphony, concerto, overture, etc).",
"As case study, we have chosen to analyze tweets extracted from the channel of a classical music radio, BBC Radio 3. The choice to focus on classical music has been mostly motivated by the particular discrepancy between the informal language used in the social platform and the formal nomenclature of contributors and musical works. Indeed, users when referring to a musician or to a classical piece in a tweet, rarely use the full name of the person or of the work, as shown in Table 2.",
"We extract information from the radio schedule for recreating the musical context to analyze user-generated tweets, detecting when they are referring to a specific work or contributor recently played. We manage to associate to every track broadcasted a list of entities, thanks to the tweets automatically posted by the BBC Radio3 Music Bot, where it is described the track actually on air in the radio. In Table 3, examples of bot-generated tweets are shown.",
"Afterwards, we detect the entities on the user-generated content by means of two methods: on one side, we use the entities extracted from the radio schedule for generating candidates entities in the user-generated tweets, thanks to a matching algorithm based on time proximity and string similarity. On the other side, we create a statistical model capable of detecting entities directly from the UGC, aimed to model the informal language of the raw texts. In Figure 1, an overview of the system proposed is presented."
],
[
"In May 2018, we crawled Twitter using the Python library Tweepy, creating two datasets on which Contributor and Musical Work entities have been manually annotated, using IOB tags.",
"The first set contains user-generated tweets related to the BBC Radio 3 channel. It represents the source of user-generated content on which we aim to predict the named entities. We create it filtering the messages containing hashtags related to BBC Radio 3, such as #BBCRadio3 or #BBCR3. We obtain a set of 2,225 unique user-generated tweets. The second set consists of the messages automatically generated by the BBC Radio 3 Music Bot. This set contains 5,093 automatically generated tweets, thanks to which we have recreated the schedule.",
"In Table 4, the amount of tokens and relative entities annotated are reported for the two datasets. For evaluation purposes, both sets are split in a training part (80%) and two test sets (10% each one) randomly chosen. Within the user-generated corpora, entities annotated are only about 5% of the whole amount of tokens. In the case of the automatically generated tweets, the percentage is significantly greater and entities represent about the 50%."
],
[
"According to the literature reviewed, state-of-the-art NER systems proposed by the NLP community are not tailored to detect musical entities in user-generated content. Consequently, our first objective has been to understand how to adapt existing systems for achieving significant results in this task.",
"In the following sections, we describe separately the features, the word embeddings and the models considered. All the resources used are publicy available.",
"We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 .",
"In total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features).",
"We consider two sets of GloVe word embeddings BIBREF20 for training the neural architecture, one pre-trained with 2B of tweets, publicy downloadable, one trained with a corpora of 300K tweets collected during the 2014-2017 BBC Proms Festivals and disjoint from the data used in our experiments.",
"The first model considered for this task has been the John Platt's sequential minimal optimization algorithm for training a support vector classifier BIBREF21 , implemented in WEKA BIBREF22 . Indeed, in BIBREF23 results shown that SVM outperforms other machine learning models, such as Decision Trees and Naive Bayes, obtaining the best accuracy when detecting named entities from the user-generated tweets.",
"However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported."
],
[
"The bot-generated tweets present a predefined structure and a formal language, which facilitates the entities detection. In this dataset, our goal is to assign to each track played on the radio, represented by a tweet, a list of entities extracted from the tweet raw text. For achieving that, we experiment with the algorithms and features presented previously, obtaining an high level of accuracy, as presented in section 4. The hypothesis considered is that when a radio listener posts a tweet, it is possible that she is referring to a track which has been played a relatively short time before. In this cases, we want to show that knowing the radio schedule can help improving the results when detecting entities.",
"Once assigned a list of entities to each track, we perform two types of matching. Firstly, within the tracks we identify the ones which have been played in a fixed range of time (t) before and after the generation of the user's tweet. Using the resulting tracks, we create a list of candidates entities on which performing string similarity. The score of the matching based on string similarity is computed as the ratio of the number of tokens in common between an entity and the input tweet, and the total number of token of the entity: DISPLAYFORM0 ",
"In order to exclude trivial matches, tokens within a list of stop words are not considered while performing string matching. The final score is a weighted combination of the string matching score and the time proximity of the track, aimed to enhance matches from tracks played closer to the time when the user is posting the tweet.",
"The performance of the algorithm depends, apart from the time proximity threshold t, also on other two thresholds related to the string matching, one for the Musical Work (w) and one for the Contributor (c) entities. It has been necessary for avoiding to include candidate entities matched against the schedule with a low score, often source of false positives or negatives. Consequently, as last step Contributor and Musical Work candidates entities with respectively a string matching score lower than c and w, are filtered out. In Figure 2, an example of Musical Work entity recognized in an user-generated tweet using the schedule information is presented.",
"The entities recognized from the schedule matching are joined with the ones obtained directly from the statistical models. In the joined results, the criteria is to give priority to the entities recognized from the machine learning techniques. If they do not return any entities, the entities predicted by the schedule matching are considered. Our strategy is justified by the poorer results obtained by the NER based only on the schedule matching, compared to the other models used in the experiments, to be presented in the next section."
],
[
"The performances of the NER experiments are reported separately for three different parts of the system proposed.",
"Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot.",
"In Table 7, results of the schedule matching are reported. We can observe how the quality of the linking performed by the algorithm is correlated to the choice of the three thresholds. Indeed, the Precision score increase when the time threshold decrease, admitting less candidates as entities during the matching, and when the string similarity thresholds increase, accepting only candidates with an higher degree of similarity. The behaviour of the Recall score is inverted.",
"Finally, we test the impact of using the schedule matching together with a biLSTM-CRF network. In this experiment, we consider the network trained using all the features proposed, and the embeddings not pre-trained. Table 8 reports the results obtained. We can observe how generally the system benefits from the use of the schedule information. Especially in the testing part, where the neural network recognizes with less accuracy, the explicit information contained in the schedule can be exploited for identifying the entities at which users are referring while listening to the radio and posting the tweets."
],
[
"We have presented in this work a novel method for detecting musical entities from user-generated content, modelling linguistic features with statistical models and extracting contextual information from a radio schedule. We analyzed tweets related to a classical music radio station, integrating its schedule to connect users' messages to tracks broadcasted. We focus on the recognition of two kinds of entities related to the music field, Contributor and Musical Work.",
"According to the results obtained, we have seen a pronounced difference between the system performances when dealing with the Contributor instead of the Musical Work entities. Indeed, the former type of entity has been shown to be more easily detected in comparison to the latter, and we identify several reasons behind this fact. Firstly, Contributor entities are less prone to be shorten or modified, while due to their longness, Musical Work entities often represent only a part of the complete title of a musical piece. Furthermore, Musical Work titles are typically composed by more tokens, including common words which can be easily misclassified. The low performances obtained in the case of Musical Work entities can be a consequences of these observations. On the other hand, when referring to a Contributor users often use only the surname, but in most of the cases it is enough for the system to recognizing the entities.",
"From the experiments we have seen that generally the biLSTM-CRF architecture outperforms the SVM model. The benefit of using the whole set of features is evident in the training part, but while testing the inclusion of the features not always leads to better results. In addition, some of the features designed in our experiments are tailored to the case of classical music, hence they might not be representative if applied to other fields. We do not exclude that our method can be adapted for detecting other kinds of entity, but it might be needed to redefine the features according to the case considered. Similarly, it has not been found a particular advantage of using the pre-trained embeddings instead of the one trained with our corpora. Furthermore, we verified the statistical significance of our experiment by using Wilcoxon Rank-Sum Test, obtaining that there have been not significant difference between the various model considered while testing.",
"The information extracted from the schedule also present several limitations. In fact, the hypothesis that a tweet is referring to a track broadcasted is not always verified. Even if it is common that radios listeners do comments about tracks played, or give suggestion to the radio host about what they would like to listen, it is also true that they might refer to a Contributor or Musical Work unrelated to the radio schedule."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Dataset",
"NER system",
"Schedule matching",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"15418edd8c72bc8bc3efceb68fa9202d76da15a7"
],
"answer": [
{
"evidence": [
"The performances of the NER experiments are reported separately for three different parts of the system proposed.",
"Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot."
],
"extractive_spans": [
"With both test sets performances decrease, varying between 94-97%"
],
"free_form_answer": "",
"highlighted_evidence": [
"The performances of the NER experiments are reported separately for three different parts of the system proposed.",
"Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b6163a58c88f9e2b89b84689a1fbdda6414d2e3c"
],
"answer": [
{
"evidence": [
"In total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features)."
],
"extractive_spans": [
"6)Contributor first names",
"7)Contributor last names",
"8)Contributor types (\"soprano\", \"violinist\", etc.)",
"9)Classical work types (\"symphony\", \"overture\", etc.)",
"10)Musical instruments",
"11)Opus forms (\"op\", \"opus\")",
"12)Work number forms (\"no\", \"number\")",
"13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\")",
"14)Work Modes (\"major\", \"minor\", \"m\")"
],
"free_form_answer": "",
"highlighted_evidence": [
"Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\")."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5114686c571dbe9da3b0a0a7692a4eec5c53d856"
],
"answer": [
{
"evidence": [
"We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 ."
],
"extractive_spans": [
"standard linguistic features, such as Part-Of-Speech (POS) and chunk tag",
"series of features representing tokens' left and right context"
],
"free_form_answer": "",
"highlighted_evidence": [
"We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3122f0c4f10f3f50f4c501ac9affc51aeca276a1"
],
"answer": [
{
"evidence": [
"However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported."
],
"extractive_spans": [
"biLSTM-networks"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"69a345333a5e18bacc4a7af86bdf08ba2943a19f"
],
"answer": [
{
"evidence": [
"In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What are their results on the entity recognition task?",
"What task-specific features are used?",
"What kind of corpus-based features are taken into account?",
"Which machine learning algorithms did the explore?",
"What language is the Twitter content in?"
],
"question_id": [
"aa60b0a6c1601e09209626fd8c8bdc463624b0b3",
"3837ae1e91a4feb27f11ac3b14963e9a12f0c05e",
"ef4d6c9416e45301ea1a4d550b7c381f377cacd9",
"689d1d0c4653a8fa87fd0e01fa7e12f75405cd38",
"7920f228de6ef4c685f478bac4c7776443f19f39"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 2. Example of entities annotated and corresponding formal forms, from the user-generated tweet (1) in Table 1.",
"Table 3. Examples of bot-generated tweets.",
"Table 4. Tokens’ distributions within the two datasets: user-generated tweets (top) and bot-generated tweets (bottom)",
"Fig. 2. Example of the workflow for recognizing entities in UGC using the information from the radio schedule",
"Table 6. F1 score for Contributor(C) and Musical Work(MW) entities recognized from bot-generated tweets (top) and user-generated tweets (bottom)",
"Table 7. Precision (P), Recall (R) and F1 score for Contributor (C) and Musical Work (MW) of the schedule matching algorithm. w indicates the Musical Work string similarity threshold, c indicates the Contributor string similarity threshold and t indicates the time proximity threshold in seconds",
"Table 8. Precision (P), Recall (R) and F1 score for Contributor (C) and Musical Work (MW) entities recognized from user-generated tweets using the biLSTM-CRF network together with the schedule matching. The thresholds used for the matching are t=1200, w=0.5, c=0.5."
],
"file": [
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"8-Figure2-1.png",
"9-Table6-1.png",
"9-Table7-1.png",
"10-Table8-1.png"
]
} | [
"What language is the Twitter content in?"
] | [
[
"1904.00648-Introduction-1"
]
] | [
"English"
] | 248 |
1711.11221 | Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches | Sentences in a well-formed text are connected to each other via various links to form the cohesive structure of the text. Current neural machine translation (NMT) systems translate a text in a conventional sentence-by-sentence fashion, ignoring such cross-sentence links and dependencies. This may lead to generate an incoherent target text for a coherent source text. In order to handle this issue, we propose a cache-based approach to modeling coherence for neural machine translation by capturing contextual information either from recently translated sentences or the entire document. Particularly, we explore two types of caches: a dynamic cache, which stores words from the best translation hypotheses of preceding sentences, and a topic cache, which maintains a set of target-side topical words that are semantically related to the document to be translated. On this basis, we build a new layer to score target words in these two caches with a cache-based neural model. Here the estimated probabilities from the cache-based neural model are combined with NMT probabilities into the final word prediction probabilities via a gating mechanism. Finally, the proposed cache-based neural model is trained jointly with NMT system in an end-to-end manner. Experiments and analysis presented in this paper demonstrate that the proposed cache-based model achieves substantial improvements over several state-of-the-art SMT and NMT baselines. | {
"paragraphs": [
[
"In the literature, several cache-based translation models have been proposed for conventional statistical machine translation, besides traditional n-gram language models and neural language models. In this section, we will first introduce related work in cache-based language models and then in translation models.",
"For traditional n-gram language models, Kuhn1990A propose a cache-based language model, which mixes a large global language model with a small local model estimated from recent items in the history of the input stream for speech recongnition. della1992adaptive introduce a MaxEnt-based cache model by integrating a cache into a smoothed trigram language model, reporting reduction in both perplexity and word error rates. chueh2010topic present a new topic cache model for speech recongnition based on latent Dirichlet language model by incorporating a large-span topic cache into the generation of topic mixtures.",
"For neural language models, huang2014cache propose a cache-based RNN inference scheme, which avoids repeated computation of identical LM calls and caches previously computed scores and useful intermediate results and thus reduce the computational expense of RNNLM. Grave2016Improving extend the neural network language model with a neural cache model, which stores recent hidden activations to be used as contextual representations. Our caches significantly differ from these two caches in that we store linguistic items in the cache rather than scores or activations.",
"For neural machine translation, wangexploiting propose a cross-sentence context-aware approach and employ a hierarchy of Recurrent Neural Networks (RNNs) to summarize the cross-sentence context from source-side previous sentences. jean2017does propose a novel larger-context neural machine translation model based on the recent works on larger-context language modelling BIBREF11 and employ the method to model the surrounding text in addition to the source sentence.",
"For cache-based translation models, nepveu2004adaptive propose a dynamic adaptive translation model using cache-based implementation for interactive machine translation, and develop a monolingual dynamic adaptive model and a bilingual dynamic adaptive model. tiedemann2010context propose a cache-based translation model, filling the cache with bilingual phrase pairs from the best translation hypotheses of previous sentences in a document. gong2011cache further propose a cache-based approach to document-level translation, which includes three caches, a dynamic cache, a static cache and a topic cache, to capture various document-level information. bertoldi2013cache describe a cache mechanism to implement online learning in phrase-based SMT and use a repetition rate measure to predict the utility of cached items expected to be useful for the current translation.",
"Our caches are similar to those used by gong2011cache who incorporate these caches into statistical machine translation. We adapt them to neural machine translation with a neural cache model. It is worthwhile to emphasize that such adaptation is nontrivial as shown below because the two translation philosophies and frameworks are significantly different."
],
[
"In this section, we briefly describe the NMT model taken as a baseline. Without loss of generality, we adopt the NMT architecture proposed by bahdanau2015neural, with an encoder-decoder neural network."
],
[
"The encoder uses bidirectional recurrent neural networks (Bi-RNN) to encode a source sentence with a forward and a backward RNN. The forward RNN takes as input a source sentence $x = (x_1, x_2, ..., x_T)$ from left to right and outputs a hidden state sequence $(\\overrightarrow{h_1},\\overrightarrow{h_2}, ..., \\overrightarrow{h_T})$ while the backward RNN reads the sentence in an inverse direction and outputs a backward hidden state sequence $(\\overleftarrow{h_1},\\overleftarrow{h_2}, ..., \\overleftarrow{h_T})$ . The context-dependent word representations of the source sentence $h_j$ (also known as word annotation vectors) are the concatenation of hidden states $\\overrightarrow{h_j}$ and $\\overleftarrow{h_j}$ in the two directions."
],
[
"The decoder is an RNN that predicts target words $y_t$ via a multi-layer perceptron (MLP) neural network. The prediction is based on the decoder RNN hidden state $s_t$ , the previous predicted word $y_{t-1}$ and a source-side context vector $c_t$ . The hidden state $s_t$ of the decoder at time $t$ and the conditional probability of the next word $y_t$ are computed as follows: ",
"$$s_t = f(s_{t-1}, y_{t-1}, c_t)$$ (Eq. 3) ",
"$$p(y_t|y_{<t};x) = g(y_{t-1}, s_t, c_t)$$ (Eq. 4) "
],
[
"In the attention model, the context vector $c_t$ is calculated as a weighted sum over source annotation vectors $(h_1, h_2, ..., h_T)$ : ",
"$$c_t = \\sum _{j=1}^{T_x} \\alpha _{tj}h_j$$ (Eq. 6) ",
"$$\\alpha _{tj} = \\frac{exp(e_{tj})}{\\sum _{k=1}^{T} exp(e_{tk})}$$ (Eq. 7) ",
"where $\\alpha _{tj}$ is the attention weight of each hidden state $h_j$ computed by the attention model, and $a$ is a feed forward neural network with a single hidden layer.",
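"For concreteness, the attention weights of Eq. (6)-(7) and the resulting context vector can be computed for a single decoding step as in the following sketch (plain NumPy; the parameter matrices and their shapes are illustrative, not the exact parameterization of the cited systems):\n\nimport numpy as np\n\ndef attention_step(h, s_prev, W_a, U_a, v_a):\n    # h: (T, 2d) source annotation vectors; s_prev: previous decoder hidden state.\n    e = np.tanh(h @ U_a + s_prev @ W_a) @ v_a   # alignment scores e_tj\n    alpha = np.exp(e - e.max())\n    alpha = alpha / alpha.sum()                 # softmax over source positions\n    c_t = alpha @ h                             # context vector (weighted sum of annotations)\n    return alpha, c_t",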
"The dl4mt tutorial presents an improved implementation of the attention-based NMT system, which feeds the previous word $y_{t-1}$ to the attention model. We use the dl4mt tutorial implementation as our baseline, which we will refer to as RNNSearch*.",
"The proposed cache-based neural approach is implemented on the top of RNNSearch* system, where the encoder-decoder NMT framework is trained to optimize the sum of the conditional log probabilities of correct translations of all source sentences on a parallel corpus as normal."
],
[
"In this section, we elaborate on the proposed cache-based neural model and how we integrate it into neural machine translation, Figure 1 shows the entire architecture of our NMT with the cache-based neural model."
],
[
"The aim of cache is to incorporate document-level constraints and therefore to improve the consistency and coherence of document translations. In this section, we introduce our proposed dynamic cache and topic cache in detail.",
"In order to build the dynamic cache, we dynamically extract words from recently translated sentences and the partial translation of current sentence being translated as words of dynamic cache. We apply the following rules to build the dynamic cache.",
"The max size of the dynamic cache is set to $|c_d|$ .",
"According to the first-in-first-out rule, when the dynamic cache is full and a new word is inserted into the cache, the oldest word in the cache will be removed.",
"Duplicate entries into the dynamic cache are not allowed when a word has been already in the cache.",
"It is worth noting that we also maintain a stop word list, and we added English punctuations and “UNK” into our stop word list. Words in the stop word list would not be inserted into the dynamic cache. So the common words like “a” and “the” cannot appear in the cache. All words in the dynamic cache can be found in the target-side vocabulary of RNNSearch*.",
"In order to build the topic cache, we first use an off-the-shelf LDA topic tool to learn topic distributions of source- and target-side documents separately. Then we estimate a topic projection distribution over all target-side topics $p(z_t|z_s)$ for each source topic $z_s$ by collecting events and accumulating counts of $(z_s, z_t)$ from aligned document pairs. Notice that $z_s/z_t$ is the topic with the highest topic probability $p(z_.|d)$ on the source/target side. Then we can use the topic cache as follows:",
"During the training process of NMT, the learned target-side topic model is used to infer the topic distribution for each target document. For a target document d in the training data, we select the topic $z$ with the highest probability $p(z|d)$ as the topic for the document. The $|c_t|$ most probable topical words in topic $z$ are extracted to fill the topic cache for the document $d$ .",
"In the NMT testing process, we first infer the topic distribution for a source document in question with the learned source-side topic model. From the topic distribution, we choose the topic with the highest probability as the topic for the source document. Then we use the learned topic projection function to map the source topic onto a target topic with the highest projection probability, as illustrated in Figure 2. After that, we use the $|c_t|$ most probable topical words in the projected target topic to fill the topic cache.",
"The words of topic cache and dynamic cache together form the final cache model. In practice, the cache stores word embeddings, as shown in Figure 3. As we do not want to introduce extra embedding parameters, we let the cache share the same target word embedding matrix with the NMT model. In this case, if a word is not in the target-side vocabulary of NMT, we discard the word from the cache."
],
[
"The cache-based neural model is to evaluate the probabilities of words occurring in the cache and to provide the evaluation results for the decoder via a gating mechanism.",
"When the decoder generates the next target word $y_t$ , we hope that the cache can provide helpful information to judge whether $y_t$ is appropriate from the perspective of the document-level cache if $y_t$ occurs in the cache.To achieve this goal, we should appropriately evaluate the word entries in the cache.",
"In this paper, we build a new neural network layer as the scorer for the cache. At each decoding step $t$ , we use the scorer to score $y_t$ if $y_t$ is in the cache. The inputs to the scorer are the current hidden state $s_t$ of the decoder, previous word $y_{t-1}$ , context vector $c_t$ , and the word $y_t$ from the cache. The score of $y_t$ is calculated as follows: ",
"$$score(y_t|y_{<t},x) = g_{cache}(s_t,c_t,y_{t-1},y_t)$$ (Eq. 22) ",
"where $g_{cache}$ is a non-linear function.",
"This score is further used to estimate the cache probability of $y_t$ as follows: ",
"$$p_{cache}(y_t|y_{<t},x) = softmax(score(y_t|y_{<t},x))$$ (Eq. 23) ",
"Since we have two prediction probabilities for the next target word $y_t$ , one from the cache-based neural model $p_{cache}$ , the other originally estimated by the NMT decoder $p_{nmt}$ , how do we integrate these two probabilities? Here, we introduce a gating mechanism to combine them, and word prediction probabilities on the vocabulary of NMT are updated by combining the two probabilities through linear interpolation between the NMT probability and cache-based neural model probability. The final word prediction probability for $y_t$ is calculated as follows: ",
"$$p(y_t|y_{<t},x) = (1 - \\alpha _t)p_{cache}(y_t|y_{<t},x) + \\alpha _tp_{nmt}(y_t|y_{<t},x)$$ (Eq. 26) ",
"Notice that if $y_t$ is not in the cache, we set $p_{cache}(y_t|y_{<t},x) = 0$ , where $\\alpha _t$ is the gate and computed as follows: ",
"$$\\alpha _t = g_{gate}(f_{gate}(s_t,c_t,y_{t-1}))$$ (Eq. 27) ",
"where $f_{gate}$ is a non-linear function and $g_{gate}$ is sigmoid function.",
"We use the contextual elements of $s_t, c_t, y_{t-1}$ to score the current target word occurring in the cache (Eq. (6)) and to estimate the gate (Eq. (9)). If the target word is consistent with the context and in the cache at the same time, the probability of the target word will be high.",
"Finally, we train the proposed cache model jointly with the NMT model towards minimizing the negative log-likelihood on the training corpus. The cost function is computed as follows: ",
"$$L(\\theta ) = -\\sum _{i=1}^N \\sum _{t=1}^Tlogp(y_t|y_{<t},x)$$ (Eq. 28) ",
"where $\\theta $ are all parameters in the cache-based NMT model."
],
[
"Our cache-based NMT system works as follows:",
"When the decoder shifts to a new test document, clear the topic and dynamic cache.",
"Obtain target topical words for the new test document as described in Section 4.1 and fill them in the topic cache.",
"Clear the dynamic cache when translating the first sentence of the test document.",
"For each sentence in the new test document, translate it with the proposed cache-based NMT and continuously expands the dynamic cache with newly generated target words and target words obtained from the best translation hypothesis of previous sentences.",
"In this way, the topic cache can provide useful global information at the beginning of the translation process while the dynamic cache is growing with the progress of translation."
],
[
"We evaluated the effectiveness of the proposed cache-based neural model for neural machine translation on NIST Chinese-English translation tasks."
],
[
"We selected corpora LDC2003E14, LDC2004T07, LDC2005T06, LDC2005T10 and a portion of data from the corpus LDC2004T08 (Hong Kong Hansards/Laws/News) as our bilingual training data, where document boundaries are explicitly kept. In total, our training data contain 103,236 documents and 2.80M sentences. On average, each document consists of 28.4 sentences. We chose NIST05 dataset (1082 sentence pairs) as our development set, and NIST02, NIST04, NIST06 (878, 1788, 1664 sentence pairs. respectively) as our test sets. We compared our proposed model against the following two systems:",
"Moses BIBREF12 : an off-the-shelf phrase-based translation system with its default setting.",
"RNNSearch*: our in-house attention-based NMT system which adopts the feedback attention as described in Section 3 .",
"For Moses, we used the full training data to train the model. We ran GIZA++ BIBREF13 on the training data in both directions, and merged alignments in two directions with “grow-diag-final” refinement rule BIBREF14 to obtain final word alignments. We trained a 5-gram language model on the Xinhua portion of GIGA-WORD corpus using SRILM Toolkit with a modified Kneser-Ney smoothing.",
"For RNNSearch, we used the parallel corpus to train the attention-based NMT model. The encoder of RNNSearch consists of a forward and backward recurrent neural network. The word embedding dimension is 620 and the size of a hidden layer is 1000. The maximum length of sentences that we used to train RNNSearch in our experiments was set to 50 on both Chinese and English side. We used the most frequent 30K words for both Chinese and English. We replaced rare words with a special token “UNK”. Dropout was applied only on the output layer and the dropout rate was set to 0.5. All the other settings were the same as those in BIBREF1 . Once the NMT model was trained, we adopted a beam search to find possible translations with high probabilities. We set the beam width to 10.",
"For the proposed cache-based NMT model, we implemented it on the top of RNNSearch*. We set the size of the dynamic and topic cache $|c_d|$ and $|c_t|$ to 100, 200, respectively. For the dynamic cache, we only kept those most recently-visited items. For the LDA tool, we set the number of topics considered in the model to 100 and set the number of topic words that are used to fill the topic cache to 200. The parameter $\\alpha $ and $\\beta $ of LDA were set to 0.5 and 0.1, respectively. We used a feedforward neural network with two hidden layers to define $g_{cache}$ (Equation (6)) and $f_{gate}$ (Equation (9)). For $f_{gate}$ , the number of units in the two hidden layers were set to 500 and 200 respectively. For $g_{cache}$ , the number of units in the two hidden layers were set to 1000 and 500 respectively. We used a pre-training strategy that has been widely used in the literature to train our proposed model: training the regular attention-based NMT model using our implementation of RNNSearch*, and then using its parameters to initialize the parameters of the proposed model, except for those related to the operations of the proposed cache model.",
"We used the stochastic gradient descent algorithm with mini-batch and Adadelta to train the NMT models. The mini-batch was set to 80 sentences and decay rates $\\rho $ and $\\epsilon $ of Adadelta were set to 0.95 and $10^{-6}$ ."
],
[
"Table 1 shows the results of different models measured in terms of BLEU score. From the table, we can find that our implementation RNNSearch* using the feedback attention and dropout outperforms Moses by 3.23 BLEU points. The proposed model $RNNSearch*_{+Cd}$ achieves an average gain of 1.01 BLEU points over RNNSearch* on all test sets. Further, the model $RNNSearch*_{+Cd, Ct}$ achieves an average gain of 1.60 BLEU points over RNNSearch*, and it outperforms Moses by 4.83 BLEU points. These results strongly suggest that the dynamic and topic cache are very helpful and able to improve translation quality in document translation."
],
[
"In order to validate the effectiveness of the gating mechanism used in the cache-based neural model, we set a fixed gate value for $RNNSearch*_{+Cd,Ct}$ , in other words, we use a mixture of probabilities with fixed proportions to replace the gating mechanism that automatically learns weights for probability mixture.",
"Table 2 displays the result. When we set the gate $\\alpha $ to a fixed value 0.3, the performance has an obvious decline comparing with that of $RNNSearch*_{+Cd,Ct}$ in terms of BLEU score. The performance is even worse than RNNSearch* by 10.11 BLEU points. Therefore without a good mechanism, the cache-based neural model cannot be appropriately integrated into NMT. This shows that the gating mechanism plays a important role in $RNNSearch*_{+Cd,Ct}$ ."
],
[
"When the NMT decoder translates the first sentence of a document, the dynamic cache is empty. In this case, we hope that the topic cache will provide document-level information for translating the first sentence. We therefore further investigate how the topic cache influence the translation of the first sentence in a document. We count and calculate the average number of words that appear in both the translations of the beginning sentences of documents and the topic cache.",
"The statistical results are shown in Table 3. Without using the cache model, RNNSearch* generates translations that contain words from the topic cache as these topic words are tightly related to documents being translated. With the topic cache, our neural cache model enables the translations of the first sentences to be more relevant to the global topics of documents being translated as these translations contain more words from the topic cache that describes these documents. As the dynamic cache is empty when the decoder translates the beginning sentences, the topic cache is complementary to such a cold cache at the start. Comparing the numbers of translations generated by our model and human translations (Reference in Table 3), we can find that with the help of the topic cache, translations of the first sentences of documents are becoming closer to human translations."
],
[
"As shown above, the topic cache is able to influence both the translations of beginning sentences and those of subsequent sentences while the dynamic cache built from translations of preceding sentences has an impact on the translations of subsequent sentences. We further study what roles the dynamic and topic cache play in the translation process. For this aim, we calculate the average number of words in translations generated by $RNNSearch*_{+Cd,Ct}$ that are also in the caches. During the counting process, stop words and “UNK” are removed from sentence and document translations. Table 4 shows the results. If only the topic cache is used ([ $document \\in [Ct]$ , $sentence \\in (Ct)$ ] in Table 4), the cache still can provide useful information to help NMT translate sentences and documents. 28.3 words per document and 2.39 words per sentence are from the topic cache. When both the dynamic and topic cache are used ([ $document \\in [Ct,Cd]$ , $sentence \\in (Ct,Cd)$ ] in Table 4), the numbers of words that both occur in sentence/document translations and the two caches sharply increase from 2.61/30.27 to 6.73/81.16. The reason for this is that words appear in preceding sentences will have a large probability of appearing in subsequent sentences. This shows that the dynamic cache plays a important role in keeping document translations consistent by reusing words from preceding sentences.",
"We also provide two translation examples in Table 5. We can find that RNNSearch* generates different translations “operations” and “actions” for the same chinese word “行动(xingdong)”, while our proposed model produces the same translation “actions”."
],
[
"We want to further study how the proposed cache-based neural model influence coherence in document translation. For this, we follow Lapata2005Automatic to measure coherence as sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between two sentences is determined by the cosine of their means. ",
"$$sim(S_1,S_2) = cos(\\mu (\\vec{S_1}),\\mu (\\vec{S_2})) \\\\$$ (Eq. 46) ",
"where $\\mu (\\vec{S_i})=\\frac{1}{|S_i|}\\sum _{\\vec{w} \\in S_i}\\vec{w}$ , and $\\vec{w}$ is the vector for word $w$ .",
"We use Word2Vec to get the distributed vectors of words and English Gigaword fourth Edition as training data to train Word2Vec. We consider that embeddings from word2vec trained on large monolingual corpus can well encode semantic information of words. We set the dimensionality of word embeddings to 200. Table 6 shows the average cosine similarity of adjacent sentences on all test sets. From the table, we can find that the $RNNSearch*_{+Cd,Ct}$ model produces better coherence in document translation than RNNSearch* in term of cosine similarity."
],
[
"In this paper, we have presented a novel cache-based neural model for NMT to capture the global topic information and inter-sentence cohesion dependencies. We use a gating mechanism to integrate both the topic and dynamic cache into the proposed neural cache model. Experiment results show that the cache-based neural model achieves consistent and significant improvements in translation quality over several state-of-the-art NMT and SMT baselines. Further analysis reveals that the topic cache and dynamic cache are complementary to each other and that both are able to guide the NMT decoder to use topical words and to reuse words from recently translated sentences as next word predictions."
],
[
"The present research was supported by the National Natural Science Foundation of China (Grant No. 61622209). We would like to thank three anonymous reviewers for their insightful comments."
]
],
"section_name": [
"Related Work",
"Attention-based NMT",
"Encoder",
"Decoder",
"Attention Model",
"The Cache-based Neural Model",
"Dynamic Cache and Topic Cache",
"The Model",
"Decoding Process",
"Experimentation",
"Experimental Setting",
"Experimental Results",
"Effect of the Gating Mechanism",
"Effect of the Topic Cache",
"Analysis on the Cache-based Neural Model",
"Analysis on Translation Coherence",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"24b8501e77da8e331182557dea36f83fd31de3e7"
],
"answer": [
{
"evidence": [
"We want to further study how the proposed cache-based neural model influence coherence in document translation. For this, we follow Lapata2005Automatic to measure coherence as sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between two sentences is determined by the cosine of their means."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"we follow Lapata2005Automatic to measure coherence as sentence similarity"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"594e0b1297abe0ad3e2555ad27eedfb59c442bb9"
]
},
{
"annotation_id": [
"168fae5dca1b8acf95dd0235b9633bcf0905c4c1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.",
"FLOAT SELECTED: Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.",
"FLOAT SELECTED: Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets."
],
"extractive_spans": [],
"free_form_answer": "BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.",
"FLOAT SELECTED: Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.",
"FLOAT SELECTED: Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"594e0b1297abe0ad3e2555ad27eedfb59c442bb9"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Did the authors evaluate their system output for coherence?",
"What evaluations did the authors use on their system?"
],
"question_id": [
"9d016eb3913b41f7a18c6fa865897c12b5fe0212",
"c1c611409b5659a1fd4a870b6cc41f042e2e9889"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Architecture of NMT with the neural cache model. Pcache is the probability for a next target word estimated by the cache-based neural model.",
"Figure 2: Schematic diagram of the topic projection during the testing process.",
"Figure 3: Architecture of the cache model.",
"Table 1: Experiment results on the NIST Chinese-English translation tasks. [+Cd] is the proposed model with the dynamic cache. [+Cd,Ct] is the proposed model with both the dynamic and topic cache. The BLEU scores are case-insensitive. Avg means the average BLEU score on all test sets.",
"Table 2: Effect of the gating mechanism. [+α=0.3] is the [+Cd,Ct] with a fixed gate value 0.3.",
"Table 3: The average number of words in translations of beginning sentences of documents that are also in the topic cache. Reference represents the average number of words in four human translations that are also in the topic cache.",
"Table 4: The average number of words in translations generated byRNNSearch∗+Cd,Ct that are also in the dynamic and topic cache. [document/sentence ∈ [Ct]] denote the average number of words that are in both document/sentence translations and the topic cache. [document/sentence ∈ [Cd,Ct]] denote the average number of words occurring in both document/sentence translations and the two caches.",
"Table 5: Translation examples on the test set. SRC for source sentences, REF for human translations. These two sentences (1) and (2) are in the same document.",
"Table 6: The average cosine similarity of adjacent sentences (coherence) on all test sets."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"9-Table5-1.png",
"10-Table6-1.png"
]
} | [
"What evaluations did the authors use on their system?"
] | [
[
"1711.11221-10-Table6-1.png",
"1711.11221-8-Table1-1.png",
"1711.11221-9-Table3-1.png"
]
] | [
"BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence."
] | 255 |
1912.07025 | Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts | Historical palm-leaf manuscript and early paper documents from Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first ever dataset with multi-regional layout annotations for historical Indic manuscripts. To address the challenge of large diversity in scripts and presence of dense, irregular layout elements (e.g. text lines, pictures, multiple documents per image), we adapt a Fully Convolutional Deep Neural Network architecture for fully automatic, instance-level spatial layout parsing of manuscript images. We demonstrate the effectiveness of proposed architecture on images from the Indiscapes dataset. For annotation flexibility and keeping the non-technical nature of domain experts in mind, we also contribute a custom, web-based GUI annotation tool and a dashboard-style analytics portal. Overall, our contributions set the stage for enabling downstream applications such as OCR and word-spotting in historical Indic manuscripts at scale. | {
"paragraphs": [
[
"The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever.",
"Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation.",
"In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics.",
"We make the following contributions:",
"We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3).",
"We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16).",
"We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11)."
],
[
"A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1).",
"A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. lines) being perceived as contiguous blobs.",
"The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38."
],
[
"The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system.",
"Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations.",
"For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11)."
],
[
"A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources.",
"Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components.",
"Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing.",
"Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting."
],
[
"Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future."
],
[
"To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19)."
],
[
"The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12).",
"Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network.",
"Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI).",
"Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap."
],
[
"The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \\times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \\times 28$ in order to match output dimensions of the mask sub-network."
],
[
"The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\\mathcal {L}_{RPN}$ for RPN classification network. Within the task branches, we use categorical cross entropy loss $\\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\\mathcal {L} = \\lambda _{RPN} \\mathcal {L}_{RPN} + \\lambda _{r} \\mathcal {L}_{r} + \\lambda _{bb} \\mathcal {L}_{bb} + \\lambda _{mask} \\mathcal {L}_{mask}$. The weighting factors ($\\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$."
],
[
"During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks."
],
[
"For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$.",
"The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $k$. Suppose there are $r_i$ regions of class $i$ and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \\leqslant r \\leqslant r_i$. The per-class IoU score for class $i$ and document $k$ is computed as ${cwIoU}^d_i = \\frac{\\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in ground-truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \\frac{\\sum _d {cwIoU}^d_i}{N_i}$. In a similar manner, we define class-wise pixel accuracy ${pwAcc}^d_i$ at document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \\frac{\\sum _d {pwAcc}^d_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and in this sense, differs from existing approaches BIBREF24"
],
[
"We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IOUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages.",
"From quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best while Physical degradations are difficult to parse due to the relatively small footprint and inconsistent patterns in degradations. The results show that performance for Penn in Hand (PIH) documents is better compared to Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in latter is the cause. In our approach, two (or more) objects may share the same bounding box in terms of overlap and it is not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem since our model achieves good performance even for very dense Bhoomi document line layouts."
],
[
"Via this paper, we propose Indiscapes, the first dataset with layout annotations for historical Indic manuscripts. We believe that the availability of layout annotations will play a crucial role in reducing the overall complexity for OCR and other tasks such as word-spotting, style-and-content based retrieval. In the long-term, we intend to expand the dataset, not only numerically but also in terms of layout, script and language diversity. As a significant contribution, we have also adapted a deep-network based instance segmentation framework custom modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible, and could be reused for similar manuscripts from Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep-network could be provided to annotators for correction, thus reducing annotation efforts. Finally, we plan to have our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in."
],
[
"We would like to thank Dr. Sai Susarla for enabling access to the Bhoomi document collection. We also thank Poreddy Mourya Kumar Reddy, Gollapudi Sai Vamsi Krishna for their contributions related to dashboard and various annotators for their labelling efforts."
]
],
"section_name": [
"Introduction",
"Related Work",
"Indiscapes: The Indic manuscript dataset",
"Indiscapes: The Indic manuscript dataset ::: Annotation Challenges",
"Indiscapes: The Indic manuscript dataset ::: Annotation Tool",
"Indic Manuscript Layout Parsing",
"Indic Manuscript Layout Parsing ::: Network Architecture",
"Indic Manuscript Layout Parsing ::: Implementation Details",
"Indic Manuscript Layout Parsing ::: Implementation Details ::: Training",
"Indic Manuscript Layout Parsing ::: Implementation Details ::: Inference",
"Indic Manuscript Layout Parsing ::: Evaluation",
"Results",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"16ff9c9f07a060d809fdb92a6e6044c47a21faf3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region types listed at top of the table.",
"FLOAT SELECTED: TABLE I: Counts for various annotated region types in INDISCAPES dataset. The abbreviations used for region types are given below each region type."
],
"extractive_spans": [],
"free_form_answer": "Combined per-pixel accuracy for character line segments is 74.79",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region types listed at top of the table.",
"FLOAT SELECTED: TABLE I: Counts for various annotated region types in INDISCAPES dataset. The abbreviations used for region types are given below each region type."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"edd6026b3573e63afd587768f066b5bdc87c9446"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE III: Scripts in the INDISCAPES dataset."
],
"extractive_spans": [],
"free_form_answer": "508",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE III: Scripts in the INDISCAPES dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"c55585ec881d12ccf06f64dedfe417e3dd1722bb"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What accuracy does CNN model achieve?",
"How many documents are in the Indiscapes dataset?",
"What language(s) are the manuscripts written in?"
],
"question_id": [
"79bb1a1b71a1149e33e8b51ffdb83124c18f3e9c",
"26faad6f42b6d628f341c8d4ce5a08a591eea8c2",
"20be7a776dfda0d3c9dc10270699061cb9bc8297"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
],
"search_query": [
"historical",
"historical",
"historical"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: The five images on the left, enclosed by pink dotted line, are from the BHOOMI palm leaf manuscript collection while the remaining images (enclosed by blue dotted line) are from the ’Penn-in-Hand’ collection (refer to Section III). Note the inter-collection differences, closely spaced and unevenly written text lines, presence of various non-textual layout regions (pictures, holes, library stamps), physical degradation and presence of multiple manuscripts per image. All of these factors pose great challenges for annotation and machine-based parsing.",
"TABLE I: Counts for various annotated region types in INDISCAPES dataset. The abbreviations used for region types are given below each region type.",
"TABLE II: Dataset splits used for learning and inference.",
"TABLE III: Scripts in the INDISCAPES dataset.",
"Fig. 2: Screenshots of our web-based annotator (left) and analytics dashboard (right).",
"Fig. 3: The architecture adopted for Indic Manuscript Layout Parsing. Refer to Section IV for details.",
"TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region types listed at top of the table.",
"TABLE V: AP at IoU thresholds 50, 75 and overall AP averaged over IoU range for test set.",
"Fig. 4: Ground truth annotations (left) and predicted instance segmentations (right) for test set images. Note that we use colored shading only to visualize individual region instances and not to color-code region types. The region label abbreviations are shown alongside the regions. CLS : Character Line Segment, PB : Page Boundary, H : Hole, BL : Boundary Line, CC : Character Component, PD : Physical Degradation."
],
"file": [
"2-Figure1-1.png",
"3-TableI-1.png",
"3-TableII-1.png",
"3-TableIII-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-TableIV-1.png",
"5-TableV-1.png",
"6-Figure4-1.png"
]
} | [
"What accuracy does CNN model achieve?",
"How many documents are in the Indiscapes dataset?"
] | [
[
"1912.07025-3-TableI-1.png",
"1912.07025-5-TableIV-1.png"
],
[
"1912.07025-3-TableIII-1.png"
]
] | [
"Combined per-pixel accuracy for character line segments is 74.79",
"508"
] | 256 |
1709.01256 | Semantic Document Distance Measures and Unsupervised Document Revision Detection | In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for a large scale corpus and implemented in Apache Spark. We demonstrate that our system can more precisely detect revisions than state-of-the-art methods by utilizing the Wikipedia revision dumps https://snap.stanford.edu/data/wiki-meta.html and simulated data sets. | {
"paragraphs": [
[
"It is a common habit for people to keep several versions of documents, which creates duplicate data. A scholarly article is normally revised several times before being published. An academic paper may be listed on personal websites, digital conference libraries, Google Scholar, etc. In major corporations, a document typically goes through several revisions involving multiple editors and authors. Users would benefit from visualizing the entire history of a document. It is worthwhile to develop a system that is able to intelligently identify, manage and represent revisions. Given a collection of text documents, our study identifies revision relationships in a completely unsupervised way. For each document in a corpus we only use its content and the last modified timestamp. We assume that a document can be revised by many users, but that the documents are not merged together. We consider collaborative editing as revising documents one by one.",
"The two research problems that are most relevant to document revision detection are plagiarism detection and revision provenance. In a plagiarism detection system, every incoming document is compared with all registered non-plagiarized documents BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The system returns true if an original copy is found in the database; otherwise, the system returns false and adds the document to the database. Thus, it is a 1-to-n problem. Revision provenance is a 1-to-1 problem as it keeps track of detailed updates of one document BIBREF4 , BIBREF5 . Real-world applications include GitHub, version control in Microsoft Word and Wikipedia version trees BIBREF6 . In contrast, our system solves an n-to-n problem on a large scale. Our potential target data sources, such as the entire web or internal corpora in corporations, contain numerous original documents and their revisions. The aim is to find all revision document pairs within a reasonable time.",
"Document revision detection, plagiarism detection and revision provenance all rely on comparing the content of two documents and assessing a distance/similarity score. The classic document similarity measure, especially for plagiarism detection, is fingerprinting BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Fixed-length fingerprints are created using hash functions to represent document features and are then used to measure document similarities. However, the main purpose of fingerprinting is to reduce computation instead of improving accuracy, and it cannot capture word semantics. Another widely used approach is computing the sentence-to-sentence Levenshtein distance and assigning an overall score for every document pair BIBREF13 . Nevertheless, due to the large number of existing documents, as well as the large number of sentences in each document, the Levenshtein distance is not computation-friendly. Although alternatives such as the vector space model (VSM) can largely reduce the computation time, their effectiveness is low. More importantly, none of the above approaches can capture semantic meanings of words, which heavily limits the performances of these approaches. For instance, from a semantic perspective, “I went to the bank\" is expected to be similar to “I withdrew some money\" rather than “I went hiking.\" Our document distance measures are inspired by the weaknesses of current document distance/similarity measures and recently proposed models for word representations such as word2vec BIBREF14 and Paragraph Vector (PV) BIBREF15 . Replacing words with distributed vector embeddings makes it feasible to measure semantic distances using advanced algorithms, e.g., Dynamic Time Warping (DTW) BIBREF16 , BIBREF17 , BIBREF18 and Tree Edit Distance (TED) BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . Although calculating text distance using DTW BIBREF27 , TED BIBREF28 or Word Mover's Distance (WMV) BIBREF29 has been attempted in the past, these measures are not ideal for large-scale document distance calculation. The first two algorithms were designed for sentence distances instead of document distances. The third measure computes the distance of two documents by solving a transshipment problem between words in the two documents and uses word2vec embeddings to calculate semantic distances of words. The biggest limitation of WMV is its long computation time. We show in Section SECREF54 that our wDTW and wTED measures yield more precise distance scores with much shorter running time than WMV.",
"We recast the problem of detecting document revisions as a network optimization problem (see Section SECREF2 ) and consequently as a set of document distance problems (see Section SECREF4 ). We use trained word vectors to represent words, concatenate the word vectors to represent documents and combine word2vec with DTW or TED. Meanwhile, in order to guarantee reasonable computation time in large data sets, we calculate document distances at the paragraph level with Apache Spark. A distance score is computed by feeding paragraph representations to DTW or TED. Our code and data are publicly available. ",
"The primary contributions of this work are as follows.",
"The rest of this paper is organized in five parts. In Section 2, we clarify related terms and explain the methodology for document revision detection. In Section 3, we provide a brief background on existing document similarity measures and present our wDTW and wTED algorithms as well as the overall process flow. In Section 4, we demonstrate our revision detection results on Wikipedia revision dumps and six simulated data sets. Finally, in Section 5, we summarize some concluding remarks and discuss avenues for future work and improvements."
],
[
"The two requirements for a document INLINEFORM0 being a revision of another document INLINEFORM1 are that INLINEFORM2 has been created later than INLINEFORM3 and that the content of INLINEFORM4 is similar to (has been modified from) that of INLINEFORM5 . More specifically, given a corpus INLINEFORM6 , for any two documents INLINEFORM7 , we want to find out the yes/no revision relationship of INLINEFORM8 and INLINEFORM9 , and then output all such revision pairs.",
"We assume that each document has a creation date (the last modified timestamp) which is readily available from the meta data of the document. In this section we also assume that we have a INLINEFORM0 method and a cut-off threshold INLINEFORM1 . We represent a corpus as network INLINEFORM2 , for example Figure FIGREF5 , in which a vertex corresponds to a document. There is an arc INLINEFORM3 if and only if INLINEFORM4 and the creation date of INLINEFORM5 is before the creation date of INLINEFORM6 . In other words, INLINEFORM7 is a revision candidate for INLINEFORM8 . By construction, INLINEFORM9 is acyclic. For instance, INLINEFORM10 is a revision candidate for INLINEFORM11 and INLINEFORM12 . Note that we allow one document to be the original document of several revised documents. As we only need to focus on revision candidates, we reduce INLINEFORM13 to INLINEFORM14 , shown in Figure FIGREF5 , by removing isolated vertices. We define the weight of an arc as the distance score between the two vertices. Recall the assumption that a document can be a revision of at most one document. In other words, documents cannot be merged. Due to this assumption, all revision pairs form a branching in INLINEFORM15 . (A branching is a subgraph where each vertex has an in-degree of at most 1.) The document revision problem is to find a minimum cost branching in INLINEFORM16 (see Fig FIGREF5 ).",
"The minimum branching problem was earlier solved by BIBREF30 edmonds1967optimum and BIBREF31 velardi2013ontolearn. The details of his algorithm are as follows.",
"In our case, INLINEFORM0 is acyclic and, therefore, the second step never occurs. For this reason, Algorithm SECREF2 solves the document revision problem.",
"Find minimum branching INLINEFORM0 for network INLINEFORM1 ",
"[1]",
"Input: INLINEFORM0 INLINEFORM1 ",
"every vertex INLINEFORM0 Set INLINEFORM1 to correspond to all arcs with head INLINEFORM2 Select INLINEFORM3 such that INLINEFORM4 is minimum INLINEFORM5 ",
"Output: INLINEFORM0 ",
"The essential part of determining the minimum branching INLINEFORM0 is extracting arcs with the lowest distance scores. This is equivalent to finding the most similar document from the revision candidates for every original document."
],
[
"In this section, we first introduce the classic VSM model, the word2vec model, DTW and TED. We next demonstrate how to combine the above components to construct our semantic document distance measures: wDTW and wTED. We also discuss the implementation of our revision detection system."
],
[
"VSM represents a set of documents as vectors of identifiers. The identifier of a word used in this work is the tf-idf weight. We represent documents as tf-idf vectors, and thus the similarity of two documents can be described by the cosine distance between their vectors. VSM has low algorithm complexity but cannot represent the semantics of words since it is based on the bag-of-words assumption.",
"Word2vec produces semantic embeddings for words using a two-layer neural network. Specifically, word2vec relies on a skip-gram model that uses the current word to predict context words in a surrounding window to maximize the average log probability. Words with similar meanings tend to have similar embeddings.",
"DTW was developed originally for speech recognition in time series analysis and has been widely used to measure the distance between two sequences of vectors.",
"Given two sequences of feature vectors: INLINEFORM0 and INLINEFORM1 , DTW finds the optimal alignment for INLINEFORM2 and INLINEFORM3 by first constructing an INLINEFORM4 matrix in which the INLINEFORM5 element is the alignment cost of INLINEFORM6 and INLINEFORM7 , and then retrieving the path from one corner to the diagonal one through the matrix that has the minimal cumulative distance. This algorithm is described by the following formula. DISPLAYFORM0 ",
"TED was initially defined to calculate the minimal cost of node edit operations for transforming one labeled tree into another. The node edit operations are defined as follows.",
"Deletion Delete a node and connect its children to its parent maintaining the order.",
"Insertion Insert a node between an existing node and a subsequence of consecutive children of this node.",
"Substitution Rename the label of a node.",
"Let INLINEFORM0 and INLINEFORM1 be two labeled trees, and INLINEFORM2 be the INLINEFORM3 node in INLINEFORM4 . INLINEFORM5 corresponds to a mapping from INLINEFORM6 to INLINEFORM7 . TED finds mapping INLINEFORM8 with the minimal edit cost based on INLINEFORM9 ",
"where INLINEFORM0 means transferring INLINEFORM1 to INLINEFORM2 based on INLINEFORM3 , and INLINEFORM4 represents an empty node."
],
[
"According to the description of DTW in Section UID14 , the distance between two documents can be calculated using DTW by replacing each element in the feature vectors INLINEFORM0 and INLINEFORM1 with a word vector. However, computing the DTW distance between two documents at the word level is basically as expensive as calculating the Levenshtein distance. Thus in this section we propose an improved algorithm that is more appropriate for document distance calculation.",
"In order to receive semantic representations for documents and maintain a reasonable algorithm complexity, we use word2vec to train word vectors and represent each paragraph as a sequence of vectors. Note that in both wDTW and wTED we take document titles and section titles as paragraphs. Although a more recently proposed model PV can directly train vector representations for short texts such as movie reviews BIBREF15 , our experiments in Section SECREF54 show that PV is not appropriate for standard paragraphs in general documents. Therefore, we use word2vec in our work. Algorithm SECREF20 describes how we compute the distance between two paragraphs based on DTW and word vectors. The distance between one paragraph in a document and one paragraph in another document can be pre-calculated in parallel using Spark to provide faster computation for wDTW and wTED.",
"DistPara",
"[h] Replace the words in paragraphs INLINEFORM0 and INLINEFORM1 with word2vec embeddings: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 calculate INLINEFORM15 Return: INLINEFORM16 "
],
[
"As a document can be considered as a sequence of paragraphs, wDTW returns the distance between two documents by applying another DTW on top of paragraphs. The cost function is exactly the DistPara distance of two paragraphs given in Algorithm SECREF20 . Algorithm SECREF21 and Figure FIGREF22 describe our wDTW measure. wDTW observes semantic information from word vectors, which is fundamentally different from the word distance calculated from hierarchies among words in the algorithm proposed by BIBREF27 liu2007sentence. The shortcomings of their work are that it is difficult to learn semantic taxonomy of all words and that their DTW algorithm can only be applied to sentences not documents.",
"wDTW",
"[h] Represent documents INLINEFORM0 and INLINEFORM1 with vectors of paragraphs: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 DistPara INLINEFORM15 calculate INLINEFORM16 Return: INLINEFORM17 "
],
[
"TED is reasonable for measuring document distances as documents can be easily transformed to tree structures visualized in Figure FIGREF24 . The document tree concept was originally proposed by BIBREF0 si1997check. A document can be viewed at multiple abstraction levels that include the document title, its sections, subsections, etc. Thus for each document we can build a tree-like structure with title INLINEFORM0 sections INLINEFORM1 subsections INLINEFORM2 ... INLINEFORM3 paragraphs being paths from the root to leaves. Child nodes are ordered from left to right as they appear in the document.",
"We represent labels in a document tree as the vector sequences of titles, sections, subsections and paragraphs with word2vec embeddings. wTED converts documents to tree structures and then uses DistPara distances. More formally, the distance between two nodes is computed as follows.",
"The cost of substitution is the DistPara value of the two nodes.",
"The cost of insertion is the DistPara value of an empty sequence and the label of the inserted node. This essentially means that the cost is the sum of the L2-norms of the word vectors in that node.",
"The cost of deletion is the same as the cost of insertion.",
"Compared to the algorithm proposed by BIBREF28 sidorov2015computing, wTED provides different edit cost functions and uses document tree structures instead of syntactic n-grams, and thus wTED yields more meaningful distance scores for long documents. Algorithm SECREF23 and Figure FIGREF28 describe how we calculate the edit cost between two document trees.",
"wTED",
"[1] Convert documents INLINEFORM0 and INLINEFORM1 to trees INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 ",
"Initialize tree edit distance INLINEFORM0 node label INLINEFORM1 node label INLINEFORM2 Update TED mapping cost INLINEFORM3 using INLINEFORM4 DistPara INLINEFORM5 INLINEFORM6 DistPara INLINEFORM7 INLINEFORM8 DistPara INLINEFORM9 ",
"Return: INLINEFORM0 "
],
[
"Our system is a boosting learner that is composed of four modules: weak filter, strong filter, revision network and optimal subnetwork. First of all, we sort all documents by timestamps and pair up documents so that we only compare each document with documents that have been created earlier. In the first module, we calculate the VSM similarity scores for all pairs and eliminate those with scores that are lower than an empirical threshold ( INLINEFORM0 ). This is what we call the weak filter. After that, we apply the strong filter wDTW or wTED on the available pairs and filter out document pairs having distances higher than a threshold INLINEFORM1 . For VSM in Section SECREF32 , we directly filter out document pairs having similarity scores lower than a threshold INLINEFORM2 . The cut-off threshold estimation is explained in Section SECREF30 . The remaining document pairs from the strong filter are then sent to the revision network module. In the end, we output the optimal revision pairs following the minimum branching strategy."
],
[
"Hyperprameter INLINEFORM0 is calibrated by calculating the absolute extreme based on an initial set of documents, i.e., all processed documents since the moment the system was put in use. Based on this set, we calculate all distance/similarity scores and create a histogram, see Figure FIGREF31 . The figure shows the correlation between the number of document pairs and the similarity scores in the training process of one simulated corpus using VSM. The optimal INLINEFORM1 in this example is around 0.6 where the number of document pairs noticeably drops.",
"As the system continues running, new documents become available and INLINEFORM0 can be periodically updated by using the same method."
],
[
"This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods."
],
[
"We denote the following distance/similarity measures.",
"wDTW: Our semantic distance measure explained in Section SECREF21 .",
"wTED: Our semantic distance measure explained in Section SECREF23 .",
"WMD: The Word Mover's Distance introduced in Section SECREF1 . WMD adapts the earth mover's distance to the space of documents.",
"VSM: The similarity measure introduced in Section UID12 .",
"PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .",
"PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 .",
"Our experiments were conducted on an Apache Spark cluster with 32 cores and 320 GB total memory. We implemented wDTW, wTED, WMD, VSM, PV-DTW and PV-TED in Java Spark. The paragraph vectors for PV-DTW and PV-TED were trained by gensim. "
],
[
"In this section, we introduce the two data sets we used for our revision detection experiments: Wikipedia revision dumps and a document revision data set generated by a computer simulation. The two data sets differ in that the Wikipedia revision dumps only contain linear revision chains, while the simulated data sets also contains tree-structured revision chains, which can be very common in real-world data.",
"The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data.",
"We pre-processed the Wikipedia revision dumps using the JWPL Revision Machine BIBREF32 and produced a data set that contains 62,234 documents with 46,354 revisions. As we noticed that short documents just contributed to noise (graffiti) in the data, we eliminated documents that have fewer than three paragraphs and fewer than 300 words. We removed empty lines in the documents and trained word2vec embeddings on the entire corpus. We used the documents occurring in the first INLINEFORM0 of the revision period for INLINEFORM1 calibration, and the remaining documents for test.",
"The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.",
"We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5."
],
[
"We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record.",
"We illustrate the performances of wDTW, wTED, WMD, VSM, PV-DTW and PV-TED on the Wikipedia revision dumps in Figure FIGREF43 . wDTW and wTED have the highest F-measure scores compared to the rest of four measures, and wDTW also have the highest precision and recall scores. Figure FIGREF49 shows the average evaluation results on the simulated data sets. From left to right, the corpus size increases and the revision chains become longer, thus it becomes more challenging to detect document revisions. Overall, wDTW consistently performs the best. WMD is slightly better than wTED. In particular, when the corpus size increases, the performances of WMD, VSM, PV-DTW and PV-TED drop faster than wDTW and wTED. Because the revision operations were randomly selected in each corpus, it is possible that there are non-monotone points in the series.",
"wDTW and wTED perform better than WMD especially when the corpus is large, because they use dynamic programming to find the global optimal alignment for documents. In contrast, WMD relies on a greedy algorithm that sums up the minimal cost for every word. wDTW and wTED perform better than PV-DTW and PV-TED, which indicates that our DistPara distance in Algorithm SECREF20 is more accurate than the Euclidian distance between paragraph vectors trained by PV.",
"We show in Table TABREF53 the average running time of the six distance/similarity measures. In all the experiments, VSM is the fastest, wTED is faster than wDTW, and WMD is the slowest. Running WMD is extremely expensive because WMD needs to solve an INLINEFORM0 sequential transshipment problem for every two documents where INLINEFORM1 is the average number of words in a document. In contrast, by splitting this heavy computation into several smaller problems (finding the distance between any two paragraphs), which can be run in parallel, wDTW and wTED scale much better.",
"Combining Figure FIGREF43 , Figure FIGREF49 and Table TABREF53 we conclude that wDTW yields the most accurate results using marginally more time than VSM, PV-TED and PV-DTW, but much less running time than WMD. wTED returns satisfactory results using shorter time than wDTW."
],
[
"This paper has explored how DTW and TED can be extended with word2vec to construct semantic document distance measures: wDTW and wTED. By representing paragraphs with concatenations of word vectors, wDTW and wTED are able to capture the semantics of the words and thus give more accurate distance scores. In order to detect revisions, we have used minimum branching on an appropriately developed network with document distance scores serving as arc weights. We have also assessed the efficiency of the method of retrieving an optimal revision subnetwork by finding the minimum branching.",
"Furthermore, we have compared wDTW and wTED with several distance measures for revision detection tasks. Our results demonstrate the effectiveness and robustness of wDTW and wTED in the Wikipedia revision dumps and our simulated data sets. In order to reduce the computation time, we have computed document distances at the paragraph level and implemented a boosting learning system using Apache Spark. Although we have demonstrated the superiority of our semantic measures only in the revision detection experiments, wDTW and wTED can also be used as semantic distance measures in many clustering, classification tasks.",
"Our revision detection system can be enhanced with richer features such as author information and writing styles, and exact changes in revision pairs. Another interesting aspect we would like to explore in the future is reducing the complexities of calculating the distance between two paragraphs."
],
[
"This work was supported in part by Intel Corporation, Semiconductor Research Corporation (SRC)."
]
],
"section_name": [
"Introduction",
"Revision Network",
"Distance/similarity Measures",
"Background",
"Semantic Distance between Paragraphs",
"Word Vector-based Dynamic Time Warping",
"Word Vector-based Tree Edit Distance",
"Process Flow",
"Estimating the Cut-off Threshold",
"Numerical Experiments",
"Distance/Similarity Measures",
"Data Sets",
"Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8b0add840d20bf740a040223502d86b77dee5181"
],
"answer": [
{
"evidence": [
"We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record."
],
"extractive_spans": [
"precision",
"recall",
"F-measure"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use precision, recall and F-measure to evaluate the detected revisions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"e5bcb929f7ac154baa12daa401937be57459067b"
],
"answer": [
{
"evidence": [
"The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data."
],
"extractive_spans": [
"eight GB"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1726f69f9e25a1a5f704a4aa45afbfc4fd153ef6"
],
"answer": [
{
"evidence": [
"The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.",
"We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5."
],
"extractive_spans": [],
"free_form_answer": "There are 6 simulated datasets collected which is initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and replacing existing documents",
"highlighted_evidence": [
"We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents.",
"The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts.",
"We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a66b83c113c34aefe009dce1acd436272846ee73"
],
"answer": [
{
"evidence": [
"This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods.",
"We denote the following distance/similarity measures.",
"WMD: The Word Mover's Distance introduced in Section SECREF1 . WMD adapts the earth mover's distance to the space of documents.",
"VSM: The similarity measure introduced in Section UID12 .",
"PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .",
"PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 ."
],
"extractive_spans": [
"WMD",
"VSM",
"PV-DTW",
"PV-TED"
],
"free_form_answer": "",
"highlighted_evidence": [
"This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods.",
"We denote the following distance/similarity measures.",
"WMD: The Word Mover's Distance introduced in Section SECREF1 .",
"VSM: The similarity measure introduced in Section UID12 .",
"PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 .",
"PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What metrics are used to evaluation revision detection?",
"How large is the Wikipedia revision dump dataset?",
"What are simulated datasets collected?",
"Which are the state-of-the-art models?"
],
"question_id": [
"3bfb8c12f151dada259fbd511358914c4b4e1b0e",
"3f85cc5be84479ba668db6d9f614fedbff6d77f1",
"126e8112e26ebf8c19ca7ff3dd06691732118e90",
"be08ef81c3cfaaaf35c7414397a1871611f1a7fd"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Revision network visualization",
"Figure 2: Setting τ",
"Figure 3: Corpora simulation",
"Figure 4: Precision, recall and F-measure on the Wikipedia revision dumps",
"Table 1: A simulated data set",
"Figure 5: Average precision, recall and F-measure on the simulated data sets",
"Table 2: Running time of VSM, PV-TED, PV-DTW, wTED, wDTW and WMD",
"Figure 1: wDTW visualization",
"Figure 2: wTED visualization"
],
"file": [
"3-Figure1-1.png",
"6-Figure2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"7-Table1-1.png",
"8-Figure5-1.png",
"8-Table2-1.png",
"11-Figure1-1.png",
"11-Figure2-1.png"
]
} | [
"What are simulated datasets collected?"
] | [
[
"1709.01256-Data Sets-3",
"1709.01256-Data Sets-4"
]
] | [
"There are 6 simulated datasets collected which is initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and replacing existing documents"
] | 257 |
1902.11049 | Evaluating Rewards for Question Generation Models | Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation. Models are trained using teacher forcing to optimise only the one-step-ahead prediction. However, at test time, the model is asked to generate a whole sequence, causing errors to propagate through the generation process (exposure bias). A number of authors have proposed countering this bias by optimising for a reward that is less tightly coupled to the training data, using reinforcement learning. We optimise directly for quality metrics, including a novel approach using a discriminator learned directly from the training data. We confirm that policy gradient methods can be used to decouple training from the ground truth, leading to increases in the metrics used as rewards. We perform a human evaluation, and show that although these metrics have previously been assumed to be good proxies for question quality, they are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source. | {
"paragraphs": [
[
"Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions BIBREF0 , become more robust to queries BIBREF1 , and to act as automatic tutors BIBREF2 .",
"Recent approaches to question generation have used Seq2Seq BIBREF3 models with attention BIBREF4 and a form of copy mechanism BIBREF5 , BIBREF6 . Such models are trained to generate a plausible question, conditioned on an input document and answer span within that document BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 .",
"There are currently no dedicated question generation datasets, and authors have used the context-question-answer triples available in SQuAD. Only a single question is available for each context-answer pair, and models are trained using teacher forcing BIBREF11 . This lack of diverse training data combined with the one-step-ahead training procedure exacerbates the problem of exposure bias BIBREF12 . The model does not learn how to distribute probability mass over sequences that are valid but different to the ground truth; during inference, the model must predict the whole sequence, and may not be robust to mistakes during decoding.",
"Recent work has investigated training the models directly on a performance based objective, either by optimising for BLEU score BIBREF13 or other quality metrics BIBREF10 . By decoupling the training procedure from the ground truth data, the model is able to explore the space of possible questions and become more robust to mistakes during decoding. While the metrics used often seem to be intuitively good choices, there is an assumption that they are good proxies for question quality which has not yet been confirmed.",
"Our contributions are as follows. We perform fine tuning using a range of rewards, including an adversarial objective. We show that although fine tuning leads to increases in reward scores, the resulting models perform worse when evaluated by human workers. We also demonstrate that the generated questions exploit weaknesses in the reward models."
],
[
"Many of the advances in natural language generation have been led by machine translation BIBREF3 , BIBREF4 , BIBREF6 .",
"Previous work on question generation has made extensive use of these techniques. BIBREF8 use a Seq2Seq based model to generate questions conditioned on context-answer pairs, and build on this work by preprocessing the context to resolve coreferences and adding a pointer network BIBREF9 . Similarly, BIBREF7 use a part-of-speech tagger to augment the embedding vectors. Both authors perform a human evaluation of their models, and show significant improvement over their baseline. BIBREF13 use a similar model, but apply it to the task of generating questions without conditioning on a specific answer span. BIBREF14 use a modified context encoder based on multi-perspective context matching BIBREF15 .",
" BIBREF16 propose a framework for fine tuning using policy gradients, using BLEU and other automatic metrics linked to the ground truth data as the rewards. BIBREF10 describe a Seq2Seq model with attention and a pointer network, with an additional encoding layer for the answer. They also describe a method for further tuning their model on language model and question answering reward objectives using policy gradients. Unfortunately they do not perform any human evaluation to determine whether this tuning led to improved question quality.",
"For the related task of summarisation, BIBREF17 propose a framework for fine tuning a summarisation model using reinforcement learning, with the ROUGE similarity metric used as the reward."
],
[
"The task is to generate a natural language question, conditioned on a document and answer. For example, given the input document “this paper investigates rewards for question generation\" and answer “question generation\", the model should produce a question such as “what is investigated in the paper?\""
],
[
"We use the model architecture described by BIBREF10 . Briefly, this is a Seq2Seq model BIBREF3 with attention BIBREF4 and copy mechanism BIBREF5 , BIBREF6 . BIBREF10 also add an additional answer encoder layer, and initialise the decoder with a hidden state constructed from the final state of the encoder. Beam search BIBREF18 is used to sample from the model at inference time. The model was trained using maximum likelihood before fine tuning was applied. Our implementation achieves a competitive BLEU-4 score BIBREF19 of $13.5$ on the test set used by BIBREF8 , before fine tuning."
],
[
"Generated questions should be formed of language that is both fluent and relevant to the context and answer. We therefore performed fine tuning on a trained model, using rewards given either by the negative perplexity under a LSTM language model, or the F1 score attained by a question answering (QA) system, or a weighted combination of both. The language model is a standard recurrent neural network formed of a single LSTM layer. For the QA system, we use QANet BIBREF1 as implemented by BIBREF20 ."
],
[
"Additionally, we propose a novel approach by learning the reward directly from the training data, using a discriminator detailed in Appendix \"Discriminator architecture\" . We pre-trained the discriminator to predict whether an input question and associated context-answer pair were generated by our model, or originated from the training data. We then used as the reward the probability estimated by the discriminator that a generated question was in fact real. In other words, the generator was rewarded for successfully fooling the discriminator. We also experimented with interleaving updates to the discriminator within the fine tuning phase, allowing the discriminator to become adversarial and adapt alongside the generator.",
"These rewards $R(\\hat{Y})$ were used to update the model parameters via the REINFORCE policy gradient algorithm BIBREF21 , according to $\\nabla \\mathcal {L} = \\nabla \\frac{1}{l} \\sum \\limits _t (\\frac{R(\\hat{Y})-\\mu _R}{\\sigma _R}) \\log p(\\hat{y}_t | \\hat{y}_{< t}, \\mathbf {D}, \\mathbf {A})$ . We teacher forced the decoder with the generated sequence to reproduce the activations calculated during beam search, to enable backpropagation. All rewards were normalised with a simple form of PopArt BIBREF22 , with the running mean $\\mu _R$ and standard deviation $\\sigma _R$ updated online during training. We continued to apply a maximum likelihood training objective during this fine tuning."
],
[
"We report the negative log-likelihood (NLL) of the test set under the different models, as well as the corpus level BLEU-4 score BIBREF19 of the generated questions compared to the ground truth. We also report the rewards achieved on the test set, as the QA, LM and discriminator scores.",
"For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer."
],
[
"Table 2 shows the changes in automatic metrics for models fine tuned on various combinations of rewards, compared to the model without tuning. In all cases, the BLEU score reduced, as the training objective was no longer closely coupled to the training data. In general, models achieved better scores on the metrics on which they were fine tuned. Jointly training on a QA and LM reward resulted in better LM scores than training on only a LM reward. We conclude that fine tuning using policy gradients can be used to attain higher rewards, as expected.",
"Table 3 shows the human evaluation scores for a subset of the fine tuned models. The model fine tuned on a QA and LM objective is rated as significantly worse by human annotators, despite achieving higher scores in the automatic metrics. In other words, the training objective given by these reward sources does not correspond to true question quality, despite them being intuitively good choices.",
"The model fine tuned using an adversarial discriminator has also failed to achieve better human ratings, with the discriminator model unable to learn a useful reward source.",
"Table 1 shows an example where fine tuning has not only failed to improve the quality of generated questions, but has caused the model to exploit the reward source. The model fine tuned on a LM reward has degenerated into producing a loop of words that is evidently deemed probable, while the model trained on a QA reward has learned that it can simply point at the location of the answer. This observation is supported by the metrics; the model fine tuned on a QA reward has suffered a catastrophic worsening in LM score of +226.",
"Figure 1 shows the automatic scores against human ratings for all rated questions. The correlation coefficient between human relevance and automatic QA scores was 0.439, and between fluency and LM score was only 0.355. While the automatic scores are good indicators of whether a question will achieve the lowest human rating or not, they do not differentiate clearly between the higher ratings: training a model on these objectives will not necessarily learn to generate better questions. A good question will likely attain a high QA and LM score, but the inverse is not true; a sequence may exploit the weaknesses of the metrics and achieve a high score despite being unintelligible to a human. We conclude that fine tuning a question generation model on these rewards does not lead to better quality questions."
],
[
"In this paper, we investigated the use of external reward sources for fine tuning question generation models to counteract the lack of task-specific training data. We showed that although fine tuning can be used to attain higher rewards, this does not equate to better quality questions when rated by humans. Using QA and LM rewards as a training objective causes the generator to expose the weaknesses in these models, which in turn suggests a possible use of this approach for generating adversarial training examples for QA models. The QA and LM scores are well correlated with human ratings at the lower end of the scale, suggesting they could be used as part of a reranking or filtering system."
],
[
"We used an architecture based on a modified QANet as shown in Figure 2 , replacing the output layers of the model to produce a single probability. Since the discriminator is also able to consider a full context-question-answer triple as input (as opposed to a context-question pair for the QA task), we fused this information in the output layers.",
"Specifically, we applied max pooling over time to the output of the first two encoders, and we took the mean of the outputs of the third encoder that formed part of the answer span. These three reduced encodings were concatenated, a 64 unit hidden layer with ReLU activation applied, and the output passed through a single unit sigmoid output layer to give the estimated probability that an input context-question-answer triple originated from the ground truth dataset or was generated."
]
],
"section_name": [
"Introduction",
"Background",
"Experimental setup",
"Model description",
"Fine tuning",
"Adversarial training",
"Evaluation",
"Results",
"Conclusion",
"Discriminator architecture"
]
} | {
"answers": [
{
"annotation_id": [
"17e6b7b37247467814f2f6f83917ca3c8623aedd"
],
"answer": [
{
"evidence": [
"For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer."
],
"extractive_spans": [],
"free_form_answer": "rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context",
"highlighted_evidence": [
"For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"somewhat"
],
"question": [
"What human evaluation metrics were used in the paper?"
],
"question_id": [
"bfce2afe7a4b71f9127d4f9ef479a0bfb16eaf76"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question generation"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Example generated questions for various fine-tuning objectives. The answer is highlighted in bold. The model trained on a QA reward has learned to simply point at the answer and exploit the QA model, while the model trained on a language model objective has learned to repeat common phrase templates.",
"Table 2: Changes in automatic evaluation metrics after models were fine tuned on various objectives. QA refers to the F1 score obtained by a question answering system on the generated questions. LM refers to the perplexity of generated questions under a separate language model. The discriminator reward refers to the percentage of generated sequences that fooled the discriminator. Lower LM and NLL scores are better. BLEU scores decreased in all cases.",
"Table 3: Summary of human evaluation of selected models",
"Figure 1: Comparison of human and automatic metrics.",
"Figure 2: Discriminator architecture diagram."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Figure1-1.png",
"6-Figure2-1.png"
]
} | [
"What human evaluation metrics were used in the paper?"
] | [
[
"1902.11049-Evaluation-1"
]
] | [
"rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context"
] | 259 |
1905.06906 | Gated Convolutional Neural Networks for Domain Adaptation | Domain Adaptation explores the idea of how to maximize performance on a target domain, distinct from source domain, upon which the classifier was trained. This idea has been explored for the task of sentiment analysis extensively. The training of reviews pertaining to one domain and evaluation on another domain is widely studied for modeling a domain independent algorithm. This further helps in understanding correlation between domains. In this paper, we show that Gated Convolutional Neural Networks (GCN) perform effectively at learning sentiment analysis in a manner where domain dependant knowledge is filtered out using its gates. We perform our experiments on multiple gate architectures: Gated Tanh ReLU Unit (GTRU), Gated Tanh Unit (GTU) and Gated Linear Unit (GLU). Extensive experimentation on two standard datasets relevant to the task, reveal that training with Gated Convolutional Neural Networks give significantly better performance on target domains than regular convolution and recurrent based architectures. While complex architectures like attention, filter domain specific knowledge as well, their complexity order is remarkably high as compared to gated architectures. GCNs rely on convolution hence gaining an upper hand through parallelization. | {
"paragraphs": [
[
"With the advancement in technology and invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon, Walmart have created a revolutionary impact in the field of consumer business. People buy products online through these companies and write reviews for their products. These consumer reviews act as a bridge between consumers and companies. Through these reviews, companies polish the quality of their services. Sentiment Classification (SC) is one of the major applications of Natural Language Processing (NLP) which aims to find the polarity of text. In the early stages BIBREF0 of text classification, sentiment classification was performed using traditional feature selection techniques like Bag-of-Words (BoW) BIBREF1 or TF-IDF. These features were further used to train machine learning classifiers like Naive Bayes (NB) BIBREF2 and Support Vector Machines (SVM) BIBREF3 . They are shown to act as strong baselines for text classification BIBREF4 . However, these models ignore word level semantic knowledge and sequential nature of text. Neural networks were proposed to learn distributed representations of words BIBREF5 . Skip-gram and CBOW architectures BIBREF6 were introduced to learn high quality word representations which constituted a major breakthrough in NLP. Several neural network architectures like recursive neural networks BIBREF7 and convolutional neural networks BIBREF8 achieved excellent results in text classification. Recurrent neural networks which were proposed for dealing sequential inputs suffer from vanishing BIBREF9 and exploding gradient problems BIBREF10 . To overcome this problem, Long Short Term Memory (LSTM) was introduced BIBREF11 .",
"All these architectures have been successful in performing sentiment classification for a specific domain utilizing large amounts of labelled data. However, there exists insufficient labelled data for a target domain of interest. Therefore, Domain Adaptation (DA) exploits knowledge from a relevant domain with abundant labeled data to perform sentiment classification on an unseen target domain. However, expressions of sentiment vary in each domain. For example, in $\\textit {Books}$ domain, words $\\textit {thoughtful}$ and $\\textit {comprehensive}$ are used to express sentiment whereas $\\textit {cheap}$ and $\\textit {costly}$ are used in $\\textit {Electronics}$ domain. Hence, models should generalize well for all domains. Several methods have been introduced for performing Domain Adaptation. Blitzer BIBREF12 proposed Structural Correspondence Learning (SCL) which relies on pivot features between source and target domains. Pan BIBREF13 performed Domain Adaptation using Spectral Feature Alignment (SFA) that aligns features across different domains. Glorot BIBREF14 proposed Stacked Denoising Autoencoder (SDA) that learns generalized feature representations across domains. Zheng BIBREF15 proposed end-to-end adversarial network for Domain Adaptation. Qi BIBREF16 proposed a memory network for Domain Adaptation. Zheng BIBREF17 proposed a Hierarchical transfer network relying on attention for Domain Adaptation.",
"However, all the above architectures use a different sub-network altogether to incorporate domain agnostic knowledge and is combined with main network in the final layers. This makes these architectures computationally intensive. To address this issue, we propose a Gated Convolutional Neural Network (GCN) model that learns domain agnostic knowledge using gated mechanism BIBREF18 . Convolution layers learns the higher level representations for source domain and gated layer selects domain agnostic representations. Unlike other models, GCN doesn't rely on a special sub-network for learning domain agnostic representations. As, gated mechanism is applied on Convolution layers, GCN is computationally efficient."
],
[
"Traditionally methods for tackling Domain Adaptation are lexicon based. Blitzer BIBREF19 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between selected features and the source domain labels. SFA BIBREF13 method argues that pivot features selected from source domain cannot attest a representation of target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain independent words via simultaneously co-clustering them in a common latent space. SDA BIBREF14 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu BIBREF20 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN).",
"Gated convolutional neural networks have achieved state-of-art results in language modelling BIBREF18 . Since then, they have been used in different areas of natural language processing (NLP) like sentence similarity BIBREF21 and aspect based sentiment analysis BIBREF22 ."
],
[
"In this section, we introduce a model based on Gated Convolutional Neural Networks for Domain Adaptation. We present the problem definition of Domain Adaptation, followed by the architecture of the proposed model."
],
[
"Given a source domain $D_{S}$ represented as $D_{S}$ = { $(x_{s_{1}},y_{s_{1}})$ , $(x_{s_{2}},y_{s_{2}})$ .... $(x_{s_{n}},y_{s_{n}})$ } where $x_{s_{i}} \\in \\mathbb {R}$ represents the vector of $i^{th}$ source text and $y_{s_{i}}$ represents the corresponding source domain label. Let $T_{S}$ represent the task in source domain. Given a target domain $D_{T}$ represented as $D_{S}$0 = { $D_{S}$1 , $D_{S}$2 .... $D_{S}$3 }, where $D_{S}$4 represents the vector of $D_{S}$5 target text and $D_{S}$6 represents corresponding target domain label. Let $D_{S}$7 represent the task in target domain. Domain Adaptation (DA) is defined by the target predictive function $D_{S}$8 calculated using the knowledge of $D_{S}$9 and $(x_{s_{1}},y_{s_{1}})$0 where $(x_{s_{1}},y_{s_{1}})$1 but $(x_{s_{1}},y_{s_{1}})$2 . It is imperative to note that the domains are different but only a single task. In this paper, the task is sentiment classification."
],
[
"The proposed model architecture is shown in the Figure 1 . Recurrent Neural Networks like LSTM, GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In case of attention based models, the attention layer has to wait for outputs from all timesteps. Hence, these models fail to take the advantage of parallelism either. Since, proposed model is based on convolution layers and gated mechanism, it can be parallelized efficiently. The convolution layers learn higher level representations for the source domain. The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling.",
"Let $I$ denote the input sentence represented as $I$ = { $w_{1}$ $w_{2}$ $w_{3}$ ... $w_{N}$ } where $w_{i}$ represents the $i_{th}$ word in $I$ and $N$ is the maximum sentence length considered. Let $I$0 be the vocabulary size for each dataset and $I$1 denote the word embedding matrix where each $I$2 is a $I$3 dimensional vector. Input sentences whose length is less than $I$4 are padded with 0s to reach maximum sentence length. Words absent in the pretrained word embeddings are initialized to 0s. Therefore each input sentence $I$5 is converted to $I$6 dimensional vector. Convolution operation is applied on $I$7 with kernel $I$8 . The convolution operation is one-dimensional, applied with a fixed window size across words. We consider kernel size of 3,4 and 5. The weight initialization of these kernels is done using glorot uniform BIBREF23 . Each kernel is a feature detector which extracts patterns from n-grams. After convolution we obtain a new feature map $I$9 = [ $w_{1}$0 ] for each kernel $w_{1}$1 . ",
"$$C_{i} = f(P_{i:i+h} \\ast W_{a} + b_{a})$$ (Eq. 5) ",
"where $f$ represents the activation function in convolution layer. The gated mechanism is applied on each convolution layer. Each gated layer learns to filter domain agnostic representations for every time step $i$ . ",
"$$S_{i} = g(P_{i:i+h} \\ast W_{s} + b_{s})$$ (Eq. 6) ",
"where $g$ is the activation function used in gated convolution layer. The outputs from convolution layer and gated convolution layer are element wise multiplied to compute a new feature representation $G_{i}$ ",
"$$G_{i} = C_{i} \\times S_{i}$$ (Eq. 7) ",
"Maxpooling operation is applied across each filter in this new feature representation to get the most important features BIBREF8 . As shown in Figure 1 the outputs from maxpooling layer across all filters are concatenated. The concatenated layer is fully connected to output layer. Sigmoid is used as the activation function in the output layer."
],
[
"Gating mechanisms have been effective in Recurrent Neural Networks like GRU and LSTM. They control the information flow through their recurrent cells. In case of GCN, these gated units control the domain information that flows to pooling layers. The model must be robust to change in domain knowledge and should be able to generalize well across different domains. We use the gated mechanisms Gated Tanh Unit (GTU) and Gated Linear Unit (GLU) and Gated Tanh ReLU Unit (GTRU) BIBREF22 in proposed model. The gated architectures are shown in figure 2 . The outputs from Gated Tanh Unit is calculated as $tanh(P \\ast W + c) \\times \\sigma (P \\ast V + c)$ . In case of Gated Linear Unit, it is calculated as $(P \\ast W + c) \\times \\sigma (P \\ast V + c)$ where $tanh$ and $\\sigma $ denotes Tanh and Sigmoid activation functions respectively. In case of Gated Tanh ReLU Unit, output is calculated as $tanh(P \\ast W + c) \\times relu(P \\ast V + c)$ "
],
[
"Multi Domain Dataset BIBREF19 is a short dataset with reviews from distinct domains namely Books(B), DVD(D), Electronics(E) and Kitchen(K). Each domain consists of 2000 reviews equally divided among positive and negative sentiment. We consider 1280 reviews for training, 320 reviews for validation and 400 reviews for testing from each domain.",
"Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain."
],
[
"To evaluate the performance of proposed model, we consider various baselines like traditional lexicon approaches, CNN models without gating mechanisms and LSTM models.",
"Bag-of-words (BoW) is one of the strongest baselines in text classification BIBREF4 . We consider all the words as features with a minimum frequency of 5. These features are trained using Logistic Regression (LR).",
"TF-IDF is a feature selection technique built upon Bag-of-Words. We consider all the words with a minimum frequency of 5. The features selected are trained using Logistic Regression (LR).",
"Paragraph2vec or doc2vec BIBREF25 is a strong and popularly used baseline for text classification. Paragraph2Vec represents each sentence or paragraph in the form of a distributed representation. We trained our own doc2vec model using DBOW model. The paragraph vectors obtained are trained using Feed Forward Neural Network (FNN).",
"To show the effectiveness of gated layer, we consider a CNN model which does not contain gated layers. Hence, we consider Static CNN model, a popular CNN architecture proposed in Kim BIBREF8 as a baseline.",
"Wang BIBREF26 proposed a combination of Convolutional and Recurrent Neural Network for sentiment Analysis of short texts. This model takes the advantages of features learned by CNN and long-distance dependencies learned by RNN. It achieved remarkable results on benchmark datasets. We report the results using code published by the authors.",
"We offer a comparison with LSTM model with a single hidden layer. This model is trained with equivalent experimental settings as proposed model.",
"In this baseline, attention mechanism BIBREF27 is applied on the top of LSTM outputs across different timesteps."
],
[
"All the models are experimented with approximately matching number of parameters for a solid comparison using a Tesla K80 GPU.",
"Input Each word in the input sentence is converted to a 300 dimensional vector using GloVe pretrained vectors BIBREF28 . A maximum sentence length 100 is considered for all the datasets. Sentences with length less than 100 are padded with 0s.",
"Architecture details: The model is implemented using keras. We considered 100 convolution filters for each of the kernels of sizes 3,4 and 5. To get the same sentence length after convolution operation zero padding is done on the input.",
"Training Each sentence or paragraph is converted to lower case. Stopword removal is not done. A vocabulary size of 20000 is considered for all the datasets. We apply a dropout layer BIBREF29 with a probability of 0.5, on the embedding layer and probability 0.2, on the dense layer that connects the output layer. Adadelta BIBREF30 is used as the optimizer for training with gradient descent updates. Batch-size of 16 is taken for MDD and 50 for ARD. The model is trained for 50 epochs. We employ an early stopping mechanism based on validation loss for a patience of 10 epochs. The models are trained on source domain and tested on unseen target domain in all experiments."
],
[
"The performance of all models on MDD is shown in Tables 2 and 3 while for ARD, in Tables 4 and 5 . All values are shown in accuracy percentage. Furthermore time complexity of each model is presented in Table 1 ."
],
[
"We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.",
"In Figure 3 , we have illustrated the visualization of convolution outputs(kernel size = 3) from the sigmoid gate in GLU across domains. As the kernel size is 3, each row in the output corresponds to a trigram from input sentence. This heat map visualizes values of all 100 filters and their average for every input trigram. These examples demonstrate what the convolution gate learns. Trigrams with domain independent but heavy polarity like “_ _ good” and “_ costly would” have higher weightage. Meanwhile, Trigrams with domain specific terms like “quality functional case” and “sell entire kitchen” get some of the least weights. In Figure 3 (b) example, the trigram “would have to” just consists of function words, hence gets the least weight. While “sell entire kitchen” gets more weight comparatively. This might be because while function words are merely grammatical units which contribute minimal to overall sentiment, domain specific terms like “sell” may contain sentiment level knowledge only relevant within the domain. In such a case it is possible that the filters effectively propagate sentiment level knowledge from domain specific terms as well.",
"We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 .",
"While the gated architectures outperform other baselines, within them as well we make observations. Gated Linear Unit (GLU) performs the best often over other gated architectures. In case of GTU, outputs from Sigmoid and Tanh are multiplied together, this may result in small gradients, and hence resulting in the vanishing gradient problem. However, this will not be the in the case of GLU, as the activation is linear. In case of GTRU, outputs from Tanh and ReLU are multiplied. In ReLU, because of absence of negative activations, corresponding Tanh outputs will be completely ignored, resulting in loss of some domain independent knowledge."
],
[
"In this paper, we proposed Gated Convolutional Neural Network(GCN) model for Domain Adaptation in Sentiment Analysis. We show that gates in GCN, filter out domain dependant knowledge, hence performing better at an unseen target domain. Our experiments reveal that gated architectures outperform other popular recurrent and non-gated architectures. Furthermore, because these architectures rely on convolutions, they take advantage of parellalization, vastly reducing time complexity."
]
],
"section_name": [
"Introduction",
"Related Work",
"Gated Convolutional Neural Networks",
"Problem Definition",
"Model Architecture",
"Gating mechanisms",
"Datasets",
"Baselines",
"Implementation details",
"Results",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"cb93eb69ccaf6c5aeb4a0872eca940f6e7c3de73"
],
"answer": [
{
"evidence": [
"Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain."
],
"extractive_spans": [],
"free_form_answer": "reviews under distinct product categories are considered specific domain knowledge",
"highlighted_evidence": [
"Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"17f76c3bdf4540ead18e680255d62b29b9465324"
],
"answer": [
{
"evidence": [
"We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 .",
"We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models.",
"The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"b8e383cc449251a1ee84b2df1f89fc66aa517156"
],
"answer": [
{
"evidence": [
"The proposed model architecture is shown in the Figure 1 . Recurrent Neural Networks like LSTM, GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In case of attention based models, the attention layer has to wait for outputs from all timesteps. Hence, these models fail to take the advantage of parallelism either. Since, proposed model is based on convolution layers and gated mechanism, it can be parallelized efficiently. The convolution layers learn higher level representations for the source domain. The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling.",
"We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.",
"We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling.",
"The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.",
"We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"",
"",
""
],
"question": [
"For the purposes of this paper, how is something determined to be domain specific knowledge?",
"Does the fact that GCNs can perform well on this tell us that the task is simpler than previously thought?",
"Are there conceptual benefits to using GCNs over more complex architectures like attention?"
],
"question_id": [
"dfbab3cd991f86d998223726617d61113caa6193",
"df510c85c277afc67799abcb503caa248c448ad2",
"d95180d72d329a27ddf2fd5cc6919f469632a895"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Fig. 1: Architecture of the proposed model",
"Fig. 2: Variations in gates of the proposed GCN architecture.",
"Table 1: Average training time for all the models on ARD",
"Table 2: Accuracy scores on Multi Domain Dataset.",
"Table 3: Accuracy scores on Multi Domain Dataset.",
"Table 4: Accuracy scores on Amazon Reviews Dataset.",
"Table 5: Accuracy scores on Amazon Reviews Dataset.",
"Fig. 3: Visualizing outputs from gated convolutions (filter size = 3) of GLU for example sentences, darker indicates higher weightage"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"7-Table1-1.png",
"8-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"10-Figure3-1.png"
]
} | [
"For the purposes of this paper, how is something determined to be domain specific knowledge?"
] | [
[
"1905.06906-Datasets-1"
]
] | [
"reviews under distinct product categories are considered specific domain knowledge"
] | 260 |
1809.09795 | Deep contextualized word representations for detecting sarcasm and irony | Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them, and otherwise offering competitive results. | {
"paragraphs": [
[
"Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 .",
"In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 .",
"The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in \"I love being ignored\". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in \"I love waking up at 5 am\", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 .",
"Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection.",
"We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony."
],
[
"Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 . delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 .",
"Further improvements both in terms of classic and deep models came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 ."
],
[
"The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has been usually exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning.",
"On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags.",
"The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .",
"Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification."
],
[
"We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .",
"In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus.",
"In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis.",
"Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement on the accuracy on the validation set, which we use to select the best models. We also experimented using a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training."
],
[
"Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the development available data, without any other preprocessing other than mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our the models trained using the augmented version of the corresponding datasets.",
"For the case of the SemEval-2018 dataset we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 . As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results.",
"In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm.",
"Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further."
],
[
"We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way."
]
],
"section_name": [
"Introduction",
"Related work",
"Proposed Approach",
"Experimental Setup",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"f0359b9fa0253f4c525798ade165f7b481f56f79"
],
"answer": [
{
"evidence": [
"We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .",
"In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 ",
"Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 .",
" To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"18016d1acfcc7b6103afc803290537c3c1f1fd56"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ."
],
"extractive_spans": [
"SemEval 2018 Task 3",
"BIBREF20",
"BIBREF4",
"SARC 2.0",
"SARC 2.0 pol",
"Sarcasm Corpus V1 (SC-V1)",
"Sarcasm Corpus V2 (SC-V2)"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"e28d5fd6dc9a7a62a4379a2ef6ecd8067f107814"
],
"answer": [
{
"evidence": [
"We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ."
],
"extractive_spans": [
"Twitter",
"Reddit",
"Online Dialogues"
],
"free_form_answer": "",
"highlighted_evidence": [
"We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary.",
"Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol.",
"Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag.",
"Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"688d0d6d3bd1d868fc1805da56a9bbee0719fade"
],
"answer": [
{
"evidence": [
"The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .",
"Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification."
],
"extractive_spans": [],
"free_form_answer": "A bi-LSTM with max-pooling on top of it",
"highlighted_evidence": [
"Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10",
"Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 .",
"Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"4b83c2f3ddd9bea522c8164c3ca418c289cda628"
],
"answer": [
{
"evidence": [
"On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags."
],
"extractive_spans": [
"all caps",
"quotation marks",
"emoticons",
"emojis",
"hashtags"
],
"free_form_answer": "",
"highlighted_evidence": [
"Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English?",
"What are the 7 different datasets?",
"What are the three different sources of data?",
"What type of model are the ELMo representations used in?",
"Which morphosyntactic features are thought to indicate irony or sarcasm?"
],
"question_id": [
"e196e2ce72eb8b2d50732c26e9bf346df6643f69",
"46570c8faaeefecc8232cfc2faab0005faaba35f",
"982d375378238d0adbc9a4c987d633ed16b7f98f",
"bbdb2942dc6de3d384e3a1b705af996a5341031b",
"4ec538e114356f72ef82f001549accefaf85e99c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"irony",
"irony",
"irony",
"irony",
"irony"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.",
"Table 2: Summary of our obtained results."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What type of model are the ELMo representations used in?"
] | [
[
"1809.09795-Proposed Approach-2",
"1809.09795-Proposed Approach-3"
]
] | [
"A bi-LSTM with max-pooling on top of it"
] | 261 |
2003.01769 | Phonetic Feedback for Speech Enhancement With and Without Parallel Speech Data | While deep learning systems have gained significant ground in speech enhancement research, these systems have yet to make use of the full potential of deep learning systems to provide high-level feedback. In particular, phonetic feedback is rare in speech enhancement research even though it includes valuable top-down information. We use the technique of mimic loss to provide phonetic feedback to an off-the-shelf enhancement system, and find gains in objective intelligibility scores on CHiME-4 data. This technique takes a frozen acoustic model trained on clean speech to provide valuable feedback to the enhancement model, even in the case where no parallel speech data is available. Our work is one of the first to show intelligibility improvement for neural enhancement systems without parallel speech data, and we show phonetic feedback can improve a state-of-the-art neural enhancement system trained with parallel speech data. | {
"paragraphs": [
[
"Typical speech enhancement techniques focus on local criteria for improving speech intelligibility and quality. Time-frequency prediction techniques use local spectral quality estimates as an objective function; time domain methods directly predict clean output with a potential spectral quality metric BIBREF0. Such techniques have been extremely successful in predicting a speech denoising function, but also require parallel clean and noisy speech for training. The trained systems implicitly learn the phonetic patterns of the speech signal in the coordinated output of time-domain or time-frequency units. However, our hypothesis is that directly providing phonetic feedback can be a powerful additional signal for speech enhancement. For example, many local metrics will be more attuned to high-energy regions of speech, but not all phones of a language carry equal energy in production (compare /v/ to /ae/).",
"Our proxy for phonetic intelligibility is a frozen automatic speech recognition (ASR) acoustic model trained on clean speech; the loss functions we incorporate into training encourage the speech enhancement system to produce output that is interpretable to a fixed acoustic model as clean speech, by making the output of the acoustic model mimic its behavior under clean speech. This mimic loss BIBREF1 provides key linguistic insights to the enhancement model about what a recognizable phoneme looks like.",
"When no parallel data is available, but transcripts are available, a loss is easily computed against hard senone labels and backpropagated to the enhancement model trained from scratch. Since the clean acoustic model is frozen, the only way for the enhancement model to improve the loss is to make a signal that is more recognizable to the acoustic model. The improvement by this model demonstrates the power of phonetic feedback; very few neural enhancement techniques until now have been able to achieve improvements without parallel data.",
"When parallel data is available, mimic loss works by comparing the outputs of the acoustic model on clean speech with the outputs of the acoustic model on denoised speech. This is a more informative loss than the loss against hard senone labels, and is complimentary to local losses. We show that mimic loss can be applied to an off-the-shelf enhancement system and gives an improvement in intelligibility scores. Our technique is agnostic to the enhancement system as long as it is differentiably trainable.",
"Mimic loss has previously improved performance on robust ASR tasks BIBREF1, but has not yet demonstrated success at enhancement metrics, and has not been used in a non-parallel setting. We seek to demonstrate these advantages here:",
"We show that using hard targets in the mimic loss framework leads to improvements in objective intelligibility metrics when no parallel data is available.",
"We show that when parallel data is available, training the state-of-the-art method with mimic loss improves objective intelligibility metrics."
],
[
"Speech enhancement is a rich field of work with a huge variety of techniques. Spectral feature based enhancement systems have focused on masking approaches BIBREF2, and have gained popularity with deep learning techniques BIBREF3 for ideal ratio mask and ideal binary mask estimation BIBREF4."
],
[
"Perceptual losses are a form of knowledge transfer BIBREF5, which is defined as the technique of adding auxiliary information at train time, to better inform the trained model. The first perceptual loss was introduced for the task of style transfer BIBREF6. These losses depends on a pre-trained network that can disentangle relevant factors. Two examples are fed through the network to generate a loss at a high level of the network. In style transfer, the perceptual loss ensures that the high-level contents of an image remain the same, while allowing the texture of the image to change.",
"For speech-related tasks a perceptual loss has been used to denoise time-domain speech data BIBREF7, where the loss was called a \"deep feature loss\". The perceiving network was trained for acoustic environment detection and domestic audio tagging. The clean and denoised signals are both fed to this network, and a loss is computed at a higher level.",
"Perceptual loss has also been used for spectral-domain data, in the mimic loss framework. This has been used for spectral mapping for robust ASR in BIBREF1 and BIBREF8. The perceiving network in this case is an acoustic model trained with senone targets. Clean and denoised spectral features are fed through the acoustic model, and a loss is computed from the outputs of the network. These works did not evaluate mimic loss for speech enhancement, nor did they develop the framework for use without parallel data."
],
[
"One approach for enhancement without parallel data introduces an adversarial loss to generate realistic masks BIBREF9. However, this work is only evaluated for ASR performance, and not speech enhancement performance.",
"For the related task of voice conversion, a sparse representation was used by BIBREF10 to do conversion without parallel data. This wasn't evaluated on enhancement metrics or ASR metrics, but would prove an interesting approach.",
"Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics."
],
[
"As noted before, we build on the work by Pandey and Wang that denoises the speech signal in the time domain, but computes a mapping loss on the spectral magnitudes of the clean and denoised speech samples. This is possible because the STFT operation for computing the spectral features is fully differentiable. This framework for enhancement lends itself to other spectral processing techniques, such as mimic loss.",
"In order to train this off-the-shelf denoiser using the mimic loss objective, we first train an acoustic model on clean spectral magnitudes. The training objective for this model is cross-entropy loss against hard senone targets. Crucially, the weights of the acoustic model are frozen during the training of the enhancement model. This prevents passing information from enhancement model to acoustic model in a manner other than by producing a signal that behaves like clean speech. This is in contrast to joint training, where the weights of the acoustic model are updated at the same time as the denoising model weights, which usually leads to a degradation in enhancement metrics.",
"Without parallel speech examples, we apply the mimic loss framework by using hard senone targets instead of soft targets. The loss against these hard targets is cross-entropy loss ($L_{CE}$). The senone labels can be gathered from a hard alignment of the transcripts with the noisy or denoised features; the process does not require clean speech samples. Since this method only has access to phone alignments and not clean spectra, we do not expect it to improve the speech quality, but expect it to improve intelligibility.",
"We also ran experiments on different formats for the mimic loss when parallel data is available. Setting the mapping losses to be $L_1$ was determined to be most effective by Pandey and Wang. For the mimic loss, we tried both teacher-student learning with $L_1$ and $L_2$ losses, and knowledge-distillation with various temperature parameters on the softmax outputs. We found that using $L_1$ loss on the pre-softmax outputs performed the best, likely due to the fact that the other losses are also $L_1$. When the loss types are different, one loss type usually comes to dominate, but each loss serves an important purpose here.",
"We provide an example of the effects of mimic loss, both with and without parallel data, by showing the log-mel filterbank features, seen in Figure FIGREF6. A set of relatively high-frequency and low-magnitude features is seen in the highlighted portion of the features. Since local metrics tend to emphasize regions of high energy differences, they miss this important phonetic information. However, in the mimic-loss-trained systems, this information is retained."
],
[
"For all experiments, we use the CHiME-4 corpus, a popular corpus for robust ASR experiments, though it has not often been used for enhancement experiments. During training, we randomly select a channel for each example each epoch, and we evaluate our enhancement results on channel 5 of et05.",
"Before training the enhancement system, we train the acoustic model used for mimic loss on the clean spectral magnitudes available in CHiME-4. Our architecture is a Wide-ResNet-inspired model, that takes a whole utterance and produces a posterior over each frame. The model has 4 blocks of 3 layers, where the blocks have 128, 256, 512, 1024 filters respectively. The first layer of each block has a stride of 2, down-sampling the input. After the convolutional layers, the filters are divided into 16 parts, and each part is fed to a fully-connected layer, so the number of output posterior vectors is the same as the input frames. This is an utterance-level version of the model in BIBREF8.",
"In the case of parallel data, the best results were obtained by training the network for only a few epochs (we used 5). However, when using hard targets, we achieved better results from using the fully-converged network. We suspect that the outputs of the converged network more closely reflect the one-hot nature of the senone labels, which makes training easier for the enhancement model when hard targets are used. On the other hand, only lightly training the acoustic model generates softer targets when parallel data is available.",
"For our enhancement model, we began with the state-of-the-art framework introduced by Pandey and Wang in BIBREF0, called AECNN. We reproduce the architecture of their system, replacing the PReLU activations with leaky ReLU activations, since the performance is similar, but the leaky ReLU network has fewer parameters."
],
[
"We first train this network without the use of parallel data, using only the senone targets, and starting from random weights in the AECNN. In Table TABREF8 we see results for enhancement without parallel data: the cross-entropy loss with senone targets given a frozen clean-speech network is enough to improve eSTOI by 4.3 points. This is a surprising improvement in intelligibility given the lack of parallel data, and demonstrates that phonetic information alone is powerful enough to provide improvements to speech intelligibility metrics. The degradation in SI-SDR performance, a measure of speech quality, is expected, given that the denoising model does not have access to clean data, and may corrupt the phase.",
"We compare also against joint training of the enhancement model with the acoustic model. This is a common technique for robust ASR, but has not been evaluated for enhancement. With the hard targets, joint training performs poorly on enhancement, due to co-adaptation of the enhancement and acoustic model networks. Freezing the acoustic model network is critical since it requires the enhancement model to produce speech the acoustic model sees as “clean.”"
],
[
"In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.",
"We also compare against joint training with an identical setup to the mimic setup (i.e. a combination of three losses: teacher-student loss against the clean outputs, spectral magnitude loss, and time-domain loss). The jointly trained acoustic model is initialized with the weights of the system trained on clean speech. We find that joint training performs much better on the enhancement metrics in this setup, though still not quite as well as the mimic setup. Compared to the previous experiment without parallel data, the presence of the spectral magnitude and time-domain losses likely keep the enhancement output more stable when joint training, at the cost of requiring parallel training data."
],
[
"We have shown that phonetic feedback is valuable for speech enhancement systems. In addition, we show that our approach to this feedback, the mimic loss framework, is useful in many scenarios: with and without the presence of parallel data, in both the enhancement and robust ASR scenarios. Using this framework, we show improvement on a state-of-the-art model for speech enhancement. The methodology is agnostic to the enhancement technique, so may be applicable to other differentiably trained enhancement modules.",
"In the future, we hope to address the reduction in speech quality scores when training without parallel data. One approach may be to add a GAN loss to the denoised time-domain signal, which may help with introduced distortions. In addition, we could soften the cross-entropy loss to an $L_1$ loss by generating \"prototypical\" posterior distributions for each senone, averaged across the training dataset. Mimic loss as a framework allows for a rich space of future possibilities. To that end, we have made our code available at http://github.com/OSU-slatelab/mimic-enhance."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Perceptual Loss",
"Related Work ::: Enhancement Without Parallel Data",
"Mimic Loss for Enhancement",
"Experiments",
"Experiments ::: Without parallel data",
"Experiments ::: With parallel data",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"186ba39454e05f9639db6260d2b306a1537e7783"
],
"answer": [
{
"evidence": [
"Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics."
],
"extractive_spans": [
"a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13"
],
"free_form_answer": "",
"highlighted_evidence": [
"Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"764b4ad6a436ff5d072579453ba166f41ace98c0"
],
"answer": [
{
"evidence": [
"In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.",
"FLOAT SELECTED: Table 2. Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses."
],
"extractive_spans": [],
"free_form_answer": "Improved AECNN-T by 2.1 and AECNN-T-SM BY 0.9",
"highlighted_evidence": [
"In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result.",
"FLOAT SELECTED: Table 2. Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which frozen acoustic model do they use?",
"By how much does using phonetic feedback improve state-of-the-art systems?"
],
"question_id": [
"7dce1b64c0040500951c864fce93d1ad7a1809bc",
"e1b36927114969f3b759cba056cfb3756de474e4"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
" ",
" "
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Operations are listed inside shapes, the circles are operations that are not parameterized, the rectangles represent parameterized operations. The gray operations are not trained, meaning the loss is backpropagated without any updates until the front-end denoiser is reached.",
"Fig. 2. Comparison of a short segment of the log-mel filterbank features of utterance M06 441C020F STR from the CHiME-4 corpus. The generation procedure for the features are as follows: (a) noisy, (b) clean, (c) non-parallel mimic, (d) local losses, (e) local + mimic loss. Highlighted is a region enhanced by mimic loss but ignored by local losses.",
"Table 1. Speech enhancement scores for the state-of-the-art architecture trained from scratch without the parallel clean speech data from the CHiME-4 corpus. Evaluation is done on channel 5 of the simulated et05 data. The joint training is done with an identical setup to the mimic system.",
"Table 2. Speech enhancement scores for the state-of-the-art system trained with the parallel data available in the CHiME4 corpus. Evaluation is done on channel 5 of the simulation et05 data. Mimic loss is applied to the AECNN model trained with time-domain mapping loss only, as well as time-domain and spectral magnitude mapping losses. The joint training system is done with an identical setup to the mimic system with all three losses."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"By how much does using phonetic feedback improve state-of-the-art systems?"
] | [
[
"2003.01769-4-Table2-1.png",
"2003.01769-Experiments ::: With parallel data-0"
]
] | [
"Improved AECNN-T by 2.1 and AECNN-T-SM BY 0.9"
] | 263 |
1806.09103 | Subword-augmented Embedding for Cloze Reading Comprehension | Representation learning is the foundation of machine reading comprehension. In state-of-the-art models, deep learning methods broadly use word and character level representations. However, character is not naturally the minimal linguistic unit. In addition, with a simple concatenation of character and word embedding, previous models actually give a suboptimal solution. In this paper, we propose to use subword rather than character for word embedding enhancement. We also empirically explore different augmentation strategies on subword-augmented embedding to enhance the cloze-style reading comprehension model reader. In detail, we present a reader that augments word embedding with subword-level representation and uses a short list to handle rare words effectively. A thorough examination is conducted to evaluate the comprehensive performance and generalization ability of the proposed reader. Experimental results show that the proposed approach helps the reader significantly outperform the state-of-the-art baselines on various public datasets. | {
"paragraphs": [
[
"This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/",
"A recent hot challenge is to train machines to read and comprehend human languages. Towards this end, various machine reading comprehension datasets have been released, including cloze-style BIBREF0 , BIBREF1 , BIBREF2 and user-query types BIBREF3 , BIBREF4 . Meanwhile, a number of deep learning models are designed to take up the challenges, most of which focus on attention mechanism BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, how to represent word in an effective way remains an open problem for diverse natural language processing tasks, including machine reading comprehension for different languages. Particularly, for a language like Chinese with a large set of characters (typically, thousands of), lots of which are semantically ambiguous, using either word-level or character-level embedding alone to build the word representations would not be accurate enough. This work especially focuses on a cloze-style reading comprehension task over fairy stories, which is highly challenging due to diverse semantic patterns with personified expressions and reference.",
"In real practice, a reading comprehension model or system which is often called reader in literatures easily suffers from out-of-vocabulary (OOV) word issues, especially for the cloze-style reading comprehension tasks when the ground-truth answers tend to include rare words or named entities (NE), which are hardly fully recorded in the vocabulary. This is more challenging in Chinese. There are over 13,000 characters in Chinese while there are only 26 letters in English without regard to punctuation marks. If a reading comprehension system cannot effectively manage the OOV issues, the performance will not be semantically accurate for the task.",
"Commonly, words are represented as vectors using either word embedding or character embedding. For the former, each word is mapped into low dimensional dense vectors from a lookup table. Character representations are usually obtained by applying neural networks on the character sequence of the word, and their hidden states are obtained to form the representation. Intuitively, word-level representation is good at catching global context and dependency relationships between words, while character embedding helps for dealing with rare word representation.",
"However, the minimal meaningful unit below word usually is not character, which motivates researchers to explore the potential unit (subword) between character and word to model sub-word morphologies or lexical semantics. In fact, morphological compounding (e.g. sunshine or playground) is one of the most common and productive methods of word formation across human languages, which inspires us to represent word by meaningful sub-word units. Recently, researchers have started to work on morphologically informed word embeddings BIBREF11 , BIBREF12 , aiming at better capturing syntactic, lexical and morphological information. With ready subwords, we do not have to work with characters, and segmentation could be stopped at the subword-level to reach a meaningful representation.",
"In this paper, we present various simple yet accurate subword-augmented embedding (SAW) strategies and propose SAW Reader as an instance. Specifically, we adopt subword information to enrich word embedding and survey different SAW operations to integrate word-level and subword-level embedding for a fine-grained representation. To ensure adequate training of OOV and low-frequency words, we employ a short list mechanism. Our evaluation will be performed on three public Chinese reading comprehension datasets and one English benchmark dataset for showing our method is also effective in multi-lingual case."
],
[
"The concerned reading comprehension task can be roughly categorized as user-query type and cloze-style according to the answer form. Answers in the former are usually a span of texts while in the cloze-style task, the answers are words or phrases which lets the latter be the harder-hit area of OOV issues, inspiring us to select the cloze-style as our testbed for SAW strategies. Our preliminary study shows even for the advanced word-character based GA reader, OOV answers still account for nearly 1/5 in the error results. This also motivates us to explore better representations to further performance improvement.",
"The cloze-style task in this work can be described as a triple INLINEFORM0 , where INLINEFORM1 is a document (context), INLINEFORM2 is a query over the contents of INLINEFORM3 , in which a word or phrase is the right answer INLINEFORM4 . This section will introduce the proposed SAW Reader in the context of cloze-style reading comprehension. Given the triple INLINEFORM5 , the SAW Reader will be built in the following steps."
],
[
"Word in most languages usually can be split into meaningful subword units despite of the writing form. For example, “indispensable\" could be split into the following subwords: INLINEFORM0 .",
"In our implementation, we adopt Byte Pair Encoding (BPE) BIBREF13 which is a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence by a single, unused byte. BPE allows for the representation of an open vocabulary through a fixed-size vocabulary of variable-length character sequences, making it a very suitable word segmentation strategy for neural network models.",
"The generalized framework can be described as follows. Firstly, all the input sequences (strings) are tokenized into a sequence of single-character subwords, then we repeat,",
"Count all bigrams under the current segmentation status of all sequences.",
"Find the bigram with the highest frequency and merge them in all the sequences. Note the segmentation status is updating now.",
"If the merging times do not reach the specified number, go back to 1, otherwise the algorithm ends.",
"In BIBREF14 , BPE is adopted to segment infrequent words into sub-word units for machine translation. However, there is a key difference between the motivations for subword segmentation. We aim to refine the word representations by using subwords, for both frequent and infrequent words, which is more generally motivated. To this end, we adaptively tokenize words in multi-granularity by controlling the merging times."
],
[
"Our subwords are also formed as character n-grams, do not cross word boundaries. After using unsupervised segmentation methods to split each word into a subword sequence, an augmented embedding (AE) is to straightforwardly integrate word embedding INLINEFORM0 and subword embedding INLINEFORM1 for a given word INLINEFORM2 . INLINEFORM3 ",
" where INLINEFORM0 denotes the detailed integration operation. In this work, we investigate concatenation (concat), element-wise summation (sum) and element-wise multiplication (mul). Thus, each document INLINEFORM1 and query INLINEFORM2 is represented as INLINEFORM3 matrix where INLINEFORM4 denotes the dimension of word embedding and INLINEFORM5 is the number of words in the input.",
"Subword embedding could be useful to refine the word embedding in a finer-grained way, we also consider improving word representation from itself. For quite a lot of words, especially those rare ones, their word embedding is extremely hard to learn due to the data sparse issue. Actually, if all the words in the dataset are used to build the vocabulary, the OOV words from the test set will not obtain adequate training. If they are initiated inappropriately, either with relatively high or low weights, they will harm the answer prediction. To alleviate the OOV issues, we keep a short list INLINEFORM0 for specific words. INLINEFORM1 ",
"If INLINEFORM0 is in INLINEFORM1 , the immediate word embedding INLINEFORM2 is indexed from word lookup table INLINEFORM3 where INLINEFORM4 denotes the size (recorded words) of lookup table. Otherwise, it will be represented as the randomly initialized default word (denoted by a specific mark INLINEFORM5 ). Note that, this is intuitively like “guessing” the possible unknown words (which will appear during test) from the vocabulary during training and only the word embedding of the OOV words will be replaced by INLINEFORM6 while their subword embedding INLINEFORM7 will still be processed using the original word. In this way, the OOV words could be tuned sufficiently with expressive meaning after training. During test, the word embedding of unknown words would not severely bias its final representation. Thus, INLINEFORM8 ( INLINEFORM9 ) can be rewritten as INLINEFORM10 ",
"In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation.",
"The subword embedding INLINEFORM0 is generated by taking the final outputs of a bidirectional gated recurrent unit (GRU) BIBREF15 applied to the embeddings from a lookup table of subwords. The structure of GRU used in this paper are described as follows. INLINEFORM1 ",
" where INLINEFORM0 denotes the element-wise multiplication. INLINEFORM1 and INLINEFORM2 are the reset and update gates respectively, and INLINEFORM3 are the hidden states. A bi-directional GRU (BiGRU) processes the sequence in both forward and backward directions. Subwords of each word are successively fed to forward GRU and backward GRU to obtain the internal features of two directions. The output for each input is the concatenation of the two vectors from both directions: INLINEFORM4 . Then, the output of BiGRUs is passed to a fully connected layer to obtain the final subword embedding INLINEFORM5 . INLINEFORM6 "
],
[
"Our attention module is based on the Gated attention Reader (GA Reader) proposed by BIBREF9 . We choose this model due to its simplicity with comparable performance so that we can focus on the effectiveness of SAW strategies. This module can be described in the following two steps. After augmented embedding, we use two BiGRUs to get contextual representations of the document and query respectively, where the representation of each word is formed by concatenating the forward and backward hidden states. INLINEFORM0 ",
" For each word INLINEFORM0 in INLINEFORM1 , we form a word-specific representation of the query INLINEFORM2 using soft attention, and then adopt element-wise product to multiply the query representation with the document word representation. INLINEFORM3 ",
" where INLINEFORM0 denotes the multiplication operator to model the interactions between INLINEFORM1 and INLINEFORM2 . Then, the document contextual representation INLINEFORM3 is gated by query representation.",
"Suppose the network has INLINEFORM0 layers. At each layer, the document representation INLINEFORM1 is updated through above attention learning. After going through all the layers, our model comes to answer prediction phase. We use all the words in the document to form the candidate set INLINEFORM2 . Let INLINEFORM3 denote the INLINEFORM4 -th intermediate output of query representation INLINEFORM5 and INLINEFORM6 represent the full output of document representation INLINEFORM7 . The probability of each candidate word INLINEFORM8 as being the answer is predicted using a softmax layer over the inner-product between INLINEFORM9 and INLINEFORM10 . INLINEFORM11 ",
" where vector INLINEFORM0 denotes the probability distribution over all the words in the document. Note that each word may occur several times in the document. Thus, the probabilities of each candidate word occurring in different positions of the document are summed up for final prediction. INLINEFORM1 ",
" where INLINEFORM0 denotes the set of positions that a particular word INLINEFORM1 occurs in the document INLINEFORM2 . The training objective is to maximize INLINEFORM3 where INLINEFORM4 is the correct answer.",
"Finally, the candidate word with the highest probability will be chosen as the predicted answer. INLINEFORM0 ",
"Different from recent work employing complex attention mechanisms BIBREF5 , BIBREF7 , BIBREF16 , our attention mechanism is much more simple with comparable performance so that we can focus on the effectiveness of SAW strategies."
],
[
"To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document.",
"Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task.",
"Throughout this paper, we use the same model setting to make fair comparisons. According to our preliminary experiments, we report the results based on the following settings. The default integration strategy is element-wise product. Word embeddings were 200 INLINEFORM0 and pre-trained by word2vec BIBREF18 toolkit on Wikipedia corpus. Subword embedding were 100 INLINEFORM1 and randomly initialized with the uniformed distribution in the interval [-0:05; 0:05]. Our model was implemented using the Theano and Lasagne Python libraries. We used stochastic gradient descent with ADAM updates for optimization BIBREF19 . The batch size was 64 and the initial learning rate was 0.001 which was halved every epoch after the second epoch. We also used gradient clipping with a threshold of 10 to stabilize GRU training BIBREF20 . We use three attention layers for all experiments. The GRU hidden units for both the word and subword representation were 128. The default frequency filter proportion was 0.9 and the default merging times of BPE was 1,000. We also apply dropout between layers with a dropout rate of 0.5 ."
],
[
"[7]http://www.hfl-tek.com/cmrc2017/leaderboard.html",
"Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability.",
"We also list different integration operations for word and subword embeddings. Table TABREF19 shows the comparisons. From the results, we can see that Word + BPE outperforms Word + Char which indicates subword embedding works essentially. We also observe that mul outperforms the other two operations, concat and sum. This reveals that mul might be more informative than concat and sum operations. The superiority might be due to element-wise product being capable of modeling the interactions and eliminating distribution differences between word and subword embedding. Intuitively, this is also similar to endow subword-aware “attention” over the word embedding. In contrast, concatenation operation may cause too high dimension, which leads to serious over-fitting issues, and sum operation is too simple to prevent from detailed information losing.",
"Since there is no training set for CFT dataset, our model is trained on PD training set. Note that the CFT dataset is harder for the machine to answer because the test set is further processed by human evaluation, and may not be accordance with the pattern of PD dataset. The results on PD and CFT datasets are listed in Table TABREF20 . As we see that, our SAW Reader significantly outperforms the CAS Reader in all types of testing, with improvements of 7.0% on PD and 8.8% on CFT test sets, respectively. Although the domain and topic of PD and CFT datasets are quite different, the results indicate that our model also works effectively for out-of-domain learning.",
"To verify if our method can only work for Chinese, we also evaluate the effectiveness of the proposed method on benchmark English dataset. We use CBT dataset as our testbed to evaluate the performance. For a fair comparison, we simply set the same parameters as before. Table TABREF22 shows the results. We observe that our model outperforms most of the previously public works, with 2.4 % gains on the CBT-NE test set compared with GA Reader which adopts word and character embedding concatenation. Our SAW Reader also achieves comparable performance with FG Reader who adopts neural gates to combine word-level and character-level representations with assistance of extra features including NE, POS and word frequency while our model is much simpler and faster. This result shows our SAW Reader is not restricted to Chinese reading comprehension, but also for other languages."
],
[
"The vocabulary size could seriously involve the segmentation granularity. For BPE segmentation, the resulted subword vocabulary size is equal to the merging times plus the number of single-character types. To have an insight of the influence, we adopt merge times from 0 to 20 INLINEFORM0 , and conduct quantitative study on CMRC-2017 for BPE segmentation. Figure FIGREF25 shows the results. We observe that when the vocabulary size is 1 INLINEFORM1 , the models could obtain the best performance. The results indicate that for a task like reading comprehension the subwords, being a highly flexible grained representation between character and word, tends to be more like characters instead of words. However, when the subwords completely fall into characters, the model performs the worst. This indicates that the balance between word and character is quite critical and an appropriate grain of character-word segmentation could essentially improve the word representation."
],
[
"To investigate the impact of the short list to the model performance, we conduct quantitative study on the filter ratio from [0.1, 0.2, ..., 1]. The results on the CMRC-2017 dataset are depicted in Figure FIGREF25 . As we can see that when INLINEFORM0 our SAW reader can obtain the best performance, showing that building the vocabulary among all the training set is not optimal and properly reducing the frequency filter ratio can boost the accuracy. This is partially attributed to training the model from the full vocabulary would cause serious over-fitting as the rare words representations can not obtain sufficient tuning. If the rare words are not initialized properly, they would also bias the whole word representations. Thus a model without OOV mechanism will fail to precisely represent those inevitable OOV words from test sets."
],
[
"In text understanding tasks, if the ground-truth answer is OOV word or contains OOV word(s), the performance of deep neural networks would severely drop due to the incomplete representation, especially for cloze-style reading comprehension task where the answer is only one word or phrase. In CMRC-2017, we observe questions with OOV answers (denoted as “OOV questions\") account for 17.22% in the error results of the best Word + Char embedding based model. With BPE subword embedding, 12.17% of these “OOV questions\" could be correctly answered. This shows the subword representations could be essentially useful for modeling rare and unseen words.",
"To analyze the reading process of SAW Reader, we draw the attention distributions at intermediate layers as shown in Figure FIGREF28 . We observe the salient candidates in the document can be focused after the pair-wise matching of document and query and the right answer (“The mole\") could obtain a high weight at the very beginning. After attention learning, the key evidence of the answer would be collected and irrelevant parts would be ignored. This shows our SAW Reader is effective at selecting the vital points at the fundamental embedding layer, guiding the attention layers to collect more relevant pieces."
],
[
"Recently, many deep learning models have been proposed for reading comprehension BIBREF16 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF9 , BIBREF26 , BIBREF27 . Notably, Chen2016A conducted an in-depth and thoughtful examination on the comprehension task based on an attentive neural network and an entity-centric classifier with a careful analysis based on handful features. kadlec2016text proposed the Attention Sum Reader (AS Reader) that uses attention to directly pick the answer from the context, which is motivated by the Pointer Network BIBREF28 . Instead of summing the attention of query-to-document, GA Reader BIBREF9 defined an element-wise product to endowing attention on each word of the document using the entire query representation to build query-specific representations of words in the document for accurate answer selection. Wang2017Gated employed gated self-matching networks (R-net) on passage against passage itself to refine passage representation with information from the whole passage. Cui2016Attention introduced an “attended attention\" mechanism (AoA) where query-to-document and document-to-query are mutually attentive and interactive to each other."
],
[
"Distributed word representation plays a fundamental role in neural models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . Recently, character embeddings are widely used to enrich word representations BIBREF37 , BIBREF21 , BIBREF38 , BIBREF39 . Yang2016Words explored a fine-grained gating mechanism (FG Reader) to dynamically combine word-level and character-level representations based on properties of the words. However, this method is computationally complex and it is not end-to-end, requiring extra labels such as NE and POS tags. Seo2016Bidirectional concatenated the character and word embedding to feed a two-layer Highway Network.",
"Not only for machine reading comprehension tasks, character embedding has also benefit other natural language process tasks, such as word segmentation BIBREF40 , machine translation BIBREF38 , tagging BIBREF41 , BIBREF42 and language modeling BIBREF43 , BIBREF44 . However, character embedding only shows marginal improvement due to a lack internal semantics. Lexical, syntactic and morphological information are also considered to improve word representation BIBREF12 , BIBREF45 . Bojanowski2016Enriching proposed to learn representations for character INLINEFORM0 -gram vectors and represent words as the sum of the INLINEFORM1 -gram vectors. Avraham2017The built a model inspired by BIBREF46 , who used morphological tags instead of INLINEFORM2 -grams. They jointly trained their morphological and semantic embeddings, implicitly assuming that morphological and semantic information should live in the same space. However, the linguistic knowledge resulting subwords, typically, morphological suffix, prefix or stem, may not be suitable for different kinds of languages and tasks. Sennrich2015Neural introduced the byte pair encoding (BPE) compression algorithm into neural machine translation for being capable of open-vocabulary translation by encoding rare and unknown words as subword units. Instead, we consider refining the word representations for both frequent and infrequent words from a computational perspective. Our proposed subword-augmented embedding approach is more general, which can be adopted to enhance the representation for each word by adaptively altering the segmentation granularity in multiple NLP tasks."
],
[
"This paper presents an effective neural architecture, called subword-augmented word embedding to enhance the model performance for the cloze-style reading comprehension task. The proposed SAW Reader uses subword embedding to enhance the word representation and limit the word frequency spectrum to train rare words efficiently. With the help of the short list, the model size will also be reduced together with training speedup. Unlike most existing works, which introduce either complex attentive architectures or many manual features, our model is much more simple yet effective. Giving state-of-the-art performance on multiple benchmarks, the proposed reader has been proved effective for learning joint representation at both word and subword level and alleviating OOV difficulties."
]
],
"section_name": [
"Introduction",
"The Subword-augmented Word Embedding",
"BPE Subword Segmentation",
"Subword-augmented Word Embedding",
"Attention Module",
"Dataset and Settings",
"Main Results",
"Merging Times of BPE",
"Filter Mechanism",
"Subword-Augmented Representations",
"Machine Reading Comprehension",
"Augmented Word Embedding",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"60c9b737810c6bf6d0978eadbb33409f3b4734ff"
],
"answer": [
{
"evidence": [
"In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation."
],
"extractive_spans": [
"low-frequency words"
],
"free_form_answer": "",
"highlighted_evidence": [
"A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"5aaa03e0f41c9ea0f27c3e28b771d586e12ba858"
],
"answer": [
{
"evidence": [
"To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document.",
"Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task."
],
"extractive_spans": [
"CMRC-2017",
"People's Daily (PD)",
"Children Fairy Tales (CFT) ",
"Children's Book Test (CBT)"
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 ",
"Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"a701def54b5c6fae9f04b640fde1eb6fae682fe0"
],
"answer": [
{
"evidence": [
"Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability.",
"FLOAT SELECTED: Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.",
"FLOAT SELECTED: Table 3: Case study on CMRC-2017."
],
"extractive_spans": [],
"free_form_answer": "AS Reader, GA Reader, CAS Reader",
"highlighted_evidence": [
"Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline",
"FLOAT SELECTED: Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.",
"FLOAT SELECTED: Table 3: Case study on CMRC-2017."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"how are rare words defined?",
"which public datasets were used?",
"what are the baselines?"
],
"question_id": [
"859e0bed084f47796417656d7a68849eb9cb324f",
"04e90c93d046cd89acef5a7c58952f54de689103",
"f513e27db363c28d19a29e01f758437d7477eb24"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Architecture of the proposed Subword-augmented Embedding Reader (SAW Reader).",
"Table 1: Data statistics of CMRC-2017, PD and CFT.",
"Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold face.",
"Table 3: Case study on CMRC-2017.",
"Table 5: Accuracy on CBT dataset. Results marked with ‡ are of previously published works (Dhingra et al., 2017; Cui et al., 2016; Yang et al., 2017).",
"Figure 2: Case study of the subword vocabulary size of BPE.",
"Figure 3: Quantitative study on the influence of the short list.",
"Figure 4: Pair-wise attention visualization."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Table5-1.png",
"8-Figure2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png"
]
} | [
"what are the baselines?"
] | [
[
"1806.09103-Main Results-1",
"1806.09103-6-Table2-1.png",
"1806.09103-7-Table3-1.png"
]
] | [
"AS Reader, GA Reader, CAS Reader"
] | 271 |
1911.13087 | Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset | We present an experimental dataset, Basic Dataset for Sorani Kurdish Automatic Speech Recognition (BD-4SK-ASR), which we used in the first attempt in developing an automatic speech recognition for Sorani Kurdish. The objective of the project was to develop a system that automatically could recognize simple sentences based on the vocabulary which is used in grades one to three of the primary schools in the Kurdistan Region of Iraq. We used CMUSphinx as our experimental environment. We developed a dataset to train the system. The dataset is publicly available for non-commercial use under the CC BY-NC-SA 4.0 license. | {
"paragraphs": [
[
"Kurdish language processing requires endeavor by interested researchers and scholars to overcome with a large gap which it has regarding the resource scarcity. The areas that need attention and the efforts required have been addressed in BIBREF0.",
"The Kurdish speech recognition is an area which has not been studied so far. We were not able to retrieve any resources in the literature regarding this subject.",
"In this paper, we present a dataset based on CMUShpinx BIBREF1 for Sorani Kurdish. We call it a Dataset for Sorani Kurdish Automatic Speech Recognition (BD-4SK-ASR). Although other technologies are emerging, CMUShpinx could still be used for experimental studies.",
"The rest of this paper is organized as follows. Section SECREF2 reviews the related work. Section SECREF3 presents different parts of the dataset, such as the dictionary, phoneset, transcriptions, corpus, and language model. Finally, Section SECREF4 concludes the paper and suggests some areas for future work."
],
[
"The work on Automatic Speech Recognition (ASR) has a long history, but we could not retrieve any literature on Kurdish ASR at the time of compiling this article. However, the literature on ASR for different languages is resourceful. Also, researchers have widely used CMUSphinx for ASR though other technologies have been emerging in recent years BIBREF1.",
"We decided to use CMUSphinx because we found it a proper and well-established environment to start Kurdish ASR."
],
[
"To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences.",
"In the following sections, we present the available items in the dataset. The dataset ia available on https://github.com/KurdishBLARK/BD-4SK-ASR."
],
[
"The phoneset includes 34 phones for Sorani Kurdish. A sample of the file content is given below.",
"R",
"RR",
"S",
"SIL",
"SH",
"T",
"V",
"W",
"WW",
"Figure FIGREF3 shows the Sorani letters in Persian-Arabic script, the suggested phoneme (capital English letters), and an example of the transformation of words in the developed corpus."
],
[
"The filler phone file usually contains fillers in spoken sentences. In our basic sentences, we have only considered silence. Therefore it only includes three lines to indicate the possible pauses at the beginning and end of the sentences and also after each word."
],
[
"This file includes the list of files in which the narrated sentences have been recorded. The recorded files are in wav formats. However, in the file IDs, the extension is omitted. A sample of the file content is given below. The test directory is the directory in which the files are located.",
"test/T1-1-50-01",
"test/T1-1-50-02",
"test/T1-1-50-03",
"test/T1-1-50-04",
"test/T1-1-50-05",
"test/T1-1-50-06"
],
[
"This file contains the transcription of each sentence based on the phoneset along with the file ID in which the equivalent narration has been saved. The following is a sample of the content of the file.",
"<s> BYR RRAAMAAN DAARISTAANA AMAANAY </s> (T1-1-50-18)",
"<s> DWWRA HAWLER CHIRAAYA SARDAAN NABWW </s> (T1-1-50-19)",
"<s> SAALL DYWAAR QWTAABXAANA NACHIN </s> (T1-1-50-20)",
"<s> XWENDIN ANDAAMAANY GASHA </s> (T1-1-50-21)",
"<s> NAMAAM WRYAA KIRD PSHWWDAA </s> (T1-1-50-22)",
"<s> DARCHWWY DAKAN DAKAWET </s> (T1-1-50-23)",
"<s> CHAND BIRAAT MAQAST </s> (T1-1-50-24)",
"<s> BAAXCHAKAY DAAYK DARCHWWY </s> (T1-1-50-25)",
"<s> RROZH JWAAN DAKAWET ZYAANYAAN </s> (T1-1-50-26)",
""
],
[
"The corpus includes 2000 sentences. Theses sentence are random renderings of 200 sentences, which we have taken from Sorani Kurdish books of the grades one to three of the primary school in the Kurdistan Region of Iraq. The reason that we have taken only 200 sentences is to have a smaller dictionary and also to increase the repetition of each word in the narrated speech. We transformed the corpus sentences, which are in Persian-Arabic script, into the format which complies with the suggested phones for the related Sorani letters (see Section SECREF6)."
],
[
"Two thousand narration files were created. We used Audacity to record the narrations. We used a normal laptop in a quiet room and minimized the background noise. However, we could not manage to avoid the noise of the fan of the laptop. A single speaker narrated the 2000 sentences, which took several days. We set the Audacity software to have a sampling rate of 16, 16-bit bit rate, and a mono (single) channel. The noise reduction db was set to 6, the sensitivity to 4.00, and the frequency smoothing to 0."
],
[
"We created the language from the transcriptions. The model was created using CMUSphinx in which (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams."
],
[
"We presented a dataset, BD-4SK-ASR, that could be used in training and developing an acoustic model for Automatic Speech Recognition in CMUSphinx environment for Sorani Kurdish. The Kurdish books of grades one to three of primary schools in the Kurdistan Region of Iraq were used to extract 200 sample sentences. The dataset includes the dictionary, the phoneset, the transcriptions of the corpus sentences using the suggested phones, the recorded narrations of the sentences, and the acoustic model. The dataset could be used to start experiments on Sorani Kurdish ASR.",
"As it was mentioned before, research and development on Kurdish ASR require a huge amount of effort. A variety of areas must be explored, and various resources must be collected and developed. The multi-dialect characteristic of Kurdish makes these tasks rather demanding. To participate in these efforts, we are interested in the expansion of Kurdish ASR by developing a larger dataset based on larger Sorani corpora, working on the other Kurdish dialects, and using new environments for ASR such as Kaldi."
]
],
"section_name": [
"Introduction",
"Related work",
"The BD-4SK-ASR Dataset",
"The BD-4SK-ASR Dataset ::: Phoeset",
"The BD-4SK-ASR Dataset ::: Filler phones",
"The BD-4SK-ASR Dataset ::: The File IDs",
"The BD-4SK-ASR Dataset ::: The Transcription",
"The BD-4SK-ASR Dataset ::: The Corpus",
"The BD-4SK-ASR Dataset ::: The Narration Files",
"The BD-4SK-ASR Dataset ::: The Language Model",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"da0fc36116ec0c88876ce022a0c985ce91bedf28"
],
"answer": [
{
"evidence": [
"The BD-4SK-ASR Dataset ::: The Language Model",
"We created the language from the transcriptions. The model was created using CMUSphinx in which (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams."
],
"extractive_spans": [],
"free_form_answer": "They were able to create a language model from the dataset, but did not test.",
"highlighted_evidence": [
"The BD-4SK-ASR Dataset ::: The Language Model\nWe created the language from the transcriptions. The model was created using CMUSphinx in which (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"19041484d3b9b018a43dda76cda73c122af29409"
],
"answer": [
{
"evidence": [
"To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences."
],
"extractive_spans": [],
"free_form_answer": "extracted text from Sorani Kurdish books of primary school and randomly created sentences",
"highlighted_evidence": [
"To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1e944910b2cf0cf2f08c17a36761cd1f98e8ce6d"
],
"answer": [
{
"evidence": [
"To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences."
],
"extractive_spans": [
"2000 sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"To develop the dataset, we extracted 200 sentences from Sorani Kurdish books of grades one to three of the primary school in the Kurdistan Region of Iraq. We randomly created 2000 sentences from the extracted sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e23a5aaf6306bad1b6967aff6e406cbf8971b298"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6e7a28a48be66a416bfa8421a6d91bb2f601935f"
],
"answer": [
{
"evidence": [
"Two thousand narration files were created. We used Audacity to record the narrations. We used a normal laptop in a quiet room and minimized the background noise. However, we could not manage to avoid the noise of the fan of the laptop. A single speaker narrated the 2000 sentences, which took several days. We set the Audacity software to have a sampling rate of 16, 16-bit bit rate, and a mono (single) channel. The noise reduction db was set to 6, the sensitivity to 4.00, and the frequency smoothing to 0."
],
"extractive_spans": [],
"free_form_answer": "1",
"highlighted_evidence": [
"A single speaker narrated the 2000 sentences, which took several days. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"b84d0ea71de51ad7340cb2e31a1f903ae9c0fe52"
],
"answer": [
{
"evidence": [
"The corpus includes 2000 sentences. Theses sentence are random renderings of 200 sentences, which we have taken from Sorani Kurdish books of the grades one to three of the primary school in the Kurdistan Region of Iraq. The reason that we have taken only 200 sentences is to have a smaller dictionary and also to increase the repetition of each word in the narrated speech. We transformed the corpus sentences, which are in Persian-Arabic script, into the format which complies with the suggested phones for the related Sorani letters (see Section SECREF6)."
],
"extractive_spans": [
"2000"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus includes 2000 sentences. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"no",
"no",
"no"
],
"question": [
"What are the results of the experiment?",
"How was the dataset collected?",
"What is the size of the dataset?",
"How many different subjects does the dataset contain?",
"How many annotators participated?",
"How long is the dataset?"
],
"question_id": [
"eb5ed1dd26fd9adb587d29225c7951a476c6ec28",
"0828cfcf0e9e02834cc5f279a98e277d9138ffd9",
"7b2de0109b68f78afa9e6190c82ca9ffaf62f9bd",
"482ac96ff675975227b6d7058b9b87aeab6f81fe",
"3f3c09c1fd542c1d9acf197957c66b79ea1baf6e",
"0a82534ec6e294ab952103f11f56fd99137adc1f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"dataset",
"dataset",
"dataset",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The Sorani sounds along with their phoneme representation."
],
"file": [
"3-Figure1-1.png"
]
} | [
"What are the results of the experiment?",
"How was the dataset collected?",
"How many annotators participated?"
] | [
[
"1911.13087-The BD-4SK-ASR Dataset ::: The Language Model-0"
],
[
"1911.13087-The BD-4SK-ASR Dataset-0"
],
[
"1911.13087-The BD-4SK-ASR Dataset ::: The Narration Files-0"
]
] | [
"They were able to create a language model from the dataset, but did not test.",
"extracted text from Sorani Kurdish books of primary school and randomly created sentences",
"1"
] | 272 |
1711.02013 | Neural Language Modeling by Jointly Learning Syntax and Lexicon | We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks. | {
"paragraphs": [
[
"Linguistic theories generally regard natural language as consisting of two part: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BIBREF0 . To generate a proper sentence, tokens are put together with a specific syntactic structure. Understanding a sentence also requires lexical information to provide meanings, and syntactical knowledge to correctly combine meanings. Current neural language models can provide meaningful word represent BIBREF1 , BIBREF2 , BIBREF3 . However, standard recurrent neural networks only implicitly model syntax, thus fail to efficiently use structure information BIBREF4 .",
"Developing a deep neural network that can leverage syntactic knowledge to form a better semantic representation has received a great deal of attention in recent years BIBREF5 , BIBREF4 , BIBREF6 . Integrating syntactic structure into a language model is important for different reasons: 1) to obtain a hierarchical representation with increasing levels of abstraction, which is a key feature of deep neural networks and of the human brain BIBREF7 , BIBREF8 , BIBREF9 ; 2) to capture complex linguistic phenomena, like long-term dependency problem BIBREF4 and the compositional effects BIBREF5 ; 3) to provide shortcut for gradient back-propagation BIBREF6 .",
"A syntactic parser is the most common source for structure information. Supervised parsers can achieve very high performance on well constructed sentences. Hence, parsers can provide accurate information about how to compose word semantics into sentence semantics BIBREF5 , or how to generate the next word given previous words BIBREF10 . However, only major languages have treebank data for training parsers, and it request expensive human expert annotation. People also tend to break language rules in many circumstances (such as writing a tweet). These defects limit the generalization capability of supervised parsers.",
"Unsupervised syntactic structure induction has been among the longstanding challenges of computational linguistic BIBREF11 , BIBREF12 , BIBREF13 . Researchers are interested in this problem for a variety of reasons: to be able to parse languages for which no annotated treebanks exist BIBREF14 ; to create a dependency structure to better suit a particular NLP application BIBREF10 ; to empirically argue for or against the poverty of the stimulus BIBREF15 , BIBREF16 ; and to examine cognitive issues in language learning BIBREF17 .",
"In this paper, we propose a novel neural language model: Parsing-Reading-Predict Networks (PRPN), which can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to form a better language model. With our model, we assume that language can be naturally represented as a tree-structured graph. The model is composed of three parts:",
"We evaluate our model on three tasks: word-level language modeling, character-level language modeling, and unsupervised constituency parsing. The proposed model achieves (or is close to) the state-of-the-art on both word-level and character-level language modeling. The model's unsupervised parsing outperforms some strong baseline models, demonstrating that the structure found by our model is similar to the intrinsic structure provided by human experts."
],
[
"The idea of introducing some structures, especially trees, into language understanding to help a downstream task has been explored in various ways. For example, BIBREF5 , BIBREF4 learn a bottom-up encoder, taking as an input a parse tree supplied from an external parser. There are models that are able to infer a tree during test time, while still need supervised signal on tree structure during training. For example, BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , etc. Moreover, BIBREF22 did an in-depth analysis of recursive models that are able to learn tree structure without being exposed to any grammar trees. Our model is also able to infer tree structure in an unsupervised setting, but different from theirs, it is a recurrent network that implicitly models tree structure through attention.",
"Apart from the approach of using recursive networks to capture structures, there is another line of research which try to learn recurrent features at multiple scales, which can be dated back to 1990s (e.g. BIBREF23 , BIBREF24 , BIBREF25 ). The NARX RNN BIBREF25 is another example which used a feed forward net taking different inputs with predefined time delays to model long-term dependencies. More recently, BIBREF26 also used multiple layers of recurrent networks with different pre-defined updating frequencies. Instead, our model tries to learn the structure from data, rather than predefining it. In that respect, BIBREF6 relates to our model since it proposes a hierarchical multi-scale structure with binary gates controlling intra-layer connections, and the gating mechanism is learned from data too. The difference is that their gating mechanism controls the updates of higher layers directly, while ours control it softly through an attention mechanism.",
"In terms of language modeling, syntactic language modeling can be dated back to BIBREF27 . BIBREF28 , BIBREF29 have also proposed language models with a top-down parsing mechanism. Recently BIBREF30 , BIBREF31 have introduced neural networks into this space. It learns both a discriminative and a generative model with top-down parsing, trained with a supervision signal from parsed sentences in the corpus. There are also dependency-based approaches using neural networks, including BIBREF32 , BIBREF33 , BIBREF34 .",
"Parsers are also related to our work since they are all inferring grammatical tree structure given a sentence. For example, SPINN BIBREF35 is a shift-reduce parser that uses an LSTM as its composition function. The transition classifier in SPINN is supervisedly trained on the Stanford PCFG Parser BIBREF36 output. Unsupervised parsers are more aligned with what our model is doing. BIBREF12 presented a generative model for the unsupervised learning of dependency structures. BIBREF11 is a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. We compare our parsing quality with the aforementioned two papers in Section SECREF43 ."
],
[
"Suppose we have a sequence of tokens INLINEFORM0 governed by the tree structure showed in Figure FIGREF4 . The leafs INLINEFORM1 are observed tokens. Node INLINEFORM2 represents the meaning of the constituent formed by its leaves INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 stands for the leftmost child and right most child. Root INLINEFORM6 represents the meaning of the whole sequence. Arrows represent the dependency relations between nodes. The underlying assumption is that each node depends only on its parent and its left siblings.",
"Directly modeling the tree structure is a challenging task, usually requiring supervision to learn BIBREF4 . In addition, relying on tree structures can result in a model that is not sufficiently robust to face ungrammatical sentences BIBREF37 . In contrast, recurrent models provide a convenient way to model sequential data, with the current hidden state only depends on the last hidden state. This makes models more robust when facing nonconforming sequential data, but it suffers from neglecting the real dependency relation that dominates the structure of natural language sentences.",
"In this paper, we use skip-connection to integrate structured dependency relations with recurrent neural network. In other words, the current hidden state does not only depend on the last hidden state, but also on previous hidden states that have a direct syntactic relation to the current one.",
"Figure FIGREF5 shows the structure of our model. The non-leaf node INLINEFORM0 is represented by a set of hidden states INLINEFORM1 , where INLINEFORM2 is the left most descendant leaf and INLINEFORM3 is the right most one. Arrows shows skip connections built by our model according to the latent structure. Skip connections are controlled by gates INLINEFORM4 . In order to define INLINEFORM5 , we introduce a latent variable INLINEFORM6 to represent local structural context of INLINEFORM7 :",
"and gates are defined as: DISPLAYFORM0 ",
"Given this architecture, the siblings dependency relation is modeled by at least one skip-connect. The skip connection will directly feed information forward, and pass gradient backward. The parent-to-child relation will be implicitly modeled by skip-connect relation between nodes.",
"The model recurrently updates the hidden states according to: DISPLAYFORM0 ",
"and the probability distribution for next word is approximated by: DISPLAYFORM0 ",
" where INLINEFORM0 are gates that control skip-connections. Both INLINEFORM1 and INLINEFORM2 have a structured attention mechanism that takes INLINEFORM3 as input and forces the model to focus on the most related information. Since INLINEFORM4 is an unobserved latent variable, We explain an approximation for INLINEFORM5 in the next section. The structured attention mechanism is explained in section SECREF21 ."
],
[
"In this section we give a probabilistic view on how to model the local structure of language. A detailed elaboration for this section is given in Appendix . At time step INLINEFORM0 , INLINEFORM1 represents the probability of choosing one out of INLINEFORM2 possible local structures. We propose to model the distribution by the Stick-Breaking Process: DISPLAYFORM0 ",
"The formula can be understood by noting that after the time step INLINEFORM0 have their probabilities assigned, INLINEFORM1 is remaining probability, INLINEFORM2 is the portion of remaining probability that we assign to time step INLINEFORM3 . Variable INLINEFORM4 is parametrized in the next section.",
"As shown in Appendix , the expectation of gate value INLINEFORM0 is the Cumulative Distribution Function (CDF) of INLINEFORM1 . Thus, we can replace the discrete gate value by its expectation: DISPLAYFORM0 ",
"With these relaxations, Eq. EQREF9 and EQREF10 can be approximated by using a soft gating vector to update the hidden state and predict the next token."
],
[
"In Eq. EQREF12 , INLINEFORM0 is the portion of the remaining probability that we assign to position INLINEFORM1 . Because the stick-breaking process should assign high probability to INLINEFORM2 , which is the closest constituent-beginning word. The model should assign large INLINEFORM3 to words beginning new constituents. While INLINEFORM4 itself is a constituent-beginning word, the model should assign large INLINEFORM5 to words beginning larger constituents. In other words, the model will consider longer dependency relations for the first word in constituent. Given the sentence in Figure FIGREF4 , at time step INLINEFORM6 , both INLINEFORM7 and INLINEFORM8 should be close to 1, and all other INLINEFORM9 should be close to 0.",
"In order to parametrize INLINEFORM0 , our basic hypothesis is that words in the same constituent should have a closer syntactic relation within themselves, and that this syntactical proximity can be represented by a scalar value. From the tree structure point of view, the shortest path between leafs in same subtree is shorter than the one between leafs in different subtree.",
"To model syntactical proximity, we introduce a new feature Syntactic Distance. For a sentence with length INLINEFORM0 , we define a set of INLINEFORM1 real valued scalar variables INLINEFORM2 , with INLINEFORM3 representing a measure of the syntactic relation between the pair of adjacent words INLINEFORM4 . INLINEFORM5 could be the last word in previous sentence or a padding token. For time step INLINEFORM6 , we want to find the closest words INLINEFORM7 , that have larger syntactic distance than INLINEFORM8 . Thus INLINEFORM9 can be defined as: DISPLAYFORM0 ",
"where INLINEFORM0 . INLINEFORM1 is the temperature parameter that controls the sensitivity of INLINEFORM2 to the differences between distances.",
"The Syntactic Distance has some nice properties that both allow us to infer a tree structure from it and be robust to intermediate non-valid tree structures that the model may encounter during learning. In Appendix and we list these properties and further explain the meanings of their values.",
" BIBREF38 shows that it's possible to identify the beginning and ending words of a constituent using local information. In our model, the syntactic distance between a given token (which is usually represented as a vector word embedding INLINEFORM0 ) and its previous token INLINEFORM1 , is provided by a convolutional kernel over a set of consecutive previous tokens INLINEFORM2 . This convolution is depicted as the gray triangles shown in Figure FIGREF20 . Each triangle here represent 2 layers of convolution. Formally, the syntactic distance INLINEFORM3 between token INLINEFORM4 and INLINEFORM5 is computed by DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 , INLINEFORM1 are the kernel parameters. INLINEFORM2 and INLINEFORM3 can be seen as another convolutional kernel with window size 1, convolved over INLINEFORM4 's. Here the kernel window size INLINEFORM5 determines how far back into the history node INLINEFORM6 can reach while computing its syntactic distance INLINEFORM7 . Thus we call it the look-back range.",
"Convolving INLINEFORM0 and INLINEFORM1 on the whole sequence with length INLINEFORM2 yields a set of distances. For the tokens in the beginning of the sequence, we simply pad INLINEFORM3 zero vectors to the front of the sequence in order to get INLINEFORM4 outputs."
],
[
"The Reading Network generate new states INLINEFORM0 considering on input INLINEFORM1 , previous memory states INLINEFORM2 , and gates INLINEFORM3 , as shown in Eq. EQREF9 .",
"Similar to Long Short-Term Memory-Network (LSTMN) BIBREF39 , the Reading Network maintains the memory states by maintaining two sets of vectors: a hidden tape INLINEFORM0 , and a memory tape INLINEFORM1 , where INLINEFORM2 is the upper bound for the memory span. Hidden states INLINEFORM3 is now represented by a tuple of two vectors INLINEFORM4 . The Reading Network captures the dependency relation by a modified attention mechanism: structured attention. At each step of recurrence, the model summarizes the previous recurrent states via the structured attention mechanism, then performs a normal LSTM update, with hidden and cell states output by the attention mechanism.",
"At each time step INLINEFORM0 , the read operation attentively links the current token to previous memories with a structured attention layer: DISPLAYFORM0 ",
" where, INLINEFORM0 is the dimension of the hidden state. Modulated by the gates in Eq. EQREF13 , the structured intra-attention weight is defined as: DISPLAYFORM0 ",
" This yields a probability distribution over the hidden state vectors of previous tokens. We can then compute an adaptive summary vector for the previous hidden tape and memory denoting by INLINEFORM0 and INLINEFORM1 : DISPLAYFORM0 ",
"Structured attention provides a way to model the dependency relations shown in Figure FIGREF4 .",
"The Reading Network takes INLINEFORM0 , INLINEFORM1 and INLINEFORM2 as input, computes the values of INLINEFORM3 and INLINEFORM4 by the LSTM recurrent update BIBREF40 . Then the write operation concatenates INLINEFORM5 and INLINEFORM6 to the end of hidden and memory tape."
],
[
"Predict Network models the probability distribution of next word INLINEFORM0 , considering on hidden states INLINEFORM1 , and gates INLINEFORM2 . Note that, at time step INLINEFORM3 , the model cannot observe INLINEFORM4 , a temporary estimation of INLINEFORM5 is computed considering on INLINEFORM6 : DISPLAYFORM0 ",
"From there we compute its corresponding INLINEFORM0 and INLINEFORM1 for Eq. EQREF10 . We parametrize INLINEFORM2 function as: DISPLAYFORM0 ",
" where INLINEFORM0 is an adaptive summary of INLINEFORM1 , output by structured attention controlled by INLINEFORM2 . INLINEFORM3 could be a simple feed-forward MLP, or more complex architecture, like ResNet, to add more depth to the model."
],
[
"We evaluate the proposed model on three tasks, character-level language modeling, word-level language modeling, and unsupervised constituency parsing."
],
[
"From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leafs. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.",
"When training, we use truncated back-propagation, and feed the final memory position from the previous batch as the initial memory of next one. At the beginning of training and test time, the model initial hidden states are filled with zero. Optimization is performed with Adam using learning rate INLINEFORM0 , weight decay INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . We carry out gradient clipping with maximum norm 1.0. The learning rate is multiplied by 0.1 whenever validation performance does not improve during 2 checkpoints. These checkpoints are performed at the end of each epoch. We also apply layer normalization BIBREF41 to the Reading Network and batch normalization to the Predict Network and parsing network. For all of the character-level language modeling experiments, we apply the same procedure, varying only the number of hidden units, mini-batch size and dropout rate.",
"we process the Penn Treebank dataset BIBREF42 by following the procedure introduced in BIBREF43 . For character-level PTB, Reading Network has two recurrent layers, Predict Network has one residual block. Hidden state size is 1024 units. The input and output embedding size are 128, and not shared. Look-back range INLINEFORM0 , temperature parameter INLINEFORM1 , upper band of memory span INLINEFORM2 . We use a batch size of 64, truncated back-propagation with 100 timesteps. The values used of dropout on input/output embeddings, between recurrent layers, and on recurrent states were (0, 0.25, 0.1) respectively.",
"In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate between words. In other words, if the model sees a space, it will attend on all previous step. If the model sees a letter, it will attend no further then the last space step. The model autonomously discovered to avoid inter-word attention connection, and use the hidden states of space (separator) tokens to summarize previous information. This is strong proof that the model can understand the latent structure of data. As a result our model achieve state-of-the-art performance and significantly outperform baseline models. It is worth noting that HM-LSTM BIBREF6 also unsupervisedly induce similar structure from data. But discrete operations in HM-LSTM make their training procedure more complicated then ours."
],
[
"Comparing to character-level language modeling, word-level language modeling needs to deal with complex syntactic structure and various linguistic phenomena. But it has less long-term dependencies. We evaluate the word-level variant of our language model on a preprocessed version of the Penn Treebank (PTB) BIBREF42 and Text8 BIBREF49 dataset.",
"We apply the same procedure and hyper-parameters as in character-level language model. Except optimization is performed with Adam with INLINEFORM0 . This turns off the exponential moving average for estimates of the means of the gradients BIBREF50 . We also adapt the number of hidden units, mini-batch size and the dropout rate according to the different tasks.",
"we process the Penn Treebank dataset BIBREF43 by following the procedure introduced in BIBREF51 . For word-level PTB, the Reading Network has two recurrent layers and the Predict Network do not have residual block. The hidden state size is 1200 units and the input and output embedding sizes are 800, and shared BIBREF52 , BIBREF53 . Look-back range INLINEFORM0 , temperature parameter INLINEFORM1 and the upper band of memory span INLINEFORM2 . We use a batch size of 64, truncated back-propagation with 35 time-steps. The values used of dropout on input/output embeddings, between recurrent layers, and on recurrent states were (0.7, 0.5, 0.5) respectively.",
"dataset contains 17M training tokens and has a vocabulary size of 44k words. The dataset is partitioned into a training set (first 99M characters) and a development set (last 1M characters) that is used to report performance. As this dataset contains various articles from Wikipedia, the longer term information (such as current topic) plays a bigger role than in the PTB experiments BIBREF61 . We apply the same procedure and hyper-parameters as in character-level PTB, except we use a batch size of 128. The values used of dropout on input/output embeddings, between Recurrent Layers and on recurrent states were (0.4, 0.2, 0.2) respectively.",
"In Table TABREF39 , our results are comparable to the state-of-the-art methods. Since we do not have the same computational resource used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyperparameter tuning process. As shown in Table TABREF42 , our method outperform baseline methods. It is worth noticing that the continuous cache pointer can also be applied to output of our Predict Network without modification. Visualizations of tree structure generated from learned PTB language model are included in Appendix . In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. By removing Parsing Network, we observe a significant drop of performance. This stands as empirical evidence regarding the benefit of having structure information to control attention."
],
[
"The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset. WSJ10 is the 7422 sentences in the Penn Treebank Wall Street Journal section which contained 10 words or less after the removal of punctuation and null elements. Evaluation was done by seeing whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 ( INLINEFORM0 ) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14 , our model generates a binary tree. Although standard constituency parsing tree is not limited to binary tree. Previous unsupervised constituency parsing model also generate binary trees BIBREF11 , BIBREF13 . Our model is compared with the several baseline methods, that are explained in Appendix .",
"Different from the previous experiment setting, the model treat each sentence independently during train and test time. When training, we feed one batch of sentences at each iteration. In a batch, shorter sentences are padded with 0. At the beginning of the iteration, the model's initial hidden states are filled with zero. When testing, we feed on sentence one by one to the model, then use the gate value output by the model to recursively combine tokens into constituents, as described in Appendix .",
"Table TABREF44 summarizes the results. Our model significantly outperform the RANDOM baseline indicate a high consistency with human annotation. Our model also shows a comparable performance with CCM model. In fact our parsing network and CCM both focus on the relation between successive tokens. As described in Section SECREF14 , our model computes syntactic distance between all successive pair of tokens, then our parsing algorithm recursively assemble tokens into constituents according to the learned distance. CCM also recursively model the probability whether a contiguous subsequences of a sentence is a constituent. Thus, one can understand how our model is outperformed by DMV+CCM and UML-DOP models. The DMV+CCM model has extra information from a dependency parser. The UML-DOP approach captures both contiguous and non-contiguous lexical dependencies BIBREF13 ."
],
[
"In this paper, we propose a novel neural language model that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. We introduce a new neural parsing network: Parsing-Reading-Predict Network, that can make differentiable parsing decisions. We use a new structured attention mechanism to control skip connections in a recurrent neural network. Hence induced syntactic structure information can be used to improve the model's performance. Via this mechanism, the gradient can be directly back-propagated from the language model loss function into the neural Parsing Network. The proposed model achieve (or is close to) the state-of-the-art on both word/character-level language modeling tasks. Experiment also shows that the inferred syntactic structure highly correlated to human expert annotation."
],
[
"The authors would like to thank Timothy J. O'Donnell and Chris Dyer for the helpful discussions."
]
],
"section_name": [
"Introduction",
"Related Work",
"Motivation",
"Modeling Local Structure",
"Parsing Network",
"Reading Network",
"Predict Network",
"Experiments",
"Character-level Language Model",
"Word-level Language Model",
"Unsupervised Constituency Parsing",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"22b7cf887e3387634b67deae37c4d197a85c1f98"
],
"answer": [
{
"evidence": [
"In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate between words. In other words, if the model sees a space, it will attend on all previous step. If the model sees a letter, it will attend no further then the last space step. The model autonomously discovered to avoid inter-word attention connection, and use the hidden states of space (separator) tokens to summarize previous information. This is strong proof that the model can understand the latent structure of data. As a result our model achieve state-of-the-art performance and significantly outperform baseline models. It is worth noting that HM-LSTM BIBREF6 also unsupervisedly induce similar structure from data. But discrete operations in HM-LSTM make their training procedure more complicated then ours."
],
"extractive_spans": [],
"free_form_answer": "By visualizing syntactic distance estimated by the parsing network",
"highlighted_evidence": [
"In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB test set. We observe that the syntactic distance tends to be higher between the last character of a word and a space, which is a reasonable breakpoint to separate between words. ",
"The model autonomously discovered to avoid inter-word attention connection, and use the hidden states of space (separator) tokens to summarize previous information. This is strong proof that the model can understand the latent structure of data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"19730a5d76cf81f3614aa41243672f3eab75e322"
],
"answer": [
{
"evidence": [
"From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leafs. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.",
"The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset. WSJ10 is the 7422 sentences in the Penn Treebank Wall Street Journal section which contained 10 words or less after the removal of punctuation and null elements. Evaluation was done by seeing whether proposed constituent spans are also in the Treebank parse, measuring unlabeled F1 ( INLINEFORM0 ) of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14 , our model generates a binary tree. Although standard constituency parsing tree is not limited to binary tree. Previous unsupervised constituency parsing model also generate binary trees BIBREF11 , BIBREF13 . Our model is compared with the several baseline methods, that are explained in Appendix ."
],
"extractive_spans": [
"Penn Treebank",
"Text8",
"WSJ10"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.",
"The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"42cdfe407e54aa6c90a61c0943fb456f7f75d7b8"
],
"answer": [
{
"evidence": [
"In Table TABREF39 , our results are comparable to the state-of-the-art methods. Since we do not have the same computational resource used in BIBREF50 to tune hyper-parameters at large scale, we expect that our model could achieve better performance after an aggressive hyperparameter tuning process. As shown in Table TABREF42 , our method outperform baseline methods. It is worth noticing that the continuous cache pointer can also be applied to output of our Predict Network without modification. Visualizations of tree structure generated from learned PTB language model are included in Appendix . In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. By removing Parsing Network, we observe a significant drop of performance. This stands as empirical evidence regarding the benefit of having structure information to control attention.",
"FLOAT SELECTED: Table 1: BPC on the Penn Treebank test set",
"Word-level Language Model"
],
"extractive_spans": [],
"free_form_answer": "BPC, Perplexity",
"highlighted_evidence": [
"In Table TABREF40 , we show the value of test perplexity for different variants of PRPN, each variant remove part of the model. ",
"FLOAT SELECTED: Table 1: BPC on the Penn Treebank test set",
"Word-level Language Model"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they show their model discovers underlying syntactic structure?",
"Which dataset do they experiment with?",
"How do they measure performance of language model tasks?"
],
"question_id": [
"d824f837d8bc17f399e9b8ce8b30795944df0d51",
"2ff3898fbb5954aa82dd2f60b37dd303449c81ba",
"3070d6d6a52aa070f0c0a7b4de8abddd3da4f056"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Hard arrow represents syntactic tree structure and parent-to-child dependency relation, dash arrow represents dependency relation between siblings",
"Figure 2: Proposed model architecture, hard line indicate valid connection in Reading Network, dash line indicate valid connection in Predict Network.",
"Figure 3: Convolutional network for computing syntactic distance. Gray triangles represent 2 layers of convolution, d0 to d7 are the syntactic distance output by each of the kernel position. The blue bars indicate the amplitude of di’s, and yi’s are the inferred constituents.",
"Figure 4: Syntactic distance estimated by Parsing Network. The model is trained on PTB dataset at the character level. Each blue bar is positioned between two characters, and represents the syntactic distance between them. From these distances we can infer a tree structure according to Section 4.2.",
"Table 1: BPC on the Penn Treebank test set",
"Table 2: PPL on the Penn Treebank test set",
"Table 3: Ablation test on the Penn Treebank. “- Parsing Net” means that we remove Parsing Network and replace Structured Attention with normal attention mechanism; “- Reading Net Attention” means that we remove Structured Attention from Reading Network, that is equivalent to replace Reading Network with a normal 2-layer LSTM; “- Predict Net Attention” means that we remove Structured Attention from Predict Network, that is equivalent to have a standard projection layer; “Our 2-layer LSTM” is equivalent to remove Parsing Network and remove Structured Attention from both Reading and Predict Network.",
"Table 4: PPL on the Text8 valid set",
"Table 5: Parsing Performance on the WSJ10 dataset",
"Figure 5: Syntactic structures of two different sentences inferred from {di} given by Parsing Network."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"15-Figure5-1.png"
]
} | [
"How do they show their model discovers underlying syntactic structure?",
"How do they measure performance of language model tasks?"
] | [
[
"1711.02013-Character-level Language Model-3"
],
[
"1711.02013-8-Table1-1.png",
"1711.02013-Word-level Language Model-4"
]
] | [
"By visualizing syntactic distance estimated by the parsing network",
"BPC, Perplexity"
] | 274 |
1909.00183 | Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records | The large volume of text in electronic healthcare records often remains underused due to a lack of methodologies to extract interpretable content. Here we present an unsupervised framework for the analysis of free text that combines text-embedding with paragraph vectors and graph-theoretical multiscale community detection. We analyse text from a corpus of patient incident reports from the National Health Service in England to find content-based clusters of reports in an unsupervised manner and at different levels of resolution. Our unsupervised method extracts groups with high intrinsic textual consistency and compares well against categories hand-coded by healthcare personnel. We also show how to use our content-driven clusters to improve the supervised prediction of the degree of harm of the incident based on the text of the report. Finally, we discuss future directions to monitor reports over time, and to detect emerging trends outside pre-existing categories. | {
"paragraphs": [
[
"",
"The vast amounts of data collected by healthcare providers in conjunction with modern data analytics present a unique opportunity to improve the quality and safety of medical care for patient benefit BIBREF1. Much recent research in this area has been on personalised medicine, with the aim to deliver improved diagnostic and treatment through the synergistic integration of datasets at the level of the individual. A different source of healthcare data pertains to organisational matters. In the United Kingdom, the National Health Service (NHS) has a long history of documenting the different aspects of healthcare provision, and is currently in the process of making available properly anonymised datasets, with the aim of leveraging advanced analytics to improve NHS services.",
"One such database is the National Reporting and Learning System (NRLS), a repository of patient safety incident reports from the NHS in England and Wales set up in 2003, which now contains over 13 million records. The incidents are reported under standardised categories and contain both organisational and spatio-temporal information (structured data) and a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission or discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into complex processes in healthcare with a view towards service improvement.",
"Although statistical analyses are routinely performed on the structured data (dates, locations, hand-coded categories, etc), free text is typically read manually and often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. These limitations are due to a lack of methodologies that can provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Automatic categorisation of incidents from free text would sidestep human error and difficulties in assigning incidents to a priori pre-defined lists in the reporting system. Such tools can also offer unbiased insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.",
"In this work, we showcase an algorithmic methodology that detects content-based groups of records in an unsupervised manner, based only on the free (unstructured) textual descriptions of the incidents. To do so, we combine deep neural-network high-dimensional text-embedding algorithms with graph-theoretical methods for multiscale clustering. Specifically, we apply the framework of Markov Stability (MS), a multiscale community detection algorithm, to sparsified graphs of documents obtained from text vector similarities. Our method departs both from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF2, and from more recent approaches that have used deep neural network based language models, but have used k-means clustering without a graph-based analysis BIBREF3. Previous applications of network theory to text analysis have included the work of Lanchichinetti and co-workers BIBREF4, who proposed a probabilistic graph construction analysed with the InfoMap algorithm BIBREF5; however, their community detection was carried out at a single-scale and the BoW representation of text lacks the power of text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than from pre-designed classifications. The obtained results can help mitigate human error or effort in finding the right category in complex classification trees. We illustrate in our analysis the insight gained from this unsupervised, multi-resolution approach in this specialised corpus of medical records.",
"As an additional application, we use machine learning methods for the prediction of the degree of harm of incidents directly from the text in the NRLS incident reports. Although the degree of harm is recorded by the reporting person for every event, this information can be unreliable as reporters have been known to game the system, or to give different answers depending on their professional status BIBREF6. Previous work on predicting the severity of adverse events BIBREF7, BIBREF8 used reports submitted to the Advanced Incident Management System by Australian public hospitals, and used BoW and Support Vector Machines (SVMs) to detect extreme-risk events. Here we demonstrate that publicly reported measures derived from NHS Staff Surveys can help select ground truth labels that allow supervised training of machine learning classifiers to predict the degree of harm directly from text embeddings. Further, we show that the unsupervised clusters of content derived with our method improve the classification results significantly.",
"An a posteriori manual labelling by three clinicians agree with our predictions based purely on text almost as much as with the original hand-coded labels. These results indicate that incidents can be automatically classified according to their degree of harm based only on their textual descriptions, and underlines the potential of automatic document analysis to help reduce human workload."
],
[
"The full dataset includes more than 13 million confidential reports of patient safety incidents reported to the National Reporting and Learning System (NRLS) between 2004 and 2016 from NHS trusts and hospitals in England and Wales. Each record has more than 170 features, including organisational details (e.g., time, trust code and location), anonymised patient information, medication and medical devices, among many other details. In most records, there is also a detailed description of the incident in free text, although the quality of the text is highly variable.",
"The records are manually classified by operators according to a two-level system of incident types. The top level contains 15 categories including general classes such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure', alongside more specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'.",
"Each record is also labelled based on the degree of harm to the patients as one of: `No Harm', `Low Harm', `Moderate Harm', `Severe Harm' or `Death'. These degrees are precisely defined by the WHO BIBREF9 and the NHS BIBREF10."
],
[
"Our framework combines text-embedding, geometric graph construction and multi-resolution community detection to identify, rather than impose, content-based clusters from free, unstructured text in an unsupervised manner.",
"Figure FIGREF2 shows a summary of our pipeline. First, we pre-process each document to transform text into consecutive word tokens, with words in their most normalised forms and some words removed if they have no distinctive meaning when used out of context BIBREF11, BIBREF12. We then train a paragraph vector model using the Document to Vector (Doc2Vec) framework BIBREF13 on the full set (13 million) of pre-processed text records. (Training a vector model on smaller sets of 1 million records also produces good results as seen in Table TABREF5). This training step of the text model is only done once.",
"The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters.",
"The partitions found by MS across levels of resolution are analysed a posteriori through visualisations and quantitative scores. The visualisations include: (i) word clouds to summarise the main content; (ii) graph layouts; and (iii) Sankey diagrams and contingency tables that capture correspondences between partitions. The quantitative scores include: (i) the intrinsic topic coherence (measured by the pairwise mutual information BIBREF19, BIBREF20); and (ii) the similarity to hand-coded categories (measured by the normalised mutual information BIBREF21).",
"Our framework also covers prediction of the degree of harm (DoH) caused to the patient usig text embeddings and the unsupervised cluster assignments obtaind from our multiscale graph partitioning. To perform this task, we use the hand-coded DoH from the NRLS to train three commonly used classifiers BIBREF22, BIBREF23 (Ridge, Support Vector Machine with a linear kernel, Random Forest) to predict the DoH using TF-iDF and Doc2Vec embeddings of the text and our MS cluster assignments. The classifiers are then evaluated in predicting the DoH using cross-validation.",
"We now explain the steps of the methodological pipeline in more detail."
],
[
"Text preprocessing is important to enhance the performance of text embedding techniques. We applied standard preprocessing to the raw text of all 13 million records in our corpus, as follows. We divide our documents into iterative word tokens using the NLTK library BIBREF11 and remove punctuation and digit-only tokens. We then apply word stemming using the Porter algorithm BIBREF12, BIBREF24. If the Porter method cannot find a stemmed version for a token, we apply the Snowball algorithm BIBREF25. Finally, we remove any stop-words (repeat words with low content) using NLTK's stop-word list. Although pre-processing reduces some of the syntactic information, it consolidates the semantic information of the vocabulary. We note that the incident descriptions contain typos and acronyms, which have been left uncorrected to avoid manual intervention or the use of spell checkers, so as to mimic as closely as possible a realistic scenario."
],
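A minimal sketch of the preprocessing step described above, assuming the NLTK resources ('punkt', 'stopwords') are available; the rule for falling back from Porter to Snowball stemming is an interpretation of the text, not the authors' exact implementation.

```python
import string
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, SnowballStemmer

porter = PorterStemmer()
snowball = SnowballStemmer("english")
stop_words = set(stopwords.words("english"))

def preprocess(text):
    """Tokenise, drop punctuation/digit-only tokens, stem, and remove stop-words."""
    tokens = word_tokenize(text.lower())
    tokens = [t for t in tokens
              if t not in string.punctuation and not t.isdigit()]
    out = []
    for t in tokens:
        stem = porter.stem(t)
        if stem == t:      # assumed fallback: use Snowball if Porter leaves the token unchanged
            stem = snowball.stem(t)
        if stem not in stop_words and t not in stop_words:
            out.append(stem)
    return out
```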
[
"Computational text analysis relies on a mathematical representation of the base units of text (character $n$-grams, words or documents). Since our methodology is unsupervised, we avoid the use of labelled data, in contrast to supervised or semi-supervised classification methods BIBREF26, BIBREF27. In our work, we use a representation of text documents as vectors following recent developments in the field.",
"Traditionally, bag-of-words (BoW) methods represented documents as vectors of word frequencies weighted by inverse document frequency (TF-iDF). Such methods provide a statistical description of documents but they do not carry information about the order or proximity of words to each other and hence disregard semantic or syntactic relationships between words. In addition, BoW representations carry little information content as they tend to be high-dimensional and very sparse, due to the large size of word dictionaries and low frequencies of many terms.",
"Recently, deep neural network language models have successfully overcome the limitations of BoW methods by incorporating neighbourhoods in the mathematical description of each term. Distributed Bag of Words (DBOW), better known as Doc2Vec BIBREF13, is a form of Paragraph Vectors (PV) which creates a model that represents any word sequence (i.e. sentences, paragraphs, documents) as $d$-dimensional vectors, where $d$ is user-defined (typically $d=300$). Training a Doc2Vec model starts with a random $d$-dimensional vector assignment for each document in the corpus. A stochastic gradient descent algorithm iterates over the corpus with the objective of predicting a randomly sampled set of words from each document by using only the document's $d$-dimensional vector BIBREF13. The objective function being optimised by PV-DBOW is similar to the skip-gram model in Refs. BIBREF28, BIBREF29. Doc2Vec has been shown BIBREF30 to capture both semantic and syntactic characterisations of the input text, and outperforms BoW-based models such as LDA BIBREF2.",
"Benchmarking the Doc2Vec training: Here, we use the Gensim Python library BIBREF31 to train the PV-DBOW model. The Doc2Vec training was repeated several times with a variety of training hyper-parameters (chosen based on our own numerical experiments and the general guidelines provided by BIBREF32) in order to optimise the output. To characterise the usability and quality of models, we trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters. . In particular, we checked the effect of corpus size by training Doc2Vec models on the full 13 million NRLS records and on randomly sampled subsets of 1 million and 2 million records.",
"Since our target analysis has heavy medical content and specific use of words, we also tested the importance of the training corpus by generating an additional Doc2Vec model using a set of 5 million articles from the English Wikipedia representing standard, generic English usage, which works well in the analysis of news articles BIBREF33.",
"The results in Table TABREF5 show that training on the highly specific text from the NRLS records is an important ingredient in the successful vectorisation of the documents, as shown by the degraded performance for the Wikipedia model across a variety of training hyper-parameters. On the other hand, reducing the size of the corpus from 13 million to 1 million records did not affect the benchmarking dramatically. This robustness of the results to the size of the training corpus was confirmed further with the use of more detailed metrics, as discussed below in Section SECREF27 (see e.g., Figure FIGREF29).",
"Based on our benchmarking, henceforth we use the Doc2Vec model trained on the 13+ million NRLS records with the following hyper-parameters: {training method = dbow, number of dimensions for feature vectors size = 300, number of epochs = 10, window size = 15, minimum count = 5, number of negative samples = 5, random down-sampling threshold for frequent words = 0.001 }. As an indication of computational cost, the training of this model takes approximately 11 hours (run in parallel with 7 threads) on shared servers."
],
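A sketch of the PV-DBOW (Doc2Vec) training described above with the Gensim library, using the hyper-parameters reported in the text; `reports` (a list of raw record texts) and `preprocess` are illustrative placeholders from the preprocessing sketch.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [TaggedDocument(words=preprocess(text), tags=[i])
          for i, text in enumerate(reports)]

model = Doc2Vec(corpus,
                dm=0,              # 0 = distributed bag of words (dbow)
                vector_size=300,   # dimension of the document vectors
                epochs=10,
                window=15,
                min_count=5,
                negative=5,        # number of negative samples
                sample=0.001,      # down-sampling threshold for frequent words
                workers=7)

# infer a vector for a new, unseen record
vector = model.infer_vector(preprocess("Patient slipped on wet floor near the ward entrance."))
```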
[
"Once the Doc2Vec model is trained, we use it to infer a vector for each record in our analysis subset and construct $\\hat{S}$, a similarity matrix between the vectors by: computing the matrix of cosine similarities between all pairs of records, $S_\\text{cos}$; transforming it into a distance matrix $D_{cos} = 1-S_{cos}$; applying element-wise max norm to obtain $\\hat{D}=\\Vert D_{cos}\\Vert _{max}$; and normalising the similarity matrix $\\hat{S} = 1-\\hat{D}$ which has elements in the interval $[0,1]$.",
"This similarity matrix can be thought of as the adjacency matrix of a fully connected weighted graph. However, such a graph contains many edges with small weights reflecting the fact that in high-dimensional noisy data even the least similar nodes present a substantial degree of similarity. Indeed, such weak similarities are in most cases redundant and can be explained through stronger pairwise similarities. These weak, redundant edges obscure the graph structure, as shown by the diffuse visualisation in Figure FIGREF7A.",
"To reveal the graph structure, we sparsify the similarity matrix to obtain a MST-kNN graph BIBREF14 based on a geometric heuristic that preserves the global connectivity of the graph while retaining details about the local geometry of the dataset. The MST-kNN algorithm starts by computing the minimum spanning tree (MST) of the full matrix $\\hat{D}$, i.e., the tree with $(N-1)$ edges connecting all nodes in the graph with minimal sum of edge weights (distances). The MST is computed using the Kruskal algorithm implemented in SciPy BIBREF34. To this MST, we add edges connecting each node to its $k$ nearest nodes (kNN) if they are not already in the MST. Here $k$ is an user-defined parameter that regulates the sparsity of the resulting graph. The binary adjacency matrix of the MST-kNN graph is Hadamard-multiplied with $\\hat{S}$ to give the adjacency matrix $A$ of the weighted, undirected sparsified graph.",
"The network visualisations in Figure FIGREF7 give an intuitive picture of the effect of sparsification as $k$ is decreased. If $k$ is very small, the graph is very sparse but not robust to noise. As $k$ is increased, the local similarities between documents induce the formation of dense subgraphs (which appear closer in the graph visualisation layout). When the number of neighbours becomes too large, the local structure becomes diffuse and the subgraphs lose coherence, signalling the degradation of the local graph structure. Relatively sparse graphs that preserve important edges and global connectivity of the dataset (guaranteed here by the MST) have computational advantages when using community detection algorithms.",
"Although we use here the MST-kNN construction due to its simplicity and robustness, network inference, graph sparsification and graph construction from data is an active area of research, and several alternatives exist based on different heuristics, e.g., Graphical Lasso BIBREF35, Planar Maximally Filtered Graph BIBREF36, spectral sparsification BIBREF37, or the Relaxed Minimum Spanning Tree (RMST) BIBREF38. We have experimented with some of those methods and obtained comparable results. A detailed comparison of sparsification methods as well as the choice of distance in defining the similarity matrix $\\hat{S}$ is left for future work."
],
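A sketch of the similarity normalisation and MST-kNN sparsification described above, here with scikit-learn and SciPy; the exact implementation details are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_knn_graph(X, k=13):
    """X: (n_docs, dim) matrix of Doc2Vec vectors. Returns a weighted adjacency matrix."""
    D_hat = np.clip(1.0 - cosine_similarity(X), 0, None)
    D_hat = D_hat / D_hat.max()              # element-wise max norm
    S_hat = 1.0 - D_hat                      # normalised similarities in [0, 1]

    mst = minimum_spanning_tree(D_hat).toarray()
    keep = (mst + mst.T) > 0                 # symmetrised MST edges

    # add the k nearest neighbours of each node (column 0 is the node itself)
    nearest = np.argsort(D_hat, axis=1)[:, 1:k + 1]
    for i, neighbours in enumerate(nearest):
        keep[i, neighbours] = True
        keep[neighbours, i] = True

    np.fill_diagonal(keep, False)
    return keep * S_hat                      # weighted, undirected sparsified graph
```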
[
"Community detection encompasses various graph partitioning approaches which aim to find `good' partitions into subgraphs (or communities) according to different cost functions, without imposing the number of communities a priori BIBREF39. The notion of community depends on the choice of cost function. Commonly, communities are taken to be subgraphs whose nodes are connected strongly within the community with relatively weak inter-community edges. Such structural notion is related to balanced cuts. Other cost functions are posed in terms of transitions inside and outside of the communities, usually as one-step processes BIBREF5. When transition paths of all lengths are considered, the concept of community becomes intrinsically multi-scale, i.e., different partitions are relevant at different time scales leading to a multi-level description dictated by the transition dynamics BIBREF15, BIBREF40, BIBREF16. This leads to the framework of Markov Stability (MS), a dynamics-based, multi-scale community detection methodology, which recovers several well-known heuristics as particular cases BIBREF15, BIBREF17, BIBREF18.",
"MS is an unsupervised community detection method that finds robust and stable partitions of a graph (and the associated communities) under the evolution of a continuous-time diffusion process without a priori choice of the number or type of communities or their relative relationships BIBREF15, BIBREF40, BIBREF16, BIBREF41 . In simple terms, MS can be understood by analogy to a drop of ink diffusing on the graph: the ink diffuses homogeneously unless the graph has intrinsic sub-structures, in which case the ink gets transiently contained, over particular time scales, within groups of nodes. The existence of such transients indicates a natural scale to partition the graph along the subgraphs (or communities) where the diffusion is transiently trapped. As the process continues to evolve, the ink diffuses out of those communities but might get transiently contained in other, larger subgraphs, if such multi-level structure exists. By analysing the Markov dynamics over time, MS detects the structure of the graph across scales. If a graph has no natural scales for partitioning, then MS returns no communities. The Markov time $t$ thus acts as a resolution parameter that allows us to extract robust partitions that persist over particular time scales, in an unsupervised manner.",
"Mathematically, given the adjacency matrix $A_{N \\times N}$ of the graph obtained as described previously, let us define the diagonal matrix $D=\\text{diag}(\\mathbf {d})$, where $\\mathbf {d}=A \\mathbf {1}$ is the degree vector. The random walk Laplacian matrix is defined as $L_\\text{RW}=I_N-D^{-1}A$, where $I_N$ is the identity matrix of size $N$ and the transition matrix (or kernel) of the associated continuous-time Markov process is $P(t)=e^{-t L_\\text{RW}}, \\, t>0$ BIBREF16. Any partition $\\mathcal {H}$ into $C$ clusters is associated with a binary membership matrix $H_{N \\times C}$ that maps the $N$ nodes into the clusters. Below, we will use the matrix $H$ to denote the corresponding partition $\\mathcal {H}$. We can then compute the $C\\times C$ clustered autocovariance matrix:",
"where $\\pi $ is the steady-state distribution of the process and $\\Pi =\\text{diag}(\\pi )$. The element $[R(t,H)]_{\\alpha \\beta }$ quantifies the probability that a random walker starting from community $\\alpha $ at $t=0$ will be in community $\\beta $ at time $t$, minus the probability that this event occurs by chance at stationarity.",
"The above definitions allow us to introduce our cost function measuring the goodness of a partition over time $t$, termed the Markov Stability of partition $H$:",
"A partition $H$ that maximises $r(t,H)$ is comprised of communities that preserve the flow within themselves over time $t$, since in that case the diagonal elements of $R(t,H)$ will be large and the off-diagonal elements will be small. For details, see BIBREF15, BIBREF40, BIBREF16, BIBREF42.",
"Our computational algorithm thus searches for partitions at each Markov time $t$ that maximise $r(t,H)$. Although the maximisation of (DISPLAY_FORM11) is an NP-hard problem (hence with no guarantees for global optimality), there are efficient optimisation methods that work well in practice. Our implementation here uses the Louvain Algorithm BIBREF43, BIBREF18 which is efficient and known to give good results when applied to benchmarks. To obtain robust partitions, we run the Louvain algorithm 500 times with different initialisations at each Markov time and pick the best 50 with the highest Markov Stability value $r(t,H)$. We then compute the variation of information BIBREF44 of this ensemble of solutions $VI(t)$, as a measure of the reproducibility of the result under the optimisation. In addition, we search for partitions that are persistent across time $t$, as given by low values of the variation of information between optimised partitions across time $VI(t,t^{\\prime })$. Robust partitions are therefore indicated by Markov times where $VI(t)$ shows a dip and $VI(t,t^{\\prime })$ has an extended plateau with low values, indicating consistency under the optimisation and validity over extended scales BIBREF42, BIBREF16. Below, we apply MS to find partitions across scales of the similarity graph of documents, $A$. The communities detected correspond to groups of documents with similar content at different levels of granularity."
],
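A sketch of the Markov Stability quantities defined above, evaluated for a given partition; the Louvain-based optimisation over partitions and the scan over Markov times are not shown, and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def markov_stability(A, labels, t):
    """A: weighted adjacency matrix; labels: integer community index per node; t: Markov time."""
    N = A.shape[0]
    d = A.sum(axis=1)
    L_rw = np.eye(N) - A / d[:, None]            # random-walk Laplacian I - D^{-1} A
    P_t = expm(-t * L_rw)                        # transition kernel of the diffusion process

    pi = d / d.sum()                             # stationary distribution
    H = np.zeros((N, labels.max() + 1))
    H[np.arange(N), labels] = 1.0                # binary membership matrix

    R = H.T @ (np.diag(pi) @ P_t - np.outer(pi, pi)) @ H   # clustered autocovariance R(t, H)
    return np.trace(R)                                      # Markov Stability r(t, H)
```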
[
"Graph layouts: We use the ForceAtlas2 BIBREF45 layout algorithm to represent graphs on the plane. This layout assigns a harmonic spring to each edge and finds through iterative rearrangements finds an arrangement on the plane that balances attractive and repulsive forces between nodes. Hence similar nodes tend to appear close together on this layout. We colour the nodes by either hand-coded categories (Figure FIGREF7) or multiscale MS communities (Figure FIGREF21). Spatially coherent colourings on this layout imply good clusters in terms of the similarity graph.",
"Tracking membership through Sankey diagrams: Sankey diagrams allow us to visualise the relationship of node membership across different partitions and with respect to the hand-coded categories. Two-layer Sankey diagrams (e.g., Fig. FIGREF22) reflect the correspondence between MS clusters and the hand-coded external categories, whereas we use a multilayer Sankey diagram in Fig. FIGREF21 to present the multi-resolution MS community detection across scales.",
"Normalised contingency tables: To capture the relationship between our MS clusters and the hand-coded categories, we also provide a complementary visualisation as z-score heatmaps of normalised contingency tables, e.g., Fig. FIGREF22. This allows us to compare the relative association of content clusters to the external categories at different resolution levels. A quantification of the overall correspondence is also provided by the $NMI$ score in Eq. (DISPLAY_FORM17).",
"Word clouds of increased intelligibility through lemmatisation: Our method clusters text documents according to their intrinsic content. This can be understood as a type of topic detection. To visualise the content of clusters, we use Word Clouds as basic, yet intuitive, summaries of information to extract insights and compare a posteriori with hand-coded categories. They can also provide an aid for monitoring results when used by practitioners.",
"The stemming methods described in Section SECREF3 truncate words severely to enhance the power of the language processing computational methods by reducing the redundancy in the word corpus. Yet when presenting the results back to a human observer, it is desirable to report the cluster content with words that are readily comprehensible. To generate comprehensible word clouds in our a posteriori analyses, we use a text processing method similar to the one described in BIBREF46. Specifically, we use the part of speech (POS) tagging module from NLTK to leave out sentence parts except the adjectives, nouns, and verbs. We also remove less meaningful common verbs such as `be', `have', and `do' and their variations. The remaining words are then lemmatised in order to normalise variations of the same word. Finally, we use the Python library wordcloud to create word clouds with 2 or 3-gram frequency list of common word groups."
],
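A sketch of the word-cloud generation described above (POS filtering, lemmatisation, frequency-based cloud), assuming the NLTK tagger and WordNet resources are available; the 2/3-gram handling described in the text is omitted here for brevity.

```python
from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer
from wordcloud import WordCloud

lemmatizer = WordNetLemmatizer()
COMMON_VERBS = {"be", "have", "do"}

def cluster_word_cloud(texts):
    """texts: list of raw incident descriptions belonging to one content cluster."""
    kept = []
    for text in texts:
        for token, tag in pos_tag(word_tokenize(text.lower())):
            if tag.startswith(("JJ", "NN", "VB")):        # adjectives, nouns, verbs
                wn_pos = "v" if tag.startswith("VB") else "a" if tag.startswith("JJ") else "n"
                lemma = lemmatizer.lemmatize(token, pos=wn_pos)
                if lemma not in COMMON_VERBS:
                    kept.append(lemma)
    return WordCloud(width=800, height=400, background_color="white").generate(" ".join(kept))
```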
[
"Although our dataset has a classification hand-coded by a human operator, we do not use it in our analysis. Indeed, one of our aims is to explore the relevance of the fixed external classes as compared to content-driven groupings obtained in an unsupervised manner. Therefore we provide a double route to quantify the quality of the clusters by computing two complementary measures: (i) an intrinsic measure of topic coherence, and (ii) a measure of similarity to the external hand-coded categories.",
"Topic coherence of text: As an intrinsic measure of consistency of word association, we use the pointwise mutual information ($PMI$) BIBREF19, BIBREF47. The $PMI$ is an information-theoretical score that captures the probability of words being used together in the same group of documents. The $PMI$ score for a pair of words $(w_1,w_2)$ is:",
"where the probabilities of the words $P(w_1)$, $P(w_2)$, and of their co-occurrence $P(w_1 w_2)$ are obtained from the corpus. We obtain an aggregate $\\widehat{PMI}$ for the graph partition $C=\\lbrace c_i\\rbrace $ by computing the $PMI$ for each cluster, as the median $PMI$ between its 10 most common words (changing the number of words gives similar results), and computing the weighted average of the $PMI$ cluster scores:",
"where $c_i$ denotes the clusters in partition $C$, each with size $n_i$, so that $N=\\sum _{c_i \\in C} n_i$ is the total number of nodes. Here $S_i$ denotes the set of top 10 words for cluster $c_i$.",
"The $PMI$ score has been shown to perform well BIBREF19, BIBREF47 when compared to human interpretation of topics on different corpora BIBREF48, BIBREF49, and is designed to evaluate topical coherence for groups of documents, in contrast to other tools aimed at short forms of text. See BIBREF26, BIBREF27, BIBREF50, BIBREF51 for other examples.",
"Here, we use the $\\widehat{PMI}$ score to evaluate partitions without any reference to an externally labelled `ground truth'.",
"Similarity between the obtained partitions and the hand-coded categories: To quantify how our content-driven unsupervised clusters compare against the external classification, we use the normalised mutual information ($NMI$), a well-known information-theoretical score that quantifies the similarity between clusterings considering correct and incorrect assignments in terms of the information between the clusterings. The NMI between two partitions $C$ and $D$ of the same graph is:",
"where $I(C,D)$ is the Mutual Information and $H(C)$ and $H(D)$ are the entropies of the two partitions.",
"The $NMI$ is bounded ($0 \\le NMI \\le 1$) and a higher value corresponds to higher similarity of the partitions (i.e., $NMI=1$ when there is perfect agreement between partitions $C$ and $D$). The $NMI$ score is directly related to the V-measure in the computer science literature BIBREF52."
],
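A sketch of the two evaluation scores described above; estimating the word probabilities from document-level co-occurrence counts is an assumption, and the NMI is taken from scikit-learn.

```python
import itertools
import numpy as np
from collections import Counter
from sklearn.metrics import normalized_mutual_info_score

def aggregated_pmi(clusters, top_n=10):
    """clusters: list of clusters, each a list of tokenised documents."""
    docs = [doc for cluster in clusters for doc in cluster]
    n_docs = len(docs)
    word_df, pair_df = Counter(), Counter()
    for doc in docs:
        vocab = set(doc)
        word_df.update(vocab)
        pair_df.update(itertools.combinations(sorted(vocab), 2))

    def pmi(w1, w2):
        joint = pair_df[tuple(sorted((w1, w2)))]
        if joint == 0:
            return 0.0
        # log[ P(w1 w2) / (P(w1) P(w2)) ] with document-frequency estimates
        return np.log(joint * n_docs / (word_df[w1] * word_df[w2]))

    weighted_sum = 0.0
    for cluster in clusters:
        top = [w for w, _ in Counter(w for doc in cluster for w in doc).most_common(top_n)]
        scores = [pmi(w1, w2) for w1, w2 in itertools.combinations(top, 2)]
        weighted_sum += len(cluster) * np.median(scores)
    return weighted_sum / n_docs

# similarity to the hand-coded categories, e.g.
# nmi = normalized_mutual_info_score(hand_coded_labels, ms_cluster_labels)
```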
[
"As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification.",
"The supervised classification was carried out by training on features and text three classifiers commonly applied to text classification tasks BIBREF22, BIBREF23: a Ridge classifier, Support Vector Machines with a linear kernel, and Random Forests. The goal is to predict the degree of harm (DoH) among five possible values (1-5). The classification is carried out with five-fold cross validation, using 80% of the data to train the model and the remaining 20% to test it. As a measure of performance of the classifiers and models, we use the weighted average of the F1 score for all levels of DoH, which takes into account both precision and recall, i.e., both the exactness and completeness of the model."
],
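A sketch of the supervised set-up described above: three standard classifiers evaluated with five-fold cross-validation and the weighted F1 score; `X` and `y` are placeholders for the feature matrix and the degree-of-harm labels.

```python
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

classifiers = {
    "ridge": RidgeClassifier(),
    "linear_svm": LinearSVC(),
    "random_forest": RandomForestClassifier(),
}

# X: feature matrix (TF-iDF or Doc2Vec vectors, optionally with one-hot categories
# and MS cluster labels); y: degree-of-harm labels (1-5)
for name, clf in classifiers.items():
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_weighted")
    print(f"{name}: weighted F1 = {f1.mean():.3f} (+/- {f1.std():.3f})")
```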
[
"We showcase our methodology through the analysis of the text from NRLS patient incident reports. In addition to textual descriptions, the reports are hand-coded upon reporting with up to 170 features per case, including a two-level manual classification of the incidents.",
"Here, we only use the text component and apply our graph-based text clustering to a set of 3229 reports from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014. As summarised in Figure FIGREF2, we start by training our Doc2Vec text embedding using the full 13+ million records collected by the NRLS since 2004 (although, as discussed above, a much smaller corpus of NRLS documents can be used). We then infer vectors for our 3229 records, compute the cosine similarity matrix and construct an MST-kNN graph with $k=13$ for our graph-based clustering. (We have confirmed the robustness of the MST-kNN construction in our data for $k>13$ by scanning values of $k \\in [1,50]$, see Section SECREF27). We then applied Markov Stability, a multi-resolution graph partitioning algorithm to the MST-kNN graph. We scan across Markov time ($t \\in [0.01, 100]$ in steps of 0.01). At each $t$, we run 500 independent Louvain optimisations to select the optimal partition found, as well as quantifying the robustness to optimisation by computing the average variation of information $VI(t)$ between the top 50 partitions. Once the full scan across $t$ is finalised, we compute $VI(t,t^{\\prime })$, the variation of information between the optimised partitions found across the scan in Markov time, to select partitions that are robust across scales."
],
[
"Figure FIGREF21 presents a summary of our MS analysis. We plot the number of clusters of the optimal partition and the two metrics of variation of information across all Markov times. The existence of a long plateau in $VI(t,t^{\\prime })$ coupled to a dip in $VI(t)$ implies the presence of a partition that is robust both to the optimisation and across Markov time. To illustrate the multi-scale features of the method, we choose several of these robust partitions, from finer (44 communities) to coarser (3 communities), obtained at five Markov times and examine their structure and content. The multi-level Sankey diagram summarises the relationship of the partitions across levels.",
"The MS analysis of the graph reveals a multi-level structure of partitions, with a strong quasi-hierarchical organisation. We remark that our optimisation does not impose any hierarchical structure a priori, so that the observed consistency of communities across levels is intrinsic to the data and suggests the existence of sub-themes that integrate into larger thematic categories. The unsupervised detection of intrinsic scales by MS enables us to obtain groups of records with high content similarity at different levels of granularity. This capability can be used by practitioners to tune the level of description to their specific needs, and is used below as an aid in our supervised classification task in Section SECREF4.",
"To ascertain the relevance of the layers of content found by MS, we examined the five levels of resolution in Figure FIGREF21. For each level, we produced lemmatised word clouds, which we used to generate descriptive content labels for the communities. We then compared a posteriori the content clusters with the hand-coded categories through a Sankey diagram and a contingency table. The results are shown in Figures FIGREF22–FIGREF25 for each of the levels.",
"The partition into 44 communities presents content clusters with well-defined characterisations, as shown by the Sankey diagram and the highly clustered structure of the contingency table (Figure FIGREF22). Compared to the 15 hand-coded categories, this 44-community partition provides finer groupings corresponding to specific sub-themes within the generic hand-coded categories. This is apparent in the hand-coded classes `Accidents', `Medication', `Clinical assessment', `Documentation' and `Infrastructure', where a variety of meaningful subtopics are identified (see Fig. FIGREF23 for details). In other cases, however, the content clusters cut across the external categories, e.g., the clusters on labour ward, chemotherapy, radiotherapy and infection control are coherent in content but can belong to several of the external classes. At this level of resolution, our algorithm also identified highly specific topics as separate content clusters, including blood transfusions, pressure ulcer, consent, mental health, and child protection, which have no direct relationship with the external classes provided to the operator.",
"Figure FIGREF24A and FIGREF24B present the results for two partitions at medium level of resolution, where the number of communities (12 and 17) is close to that of hand-coded categories (15). As expected from the quasi-hierarchy detected by our multi-resolution analysis, we find that the communities in the 17-way and 12-way partitions emerge from consistent aggregation of the smaller communities in the 44-way partition in Figure FIGREF22. Focussing on the 12-way partition, we see that some of the sub-themes in Figure FIGREF23 are merged into more general topics. An example is Accidents (community 2 in Fig. FIGREF24A), a merger of seven finer communities, which corresponds well with the external category `Patient accidents'. A similar phenomenon is seen for the Nursing cluster (community 1), which falls completely under the external category `Infrastructure'. The clusters related to `Medication' similarly aggregate into a larger community (community 3), yet there still remains a smaller, specific community related to Homecare medication (community 12) with distinct content. Other communities, on the other hand, still strand across external categories. This is clearly observable in communities 10 and 11 (Samples/ lab tests/forms and Referrals/appointments), which fall naturally across the `Documentation' and `Clinical Assessment'. Similarly, community 9 (Patient transfers) sits across the `Admission/Transfer' and `Infrastructure' external categories, due to its relation to nursing and hospital constraints. A substantial proportion of records was hand-coded under the generic `Treatment/Procedure' class, yet MS splits into into content clusters that retain medical coherence, e.g., Radiotherapy (Comm. 4), Blood transfusions (Comm. 7), IV/cannula (Comm. 5), Pressure ulcer (Comm. 8), and the large community Labour ward (Comm. 6).",
"The medical specificity of the Radiotherapy, Pressure ulcer and Labour ward clusters means that they are still preserved as separate groups to the next level of coarseness in the 7-way partition (Figure FIGREF25A). The mergers in this case lead to a larger communities referring to Medication, Referrals/Forms and Staffing/Patient transfers. Figure FIGREF25B shows the final level of agglomeration into 3 content clusters: records referring to Accidents; a group broadly referring to matters Procedural (referrals, forms, staffing, medical procedures) cutting across external categories; and the Labour ward cluster, still on its own as a subgroup with distinctive content.",
"This process of agglomeration of content, from sub-themes into larger themes, as a result of the multi-scale hierarchy of MS graph partitions is shown explicitly with word clouds in Figure FIGREF26 for the 17-, 12- and 7-way partitions. Our results show good overall correspondence with the hand-coded categories across resolutions, yet our results also reveal complementary categories of incidents not defined in the external classification. The possibility of tuning the granularity afforded by our method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes."
],
[
"We have examined quantitatively the robustness of the results to parametric and methodological choices in different steps of our framework. Specifically, we evaluate the effect of: (i) using Doc2Vec embeddings instead of BoW vectors; (ii) the size of corpus for training Doc2Vec; (iii) the sparsity of the MST-kNN graph construction. We have also carried out quantitative comparisons to other methods for topic detection and clustering: (i) LDA-BoW, and (ii) several standard clustering methods.",
"Doc2Vec provides improved clusters compared to BoW: As compared to standard bag of words (BoW), fixed-sized vector embeddings (Doc2Vec) produces lower dimensional vector representations with higher semantic and syntactic content. Doc2Vec outperforms BoW representations in practical benchmarks of semantic similarity and is less sensitive to hyper-parameters BIBREF30. To quantify the improvement provided by Doc2Vec, we constructed a MST-kNN graph from TF-iDF vectors and ran MS on this TF-iDF similarity graph. Figure FIGREF28 shows that Doc2Vec outperforms BoW across all resolutions in terms of both $NMI$ and $\\widehat{PMI}$ scores.",
"Robustness to the size of the Doc2Vec training dataset : Table TABREF5 indicates a small effect of the size of the training corpus on the Doc2Vec model. To confirm this, we trained two additional Doc2Vec models on sets of 1 million and 2 million records (randomly chosen from the full 13+ million records) and followed the same procedure to construct the MST-kNN graph and carry out the MS analysis. Figure FIGREF29 shows that the performance is affected only mildly by the size of the Doc2Vec training set.",
"Robustness to the level of graph sparsification:",
"We sparsify the matrix of cosine similarities using the MST-kNN graph construction. The smaller the value of $k$, the sparser the graph. Sparser graphs have computational advantages for community detection algorithms, but too much sparsification degrades the results. Figure FIGREF30 shows the effect of sparsification in the graph construction on the performance of MS clusters. Our results are robust to the choice of $k$, provided it is not too small: both the $NMI$ and $\\widehat{PMI}$ scores reach a similar level for values of $k$ above 13-16. Due to computational efficiency, we favour a relatively small value of $k=13$.",
"Comparison of MS partitions to Latent Dirichlet Allocation with Bag-of-Words (LDA-BoW): We have compared the MS results to LDA, a widely used methodology for text analysis. A key difference in LDA is that a different model needs to be trained when the number of topics changes, whereas our MS method produces clusterings at all levels of resolution in one go. To compare the outcomes, we trained five LDA models corresponding to the five MS levels in Figure FIGREF21. Table TABREF31 shows that MS and LDA give partitions that are comparably similar to the hand-coded categories (as measured with $NMI$), with some differences depending on the scale, whereas the MS clusters have higher topic coherence (as given by $\\widehat{PMI}$) across all scales.",
"To give an indication of computational cost, we ran both methods on the same servers. Our method takes approximately 13 hours in total (11 hours to train the Doc2Vec model on 13 million records and 2 hours to produce the full MS scan with 400 partitions across all resolutions). The time required to train just the 5 LDA models on the same corpus amounts to 30 hours (with timings ranging from $\\sim $2 hours for the 3 topic LDA model to 12.5 hours for the 44 topic LDA model). This comparison also highlights the conceptual difference between our multi-scale methodology and LDA topic modelling. While LDA computes topics at a pre-determined level of resolution, our method obtains partitions at all resolutions in one sweep of the Markov time, from which relevant partitions are chosen based on their robustness. The MS partitions at all resolutions are available for further investigation if so needed.",
"Comparison of MS to other partitioning and community detection algorithms: We have partitioned the same kNN-MST graph using several well-known algorithms readily available in code libraries (i.e., the iGraph module for Python): Modularity Optimisation BIBREF53, InfoMap BIBREF5, Walktrap BIBREF54, Label Propagation BIBREF55, and Multi-resolution Louvain BIBREF43. Note that, in contrast with our multiscale MS analysis, these methods give just one partition at a particular resolution (or two for the Louvain implementation in iGraph). Figure FIGREF32 shows that MS provides improved or equal results to all those other graph partitioning methods for both $NMI$ and $\\widehat{PMI}$ across all scales. Only for very fine resolution (more than 50 clusters) does Infomap, which partitions graphs into small clique-like subgraphs BIBREF40, BIBREF56, provide a slightly improved $NMI$. Therefore, MS finds both relevant and high quality clusterings across all scales by sweeping the Markov time parameter."
],
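A sketch of the LDA-BoW baseline used in the comparison above, trained with Gensim for a fixed number of topics; the number of passes and the hard topic assignment are assumptions.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# tokenised_docs: list of token lists from the preprocessing step
dictionary = Dictionary(tokenised_docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenised_docs]

# one model per resolution level, e.g. 12 topics to compare with the 12-way MS partition
lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=12, passes=10)

# hard-assign each document to its most probable topic for comparison with MS clusters
lda_labels = [max(lda.get_document_topics(bow), key=lambda p: p[1])[0]
              for bow in bow_corpus]
```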
[
"Here we approach the task of training a supervised classifier that predicts the degree of harm of an incident based on other features of the record (such as location, external category, and medical specialty) and on the textual component of the report. To this end, we use the embedded text vectors and MS cluster labels of the records as features to predict the degree of harm to the patient.",
"Each NRLS record has more than 170 features filled manually by healthcare staff, including the degree of harm (DoH) to the patient, a crucial assessment of the reported incident. The incident is classified into five levels: 'No harm', 'Low', 'Moderate', 'Severe', and 'Death'. However, the reported DoH is not consistent across hospitals and can be unreliable BIBREF6.",
"The lack of reliability of the recorded DoH poses a challenge when training supervised models. Given the size of the dataset, it is not realistic to ask medics to re-evaluate incidents manually. Instead, we use the publicly available `Learning from mistakes league table' based on NHS staff survey data to identify organisations (NHS Trusts) with `outstanding' (O) and `poor reporting culture' (PRC). Our hypothesis is that training our classifiers on records from organisations with better rankings in the league table should lead to improved prediction. If there is a real disparity in the manual classification among organisations, only incidents labelled by O-ranked Trusts should be regarded as a `ground truth'."
],
[
"We study NRLS incidents reported between 2015 and 2017 from O-ranked and PRC-ranked Trusts. The 2015-17 NRLS dataset is very unbalanced: there are 2,038,889 “No harm” incidents against only 6,754 “Death” incidents. To tackle this issue, we sample our dataset as recommended by BIBREF8, and randomly select 1,016 records each of `No harm' , `Low', and `Moderate', and 508 records each of `Severe' and `Death' incidents, from each type of Trust. We thus obtain two datasets (O and PRC) consisting of a total of 4,064 incidents each.",
"For each dataset (O and PRC), we train three classifiers (Ridge, Support Vector Machine with a linear kernel, and Random Forest) with five-fold cross validation, and we compute the F-1 scores of each fold to evaluate the model performance. We first train models using three categories from the reports: location (L), external hand-coded category (C), and medical specialty (S). We also compute the performance of models trained on text features, both TF-iDF and Doc2Vec. We also study models trained on a mixture of text and categories. Finally, we run Markov Stability as described above to obtain cluster labels for each dataset (O and PRC) at different resolutions (70, 45, 30 and 13 communities). We then evaluate if it is advantageous to include the labels of the MS clusters as additional features.",
"Table TABREF34 presents the results of our numerical experiments. Our first observation is that, for this data, SVM with linear kernel has the best performance (similar to Ridge), and Random Forests perform poorly in general. There are several conclusions from our study. First, there is a consistent difference between the scores of the O and PRC datasets (ranging from 1.7% to 11.2% for an average of 5.6%), thus confirming our hypothesis that automated classification performs better when training with data from organizations with better rankings in the league table. Second, using text features is highly advantageous in predicting the degree of harm compared to category alone: there is a substantial increase of up to 100% in the F1 score between column 1 (all three categories) and column 2 (Tf-iDF). Furthermore, adding categorical features (L, C, or S) to the TF-iDF text features improves the scores only marginally (around 2%), as seen by comparing columns 3–6 with column 2.",
"Given the demonstrated importance of text, we studied the effect of using more refined textual features for classification. In columns 7-10, we considered the effect of adding to TF-iDF the MS labels extracted from our text analysis (as described above), and we find a larger improvement of around 7% with respect to mere TF-iDF (column 2). The improvement is larger for finer clusterings into 70 and 45 communities, which contain enough detail that can be associated with levels of risk (e.g., type of accident). This supports the value of the multi-resolution groupings we have extracted through our analysis.",
"We also studied the impact of using Doc2Vec vectors as features. Interestingly, the comparison between columns 2 and 11 shows that there is only a slight improvement of 2% when using Doc2Vec instead of TF-iDF features for the case of records from O-ranked institutions, but the improvement is of 12% for the records from PRC Trusts. This differences suggests that the usage of terms is more precise in O-ranked hospitals so that the differences between TF-iDF are minimised, while the advantages of the syntactic and semantic reconstruction of the Doc2Vec embedding becomes more important in the case of PRC Trusts.",
"Based on these findings, we build our final model that uses a Support Vector Machine classifier with both Doc2Vec embeddings and the MS labels for 30 content clusters (encoded via a One-Hot encoder) as features. We choose to keep only 30 communities as this performs well when combined with the Doc2Vec embedding (without slowing too much the classifier). We performed a grid search to optimise the hyperparameters of our model (penalty = 10, tolerance for stopping criterion = 0.0001, linear kernel). For the O-ranked records, our model achieves a weighted F1 score of 0.657, with a 19% improvement with respect to TF-iDF text features and a 107% improvement with respect to categorical features. (For the PRC records, the corresponding improvements are 33% and 215%, respectively.) Note that similar improvements are also obtained for the other classifiers when using Doc2Vec and MS labels as features. It is also worth noting that the differences in the prediction of DoH between PRC and O-ranked records is reduced when using text tools and, specifically, the F1-score of the SVM classifier based on Doc2Vec with MS is almost the same for both datasets. Hence the difference in the quality of the reporting categories can be ameliorated by the use of the textual content of the reports. We summarise the main comparison of the performance of the SVM classifier based on categorical, raw text, and text with content for both datasets in Figure FIGREF35.",
"Examination of the types of errors and ex novo re-classification by clinicians:",
"A further analysis of the confusion matrices used to compute the F1 score reveals that most of the errors of our model are concentrated in the `No harm', `Low harm' and `Moderate harm' categories, whereas fewer errors are incurred in the `Severe harm' and `Death' categories. Therefore, our method is more likely to return false alarms rather than missing important and harmful incidents.",
"In order to have a further evaluation of our results, we asked three clinicians to analyse ex novo a randomly chosen sample of 135 descriptions of incidents, and to determine their degree of harm based on the information in the incident report. The sample was selected from the O-ranked dataset and no extra information apart from the text was provided. We then compared the DoH assigned by the clinicians with both the results of our classifier and the recorded DoH in the dataset.",
"Remarkably, the agreement rate of the clinicians' assessment with the recorded DoH was surprisingly low. For example, the agreement in the `No Harm' incidents was only 38%, and in the `Severe' incidents only 49%. In most cases, though, the disparities amounted to switching the DoH by one degree above or below. To reduce this variability, we analysed the outcomes in terms of three larger groups: `No Harm' and `Low Harm' incidents were considered as one outcome; `Moderate Harm' was kept separate; and `Severe Harm' and `Death' were grouped as one outcome, since they both need to be notified to NHS safety managers.",
"The results are presented in Table TABREF36. Our classification agrees as well as the pre-existing DoH in the dataset with the ex novo assessment of the clinicians, but our method has higher agreement in the severe and deadly incidents. These results confirm that our method performs as well as the original annotators but is better at identifying risky events."
],
[
"We have applied a multiscale graph partitioning algorithm (Markov Stability) to extract content-based clusters of documents from a textual dataset of incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and analyses the ensuing similarity graph of documents through multi-resolution capabilities to capture clusters without imposing a priori their number or structure. The different levels of resolution found to be relevant can be chosen by the practitioner to suit the requirements of detail for each specific task. For example, the top level categories of the pre-defined classification hierarchy are highly diverse in size, with large groups such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure' alongside small, specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'. Our multi-scale partitioning finds additional subcategories with medical detail within some of the large categories (Fig. FIGREF22 and FIGREF23).",
"Our a posteriori analysis showed that the method recovers meaningful clusters of content as measured by the similarity of the groups against the hand-coded categories and by the intrinsic topic coherence of the clusters. The clusters have high medical content, thus providing complementary information to the externally imposed classification categories. Indeed, some of the most relevant and persistent communities emerge because of their highly homogeneous medical content, even if they cannot be mapped to standardised external categories.",
"An area of future research will be to confirm if the finer unsupervised cluster found by our analysis are consistent with a second level in the hierarchy of external categories (Level 2, around 100 categories), which is used less consistently in hospital settings. The use of content-driven classification of reports could also be important within current efforts by the World Health Organisation (WHO) under the framework for the International Classification for Patient Safety (ICPS) BIBREF9 to establish a set of conceptual categories to monitor, analyse and interpret information to improve patient care.",
"We have used our clusters within a supervised classifier to predict the degree of harm of an incident based only on free-text descriptions. The degree of harm is an important measure in hospital evaluation and has been shown to depend on the reporting culture of the particular organisation. Overall, our method shows that text description complemented by the topic labels extracted by our method show improved performance in this task. The use of such enhanced NLP tools could help improve reporting frequency and quality, in addition to reducing burden to staff, since most of the necessary information can be retrieved automatically from text descriptions. Further work, would aim to add interpretability to the supervised classification BIBREF57, so as to provide medical staff with a clearer view of the outcomes of our method and to encourage its uptake.",
"One of the advantages of a free text analytical approach is the provision, in a timely manner, of an intelligible description of incident report categories derived directly from the 'words' of the reporters themselves. Insights from the analysis of such free text entries can add rich information than would have not otherwise been obtained from pre-defined classes. Not only could this improve the current state of play where much of the free text of these reports goes unused, but by avoiding the strict assignment to pre-defined categories of fixed granularity free text analysis could open an opportunity for feedback and learning through more nuanced classifications as a complementary axis to existing approaches.",
"Currently, local incident reporting systems used by hospitals to submit reports to the NRLS require risk managers to improve data quality, due to errors or uncertainty in categorisation. The application of free text analytical approaches has the potential to free up time from this labour-intensive task, focussing instead in quality improvement derived from the content of the data itself. Additionally, the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit existing categories by using methods for anomaly detection to decide whether new topic clusters should be created. This is a direction of future work.",
"Further work also includes the use of our method to enable comparisons across healthcare organisations and also to monitor changes in their incident reports over time. Another interesting direction is to provide online classification suggestions to users based on the text they input as an aid with decision support and data collection, which can also help fine-tune the predefined categories. Finally, it would be interesting to test if the use of deep learning algorithms can improve our classification scores.",
"We thank Elias Bamis, Zijing Liu and Michael Schaub for helpful discussions. This research was supported by the National Institute for Health Research (NIHR) Imperial Patient Safety Translational Research Centre and NIHR Imperial Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. All authors acknowledge support from the EPSRC through award EP/N014529/1 funding the EPSRC Centre for Mathematics of Precision Healthcare."
]
],
"section_name": [
"Introduction",
"Introduction ::: Data description",
"Graph-based framework for text analysis and clustering",
"Graph-based framework for text analysis and clustering ::: Text Preprocessing",
"Graph-based framework for text analysis and clustering ::: Text Vector Embedding",
"Graph-based framework for text analysis and clustering ::: Similarity graph of documents from text similarities",
"Graph-based framework for text analysis and clustering ::: Multiscale Graph Partitioning",
"Graph-based framework for text analysis and clustering ::: Visualisation and interpretation of the results",
"Graph-based framework for text analysis and clustering ::: Quantitative benchmarking of topic clusters",
"Graph-based framework for text analysis and clustering ::: Supervised Classification for Degree of Harm",
"Application to the clustering of hospital incident text reports",
"Application to the clustering of hospital incident text reports ::: Markov Stability extracts content clusters at different levels of granularity",
"Application to the clustering of hospital incident text reports ::: Robustness of the results and comparison with other methods",
"Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier",
"Using free-text descriptions to predict the degree of harm of patient safety incidents with a supervised classifier ::: Supervised classification of degree of harm",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"82453702db84beeb6427825f2997da5bb04df935"
],
"answer": [
{
"evidence": [
"As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). A one-hot encoding is applied to turn these categorical values into numerical ones. We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification."
],
"extractive_spans": [],
"free_form_answer": "they are used as additional features in a supervised classification task",
"highlighted_evidence": [
"As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident directly from the text and the hand-coded features (e.g., external category, medical specialty, location). ",
"We also checked if using our unsupervised content-driven cluster labels as additional features can improve the performance of the supervised classification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"6af21ecba3913d0642839a78afa05336601103e4"
],
"answer": [
{
"evidence": [
"The trained Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each document in our target analysis set. We then compute a matrix containing all the pairwise (cosine) similarities between the Doc2Vec document vectors. This similarity matrix can be thought of as the adjacency matrix of a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. MS uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need to choose a priori the number or type of clusters."
],
"extractive_spans": [],
"free_form_answer": "A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18",
"highlighted_evidence": [
"We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph BIBREF14, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The MST-kNN graph is then analysed with Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18, a multi-resolution graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How are content clusters used to improve the prediction of incident severity?",
"What cluster identification method is used in this paper?"
],
"question_id": [
"ee9b95d773e060dced08705db8d79a0a6ef353da",
"dbdf13cb4faa1785bdee90734f6c16380459520b"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1: Pipeline for data analysis contains training of the text embedding model along with the two methods we showcase in this work. First is the graph-based unsupervised clustering of documents at different levels of resolution to find topic clusters only from the free text descriptions of hospital incident reports from the NRLS database. Second one uses the topic clusters to improve supervised classification performance of degree of harm prediction.",
"Table 1: Benchmarking of text corpora used for Doc2Vec training. A Doc2Vec model was trained on three corpora of NRLS records of different sizes and a corpus of Wikipedia articles using a variety of hyper-parameters. The scores represent the quality of the vectors inferred using the corresponding model. Specifically, we calcule centroids for the 15 externally hand-coded categories and select the 100 nearest reports for each centroid. We then report the number of incident reports (out of 1500) correctly assigned to their centroid.",
"Fig. 2: Similarity graphs generated from the vectors of a subset of 3229 patient records. Each node represents a record and is coloured according to its hand-coded, external category to aid visualisation but these external categories are not used to produce our content-driven clustering in Figure 3. Layouts for: (a) full, weighted normalised similarity matrix Ŝ without MST-kNN applied, and (b)–(e)MST-kNN graphs generated from the data with increasing sparsity as k is reduced. The structure of the graph is sharpened for intermediate values of k.",
"Fig. 3: The top plot presents the results of the Markov Stability algorithm across Markov times, showing the number of clusters of the optimised partition (red), the variation of information V I(t) for the ensemble of optimised solutions at each time (blue) and the variation of Information V I(t, t ′) between the optimised partitions across Markov time (background colourmap). Relevant partitions are indicated by dips of V I(t) and extended plateaux of V I(t, t ′). We choose five levels with different resolutions (from 44 communities to 3) in our analysis. The Sankey diagram below illustrates how the communities of documents (indicated by numbers and colours) map across Markov time scales. The community structure across scales present a strong quasi-hierarchical character—a result of the analysis and the properties of the data, since it is not imposed a priori. The different partitions for the five chosen levels are shown on a graph layout for the document similarity graph created with the MST-kNN algorithm with k = 13. The colours correspond to the communities found by MS indicating content clusters.",
"Fig. 4: Summary of the 44-community partition found with the MS algorithm in an unsupervised manner directly from the text of the incident reports. The 44 content communities are compared a posteriori to the 15 hand-coded categories (indicated by names and colours) through a Sankey diagram between communities and categories (left), and through a z-score contingency table (right). We have assigned a descriptive label to the content communities based on their word clouds in Figure 5.",
"Fig. 5: Word clouds of the 44-community partition showing the detailed content of the communities found. The word clouds are split into two sub-figures (A) and (B) for ease of visualisation.",
"Fig. 6: Summary of: (A) 17-way and (B) 12-way MS content clusters and their correspondence to the external categories.",
"Fig. 7: Summary of MS partitions into (A) 7 communities and (B) 3 communities, showing their correspondence to external hand-coded categories. Some of the MS content clusters have strong medical content (e.g., labour ward, radiotherapy, pressure ulcer) and are not grouped with other procedural records due to their semantic distinctiveness, even to this coarse level of clustering.",
"Fig. 8: Word clouds of the MS partitions into 17, 12 and 7 clusters show a multiresolution coarsening in the content following the quasi-hierarchical community structure found in the document similarity graph.",
"Fig. 9: Comparison of MS applied to Doc2Vec versus BoW (using TF-iDF) similarity graphs obtained under the same graph construction. (A) Similarity against the externally hand-coded categories measured with N MI; (B) intrinsic topic coherence of the computed clusters measured with P̂MI.",
"Fig. 10: Evaluating the effect of the size of the training corpus. (A) Similarity to hand-coded categories (measured with N MI) and (B) Topic Coherence score (measured with P̂MI) of the MS clusters when the Doc2Vec model is trained on: 1 million, 2 million, and the full set of 13 million records. The corpus size does not affect the results.",
"Fig. 11: Effect of the sparsification of the MST-kNN graphs on MS clusters. (A) Similarity against the externally hand-coded categories measured with N MI; (B) Intrinsic topic coherence of the computed clusters measured with P̂MI. The clusters have similar quality for values of k above 13-16.",
"Table 2: Similarity to hand-coded categories (N MI) and topic coherence (P̂MI) for the five MS resolutions in Figure 3 and their corresponding LDA models.",
"Fig. 12: Comparison of MS results versus other common community detection or graph partitioning methods: (A) Similarity against the externally hand-coded categories measured with N MI; (B) intrinsic topic coherence of the computed clusters measured with P̂MI. MS provides high quality clusters across all scales.",
"Table 3: Weighted F1-scores for three classifiers (Ridge, SVM with a linear kernel, Random Forest) trained on the O and PRC datasets of incident reports, for different features: non-textual categorical features (L: Localisation; C: Hand-coded Category; S: Medical Specialty); TF-iDF textual features (TF-iDF embedding of text in incident report); Doc2Vec textual features (Doc2Vec embedding of text in incident report); labels of X=70, 45, 30, 13 communities obtained from unsupervised Markov Stability analysis (MS-x). The SVM classifier performs best across the dataset. The classification is better for O-ranked records compared to PRC-ranked records. Text classifiers have highly improved performance compared to purely categorical classifiers. The best classifier is based on Doc2Vec features augmented by MS labels obtained with our unsupervised framework.",
"Fig. 13: Performance of the SVM classifier based on categorical features alone, text features (TF-iDF) alone, and text features (Doc2Vec) with content labels (MS30) on both sets of incident reports: the one collecged from Outstanding Trusts (’O’-ranked) and from Trusts with a Poor Reporting Culture (’PRC’-ranked). The inclusion of more sophisticated text and content labels improves prediction and closes the gap in the quality between both sets of records.",
"Table 4: The ex novo re-classification by three clinicians of 135 incident reports (chosen at random) is compared to the pre-existing classification in the dataset and the prediction of our model."
],
"file": [
"5-Figure1-1.png",
"7-Table1-1.png",
"9-Figure2-1.png",
"15-Figure3-1.png",
"16-Figure4-1.png",
"17-Figure5-1.png",
"19-Figure6-1.png",
"20-Figure7-1.png",
"21-Figure8-1.png",
"22-Figure9-1.png",
"23-Figure10-1.png",
"24-Figure11-1.png",
"24-Table2-1.png",
"25-Figure12-1.png",
"27-Table3-1.png",
"28-Figure13-1.png",
"29-Table4-1.png"
]
} | [
"How are content clusters used to improve the prediction of incident severity?",
"What cluster identification method is used in this paper?"
] | [
[
"1909.00183-Graph-based framework for text analysis and clustering ::: Supervised Classification for Degree of Harm-0"
],
[
"1909.00183-Graph-based framework for text analysis and clustering-2"
]
] | [
"they are used as additional features in a supervised classification task",
"A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18"
] | 275 |
1801.09030 | Exploration on Generating Traditional Chinese Medicine Prescriptions from Symptoms with an End-to-End Approach | Traditional Chinese Medicine (TCM) is an influential form of medical treatment in China and surrounding areas. In this paper, we propose a TCM prescription generation task that aims to automatically generate a herbal medicine prescription based on textual symptom descriptions. The sequence-to-sequence (seq2seq) model has been successful in dealing with sequence generation tasks. We explore a potential end-to-end solution to the TCM prescription generation task using seq2seq models. However, experiments show that directly applying the seq2seq model leads to unfruitful results due to the repetition problem. To solve the problem, we propose a novel decoder with a coverage mechanism and a novel soft loss function. The experimental results demonstrate the effectiveness of the proposed approach. Judged by professors who excel in TCM, the generated prescriptions are rated 7.3 out of 10. This shows that the model can indeed help with the prescribing procedure in real life. | {
"paragraphs": [
[
"Traditional Chinese Medicine (TCM) is one of the most important forms of medical treatment in China and the surrounding areas. TCM has accumulated large quantities of documentation and therapy records in the long history of development. Prescriptions consisting of herbal medication are the most important form of TCM treatment. TCM practitioners prescribe according to a patient's symptoms that are observed and analyzed by the practitioners themselves instead of using medical equipment, e.g., the CT. The patient takes the decoction made out of the herbal medication in the prescription. A complete prescription includes the composition of herbs, the proportion of herbs, the preparation method and the doses of the decoction. In this work, we focus on the composition part of the prescription, which is the most essential part of the prescription.",
"During the long history of TCM, there has been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of TCM prescription is shown in Table 1 . The herbs in the prescription are organized in a weak order. By “weak order”, we mean that the effect of the herbs are not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first.",
"Due to the lack of digitalization and formalization, TCM has not attracted sufficient attention in the artificial intelligence community. To facilitate the studies on automatic TCM prescription generation, we collect and clean a large number of prescriptions as well as their corresponding symptom descriptions from the Internet.",
"Inspired by the great success of natural language generation tasks like neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , abstractive summarization BIBREF3 , generative question answering BIBREF4 , and neural dialogue response generation BIBREF5 , BIBREF6 , we propose to adopt the end-to-end paradigm, mainly the sequence to sequence model, to tackle the task of generating TCM prescriptions based on textual symptom descriptions.",
"The sequence to sequence model (seq2seq) consists of an encoder that encodes the input sequence and a decoder that generates the output sequence. The success in the language generation tasks indicates that the seq2seq model can learn the semantic relation between the output sequence and the input sequence quite well. It is also a desirable characteristic for generating prescriptions according to the textual symptom description.",
"The prescription generation task is similar to the generative question answering (QA). In such task settings, the encoder part of the model takes in the question, and encodes the sequence of tokens into a set of hidden states, which embody the information of the question. The decoder part then iteratively generates tokens based on the information encoded in the hidden states of the encoder. The model would learn how to generate response after training on the corresponding question-answer pairs.",
"In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. A difference that is most evident is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the performance of recall rate even after applying a post-process to eliminate repetitions. Because in a limited length of the prescription , the model would produce the same token over and over again, rather than real and novel ones. Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated while the soft loss function is to relieve the side effect of strict order assumption. In the experiment results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model.",
"The main contributions of this paper lie in the following three folds:"
],
[
"There has not been much work concerning computational TCM. zhou2010development attempted to build a TCM clinical data warehouse so that the TCM knowledge can be analyzed and used. This is a typical way of collecting data, since the number of prescriptions given by the practitioners in the clinics is very large. However, in reality, most of the TCM doctors do not refer to the constructed digital systems, because the quality of the input data tends to be poor. Therefore, we choose prescriptions in the classics (books or documentation) of TCM. Although the available data can be fewer than the clinical data, it guarantees the quality of the prescriptions.",
"wang2004self attempted to construct a self-learning expert system with several simple classifiers to facilitate the TCM diagnosis procedure, Wang2013TCM proposed to use shallow neural networks and CRF based multi-labeling learning methods to model TCM inquiry process, but they only considered the disease of chronic gastritis and its taxonomy is very simple. These methods either utilize traditional data mining methods or are highly involved with expert crafted systems. Zhang2011Topic,Zhu2017TCM proposed to use LDA to model the herbs. li2017distributed proposed to learn the distributed embedding for TCM herbs with recurrent neural networks."
],
[
"Neural sequence to sequence model has proven to be very effective in a wide range of natural language generation tasks, including neural machine translation and abstractive text summarization. In this section, we first describe the definition of the TCM prescription generation task. Then, we introduce how to apply seq2seq model in the prescription composition task. Next, we show how to guide the model to generate more fruitful herbs in the setting of this task by introducing coverage mechanism. Finally, we introduce our novel soft loss function that relieves the strict assumption of order between tokens. An overview of the our final model is shown in Figure 1 ."
],
[
"Given a TCM herbal treatment dataset that consists of $N$ data samples, the $i$ -th data sample ( $x^{(i)}, p^{(i)}$ ) contains one piece of source text $x^{(i)}$ that describes the symptoms, and $M_{i}$ TCM herbs $(p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i})$ that make up the herb prescription $p^{(i)}$ .",
"We view the symptoms as a sequence of characters $x^{(i)} = (x^{(i)}_{1}, x^{(i)}_{2}, ..., x^{(i)}_{T})$ . We do not segment the characters into words because they are mostly in traditional Chinese that uses characters as basic semantic units. The herbs $p_{1}^{i},p_{2}^{i}, ..., p_{M_{i}}^{i}$ are all different from each other."
],
[
"Sequence-to-sequence model was first proposed to solve the machine translation problem. The model consists of two parts, an encoder and a decoder. The encoder is bound to take in the source sequence and compress the sequence into a series of hidden states. The decoder is used to generate a sequence of target tokens based on the information embodied in the hidden states given by the encoder. Typically, both the encoder and the decoder are implemented with recurrent neural networks (RNN).",
"In our TCM prescription generation task, the encoder RNN converts the variable-length symptoms in character sequence $x = (x_{1},x_{2},...,x_{T})$ into a set of hidden representations $h = (h_{1},h_{2},...,h_{T})$ , by iterating the following equations along time $t$ : ",
"$$h_{t} = f(x_{t},h_{t-1})$$ (Eq. 8) ",
"where $f$ is a RNN family function. In our implementation, we choose gated recurrent unit (GRU BIBREF1 ) as $f$ , as the gating mechanism is expected to model long distance dependency better. Furthermore, we choose the bidirectional version of recurrent neural networks as the encoder to solve the problem that the later words get more emphasis in the unidirectional version. We concatenate both the $h_{t}$ in the forward and backward pass and get $\\widehat{h_{t}}$ as the final representation of the hidden state at time step $t$ .",
"We get the context vector $c$ representing the whole source $x$ at the $t$ -th time through a non-linear function $q$ , normally known as the attention mechanism: ",
"$$c_{t} = \\sum _{j=1}^{T}\\alpha _{tj}h_{j} \\\\\n\\alpha _{tj} = \\frac{\\text{exp}\\left( a\\left(s_{t-1},h_{j}\\right)\\right)}{\\sum _{k=1}^{T}\\text{exp}\\left( a\\left(s_{t-1},h_{k}\\right)\\right)}$$ (Eq. 9) ",
"The context vector $c_{t}$ is calculated as a weighted sum of hidden representation produced by the encoder $\\textbf {h} = (h_{1},...,h_{T})$ . $a(s_{t-1},h_{j})$ is a soft alignment function that measures the relevance between $s_{t-1}$ and $h_{j}$ . It computes how much $h_j$ is needed for the $t$ -th output word based on the previous hidden state of the decoder $s_{t-1}$ . The decoder is another RNN. It generates a variable-length sequence $y = (y_{1},y_{2}, ..., y_{T^{\\prime }})$ token by token (herb), through a conditional language model: ",
"$$s_{t} = f(s_{t-1},c_{t},Ey_{t-1}) \\\\\np(y_{t}|y_{1,...,t},x) = g(s_{t})$$ (Eq. 10) ",
"where $s_{t}$ is the hidden state of the decoder RNN at time step $t$ . $f$ is also a gated recurrent unit. The non-linear function $g$ is a $softmax$ layer, which outputs the probabilities of all the herbs in the herb vocabulary. $E \\in (V\\times d)$ is the embedding matrix of the target tokens, $V$ is the number of herb vocabulary, $d$ is the embedding dimension. $y_{t-1}$ is the last predicted token.",
"In the decoder, the context vector $c_{t}$ is calculated based on the hidden state $s_{t-1}$ of the decoder at time step $t-1$ and all the hidden states in the encoder. The procedure is known as the attention mechanism. The attention mechanism is expected to supplement the information from the source sequence that is more connected to the current hidden state of the decoder instead of only depending on a fixed vector produced by the encoder.",
"The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence. A soft version of cross entropy loss is applied to maximize the conditional probability, which we will describe in detail."
],
[
"Different from natural language generation tasks, there is no duplicate herb in the TCM prescription generation task. When directly applying seq2seq model in this task, the decoder tends to generate some frequently observed herbs over and over again. Although we can prune the repeated herbs through post processing by eliminating the repeated ones, it still hurts the recall performance as the maximum length of a prescription is limited. This situation is still true when we use a $<EOS>$ label to indicate where the generation should stop.",
"To encourage the decoder to generate more diverse and reasonable herb tokens, we propose to apply coverage mechanism to make the model aware of the already generated herbs. Coverage mechanism BIBREF7 , BIBREF8 , BIBREF9 was first proposed to help the decoder focus on the part that has not been paid much attention by feeding a fertility vector to the attention calculation, indicating how much information of the input is used.",
"In our model, we do not use the fertility vector to tune the attention weights. The reason is that the symptoms are related to others and altogether describe the whole disease, which is explained in Section \"Introduction\" . Still, inspired by its motivation, we adapt the coverage mechanism to the decoder where a coverage vector is fed to the GRU cell together with the context vector. Equation 10 is then replaced by the following ones. ",
"$$a_{t} = \\tanh (WD_{t}+b) \\\\\ns_{t} = f(s_{t-1}, c_{t}, Ey_{t-1}, a_{t})$$ (Eq. 12) ",
"where $a_{t}$ is the coverage vector at the $t$ -th time step in decoding. $D_{t}$ is the one-hot representation of the generated tokens until the $t$ -th time step. $W\\in \\mathbb {R}^{V\\times H}$ is a learnable parameter matrix, where $V$ is the size of the herb vocabulary and $H$ is the size of the hidden state. By feeding the coverage vector, which is also a sketch of the generated herbs, to the GRU as part of the input, our model can softly switch more probability to the herbs that have not been predicted. This way, the model is encouraged to produce novel herbs rather than repeatedly predicting the frequently observed ones, thus increasing the recall rate."
],
[
"We argue that even though the order of the herbs matters when generating the prescription BIBREF10 , BIBREF11 , we should not strictly restrict the order. However, the traditional cross entropy loss function applied to the basic seq2seq model puts a strict assumption on the order of the labels. To deal with the task of predicting weakly ordered labels (or even unordered labels), we propose a soft loss function instead of the original hard cross entropy loss function: ",
"$$loss = -\\sum _{t}\\ q^{\\prime }_{t}\\ log(p_t)$$ (Eq. 14) ",
"Instead of using the original hard one-hot target probability $q_t$ , we use a soft target probability distribution $q^{\\prime }_{t}$ , which is calculated according to $q_t$ and the target sequence $\\mathbf {q}$ of this sample. Let $\\mathbf {q_v}$ denote the bag of words representation of $\\mathbf {q}$ , where only slots of the target herbs in $\\mathbf {q}$ are filled with $1s$ . We use a function $\\xi $ to project the original target label probability $q_t$ into a new probability distribution $q^{\\prime }_{t}$0 . ",
"$$q^{\\prime }_t = \\xi (q_t, \\mathbf {q_v})$$ (Eq. 15) ",
"This function $\\xi $ is designed so as to decrease the harsh punishment when the model predicts the labels in the wrong order. In this paper, we apply a simple yet effective projection function as Equation 16 . This is an example implementation, and one can design more sophisticated projection functions if needed. ",
"$$\\xi (y_t,\\mathbf {s}) = ((\\mathbf {q_v}/M) + y_t) / 2 $$ (Eq. 16) ",
"where $M$ is the length of $q$ . This function means that at the $t$ -th time of decoding, for each target herb token $p_i$ , we first split a probability density of $1.0$ equally across all the $l$ herbs into $1/M$ . Then, we take the average of this probability distribution and the original probability $q_t$ to be the final probability distribution at time $t$ ."
],
[
"We crawl the data from UTF8gbsnTCM Prescription Knowledge Base (中医方剂知识库) . This knowledge base includes comprehensive TCM documentation in the history. The database includes 710 TCM historic books or documents as well as some modern ones, consisting of 85,166 prescriptions in total. Each item in the database provides the name, the origin, the composition, the effect, the contraindications, and the preparation method. We clean and formalize the database and get 82,044 usable symptom-prescription pairs",
"In the process of formalization, we temporarily omit the dose information and the preparation method description, as we are mainly concerned with the composition. Because the names of the herbs have evolved a lot, we conclude heuristic rules as well as specific projection rules to project some rarely seen herbs to their similar forms that are normally referred to. There are also prescriptions that refer to the name of other prescriptions. We simply substitute these names with their constituents.",
"To make the experiment result more robust, we conduct our experiments on two separate test datasets. The first one is a subset of the data described above. We randomly split the whole data into three parts, the training data (90%), the development data (5%) and the test data (5%). The second one is a set of symptom-prescription pairs we manually extracted from the modern text book of the course Formulaology of TCM (UTF8gbsn中医方剂学) that is popularly adopted by many TCM colleges in China.",
"There are more cases in the first sampled test dataset (4,102 examples), but it suffers from lower quality, as this dataset was parsed with simple rules, which may not cover all exceptions. The second test dataset has been proofread and all of the prescriptions are the most classical and influential ones in the history. So the quality is much better than the first one. However, the number of the cases is limited. There are 141 symptom-prescription pairs in the second dataset. Thus we use two test sets to do evaluation to take the advantages of both data magnitude and quality."
],
[
"In our experiments, we implement our models with the PyTorch toolkit . We set the embedding size of both Chinese characters in the symptoms and the herb tokens to 100. We set the hidden state size to 300, and the batch size to 20. We set the maximum length of the herb sequence to 20 because the length of nearly all the prescriptions are within this range (see Table 2 for the statistics of the length of prescriptions). Unless specifically stated, we use bidirectional gated recurrent neural networks (BiGRNN) to encode the symptoms. Adam BIBREF12 , and use the model parameters that generate the best F1 score on the development set in testing"
],
[
"In this sub-section, we present the Multi-label baseline we apply. In this model, we use a BiGRNN as the encoder, which encodes symptoms in the same way as it is described in Section \"Methodology\" . Because the position of the herbs does not matter in the results, for the generation part, we implement a multi-label classification method to predict the herbs. We use the multi-label max-margin loss (MultiLabelMarginLoss in pytorch) as the optimization objective, because this loss function is more insensitive to the threshold, thus making the model more robust. We set the threshold to be 0.5, that is, if the probability given by the model is above 0.5 and within the top $k$ range (we set k to 20 in our experiment, same to seq2seq model), we take the tokens as answers. The way to calculate probability is shown below. ",
"$$p(i) = \\sigma (W_{o}h_{T})$$ (Eq. 23) ",
"where $\\sigma $ indicates the non-linear function $sigmoid$ , $W_{o} \\in \\mathbb {R}^{H \\times V}$ , $H$ is the size of the hidden state produced by the encoder and $V$ is the size of the herb vocabulary. $h_{T}$ is the last hidden state produced by the encoder.",
"During evaluation, we choose the herbs satisfying two conditions:",
"The predicted probability of the herb is within top $k$ among all the herbs, where $k$ is a hyper-parameter. We set $k$ to be the same as the maximum length of seq2seq based models (20).",
"The predicted probability is above a threshold 0.5 (related to the max-margin)."
],
[
"Since medical treatment is a very complex task, we invite two professors from Beijing University of Chinese Medicine, which is one of the best Traditional Chinese Medicine academies in China. Both of the professors enjoy over five years of practicing traditional Chinese medical treatment. The evaluators are asked to evaluate the prescriptions with scores between 0 and 10. Both the textual symptoms and the standard reference are given, which is similar to the form of evaluation in a normal TCM examination. Different from the automatic evaluation method, the human evaluators focus on the potential curative effect of the candidate answers, rather than merely the literal similarity. We believe this way of evaluation is much more reasonable and close to reality.",
"Because the evaluation procedure is very time consuming (each item requires more than 1 minute), we only ask the evaluators to judge the results from test set 2.",
"As shown in Table 3 , both of the basic seq2seq model and our proposed modification are much better than the multi-label baseline. Our proposed model gets a high score of 7.3, which can be of real help to TCM practitioners when prescribing in the real life treatment."
],
[
"We use micro Precision, Recall, and F1 score as the automatic metrics to evaluate the results, because the internal order between the herbs does not matter when we do not consider the prescribing process.",
"In Table 4 , we show the results of our proposed models as well as the baseline models. One thing that should be noted is that since the data in Test set 2 (extracted from text book) have much better quality than Test set 1, the performance on Test set 2 is much higher than it is on Test set 1, which is consistent with our instinct.",
"From the experiment results we can see that the baseline model multi-label has higher micro recall rate 29.72, 40.49 but much lower micro precision 10.83, 13.51. This is because unlike the seq2seq model that dynamically determines the length of the generated sequence, the output length is rigid and can only be determined by thresholds. We take the tokens within the top 20 as the answer for the multi-label model.",
"As to the basic seq2seq model, although it beats the multi-label model overall, the recall rate drops substantially. This problem is partly caused by the repetition problem, the basic seq2seq model sometimes predicts high frequent tokens instead of more meaningful ones. Apart from this, although the seq2seq based model is better able to model the correlation between target labels, it makes a strong assumption on the order of the target sequence. In the prescription generation task, the order between herb tokens are helpful for generating the sequence. However, since the order between the herbs does not affect the effect of the prescription, we do not consider the order when evaluating the generated sequence. We call the phenomenon that the herbs are under the “weak order”. The much too strong assumption on order can hurt the performance of the model when the correct tokens are placed in the wrong order.",
"In Table 5 we show the effect of applying coverage mechanism and soft loss function.",
"Coverage mechanism gives a sketch on the prescription. The mechanism not only encourages the model to generate novel herbs but also enables the model to generate tokens based on the already predicted ones. This can be proved by the improvement on Test set 2, where both the precision and the recall are improved over the basic seq2seq model.",
"The most significant improvement comes from applying the soft loss function. The soft loss function can relieve the strong assumption of order made by seq2seq model. Because predicting a correct token in the wrong position is not as harmful as predicting a completely wrong token. This simple modification gives a big improvement on both test sets for all the three evaluation metrics."
],
[
"In this subsection, we show an example generated by various models in Table 6 in test set 2 because the quality of test set 2 is much more satisfactory. The multi-label model produces too many herbs that lower the precision, we do not go deep into its results, already we report its results in the table.",
"For the basic seq2seq model, the result is better than multi-label baseline in this case. UTF8gbsn“柴胡” (radix bupleuri)、“葛根” (the root of kudzu vine) can be roughly matched with “恶风发热,汗出头疼” (Aversion to wind, fever, sweating, headache), “甘草” (Glycyrrhiza)、“陈皮” (dried tangerine or orange peel)、“桔梗” (Platycodon grandiflorum) can be roughly matched with “鼻鸣咽干,苔白不渴” (nasal obstruction, dry throat, white tongue coating, not thirsty), “川芎” (Ligusticum wallichii) can be used to treat the symptom of “头疼” (headache). In this case, most of the herbs can be matched with certain symptoms in the textual description. However, the problem is that unlike the reference, the composition of herbs lacks the overall design. The symptoms should not be treated independently, as they are connected to other symptoms. For example, the appearance of symptom UTF8gbsn“头疼” (headache) must be treated together with UTF8gbsn“汗出” (sweat). When there is simply headache without sweat, UTF8gbsn“川芎” (Ligusticum wallichii) may be suitable. However, since there is already sweat, this herb is not suitable in this situation. This drawback results from the fact that this model heavily relies on the attention mechanism that tries to match the current hidden state in the decoder to a part of the context in the encoder every time it predicts a token.",
"Translation: UTF8gbsn桂枝 - cassia twig, 芍药 - Chinese herbaceous peony 大黄 - Rhubarb, 厚朴 - Magnolia officinalis, 枳实 - Fructus Aurantii Immaturus, 芒硝 - Mirabilite, 栀子 - Cape Jasmine Fruit, 枳壳 - Fructus Aurantii, 当归 - Angelica Sinensis, 甘草 - Glycyrrhiza, 黄芩 - Scutellaria, 生姜 - ginger, 大枣 - Chinese date, 柴胡 - radix bupleuri, 葛根 - the root of kudzu vine, 陈皮 - dried tangerine or orange peel, 桔梗 - Platycodon grandiflorum, 川芎 - Ligusticum wallichii, 麻黄 - Chinese ephedra",
"For our proposed model, the results are much more satisfactory. UTF8gbsn“外感风寒” (Exogenous wind-cold exterior deficiency syndrome) is the reason of the disease, the symptoms UTF8gbsn“恶风发热,汗出头疼,鼻鸣咽干,苔白不渴,脉浮缓或浮弱” (Aversion to wind, fever, sweating, headache, nasal obstruction, dry throat, white tongue coating, not thirsty, floating slow pulse or floating weak pulse) are the corresponding results. The prescription generated by our proposed model can also be used to cure UTF8gbsn“外感风寒” (Exogenous wind-cold exterior deficiency syndrome), in fact UTF8gbsn“麻黄” (Chinese ephedra) and “桂枝” (cassia twig) together is a common combination to cure cold. However, UTF8gbsn“麻黄” (Chinese ephedra) is not suitable here because there is already sweat. One of the most common effect of UTF8gbsn“麻黄” (Chinese ephedra) is to make the patient sweat. Since there is already sweat, it should not be used. Compared with the basic seq2seq model, our proposed model have a sense of overall disease, rather than merely discretely focusing on individual symptoms.",
"From the above analysis, we can see that compared with the basic seq2seq model, our proposed soft seq2seq model is aware more of the connections between symptoms, and has a better overall view on the disease. This advantage is correspondent to the principle of prescribing in TCM that the prescription should be focusing on the UTF8gbsn“辩证” (the reason behind the symptoms) rather than the superficial UTF8gbsn“症” (symptoms)."
],
[
"In this paper, we propose a TCM prescription generation task that automatically predicts the herbs in a prescription based on the textual symptom descriptions. To our knowledge, this is the first time that this critical and practicable task has been considered. To advance the research in this task, we construct a dataset of 82,044 symptom-prescription pairs based on the TCM Prescription Knowledge Base.",
"Besides the automatic evaluation, we also invite professionals to evaluate the prescriptions given by various models, the results of which show that our model reaches the score of 7.3 out of 10, demonstrating the effectiveness. In the experiments, we observe that directly applying seq2seq model would lead to the repetition problem that lowers the recall rate and the strong assumption of the order between herb tokens can hurt the performance. We propose to apply the coverage mechanism and the soft loss function to solve this problem. From the experimental results, we can see that this approach alleviates the repetition problem and results in an improved recall rate."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Task Definition",
"Basic Encoder-Decoder Model",
"Coverage Mechanism",
"Soft Loss Function",
"Dataset Construction",
"Experiment Settings",
"Proposed Baseline",
"Human Evaluation",
"Automatic Evaluation Results",
"Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"19fe7a6492b6ef59d3db2c54da84da629ce7faf4"
],
"answer": [
{
"evidence": [
"In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. A difference that is most evident is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the performance of recall rate even after applying a post-process to eliminate repetitions. Because in a limited length of the prescription , the model would produce the same token over and over again, rather than real and novel ones. Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore to automatically generate TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated while the soft loss function is to relieve the side effect of strict order assumption. In the experiment results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"3076e9b3ba1e4630b314a53bedce5b5e6db30a91"
],
"answer": [
{
"evidence": [
"During the long history of TCM, there has been a number of therapy records or treatment guidelines in the TCM classics composed by outstanding TCM researchers and practitioners. In real life, TCM practitioners often take these classical records for reference when prescribing for the patient, which inspires us to design a model that can automatically generate prescriptions by learning from these classics. It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely. An example of TCM prescription is shown in Table 1 . The herbs in the prescription are organized in a weak order. By “weak order”, we mean that the effect of the herbs are not influenced by the order. However, the order of the herbs reflects the way of thinking when constructing the prescription. Therefore, the herbs are connected to each other, and the most important ones are usually listed first."
],
"extractive_spans": [],
"free_form_answer": "They think it will help human TCM practitioners make prescriptions.",
"highlighted_evidence": [
"It also needs to be noted that due to the issues in actual practice, the objective of this work is to generate candidate prescriptions to facilitate the prescribing procedure instead of substituting the human practitioners completely."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat"
],
"question": [
"Do they impose any grammatical constraints over the generated output?",
"Why did they think this was a good idea?"
],
"question_id": [
"5d5a571ff04a5fdd656ca87f6525a60e917d6558",
"3c362bfa11c60bad6c7ea83f8753d427cda77de0"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: An example of a TCM symptom-prescription pair. As we are mainly concerned with the composition of the prescription, we only provide the herbs in the prescription.",
"Figure 1: An illustration of our model. The model is built on the basis of seq2seq model with attention mechanism. We use a coverage mechanism to reduce repetition problem. The coverage mechanism is realized by adding a coverage vector to the decoder.",
"Table 2: The statistic of the length of prescriptions. Crawled data means the overall data crawled from the Internet, including the training set data, the development set data and test set 1. Textbook data is the same to test set 2. Under 20 means the percentage of data that are shorter or equal than length 20.",
"Table 3: Professional evaluation on the test set 2. The score range is 0∼10. The Pearson’s correlation coefficient between the two evaluators is 0.72 and the Spearman’s correlation coefficient is 0.72. Both p-values are less than 0.01, indicating strong agreement.",
"Table 4: Automatic evaluation results of different models on the two test datasets. Multi-label is introduced in Section 4.3. Test set 1 is the subset of the large dataset collected from the Internet, which is homogeneous to the training set. Test set 2 is the test set extracted from the prescription text book.",
"Table 5: Ablation results of applying coverage mechanism and soft loss function. Test set 1 and test set 2 are the same as Table 4",
"Table 6: Actual predictions made by various models in test set 2. Multi-label model generates too many herb tokens, so we do not list all of them here. Reference is the standard answer prescription given by the text book.4"
],
"file": [
"1-Table1-1.png",
"4-Figure1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png"
]
} | [
"Why did they think this was a good idea?"
] | [
[
"1801.09030-Introduction-1"
]
] | [
"They think it will help human TCM practitioners make prescriptions."
] | 278 |
1804.03396 | QA4IE: A Question Answering based Framework for Information Extraction | Information Extraction (IE) refers to automatically extracting structured relation tuples from unstructured texts. Common IE solutions, including Relation Extraction (RE) and open IE systems, can hardly handle cross-sentence tuples, and are severely restricted by limited relation types as well as informal relation specifications (e.g., free-text based relation tuples). In order to overcome these weaknesses, we propose a novel IE framework named QA4IE, which leverages the flexible question answering (QA) approaches to produce high quality relation triples across sentences. Based on the framework, we develop a large IE benchmark with high quality human evaluation. This benchmark contains 293K documents, 2M golden relation triples, and 636 relation types. We compare our system with some IE baselines on our benchmark and the results show that our system achieves great improvements. | {
"paragraphs": [
[
"Information Extraction (IE), which refers to extracting structured information (i.e., relation tuples) from unstructured text, is the key problem in making use of large-scale texts. High quality extracted relation tuples can be used in various downstream applications such as Knowledge Base Population BIBREF0 , Knowledge Graph Acquisition BIBREF1 , and Natural Language Understanding. However, existing IE systems still cannot produce high-quality relation tuples to effectively support downstream applications."
],
[
"Most of previous IE systems can be divided into Relation Extraction (RE) based systems BIBREF2 , BIBREF3 and Open IE systems BIBREF4 , BIBREF5 , BIBREF6 .",
"Early work on RE decomposes the problem into Named Entity Recognition (NER) and relation classification. With the recent development of neural networks (NN), NN based NER models BIBREF7 , BIBREF8 and relation classification models BIBREF9 show better performance than previous handcrafted feature based methods. The recently proposed RE systems BIBREF10 , BIBREF11 try to jointly perform entity recognition and relation extraction to improve the performance. One limitation of existing RE benchmarks, e.g., NYT BIBREF12 , Wiki-KBP BIBREF13 and BioInfer BIBREF14 , is that they only involve 24, 19 and 94 relation types respectively comparing with thousands of relation types in knowledge bases such as DBpedia BIBREF15 , BIBREF16 . Besides, existing RE systems can only extract relation tuples from a single sentence while the cross-sentence information is ignored. Therefore, existing RE based systems are not powerful enough to support downstream applications in terms of performance or scalability.",
"On the other hand, early work on Open IE is mainly based on bootstrapping and pattern learning methods BIBREF17 . Recent work incorporates lexical features and sentence parsing results to automatically build a large number of pattern templates, based on which the systems can extract relation tuples from an input sentence BIBREF4 , BIBREF5 , BIBREF6 . An obvious weakness is that the extracted relations are formed by free texts which means they may be polysemous or synonymous and thus cannot be directly used without disambiguation and aggregation. The extracted free-text relations also bring extra manual evaluation cost, and how to automatically evaluate different Open IE systems fairly is an open problem. Stanovsky and Dagan BIBREF18 try to solve this problem by creating an Open IE benchmark with the help of QA-SRL annotations BIBREF19 . Nevertheless, the benchmark only involves 10K golden relation tuples. Hence, Open IE in its current form cannot provide a satisfactory solution to high-quality IE that supports downstream applications.",
"There are some recently proposed IE approaches which try to incorporate Question Answering (QA) techniques into IE. Levy et al. BIBREF20 propose to reduce the RE problem to answering simple reading comprehension questions. They build a question template for each relation type, and by asking questions with a relevant sentence and the first entity given, they can obtain relation triples from the sentence corresponding to the relation type and the first entity. Roth et al. BIBREF21 further improve the model performance on a similar problem setting. However, these approaches focus on sentence level relation argument extractions and do not provide a full-stack solution to general IE. In particular, they do not provide a solution to extract the first entity and its corresponding relation types before applying QA. Besides, sentence level relation extraction ignores the information across sentences such as coreference and inference between sentences, which greatly reduces the information extracted from the documents."
],
[
"To overcome the above weaknesses of existing IE systems, we propose a novel IE framework named QA4IE to perform document level general IE with the help of state-of-the-art approaches in Question Answering (QA) and Machine Reading Comprehension (MRC) area.",
"The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:",
"Recognize all the candidate entities in the input document $D$ according to the knowledge base $K$ . These entities serve as the first entity $e_i$ in the relation triples $R$ .",
"For each candidate entity $e_i$ , discover the potential relations/properties as $r_{ij}$ from the knowledge base $K$ .",
"Given a candidate entity-relation or entity-property pair $\\lbrace e_i, r_{ij}\\rbrace $ as a query, find the corresponding entity or value $e_j$ in the input document $D$ using a QA system. The query here can be directly formed by the word sequence of $\\lbrace e_i, r_{ij}\\rbrace $ , or built from templates as in BIBREF20 .",
"Since the results of step 3 are formed by free texts in the input document $D$ , we need to link the results to the knowledge base $K$ .",
"This framework determines each of the three elements in relation triples step by step. Step 1 is equivalent to named entity recognition (NER), and state-of-the-art NER systems BIBREF22 , BIBREF8 can achieve over 0.91 F1-score on CoNLL'03 BIBREF23 , a well-known NER benchmark. For attribution discovery in step 2, we can take advantage of existing knowledge base ontologies such as Wikipedia Ontology to obtain a candidate relation/property list according to NER results in step 1. Besides, there is also some existing work on attribution discovery BIBREF24 , BIBREF25 and ontology construction BIBREF26 that can be used to solve the problem in step 2. The most difficult part in our framework is step 3 in which we need to find the entity (or value) $e_j$ in document $D$ according to the previous entity-relation (or entity-property) pair $\\lbrace e_i, r_{ij}\\rbrace $ . Inspired by recent success in QA and MRC BIBREF27 , BIBREF28 , BIBREF29 , we propose to solve step 3 in the setting of SQuAD BIBREF30 which is a very popular QA task. The problem setting of SQuAD is that given a document $\\tilde{D}$ and a question $q$ , output a segment of text $a$ in $\\tilde{D}$ as the answer to the question. In our framework, we assign the input document $D$ as $\\tilde{D}$ and the entity-relation (or entity-property) pair $\\lbrace e_i, r_{ij}\\rbrace $ as $D$0 , and then we can get the answer $D$1 with a QA model. Finally in step 4, since the QA model can only produce answers formed by input free texts, we need to link the answer $D$2 to an entity $D$3 in the knowledge base $D$4 , and the entity $D$5 will form the target relation triple as $D$6 . Existing Entity Linking (EL) systems BIBREF31 , BIBREF32 directly solve this problem especially when we have high quality QA results from step 3.",
"As mentioned above, step 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.",
"Recent success on QA and MRC is mainly attributed to advanced deep learning architectures such as attention-based and memory-augmented neural networks BIBREF35 , BIBREF36 and the availability of large-scale datasets BIBREF37 , BIBREF38 especially SQuAD. The differences between step 3 and SQuAD can be summarized as follows. First, the answer to the question in SQuAD is restricted to a continuous segment of the input text, but in QA4IE, we remove this constraint which may reduce the number of target relation triples. Second, in existing QA and MRC benchmarks, the input documents are not very long and the questions may be complex and difficult to understand by the model, while in QA4IE, the input documents may be longer but the questions formed by entity-relation (or entity-property) pair are much simpler. Therefore, in our model, we incorporate Pointer Networks BIBREF39 to adapt to the answers formed by any words within the document in any order as well as Self-Matching Networks BIBREF29 to enhance the ability on modeling longer input documents."
],
[
"The contributions of this paper are as follows:",
"We propose a novel IE framework named QA4IE to overcome the weaknesses of existing IE systems. As we discussed above, the problem of step 1, 2 and 4 can be solved by existing work and we propose to solve the problem of step 3 with QA models.",
"To train a high quality neural network QA model, we build a large IE benchmark in QA style named QA4IE benchmark which consists of 293K Wikipedia articles and 2 million golden relation triples with 636 different relation types.",
"To adapt QA models to the IE problem, we propose an approach that enhances existing QA models with Pointer Networks and Self-Matching Networks.",
"We compare our model with IE baselines on our QA4IE benchmark and achieve a great improvement over previous baselines.",
"We open source our code and benchmark for repeatable experiments and further study of IE."
],
[
"This section briefly presents the construction pipeline of QA4IE benchmark to solve the problem of step 3 as in our framework (Figure 1 ). Existing largest IE benchmark BIBREF18 is created with the help of QA-SRL annotations BIBREF19 which consists of 3.2K sentences and 10K golden extractions. Following this idea, we study recent large-scale QA and MRC datasets and find that WikiReading BIBREF33 creates a large-scale QA dataset based on Wikipedia articles and WikiData relation triples BIBREF34 . However, we observe about 11% of QA pairs with errors such as wrong answer locations or mismatch between answer string and answer words. Besides, there are over 50% of QA pairs with the answer involving words out of the input text or containing multiple answers. We consider these cases out of the problem scope of this paper and only focus on the information within the input text.",
"Therefore, we choose to build the benchmark referring the implementation of WikiReading based on Wikipedia articles and golden triples from Wikidata and DBpedia BIBREF15 , BIBREF16 . Specifically, we build our QA4IE benchmark in the following steps.",
"Dump and Preprocessing. We dump the English Wikipedia articles with Wikidata knowledge base and match each article with its corresponding relation triples according to its title. After cleaning data by removing low frequency tokens and special characters, we obtain over 4M articles and 18M triples with over 800 relation types.",
"Clipping. We discard the triples with multiple entities (or values) for $e_j$ (account for about 6%, e.g., a book may have multiple authors). Besides, we discard the triples with any word in $e_j$ out of the corresponding article (account for about 50%). After this step, we obtain about 3.5M articles and 9M triples with 636 relation types.",
"Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations. Then we gather all the DBpedia triples with the first entity is corresponding to one of the above 3.5M articles and the relation is one of the projected 148 relations. After the same clipping process as above and removing the repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles.",
"Distillation. Since our benchmark is for IE, we prefer the articles with more golden triples involved by assuming that Wikipedia articles with more annotated triples are more informative and better annotated. Therefore, we figure out the distribution of the number of golden triples in articles and decide to discard the articles with less than 6 golden triples (account for about 80%). After this step, we obtain about 200K articles and 1.4M triples with 636 relation types.",
"Query and Answer Assignment. For each golden triple $\\lbrace e_i, r_{ij}, e_j\\rbrace $ , we assign the relation/property $r_{ij}$ as the query and the entity $e_j$ as the answer because the Wikipedia article and its corresponding golden triples are all about the same entity $e_i$ which is unnecessary in the queries. Besides, we find the location of each $e_j$ in the corresponding article as the answer location. As we discussed in Section 1, we do not restrict $e_j$ to a continuous segment in the article as required in SQuAD. Thus we first try to detect a matched span for each $e_j$ and assign this span as the answer location. Then for each of the rest $e_j$ which has no matched span, we search a matched sub-sequence in the article and assign the index sequence as the answer location. We name them span-triples and seq-triples respectively. Note that each triple will have an answer location because we have discarded the triples with unseen words in $e_j$ and if we can find multiple answer locations, all of them will be assigned as ground truths.",
"Dataset Splitting. For comparing the performance on span-triples and seq-triples, we set up two different datasets named QA4IE-SPAN and QA4IE-SEQ. In QA4IE-SPAN, only articles with all span-triples are involved, while in QA4IE-SEQ, articles with seq-triples are also involved. For studying the influence of the article length as longer articles are normally more difficult to model by LSTMs, we split the articles according to the article length. We name the set of articles with lengths shorter than 400 as S, lengths between 400 and 700 as M, lengths greater than 700 as L. Therefore we obtain 6 different datasets named QA4IE-SPAN-S/M/L and QA4IE-SEQ-S/M/L. A 5/1/5 splitting of train/dev/test sets is performed. The detailed statistics of QA4IE benchmark are provided in Table 1 .",
"We further compare our QA4IE benchmark with some existing IE and QA benchmarks in Table 2 . One can observe that QA4IE benchmark is much larger than previous IE and QA benchmarks except for WikiReading and Zero-Shot Benchmark. However, as we mentioned at the beginning of Section 2, WikiReading is problematic for IE settings. Besides, Zero-Shot Benchmark is a sentence-level dataset and we have described the disadvantage of ignoring information across sentences at Section 1.1. Thus to our best knowledge, QA4IE benchmark is the largest document level IE benchmark and it can be easily extended if we change our distillation strategy."
],
[
"In this section, we describe our Question Answering model for IE. The model overview is illustrated in Figure 2 .",
"The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as",
"$$\\begin{split}\ng_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\\ns_t &= {\\rm relu } (W_xx_t+b_x) \\\\\nu_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~.\n\\end{split}$$ (Eq. 18) ",
"Here $W_g, W_x \\in \\mathbb {R}^{d \\times 2d}$ and $b_g, b_x \\in \\mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .",
"Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:",
"$$\\begin{split}\nu_t^{^{\\prime }} &= {\\rm BiLSTM}(u^{^{\\prime }}_{t-1},u_t) \\\\\nv_t^{^{\\prime }} &= {\\rm BiLSTM}(v^{^{\\prime }}_{t-1},v_t)~.\n\\end{split}$$ (Eq. 19) ",
"Here we obtain $\\mathbf {U} = [u_1^{^{\\prime }}, ... , u_n^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times n}$ and $\\mathbf {V} = [v_1^{^{\\prime }}, ... , v_m^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times m}$ . Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.",
"After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as",
"$$\\begin{split}\no_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\\ns_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\\n\\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\\nc_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~.\n\\end{split}$$ (Eq. 20) ",
"Here $W_h, \\tilde{W_h} \\in \\mathbb {R}^{d \\times 8d}$ and $w \\in \\mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as",
"$$\\begin{split}\np_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\\ns_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\\n\\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\\nc_t &= \\Sigma _{i=1}^n\\beta _i^to_i~.\n\\end{split}$$ (Eq. 21) ",
"The initial state of LSTM $p_0$ is $o_n$ . We can then model the probability of the $t^{th}$ token $a^t$ by",
"$$& {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O}) = (\\beta _1^t, \\beta _2^t, ... , \\beta _n^t, \\beta _{n+1}^t) \\nonumber \\\\\n& {\\rm P}(a^t_i) \\triangleq {\\rm P}(a^t = i|a^1, ... , a^{t-1}, \\mathbf {O}) = \\beta _i^t ~.$$ (Eq. 22) ",
"Here $\\beta _{n+1}^t$ denotes the probability of generating the “ ${\\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows",
"$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23) ",
"Given the supervision of answer sequence $\\mathbf {y} = (y_1, ... , y_L)$ , we can write down the loss function of our model as",
"$${\\rm L(\\theta )} = -\\sum _{t=1}^L \\log {\\rm P} (a^t_{y_t})~.$$ (Eq. 24) ",
"To train our model, we minimize the loss function ${\\rm L(\\theta )}$ based on training examples."
],
[
"We build our QA4IE benchmark following the steps described in Section 2. In experiments, we train and evaluate our QA models on the corresponding train and test sets while the hyper-parameters are tuned on dev sets. In order to make our experiments more informative, we also evaluate our model on SQuAD dataset BIBREF30 .",
"The preprocessing of our QA4IE benchmark and SQuAD dataset are all performed with the open source code from BIBREF27 . We use 100 1D filters with width 5 to construct the CharCNN in our char embedding layer. We set the hidden size $d=100$ for all the hidden states in our model. The optimizer we use is the AdaDelta optimizer BIBREF45 with an initial learning rate of 2. A dropout BIBREF46 rate of 0.2 is applied in all the CNN, LSTM and linear transformation layers in our model during training. For SQuAD dataset and our small sized QA4IE-SPAN/SEQ-S datasets, we set the max length of input texts as 400 and a mini-batch size of 20. For middle sized (and large sized) QA4IE datasets, we set the max length as 700 (800) and batch size as 7 (5). We introduce an early stopping in training process after 10 epochs. Our model is trained on a GTX 1080 Ti GPU and it takes about 14 hours on small sized QA4IE datasets. We implement our model with TensorFlow BIBREF47 and optimize the computational expensive LSTM layers with LSTMBlockFusedCell."
],
[
"We first perform experiments in QA settings to evaluate our QA model on both SQuAD dataset and QA4IE benchmark. Since our goal is to solve IE, not QA, the motivation of this part of experiments is to evaluate the performance of our model and make a comparison between QA4IE benchmark and existing datasets. Two metrics are introduced in the SQuAD dataset: Exact Match (EM) and F1-score. EM measures the percentage that the model prediction matches one of the ground truth answers exactly while F1-score measures the overlap between the prediction and ground truth answers. Our QA4IE benchmark also adopts these two metrics.",
"Table 3 presents the results of our QA model on SQuAD dataset. Our model outperforms the previous sequence model but is not competitive with span models because it is designed to produce sequence answers in IE settings while baseline span models are designed to produce span answers for SQuAD dataset.",
"The comparison between our QA model and two baseline QA models on our QA4IE benchmark is shown in Table 4 . For training of both baseline QA models, we use the same configuration of max input length as our model and tune the rest of hyper-parameters on dev sets. Our model outperforms these two baselines on all 6 datasets. The performance is good on S and M datasets but worse for longer documents. As we mentioned in Section 4.1, we set the max input length as 800 and ignore the rest words on L datasets. Actually, there are 11% of queries with no answers in the first 800 words in our benchmark. Processing longer documents is a tough problem BIBREF51 and we leave this to our future work.",
"To study the improvement of each component in our model, we present model ablation study results in Table 5 . We do not involve Attention Flow Layer and Pointer Network Decoder as they cannot be replaced by other architectures with the model still working. We can observe that the first three components can effectively improve the performance but Self Matching Layer makes the training more computationally expensive by 40%. Besides, the LSTMBlockFusedCell works effectively and accelerates the training process by 6 times without influencing the performance."
],
[
"In this subsection, we put our QA model in the entire pipeline of our QA4IE framework (Figure 1 ) and evaluate the framework in IE settings. Existing IE systems are all free-text based Open IE systems, so we need to manually evaluate the free-text based results in order to compare our model with the baselines. Therefore, we conduct experiments on a small dataset, the dev set of QA4IE-SPAN-S which consists of 4393 documents and 28501 ground truth queries.",
"Our QA4IE benchmark is based on Wikipedia articles and all the ground truth triples of each article have the same first entity (i.e. the title of the article). Thus, we can directly use the title of the article as the first entity of each triple without performing step 1 (entity recognition) in our framework. Besides, all the ground truth triples in our benchmark are from knowledge base where they are disambiguated and aggregated in the first place, and therefore step 4 (entity linking) is very simple and we do not evaluate it in our experiments.",
"A major difference between QA settings and IE settings is that in QA settings, each query corresponds to an answer, while in the QA4IE framework, the QA model take a candidate entity-relation (or entity-property) pair as the query and it needs to tell whether an answer to the query can be found in the input text. We can consider the IE settings here as performing step 2 and then step 3 in the QA4IE framework.",
"In step 2, we need to build a candidate query list for each article in the dataset. Instead of incorporating existing ontology or knowledge base, we use a simple but effective way to build the candidate query list of an article. Since we have a ground truth query list with labeled answers of each article, we can add all the neighboring queries of each ground truth query into the query list. The neighboring queries are defined as two queries that co-occur in the same ground truth query list of any articles in the dataset. We transform the dev set of QA4IE-SPAN-S above by adding neighboring queries into the query list. After this step, the number of queries grows to 426336, and only 28501 of them are ground truth queries labeled with an answer.",
"In step 3, we require our QA model to output a confidence score along with the answer to each candidate query. Our QA model produces no answer to a query when the confidence score is less than a threshold $\\delta $ or the output is an “ ${\\rm eos}$ ” symbol. For the answers with a confidence score $\\ge \\delta $ , we evaluate them by the EM measurement with ground truth answers and count the true positive samples in order to calculate the precision and recall under the threshold $\\delta $ . Specifically, we try two confidence scores calculated as follows:",
"$$\\begin{split}\n{\\rm Score_{mul}} = \\prod _{t=1}^L{\\rm P}(a^t_{i_t}),~~~{\\rm Score_{avg}} = \\sum _{t=1}^L{\\rm P}(a^t_{i_t}) / L ~,\n\\end{split}$$ (Eq. 34) ",
"where $(a^1_{i_1}, ... , a^L_{i_L})$ is the answer sequence and ${\\rm P}(a^t_i)$ is defined in Eq. ( 22 ). ${\\rm Score_{mul}}$ is equivalent to the training loss in Eq. ( 24 ) and ${\\rm Score_{avg}}$ takes the answer length into account.",
"The precision-recall curves of our framework based on the two confidence scores are plotted in Figure 3 . We can observe that the EM rate we achieve in QA settings is actually the best recall (91.87) in this curve (by setting $\\delta = 0$ ). The best F1-scores of the two curves are 29.97 (precision $= 21.61$ , recall $= 48.85$ , $\\delta = 0.91$ ) for ${\\rm Score_{mul}}$ and 31.05 (precision $= 23.93$ , recall $= 44.21$ , $\\delta = 0.97$ ) for ${\\rm Score_{avg}}$ . ${\\rm Score_{avg}}$ is better than $= 21.61$0 , which suggests that the answer length should be taken into account.",
"We then evaluate existing IE systems on the dev set of QA4IE-SPAN-S and empirically compare them with our framework. Note that while BIBREF20 is closely related to our work, we cannot fairly compare our framework with BIBREF20 because their systems are in the sentence level and require additional negative samples for training. BIBREF21 is also related to our work, but their dataset and code have not been published yet. Therefore, we choose to evaluate three popular Open IE systems, Open IE 4 BIBREF6 , Stanford IE BIBREF4 and ClauseIE BIBREF5 .",
"Since Open IE systems take a single sentence as input and output a set of free-text based triples, we need to find the sentences involving ground truth answers and feed the sentences into the Open IE systems. In the dev set of QA4IE-SPAN-S, there are 28501 queries with 44449 answer locations labeled in the 4393 documents. By feeding the 44449 sentences into the Open IE systems, we obtain a set of extracted triples from each sentence. We calculate the number of true positive samples by first filtering out triples with less than 20% words overlapping with ground truth answers and then asking two human annotators to verify the remaining triples independently. Since in the experiments, our framework is given the ground-truth first entity of each triple (the title of the corresponding Wikipedia article) while the baseline systems do not have this information, we ask our human annotators to ignore the mistakes on the first entities when evaluating triples produced by the baseline systems to offset this disadvantage. For example, the 3rd case of ClauseIE and the 4th case of Open IE 4 in Table 7 are all labeled as correct by our annotators even though the first entities are pronouns. The two human annotators reached an agreement on 191 out of 195 randomly selected cases.",
"The evaluation results of the three Open IE baselines are shown in Table 6 . We can observe that most of the extracted triples are not related to ground truths and the precision and recall are all very low (around 1%) although we have already helped the baseline systems locate the sentences containing ground truth answers."
],
[
"In this subsection, we perform case studies of IE settings in Table 7 to better understand the models and benchmarks. The baseline Open IE systems produce triples by analyzing the subjects, predicates and objects in input sentences, and thus our annotators lower the bar of accepting triples. However, the analysis on semantic roles and parsing trees cannot work very well on complicated input sentences like the 2nd and the 3rd cases. Besides, the baseline systems can hardly solve the last two cases which require inference on input sentences.",
"Our framework works very well on this dataset with the QA measurements EM $= 91.87$ and F1 $= 93.53$ and the IE measurements can be found in Figure 3 . Most of the error cases are the fourth case which is acceptable by human annotators. Note that our framework takes the whole document as the input while the baseline systems take the individual sentence as the input, which means the experiment setting is much more difficult for our framework."
],
[
"Finally, we perform a human evaluation on our QA4IE benchmark to verify the reliability of former experiments. The evaluation metrics are as follows:",
"Triple Accuracy is to check whether each ground truth triple is accurate (one cannot find conflicts between the ground truth triple and the corresponding article) because the ground truth triples from WikiData and DBpedia may be incorrect or incomplete.",
"Contextual Consistency is to check whether the context of each answer location is consistent with the corresponding ground truth triple (one can infer from the context to obtain the ground truth triple) because we keep all matched answer locations as ground truths but some of them may be irrelevant with the corresponding triple.",
"Triple Consistency is to check whether there is at least one answer location that is contextually consistent for each ground truth triple. It can be calculated by counting the results of Contextual Consistency.",
"We randomly sample 25 articles respectively from the 6 datasets (in total of 1002 ground truth triples with 2691 labeled answer locations) and let two human annotators label the Triple Accuracy for each ground truth triple and the Contextual Consistency for each answer location. The two human annotators reached an agreement on 131 of 132 randomly selected Triple Accuracy cases and on 229 of 234 randomly selected Contextual Consistency cases. The human evaluation results are shown in Table 8 . We can find that the Triple Accuracy and the Triple Consistency is acceptable while the Contextual Consistency still needs to be improved. The Contextual Consistency problem is a weakness of distant supervision, and we leave this to our future work."
],
[
"In this paper, we propose a novel QA based IE framework named QA4IE to address the weaknesses of previous IE solutions. In our framework (Figure 1 ), we divide the complicated IE problem into four steps and show that the step 1, 2 and 4 can be solved well enough by existing work. For the most difficult step 3, we transform it to a QA problem and solve it with our QA model. To train this QA model, we construct a large IE benchmark named QA4IE benchmark that consists of 293K documents and 2 million golden relation triples with 636 different relation types. To our best knowledge, our QA4IE benchmark is the largest document level IE benchmark. We compare our system with existing best IE baseline systems on our QA4IE benchmark and the results show that our system achieves a great improvement over baseline systems.",
"For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are all challenging problems as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction."
],
[
"W. Zhang is the corresponding author of this paper. The work done by SJTU is sponsored by National Natural Science Foundation of China (61632017, 61702327, 61772333) and Shanghai Sailing Program (17YF1428200)."
]
],
"section_name": [
"Introduction and Background",
"Previous IE Systems",
"QA4IE Framework",
"Contributions",
"QA4IE Benchmark Construction",
"Question Answering Model",
"Experimental Setup",
"Results in QA Settings",
"Results in IE Settings",
"Case Study",
"Human Evaluation on QA4IE Benchmark",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"5370c482a9e9c424d28b8ecadac5f0bad4cc0b9e"
],
"answer": [
{
"evidence": [
"The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as",
"$$\\begin{split} g_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\ s_t &= {\\rm relu } (W_xx_t+b_x) \\\\ u_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~. \\end{split}$$ (Eq. 18)",
"Here $W_g, W_x \\in \\mathbb {R}^{d \\times 2d}$ and $b_g, b_x \\in \\mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .",
"Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:",
"Here we obtain $\\mathbf {U} = [u_1^{^{\\prime }}, ... , u_n^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times n}$ and $\\mathbf {V} = [v_1^{^{\\prime }}, ... , v_m^{^{\\prime }}] \\in \\mathbb {R}^{2d \\times m}$ . Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.",
"After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as",
"$$\\begin{split} o_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\ s_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\ \\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~. \\end{split}$$ (Eq. 20)",
"Here $W_h, \\tilde{W_h} \\in \\mathbb {R}^{d \\times 8d}$ and $w \\in \\mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as",
"$$\\begin{split} p_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\ s_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\ \\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\beta _i^to_i~. \\end{split}$$ (Eq. 21)",
"Here $\\beta _{n+1}^t$ denotes the probability of generating the “ ${\\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows",
"$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23)"
],
"extractive_spans": [],
"free_form_answer": "A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer.",
"highlighted_evidence": [
"The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as\n\n$$\\begin{split} g_t &= {\\rm sigmoid}(W_gx_t+b_g) \\\\ s_t &= {\\rm relu } (W_xx_t+b_x) \\\\ u_t &= g_t \\odot s_t + (1 - g_t) \\odot x_t~. \\end{split}$$ (Eq. 18)",
"The same Highway Layer is applied to $q_t$ and produces $v_t$ .",
"Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:",
"Then we feed $\\mathbf {U}$ and $\\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query.",
"Therefore, we introduce Self-Matching Layer BIBREF29 in our model as\n\n$$\\begin{split} o_t &= {\\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\\\ s_j^t &= w^T {\\rm tanh}(W_hh_j+\\tilde{W_h}h_t)\\\\ \\alpha _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\alpha _i^th_i ~. \\end{split}$$ (Eq. 20)",
"Finally we feed the embeddings $\\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as\n\n$$\\begin{split} p_t &= {\\rm LSTM}(p_{t-1}, c_t) \\\\ s_j^t &= w^T {\\rm tanh}(W_oo_j+W_pp_{t-1})\\\\ \\beta _i^t &= {\\rm exp}(s_i^t)/\\Sigma _{j=1}^n{\\rm exp}(s_j^t)\\\\ c_t &= \\Sigma _{i=1}^n\\beta _i^to_i~. \\end{split}$$ (Eq. 21)",
"Therefore, the probability of generating the answer sequence $\\textbf {a}$ is as follows\n\n$${\\rm P}(\\textbf {a}|\\mathbf {O}) = \\prod _t {\\rm P}(a^t | a^1, ... , a^{t-1}, \\mathbf {O})~.$$ (Eq. 23)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"b38449c5de925046121e3e09d3e32348e23e9a99"
],
"answer": [
{
"evidence": [
"For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper. Besides, processing longer documents and improving the quality of our benchmark are all challenging problems as we mentioned previously. We hope this work can provide new thoughts for the area of information extraction.",
"The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation. We ignore the adverbials and only consider the entity pairs and their relations as in standard RE settings. Note that we process the entire document as a whole instead of processing individual sentences separately as in previous systems. As shown in Figure 1 , our QA4IE framework consists of four key steps:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For the future work, we plan to solve the triples with multiple entities as the second entity, which is excluded from problem scope in this paper.",
"The input of QA4IE is a document $D$ with an existing knowledge base $K$ and the output is a set of relation triples $R = \\lbrace e_i, r_{ij}, e_j\\rbrace $ in $D$ where $e_i$ and $e_j$ are two individual entities and $r_{ij}$ is their relation."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"1a703b3a71caca8e01e48af84574b49a0a704560"
],
"answer": [
{
"evidence": [
"As mentioned above, step 1, 2 and 4 in the QA4IE framework can be solved by existing work. Therefore, in this paper, we mainly focus on step 3. According to the recent progress in QA and MRC, deep neural networks are very good at solving this kind of problem with a large-scale dataset to train the network. However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark. Inspired by WikiReading BIBREF33 , a recent large-scale QA benchmark over Wikipedia, we find that the articles in Wikipedia together with the high quality triples in knowledge bases such as Wikidata BIBREF34 and DBpedia can form the supervision we need. Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.",
"Incorporating DBpedia. Unlike WikiData, DBpedia is constructed automatically without human verification. Relations and properties in DBpedia are coarse and noisy. Thus we fix the existing 636 relation types in WikiData and build a projection from DBpedia relations to these 636 relation types. We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations. Then we gather all the DBpedia triples with the first entity is corresponding to one of the above 3.5M articles and the relation is one of the projected 148 relations. After the same clipping process as above and removing the repetitive triples, we obtain 394K additional triples in 302K existing Wikipedia articles."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"However, all previous IE benchmarks BIBREF18 are too small to train neural network models typically used in QA, and thus we need to build a large benchmark.",
"Therefore, we build a large scale benchmark named QA4IE benchmark which consists of 293K Wikipedia articles and 2M golden relation triples with 636 different relation types.",
"We manually find 148 relations which can be projected to a WikiData relation out of 2064 DBpedia relations."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What QA models were used?",
"Can this approach model n-ary relations?",
"Was this benchmark automatically created from an existing dataset?"
],
"question_id": [
"fd8b6723ad5f52770bec9009e45f860f4a8c4321",
"4ce3a6632e7d86d29a42bd1fcf325114b3c11d46",
"e7c0cdc05b48889905cc03215d1993ab94fb6eaa"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"information extraction",
"information extraction",
"information extraction"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. An overview of our QA4IE Framework.",
"Table 1. Detailed Statistics of QA4IE Benchmark.",
"Table 2. Comparison between existing IE benchmarks and QA benchmarks. The first two are IE benchmarks and the rest four are QA benchmarks.",
"Fig. 2. An overview of our QA model.",
"Table 3. Comparison of QA models on SQuAD datasets. We only include the single model results on the dev set from published papers.",
"Table 4. Comparison of QA models on 6 datasets of our QA4IE benchmark. The BiDAF model cannot work on our SEQ datasets thus the results are N/A.",
"Fig. 3. Precision-recall curves with two confidence scores on the dev set of QA4IE-SPAN-S.",
"Table 6. Results of three Open IE baselines on the dev set of QA4IE-SPAN-S.",
"Table 7. Case study of three Open IE baselines and our framework on dev set of QA4IE-SPAN-S, the results of baselines are judged by two human annotators while the results of our framework are measured by Exact Match with ground truth. The triples in red indicate the wrong cases.",
"Table 8. Human evaluation on QA4IE benchmark."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Figure2-1.png",
"9-Table3-1.png",
"10-Table4-1.png",
"11-Figure3-1.png",
"12-Table6-1.png",
"13-Table7-1.png",
"14-Table8-1.png"
]
} | [
"What QA models were used?"
] | [
[
"1804.03396-Question Answering Model-13",
"1804.03396-Question Answering Model-9",
"1804.03396-Question Answering Model-6",
"1804.03396-Question Answering Model-7",
"1804.03396-Question Answering Model-3",
"1804.03396-Question Answering Model-1",
"1804.03396-Question Answering Model-4"
]
] | [
"A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer."
] | 280 |
1707.03764 | N-GrAM: New Groningen Author-profiling Model | We describe our participation in the PAN 2017 shared task on Author Profiling, identifying authors' gender and language variety for English, Spanish, Arabic and Portuguese. We describe both the final, submitted system, and a series of negative results. Our aim was to create a single model for both gender and language, and for all language varieties. Our best-performing system (on cross-validated results) is a linear support vector machine (SVM) with word unigrams and character 3- to 5-grams as features. A set of additional features, including POS tags, additional datasets, geographic entities, and Twitter handles, hurt, rather than improve, performance. Results from cross-validation indicated high performance overall and results on the test set confirmed them, at 0.86 averaged accuracy, with performance on sub-tasks ranging from 0.68 to 0.98. | {
"paragraphs": [
[
"With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal characteristics from text, can reveal many things, such as their age, gender, personality traits, location, even though writers might not consciously choose to put indicators of those characteristics in the text. The uses for this are obvious, for cases like targeted advertising and other use cases, such as security, but it is also interesting from a linguistic standpoint.",
"In the shared task on author profiling BIBREF0 , organised within the PAN framework BIBREF1 , the aim is to infer Twitter users' gender and language variety from their tweets in four different languages: English, Spanish, Arabic, and Portuguese. Gender consists of a binary classification (male/female), whereas language variety differs per language, from 2 varieties for Portuguese (Brazilian and Portugal) to 7 varieties for Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela). The challenge is thus to classify users along two very different axes, and in four highly different languages – forcing participants to either build models that can capture these traits very generally (language-independent) or tailor-make models for each language or subtask.",
"Even when looking at the two tasks separately, it looks like the very same features could be reliable clues for classification. Indeed, for both profiling authors on Twitter as well as for discriminating between similar languages, word and character n-grams have proved to be the strongest predictors of gender as well as language varieties. For language varieties discrimination, the systems that performed best at the DSL shared tasks in 2016 (on test set B, i.e. social media) used word/character n-grams, independently of the algorithm BIBREF2 . The crucial contribution of these features was also observed by BIBREF3 , BIBREF4 , who participated in the 2017 DSL shared task with the two best performing systems. For author profiling, it has been shown that tf-idf weighted n-gram features, both in terms of characters and words, are very successful in capturing especially gender distinctions BIBREF5 . If different aspects such as language variety and gender of a speaker on Twitter might be captured by the same features, can we build a single model that will characterise both aspects at once?",
"In the context of the PAN 2017 competition on user profiling we therefore experimented with enriching a basic character and word n-gram model by including a variety of features that we believed should work. We also tried to view the task jointly and model the two problems as one single label, but single modelling worked best.",
"In this paper we report how our final submitted system works, and provide some general data analysis, but we also devote substantial space to describing what we tried (under which motivations), as we believe this is very informative towards future developments of author profiling systems."
],
[
"After an extensive grid-search we submitted as our final run, a simple SVM system (using the scikit-learn LinearSVM implementation) that uses character 3- to 5-grams and word 1- to 2-grams with tf-idf weighting with sublinear term frequency scaling, where instead of the standard term frequency the following is used:",
" INLINEFORM0 ",
"We ran the grid search over both tasks and all languages on a 64-core machine with 1 TB RAM (see Table TABREF2 for the list of values over which the grid search was performed). The full search took about a day to complete. In particular, using min_df=2 (i.e. excluding all terms that are used by only one author) seems to have a strong positive effect and greatly reduces the feature size as there are many words that appear only once. The different optimal parameters for different languages provided only a slight performance boost for each language. We decided that this increase was too small to be significant, so we decided to use a single parameter set for all languages and both tasks."
],
[
"The training dataset provided consist of 11400 sets of tweets, each set representing a single author. The target labels are evenly distributed across variety and gender. The labels for the gender classification task are `male' and `female'. Table TABREF4 shows the labels for the language variation task and also shows the data distribution across languages.",
"We produced two visualisations, one per label (i.e. variety and gender), in order to gain some insights that could help the feature engineering process. For the variety label we trained a decision tree classifier using word unigrams: although the performance is poor (accuracy score of 0.63) this setup has the benefit of being easy to interpret: Figure FIGREF3 shows which features are used for the first splits of the tree.",
"We also created a visualisation of the English dataset using the tool described in BIBREF6 , and comparing the most frequent words used by males to those used by females. The visualisation shown in Figure SECREF6 indicates several interesting things about the gendered use of language. The words used often by males and very seldom by females are often sport-related, and include words such as “league”, and “chelsea”. There are several emojis that are used frequently by females and infrequently by males, e.g. “”, “”, as well as words like “kitten”, “mom”, “sister” and “chocolate”. In the top right of the visualisation we see words like “trump” and “sleep”, which indicates that these words are used very frequently, but equally so by both genders. This also shows that distinguishing words include both time-specific ones, like “gilmore” and “imacelebrityau”, and general words from everyday life, which are less likely to be subject to time-specific trends, like “player”, and “chocolate”."
],
[
"This section is meant to highlight all of the potential contributions to the systems which turned out to be detrimental to performance, when compared to the simpler system that we have described in Section SECREF2 . We divide our attempts according to the different ways we attempted to enhance performance: manipulating the data itself (adding more, and changing preprocessing), using a large variety of features, and changing strategies in modelling the problem by using different algorithms and paradigms. All reported results are on the PAN 2017 training data using five-fold cross-validation, unless otherwise specified."
],
[
"We extended the training dataset by adding data and gender labels from the PAN 16 Author Profiling shared task BIBREF5 . However, the additional data consistently resulted in lower cross-validation scores than when using only the training data provided with the PAN 17 task. One possible explanation for this is that our unigram model captures aspects that are tied specifically to the PAN 17 dataset, because it contains topics that may not be present in datasets that were collected in a different time period. To confirm this, we attempted to train on English data from PAN 17 and predict gender labels for the English data from PAN 16, as well as vice versa. Training on the PAN 16 data resulted in an accuracy score of 0.754 for the PAN 17 task, and training on PAN 17 gave an accuracy score of 0.70 for PAN 16, both scores significantly lower than cross-validated results on data from a single year.",
"We attempted to classify the English tweets by Gender using only the data collected by BIBREF7 . This dataset consists of aggregated word counts by gender for about 14,000 Twitter users and 9 million Tweets. We used this data to calculate whether each word in our dataset was a `male' word (used more by males), or a `female' word, and classified users as male or female based on a majority count of the words they used. Using this method we achieved 71.2 percent accuracy for the English gender data, showing that this simple method can provide a reasonable baseline to the gender task.",
"We experimented with different tokenization techniques for different languages, but our average results did not improve, so we decided to use the default scikit-learn tokenizer.",
"We tried adding POS-tags to the English tweets using the spaCy tagger: compared to the model using unigrams only the performances dropped slightly for gender and a bit more for variety:",
"It is not clear whether the missed increase in performance is due to the fact that the data are not normal (i.e. the tokenizer is not Twitter specific) or to the fact that POS tags confuse the classifier. Considering the results we decided not to include a POS-tagger in the final system.",
"()",
"In April 2015, SwiftKey did an extensive report on emoji use by country. They discovered that emoji use varies across languages and across language varieties. For example, they found that Australians use double the average amount of alcohol-themed emoji and use more junk food and holiday emoji than anywhere else in the world.",
"We tried to leverage these findings but the results were disappointing. We used a list of emojis as a vocabulary for the td/idf vectorizer. Encouraged by the results of the SwiftKey report, we tried first to use emojis as the only vocabulary and although the results are above the baseline and also quite high considering the type of features, they were still below the simple unigram model. Adding emojis as extra features to the unigram model also did not provide any improvement.",
"Since emojis are used across languages we built a single model for the four languages. We trained the model for the gender label on English, Portuguese and Arabic and tested it on Spanish: the system scored 0.67 in accuracy.",
"We looked at accuracy scores for the English gender and variety data more closely. We tried different representations of the tweet texts, to see what kind of words were most predictive of variety and gender. Specifically, we look at using only words that start with an uppercase letter, only words that start with a lowercase letter, only Twitter handles (words that start with an \"@\") and all the text excluding the handles.",
"It is interesting that the accuracies are so high although we are using only a basic unigram model, without looking at the character n-grams that we include in our final model. Representing each text only by the Twitter handles used in that text results in 0.77 accuracy for variety, probably because users tend to interact with other users who are in the same geographic area. However, excluding handles from the texts barely decreases performance for the variety task, showing that while the handles can be discriminative, they are not necessary for this task. It is also interesting to note that for this dataset, looking only at words beginning with an uppercase character results in nearly the same score for the Gender task as we get when using all of the available text, while using only lowercase words decreases performance. The opposite is true for the variety task, where using lowercase-only words results in as good performance as using all the text, but using only uppercase words decreases accuracy by over 10 percent.",
"We tried using the counts of geographical names related to the language varieties were as a feature. We also treated this list of locations as vocabulary for our model. Both these approaches did not improve our model.",
"We then tried enriching the data to improve the Unigram model. For each of the language varieties, we obtained 100 geographical location names, representing the cities with the most inhabitants. When this location was mentioned in the tweet, the language variety the location was part of was added to the tweet.",
"We attempted to use Twitter handles in a similar manner. The 100 most-followed Twitter users per language variety were found and the language variety was added to the text when one of its popular Twitter users was mentioned.",
"Unfortunately, this method did not improve our model. We suspect that the information is being captured by the n-gram model, which could explain why this did not improve performance.",
"We have tried the partial setup of last year's winning system, GronUP BIBREF8 , with the distinction that we had to classify language variety instead of age groups. We have excluded the features that are language-dependent (i.e. pos-tagging and misspelling/typos), and experimented with various feature combinations of the rest while keeping word and character n-grams the same. We achieved average accuracy from 0.810 to 0.830, which is clearly lower than our simple final model."
],
[
"We tried to build a single model that predicts at the same time both the language variety and the gender of each user: as expected (since the task is harder) the performance goes down when compared to a model trained independently on each label. However, as highlighted in Table TABREF21 , the results are still surprisingly high. To train the system we simply merged the two labels.",
"We experimented with Facebook's FastText system, which is an out-of-the-box supervised learning classifier BIBREF9 . We used only the data for the English gender task, trying both tweet-level and author-level classification. We pre-processed all text with the NLTK Tweet Tokenizer and used the classification-example script provided with the FastText code base. Training on 3,000 authors and testing on 600 authors gave an accuracy score of 0.64. Changing the FastText parameters such as number of epochs, word n-grams, and learning rate showed no improvement. We achieved an accuracy on 0.79 when we attempted to classify on a per-tweet basis (300,000 tweets for training and 85,071 for test), but this is an easier task as some authors are split over the training and test sets. There are various ways to summarise per-tweet predictions into author-predictions, but we did not experiment further as it seemed that the SVM system worked better for the amount of data we have.",
"In the final system we used the SVM classifier because it outperformed all the others that we tried. Table TABREF23 highlights the results."
],
[
"For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24 . We compare our gender and variety accuracies against the LDR-baseline BIBREF10 , a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline).",
"Results are broken down per language, and are summarised as both joint and average scores. The joint score is is the percentage of texts for which both gender and variety were predicted correctly at the same time. The average is calculated as the mean over all languages.",
"N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task."
],
[
"We conclude that, for the current author profiling task, a seemingly simple system using word and character n-grams and an SVM classifier proves very hard to beat. Indeed, N-GrAM turned out to be the best-performing out of the 22 systems submitted in this shared task. Using additional training data, `smart' features, and hand-crafted resources hurts rather than helps performance. A possible lesson to take from this would be that manually crafting features serves only to hinder a machine learning algorithm's ability to find patterns in a dataset, and perhaps it is better to focus one's efforts on parameter optimisation instead of feature engineering.",
"However, we believe that this is too strong a conclusion to draw from this limited study, since several factors specific to this setting need to be taken into account. For one, a support vector machine clearly outperforms other classifiers, but this does not mean that this is an inherently more powerful. Rather, we expect that an SVM is the best choice for the given amount of training data, but with more training data, a neural network-based approach would achieve better results.",
"Regarding the frustrating lack of benefit from more advanced features than n-grams, a possible explanation comes from a closer inspection of the data. Both the decision tree model (see Figure FIGREF3 ) and the data visualisation (see Figure SECREF6 ) give us an insight in the most discriminating features in the dataset. In the case of language variety, we see that place names can be informative features, and could therefore be used as a proxy for geographical location, which in turn serves as a proxy for language variety. Adding place names explicitly to our model did not yield performance improvements, which we take to indicate that this information is already captured by n-gram features. Whether and how geographical information in the text can be useful in identifying language variety, is a matter for future research.",
"In the case of gender, many useful features are ones that are highly specific to the Twitter platform (#iconnecthearts), time (cruz), and topics (pbsnewshour) in this dataset, which we suspect would not carry over well to other datasets, but provide high accuracy in this case. Conversely, features designed to capture gender in a more general sense do not yield any benefit over the more specific features, although they would likely be useful for a robust, cross-dataset system. These hypotheses could be assessed in the future by testing author profiling systems in a cross-platform, cross-time setting.",
" Scatter plot of terms commonly used by male and female English speakers."
]
],
"section_name": [
"Introduction",
"Final System",
"Data Analysis",
"Alternative Features and Methods: An Analysis of Negative Results",
"Supplementary Data and Features",
"Modelling",
"Results on Test Data",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ca1cbe32990697dc4b2c440c07fa82bfeee4c346"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.",
"For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). For the global scores, all languages are combined. We present finer-grained scores showing the breakdown per language in Table TABREF24 . We compare our gender and variety accuracies against the LDR-baseline BIBREF10 , a low dimensionality representation especially tailored to language variety identification, provided by the organisers. The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline)."
],
"extractive_spans": [],
"free_form_answer": "They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline",
"highlighted_evidence": [
"FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.",
"For the final evaluation we submitted our system, N-GrAM, as described in Section 2. Overall, N-GrAM came first in the shared task, with a score of 0.8253 for gender 0.9184 for variety, a joint score of 0.8361 and an average score of 0.8599 (final rankings were taken from this average score BIBREF0 ). ",
"We present finer-grained scores showing the breakdown per language in Table TABREF24 .",
"The final column, + 2nd shows the difference between N-GrAM and that achieved by the second-highest ranked system (excluding the baseline).\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"33c0a0971c00615c05d4259aaa489ee926bd3fb8"
],
"answer": [
{
"evidence": [
"N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task."
],
"extractive_spans": [],
"free_form_answer": "Gender prediction task",
"highlighted_evidence": [
"N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5c112a7545f5de3816ae328c728a1109da194b90"
],
"answer": [
{
"evidence": [
"N-GrAM ranked first in all cases except for the language variety task. In this case, the baseline was the top-ranked system, and ours was second by a small margin. Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task."
],
"extractive_spans": [],
"free_form_answer": "Variety prediction task",
"highlighted_evidence": [
"Our system significantly out-performed the baseline on the joint task, as the baseline scored significantly lower for the gender task than for the variety task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?",
"On which task does do model do worst?",
"On which task does do model do best?"
],
"question_id": [
"157b9f6f8fb5d370fa23df31de24ae7efb75d6f3",
"9bcc1df7ad103c7a21d69761c452ad3cd2951bda",
"8427988488b5ecdbe4b57b3813b3f981b07f53a5"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1. Results (accuracy) for the 5-fold cross-validation",
"Table 2. A list of values over which we performed the grid search.",
"Figure 1. Decision Tree output",
"Table 4. Results (accuracy) on the English data for Gender and Variety with and without part of speech tags.",
"Table 5. Results (accuracy) on the English data for Gender and Variety when excluding certain words. We preprocessed the text to exclude the specified word-patterns and then vectorized the resulting text with tf-idf. Classification was done using an SVM with a linear kernel over five-fold cross-validation.",
"Table 7. Performances per classifier: DT: Decision Tree; MLP: Multi-Layer Perceptron, NB: Naive Bayes.",
"Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Figure1-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"8-Table7-1.png",
"8-Table8-1.png"
]
} | [
"How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?",
"On which task does do model do worst?",
"On which task does do model do best?"
] | [
[
"1707.03764-Results on Test Data-0",
"1707.03764-8-Table8-1.png"
],
[
"1707.03764-Results on Test Data-2"
],
[
"1707.03764-Results on Test Data-2"
]
] | [
"They achieved best result in the PAN 2017 shared task with accuracy for Variety prediction task 0.0013 more than the 2nd best baseline, accuracy for Gender prediction task 0.0029 more than 2nd best baseline and accuracy for Joint prediction task 0.0101 more than the 2nd best baseline",
"Gender prediction task",
"Variety prediction task"
] | 282 |
1911.03842 | Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation | Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze the presence of gender bias in dialogue and examine the subsequent effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate bias: counterfactual data augmentation, targeted data collection, and conditional training. We focus on the multi-player text-based fantasy adventure dataset LIGHT as a testbed for our work. LIGHT contains gender imbalance between male and female characters with around 1.6 times as many male characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We show that (i) our proposed techniques mitigate gender bias by balancing the genderedness of generated dialogue utterances; and (ii) they work particularly well in combination. Further, we show through various metrics---such as quantity of gendered words, a dialogue safety classifier, and human evaluation---that our models generate less gendered, but still engaging chitchat responses. | {
"paragraphs": [
[
"Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.",
"We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity.",
"We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees\") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias."
],
[
"Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations."
],
[
"Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1."
],
[
"We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset.",
"We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral\" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5).",
"While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women."
],
[
"After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it."
],
[
"In our analysis, we found many examples of biased utterances in the data used to train dialogue agents. For example, the character with a queen persona utters the line I spend my days embroidery and having a talk with the ladies. Another character in a dialogue admires a sultry wench with fire in her eyes. An example of persona bias propagating to the dialogue can be found in Table TABREF2."
],
[
"Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves embued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created through either automatic means, or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues."
],
[
"We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces."
],
[
"Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is a 8 layer encoder, 8 layer decoder with 512 dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5."
],
[
"One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather."
],
[
"To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy."
],
[
"There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character."
],
[
"As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am an woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion.",
"In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5."
],
[
"Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset)."
],
[
"Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result.",
"Prior to training, each dialogue response is binned into one of four bins – $\\text{F}^{0/+}\\text{M}^{0/+}$ – where $\\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words."
],
[
"We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL)."
],
[
"Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible statistical biases present in datasets.",
"As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\\%$ of the time.",
"Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset."
],
[
"We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\\text{F}^{0}\\text{M}^{0}$, $\\text{F}^{0}\\text{M}^{+}$, $\\text{F}^{+}\\text{M}^{0}$, and $\\text{F}^{+}\\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11.",
"Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\\text{F}^{0}\\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth."
],
[
"Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1.",
"Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.\"."
],
[
"Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16)."
],
[
"Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\\text{F}^{0}\\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality."
],
[
"We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness."
]
],
"section_name": [
"Introduction",
"Sources of Bias in Dialogue Datasets ::: Bias in Character Personas",
"Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Qualitative Examination.",
"Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Quantitative Examination.",
"Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances",
"Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Qualitative Examination.",
"Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Measuring Bias.",
"Methodology: Mitigating Bias in Generative Dialogue",
"Methodology: Mitigating Bias in Generative Dialogue ::: Models",
"Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation",
"Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection",
"Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas",
"Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New and Diverse characters",
"Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New dialogues",
"Methodology: Mitigating Bias in Generative Dialogue ::: Conditional Training",
"Results",
"Results ::: Bias is Amplified in Generation",
"Results ::: Genderedness of Generated Text",
"Results ::: Conditional Training Controls Gendered Words",
"Results ::: Safety of Generated Text",
"Results ::: Human Evaluation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1adf5025419a86a5a9d6dfa3c94f2b10887ba8dc"
],
"answer": [
{
"evidence": [
"Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\\text{F}^{0}\\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth."
],
"extractive_spans": [
"Transformer generation model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"a4f3aaa96d4e166fbe45d5ff951d622f4f963863"
],
"answer": [
{
"evidence": [
"One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather."
],
"extractive_spans": [],
"free_form_answer": "The training dataset is augmented by swapping all gendered words by their other gender counterparts",
"highlighted_evidence": [
"One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1bd3662ed99b0f0baec07e009286a85a87364f37"
],
"answer": [
{
"evidence": [
"There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character."
],
"extractive_spans": [],
"free_form_answer": "Gendered characters in the dataset",
"highlighted_evidence": [
"For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What baseline is used to compare the experimental results against?",
"How does counterfactual data augmentation aim to tackle bias?",
"In the targeted data collection approach, what type of data is targetted?"
],
"question_id": [
"d0b005cb7ed6d4c307745096b2ed8762612480d2",
"9d9b11f86a96c6d3dd862453bf240d6e018e75af",
"415f35adb0ef746883fb9c33aa53b79cc4e723c3"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Character persona examples from the LIGHT dataset. While there are relatively few examples of femalegendered personas, many of the existing ones exhibit bias. None of these personas were flagged by annotators during a review for offensive content.",
"Table 2: An example dialogue from the LIGHT dataset, with the persona for the wife character provided. Bias from the persona informs and effects the dialogue task.",
"Table 3: Analysis of gender in LIGHT Characters: the original dataset contains 1.6× as many male-gendered characters as female-gendered characters. New characters are collected to offset this imbalance.",
"Table 4: We compare the performance of various bias mitigation methods – Counterfactual Data Augmentation (CDA), Positive-Bias Data Collection (Pos. Data), Conditional Training (CT), and combining these methods (ALL) – on the LIGHT test set, splitting the test set across the four genderedness bins: F0/+M0/+. X0 indicates there are no X-gendered words in the gold response, while, X+ indicates that there is at least one. We measure the percent of gendered words in the generated utterances (% gend. words) and the percent of male bias (% male bias), i.e. the percent of male-gendered words among all gendered words generated. While each of these methods yield some improvement, combining all of these methods in one yields the best control over the genderedness of the utterances while still maintaining a good F1-score.",
"Figure 1: Comparing the performance of the ALL de-bias model when we fix the conditioning to a specific bin for all examples at test time. We report results for each possible conditioning bin choice. Across bins, the model maintains performance whilst radically changing the genderedness of the language generated.",
"Table 5: Offensive language classification of model responses on the LIGHT dialogue test set.",
"Figure 2: Human Evaluation of ALL model compared to baseline Transformer generative model. The control bins in ALL are set to F0M0 to reduce gendered words. Evaluators find it harder to predict the speaker gender when using our proposed techniques, while model engagingness is not affected by the method.",
"Table 6: Example generations from the baseline model and the proposed de-biased models. In these examples, the gold truth either contains no gendered words or only female-gendered words, but the baseline model generates male-gendered words."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"4-Figure1-1.png",
"4-Table5-1.png",
"4-Figure2-1.png",
"7-Table6-1.png"
]
} | [
"How does counterfactual data augmentation aim to tackle bias?",
"In the targeted data collection approach, what type of data is targetted?"
] | [
[
"1911.03842-Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation-0"
],
[
"1911.03842-Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas-0"
]
] | [
"The training dataset is augmented by swapping all gendered words by their other gender counterparts",
"Gendered characters in the dataset"
] | 285 |
1707.02377 | Efficient Vector Representation for Documents through Corruption | We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time. | {
"paragraphs": [
[
"Text understanding starts with the challenge of finding machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks BIBREF0 . However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.",
"Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec BIBREF1 , by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space. The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.",
"Paragraph Vectors BIBREF2 generalize the idea to learn vector representation for documents. A target word is predicted by the word embeddings of its neighbors in together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation BIBREF3 , on various text understanding tasks BIBREF4 . However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensive to generate vector representations for unseen documents at test time.",
"We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain substantial amount of syntactic and semantic meanings of a phrase or a sentence BIBREF5 . For example, vec(“Russia”) + vec(“river”) is close to vec(“Volga River”) BIBREF6 , and vec(“king”) - vec(“man”) + vec(“women”) is close to vec(“queen”) BIBREF5 . In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form document representation BIBREF7 , BIBREF8 , Doc2VecC enforces a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly remove words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.",
"Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. Vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boost test efficiency; 5. The vector representation generated by Doc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks."
],
[
"Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants BIBREF9 , language model based methods BIBREF10 , BIBREF11 , BIBREF12 , topic models BIBREF13 , BIBREF3 , Denoising Autoencoders and its variants BIBREF14 , BIBREF15 , and distributed vector representations BIBREF8 , BIBREF2 , BIBREF16 . Another prominent line of work includes learning task-specific document representation with deep neural networks, such as CNN BIBREF17 or LSTM based approaches BIBREF18 , BIBREF19 .",
"In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-know model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models BIBREF1 . In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:"
],
[
"Several works BIBREF6 , BIBREF5 showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure FIGREF9 illustrates the new model architecture.",
"Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement.",
"Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as INLINEFORM0 , is generated through a unbiased mask-out/drop-out corruption, in which we randomly overwrites each dimension of the original document INLINEFORM1 with probability INLINEFORM2 . To make the corruption unbiased, we set the uncorrupted dimensions to INLINEFORM3 times its original value. Formally, DISPLAYFORM0 ",
"Doc2VecC then defines the probability of observing a target word INLINEFORM0 given its local context INLINEFORM1 as well as the global context INLINEFORM2 as DISPLAYFORM0 ",
"Here INLINEFORM0 is the length of the document. Exactly computing the probability is impractical, instead we approximate it with negative sampling BIBREF1 . DISPLAYFORM0 ",
"here INLINEFORM0 stands for a uniform distribution over the terms in the vocabulary. The two projection matrices INLINEFORM1 and INLINEFORM2 are then learned to minimize the loss: DISPLAYFORM0 ",
"Given the learned projection matrix INLINEFORM0 , we then represent each document simply as an average of the embeddings of the words in the document, DISPLAYFORM0 ",
"We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq.( EQREF10 ) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time."
],
[
"We approximate the log likelihood for each instance INLINEFORM0 in eq.( EQREF13 ) with its Taylor expansion with respect to INLINEFORM1 up to the second-order BIBREF26 , BIBREF27 , BIBREF28 . Concretely, we choose to expand at the mean of the corruption INLINEFORM2 : INLINEFORM3 ",
"where INLINEFORM0 and INLINEFORM1 are the first-order (i.e., gradient) and second-order (i.e., Hessian) of the log likelihood with respect to INLINEFORM2 . Expansion at the mean INLINEFORM3 is crucial as shown in the following steps. Let us assume that for each instance, we are going to sample the global context INLINEFORM4 infinitely many times, and thus compute the expected log likelihood with respect to the corrupted INLINEFORM5 . INLINEFORM6 ",
"The linear term disappears as INLINEFORM0 . We substitute in INLINEFORM1 for the mean INLINEFORM2 of the corrupting distribution (unbiased corruption) and the matrix INLINEFORM3 for the variance, and obtain DISPLAYFORM0 ",
"As each word in a document is corrupted independently of others, the variance matrix INLINEFORM0 is simplified to a diagonal matrix with INLINEFORM1 element equals INLINEFORM2 . As a result, we only need to compute the diagonal terms of the Hessian matrix INLINEFORM3 .",
"The INLINEFORM0 dimension of the Hessian's diagonal evaluated at the mean INLINEFORM1 is given by INLINEFORM2 ",
"Plug the Hessian matrix and the variance matrix back into eq.( EQREF16 ), and then back to the loss defined in eq.( EQREF13 ), we can see that Doc2VecC intrinsically minimizes DISPLAYFORM0 ",
"Each INLINEFORM0 in the first term measures the log likelihood of observing the target word INLINEFORM1 given its local context INLINEFORM2 and the document vector INLINEFORM3 . As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context. The second term here is a data-dependent regularization. The regularization on the embedding INLINEFORM4 of each word INLINEFORM5 takes the following form, INLINEFORM6 ",
"where INLINEFORM0 prescribes the confidence of predicting the target word INLINEFORM1 given its neighboring context INLINEFORM2 as well as the document vector INLINEFORM3 .",
"Closely examining INLINEFORM0 leads to several interesting findings: 1. the regularizer penalizes more on the embeddings of common words. A word INLINEFORM1 that frequently appears across the training corpus, i.e, INLINEFORM2 often, will have a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by INLINEFORM3 , which is small if INLINEFORM4 . In other words, if INLINEFORM5 is critical to a confident prediction INLINEFORM6 when it is active, then the regularization is diminished. Similar effect was observed for dropout training for logistic regression model BIBREF27 and denoising autoencoders BIBREF28 ."
],
[
"We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017"
],
[
"We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing original document INLINEFORM0 using corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Liebler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainings; Word2Vec BIBREF1 +IDF, a representation generated through weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset."
],
[
"For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movies reviews categorized as either positive or negative. It comes with predefined train/test split BIBREF30 : 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear less than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.",
"Setup. We test the various representation learning algorithms under two settings: one follows the same protocol proposed in BIBREF8 , where representation is learned using all the available data, including the test set; another one where the representation is learned using training and unlabeled set only. For both settings, a linear support vector machine (SVM) BIBREF31 is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, first 2400 from the uni-skip model, and the last 2400 from the bi-skip model, are generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.",
"Accuracy. Comparing the two columns in Table TABREF20 , we can see that all the representation learning algorithms benefits from including the testing data during the representation learning phrase. Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methods outperforms the other baselines, beating the BOW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure FIGREF9 . By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poor on this dataset comparing to other methods. We hypothesized that it is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is INLINEFORM0 , much longer than the ones used for training and testing in the original paper, which is in the order of 10. As noted in BIBREF18 , the performance of LSTM based method (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.",
"Time. Table TABREF22 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC second that. The number of parameters that needs to be back-propagated in each update was increased by the number of surviving words in INLINEFORM0 . We found that both models are not sensitive to the corruption rate INLINEFORM1 in the noise model. Since the learning time decreases with higher corruption rate, we used INLINEFORM2 throughout the experiments. Paragraph Vectors takes longer time to train as there are more parameters (linear to the number of document in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to repeated high-dimensional matrix operations required for encoding long paragraphs, it takes fairly long time to generate the representations for these documents. Similarly for testing. The experiments were conducted on a desktop with Intel i7 2.2Ghz cpu.",
"Data dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to exam the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number inside the parenthesis after each word is the number of times this word appears in the learning set. In word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words.",
"Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling of frequent words introduced in BIBREF6 to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG as the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG will increases from INLINEFORM0 to INLINEFORM1 . Doc2VecC on the other hand naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, therefore does not rely on this trick."
],
[
"In table TABREF24 , we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we are going to quantatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec, or Paragraph Vectors on the word analogy task introduced by BIBREF1 . The dataset contains five types of semantic questions, and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by different methods. Please refer to the original paper for more details on the evaluation protocol.",
"We trained the word embeddings of different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by different methods with increasing embedding dimensionality as well as increasing training data.",
"We observe similar trends as in BIBREF1 . Increasing embedding dimensionality as well as training data size improves performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC produces word embeddings which performs significantly better than the ones generated by Word2Vec. We observe close to INLINEFORM0 uplift when we train on the full training corpus. Paragraph vectors on the other hand performs surprisingly bad on this dataset. Our hypothesis is that due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW BIBREF2 model architecture proposed in the original work, which completely removes word embedding layers, performs comparable to the distributed memory version.",
"In table 5, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embedding of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks."
],
[
"For the document classification task, we use a subset of the wikipedia dump, which contains over 300,000 wikipedia pages in 100 categories. The 100 categories includes categories under sports, entertainment, literature, and politics etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we select 1,000 documents with unique category label, and 100 documents were used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this data set, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size INLINEFORM0 .",
"Table TABREF29 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing representation size. Across all sizes of representations, Doc2VecC outperform the existing algorithms by a significant margin. In fact, Doc2VecC can achieve same or better performance with a much smaller representation vector.",
"Figure FIGREF30 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE BIBREF32 . We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table TABREF29 .",
"Figure FIGREF31 visualizes the vector representation generated by Doc2VecC w.r.t. coarser categorization. we manually grouped the 100 categories into 7 coarse categories, television, albums, writers, musicians, athletes, species and actors. Categories that do no belong to any of these 7 groups are not included in the figure. We can see that documents belonging to a coarser category are grouped together. This subset includes is a wide range of sports descriptions, ranging from football, crickets, baseball, and cycling etc., which explains why the athletes category are less concentrated. In the projection, we can see documents belonging to the musician category are closer to those belonging to albums category than those of athletes or species."
],
[
"We test Doc2VecC on the SemEval 2014 Task 1: semantic relatedness SICK dataset BIBREF33 . Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human annotated relatedness score, ranging from 1 to 5. A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is splitted into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.",
"We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including bi-directional LSTM and Tree-LSTM trained from scratch on this dataset, Skip-thought vectors learned a large book corpus BIBREF34 and produced sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in BIBREF16 to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence representation on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only, and we did not use the relatedness score in the learning. This step brings small improvement to the performance of our algorithm. Given the sentence embeddings, we used the exact same training and testing protocol as in BIBREF16 to score each pair of sentences: with two sentence embedding INLINEFORM0 and INLINEFORM1 , we concatenate their component-wise product, INLINEFORM2 and their absolute difference, INLINEFORM3 as the feature representation.",
"Table TABREF35 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly out-performs the winning solutions of the competition, which are heavily feature engineered toward this dataset and several baseline methods, noticeably the dependency-tree RNNs introduced in BIBREF35 , which relies on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset ( INLINEFORM0 error rate vs INLINEFORM1 ). As we hypothesized in previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length in the order of 10s). We would like to point out that Doc2VecC is much faster to train and test comparing to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with Intel i7 2.2Ghz cpu, in comparison to the 2 weeks on GPU required by skip-thought vectors."
],
[
"We introduce a new model architecture Doc2VecC for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure document representation generated by averaging word embeddings capture semantics of document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms not only in testing efficiency, but also in the expressiveness of the generated representations."
]
],
"section_name": [
"Introduction",
"Related Works and Notations",
"Method",
"Corruption as data-dependent regularization",
"Experiments",
"Baselines",
"Sentiment analysis",
"Word analogy",
"Document Classification",
"Semantic relatedness",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6db29a269f42efdb89beabbd9c34bc64102f33af"
],
"answer": [
{
"evidence": [
"We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation learned from reconstructing original document INLINEFORM0 using corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Liebler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remainings; Word2Vec BIBREF1 +IDF, a representation generated through weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset."
],
"extractive_spans": [
"RNNLM BIBREF11"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1ae7eca7804e1547227cce6d43ad9b403f8832ad"
],
"answer": [
{
"evidence": [
"Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, “ceremony” in this example. The embeddings of neighboring words (“opening”, “for”, “the”) provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document (“performance” at position INLINEFORM0 , “praised” at position INLINEFORM1 , and “brazil” at position INLINEFORM2 ). BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization, which brings great performance improvement."
],
"extractive_spans": [
"Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained."
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF25 also proposed the idea of using average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing significant portion of words, and represent the document using only the embeddings of the words remained."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9335468572f5556bcdc53f49d72dc01c47d6814b"
],
"answer": [
{
"evidence": [
"Data dependent regularization. As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to exam the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest INLINEFORM0 norm of embeddings found by different algorithms. The number inside the parenthesis after each word is the number of times this word appears in the learning set. In word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words."
],
"extractive_spans": [],
"free_form_answer": "Informative are those that will not be suppressed by regularization performed.",
"highlighted_evidence": [
"As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words.",
"In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which language models do they compare against?",
"Is their approach similar to making an averaged weighted sum of word vectors, where weights reflect word frequencies?",
"How do they determine which words are informative?"
],
"question_id": [
"52f1a91f546b8a25a5d72325c503ec8f9c72de23",
"bb5697cf352dd608edf119ca9b82a6b7e51c8d21",
"98785bf06e60fcf0a6fe8921edab6190d0c2cec1"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: A new framework for learning document vectors.",
"Table 1: Classification error of a linear classifier trained on various document representations on the Imdb dataset.",
"Table 2: Learning time and representation generation time required by different representation learning algorithms.",
"Table 3: Words with embeddings closest to 0 learned by different algorithms.",
"Figure 2: Accuracy on subset of the Semantic-Syntactic Word Relationship test set. Only questions containing words from the most frequent 30k words are included in the test.",
"Table 4: Top 1 accuracy on the 5 type of semantics and 9 types of syntactic questions.",
"Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.",
"Figure 3: Visualization of document vectors on Wikipedia dataset using t-SNE.",
"Figure 4: Visualization of Wikipedia Doc2VecC vectors using t-SNE.",
"Table 6: Test set results on the SICK semantic relatedness task. The first group of results are from the submission to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group are methods based on LSTM reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015)."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure2-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Figure3-1.png",
"9-Figure4-1.png",
"11-Table6-1.png"
]
} | [
"How do they determine which words are informative?"
] | [
[
"1707.02377-Sentiment analysis-4"
]
] | [
"Informative are those that will not be suppressed by regularization performed."
] | 286 |
1701.06538 | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost. | {
"paragraphs": [
[
"Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , images BIBREF4 , BIBREF5 , and audio BIBREF6 , BIBREF7 . For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.",
"Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for trarining the gating decisions.",
"While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:",
"Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.",
"Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.",
"Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.",
"Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. BIBREF13 use three such terms. These issues can affect both model quality and load-balancing.",
"Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.",
"In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets."
],
[
"Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure FIGREF8 ). All parts of the network are trained jointly by back-propagation.",
"While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers BIBREF15 , as in Figure FIGREF8 . The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix SECREF84 Table TABREF92 ). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost."
],
[
"Since its introduction more than two decades ago BIBREF16 , BIBREF17 , the mixture-of-experts approach has been the subject of much research. Different types of expert architectures hae been proposed such as SVMs BIBREF18 , Gaussian Processes BIBREF19 , BIBREF20 , BIBREF21 , Dirichlet Processes BIBREF22 , and deep networks. Other work has focused on different expert configurations such as a hierarchical structure BIBREF23 , infinite numbers of experts BIBREF24 , and adding experts sequentially BIBREF25 . BIBREF26 suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.",
"The works above concern top-level mixtures of experts. The mixture of experts is the whole model. BIBREF10 introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for computational computation.",
"Our work builds on this use of MoEs as a general purpose neural network component. While BIBREF10 uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity."
],
[
"The Mixture-of-Experts (MoE) layer consists of a set of INLINEFORM0 “expert networks\" INLINEFORM1 , and a “gating network\" INLINEFORM2 whose output is a sparse INLINEFORM3 -dimensional vector. Figure FIGREF8 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.",
"Let us denote by INLINEFORM0 and INLINEFORM1 the output of the gating network and the output of the INLINEFORM2 -th expert network for a given input INLINEFORM3 . The output INLINEFORM4 of the MoE module can be written as follows: DISPLAYFORM0 ",
"We save computation based on the sparsity of the output of INLINEFORM0 . Wherever INLINEFORM1 , we need not compute INLINEFORM2 . In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts\", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix SECREF60 .",
"Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in BIBREF12 . A MoE whose experts have one hidden layer is similar to the block-wise dropout described in BIBREF13 , where the dropped-out layer is sandwiched between fully-activated layers."
],
[
"A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0 ",
"We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1 ",
"We train the gating network by simple back-propagation, along with the rest of the model. If we choose INLINEFORM0 , the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in BIBREF9 with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from BIBREF13 who use boolean gates and a REINFORCE-style approach to train the gating network."
],
[
"On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses INLINEFORM0 out of INLINEFORM1 experts for each example, then for a batch of INLINEFORM2 examples, each expert receives a much smaller batch of approximately INLINEFORM3 examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:",
"In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over INLINEFORM0 devices, and each device processes a batch of size INLINEFORM1 , each expert receives a batch of approximately INLINEFORM2 examples. Thus, we achieve a factor of INLINEFORM3 improvement in expert batch size.",
"In the case of a hierarchical MoE (Section SECREF60 ), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.",
"This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.",
"In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.",
"We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. BIBREF27 describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size."
],
[
"Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes INLINEFORM0 _ INLINEFORM1 _ INLINEFORM2 and INLINEFORM3 _ INLINEFORM4 _ INLINEFORM5 , the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers."
],
[
"We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. BIBREF10 describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. BIBREF13 include a soft constraint on the batch-wise average of each gate.",
"We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss INLINEFORM0 , which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor INLINEFORM1 . This additional loss encourages all experts to have equal importance. DISPLAYFORM0 DISPLAYFORM1 ",
"While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, INLINEFORM0 , which ensures balanced loads. Appendix SECREF51 contains the definition of this function, along with experimental results."
],
[
"This dataset, introduced by BIBREF28 consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.",
"The best previously published results BIBREF2 use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers BIBREF15 , BIBREF29 . The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure FIGREF32 -right.",
"Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure FIGREF8 ). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix SECREF65 .",
"To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.",
"The results of these models are shown in Figure FIGREF32 -left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.",
"In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.",
"We trained our models using TensorFlow BIBREF30 on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.",
"For our baseline models wtih no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix SECREF65 , Table TABREF76 ."
],
[
"On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure FIGREF32 -left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.",
"We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix SECREF78 .",
"Figure FIGREF37 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.",
"Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU."
],
[
"Our model was a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix SECREF84 .",
"We benchmarked our method on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in BIBREF3 : newstest2014 was used as the test set to compare against previous work BIBREF31 , BIBREF32 , BIBREF3 , while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Google's Production English to French data.",
"Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time."
],
[
" BIBREF35 train a single GNMT BIBREF3 model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix SECREF84 for details on model architecture. We train our model on the same dataset as BIBREF35 and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.",
"Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table TABREF50 . The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English INLINEFORM0 Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.",
""
],
[
"This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come."
],
[
"tocsectionAppendices"
],
[
"As discussed in section SECREF4 , for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator INLINEFORM0 of the number of examples assigned to each expert for a batch INLINEFORM1 of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define INLINEFORM2 as the probability that INLINEFORM3 is nonzero, given a new random choice of noise on element INLINEFORM4 , but keeping the already-sampled choices of noise on the other elements. To compute INLINEFORM5 , we note that the INLINEFORM6 is nonzero if and only if INLINEFORM7 is greater than the INLINEFORM8 -greatest element of INLINEFORM9 excluding itself. The probability works out to be: DISPLAYFORM0 ",
"Where INLINEFORM0 means the kth highest component of INLINEFORM1 , excluding component INLINEFORM2 . Simplifying, we get: DISPLAYFORM0 ",
"Where INLINEFORM0 is the CDF of the standard normal distribution. DISPLAYFORM0 ",
"We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor INLINEFORM0 . DISPLAYFORM0 ",
"To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices INLINEFORM0 and INLINEFORM1 to all zeros, which yields no signal and some noise.",
"We trained a set of models with identical architecture (the MoE-256 model described in Appendix SECREF65 ), using different values of INLINEFORM0 and INLINEFORM1 . We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in INLINEFORM2 and INLINEFORM3 , as well as ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.",
"Results are reported in Table TABREF58 . All the combinations containing at least one the two losses led to very similar model quality, where having no loss was much worse. Models with higher values of INLINEFORM0 had lower loads on the most overloaded expert."
],
[
"If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts\", each of which is itself a secondary mixture-of-experts with its own gating network. If the hierarchical MoE consists of INLINEFORM0 groups of INLINEFORM1 experts each, we denote the primary gating network by INLINEFORM2 , the secondary gating networks by INLINEFORM3 , and the expert networks by INLINEFORM4 . The output of the MoE is given by: DISPLAYFORM0 ",
"Our metrics of expert utilization change to the following: DISPLAYFORM0 DISPLAYFORM1 ",
" INLINEFORM0 and INLINEFORM1 deonte the INLINEFORM2 functions for the primary gating network and INLINEFORM3 secondary gating network respectively. INLINEFORM4 denotes the subset of INLINEFORM5 for which INLINEFORM6 .",
"It would seem simpler to let INLINEFORM0 , but this would not have a gradient with respect to the primary gating network, so we use the formulation above."
],
[
"Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer BIBREF15 , BIBREF29 , a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply drouput BIBREF43 to the layer output, dropping each activation with probability INLINEFORM0 , otherwise dividing by INLINEFORM1 . After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow BIBREF37 .",
"Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.",
"The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:",
"MoE-1-Wide: The MoE layer consists of a single \"expert\" containing one ReLU-activated hidden layer of size 4096.",
"MoE-1-Deep: The MoE layer consists of a single \"expert\" containing four ReLU-activated hidden layers, each with size 1024.",
"4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.",
"LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions BIBREF41 . The next timestep of the LSTM receives the projected output. This is identical to one of the models published in BIBREF2 . We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.",
"The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section SECREF3 . Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs, (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in BIBREF2 . For each model, we performed a hyper-parmeter search to find the best dropout probability, in increments of 0.1.",
"To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .",
"We evaluate our model using perplexity on the holdout dataset, used by BIBREF28 , BIBREF2 . We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table TABREF76 . For each model, we report the test perplexity, the computational budget, the parameter counts, the value of INLINEFORM0 , and the computational efficiency.",
"We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 BIBREF41 . MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best INLINEFORM0 for each model, and trained each model for 10 epochs.",
"The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 ."
],
[
"The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.",
"Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.",
"We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:",
"The Adam optimizer BIBREF39 keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set INLINEFORM0 . To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad BIBREF36 .",
"We evaluate our model using perplexity on a holdout dataset. Results are reported in Table TABREF81 . Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing BIBREF40 ."
],
[
"Our model is a modified version of the GNMT model described in BIBREF3 . To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention . All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow BIBREF37 . Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as “wordpieces\") BIBREF42 for inputs and outputs in our system.",
"We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in BIBREF3 .",
"We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use INLINEFORM0 and the hierarchical MoE models use INLINEFORM1 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains INLINEFORM2 parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix SECREF93 .",
"We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section UID14 , not the scheme from Appendix SECREF93 . The MoE layers in the encoder and decoder are non-hierarchical MoEs with INLINEFORM0 experts, and INLINEFORM1 . Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.",
"We trained our networks using the Adam optimizer BIBREF39 . The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to BIBREF3 , we applied dropout BIBREF43 to the output of all embedding, LSTM and MoE layers, using INLINEFORM0 . Training was done synchronously on a cluster of up to 64 GPUs as described in section SECREF3 . Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.",
"To ensure balanced expert utilization we set INLINEFORM0 and INLINEFORM1 , as described in Section SECREF4 and Appendix SECREF51 .",
"We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in BIBREF31 .",
"Tables TABREF42 , TABREF43 and TABREF44 in Section SECREF39 show comparisons of our results to other published methods. Figure FIGREF91 shows test perplexity as a function of number of words in the (training data's) source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.",
"We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table TABREF92 . For example, one expert is used when the indefinite article “a\" introduces the direct object in a verb phrase indicating importance or leadership."
],
[
"Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.",
"Recall that we define the softmax gating function to be: DISPLAYFORM0 ",
"To obtain a sparse gating vector, we multiply INLINEFORM0 component-wise with a sparse mask INLINEFORM1 and normalize the output. The mask itself is a function of INLINEFORM2 and specifies which experts are assigned to each input example: DISPLAYFORM0 ",
"To implement top-k gating in this formulation, we would let INLINEFORM0 , where: DISPLAYFORM0 ",
"To force each expert to receive the exact same number of examples, we introduce an alternative mask function, INLINEFORM0 , which operates over batches of input vectors. Instead of keeping the top INLINEFORM1 values per example, we keep the top INLINEFORM2 values per expert across the training batch, where INLINEFORM3 , so that each example is sent to an average of INLINEFORM4 experts. DISPLAYFORM0 ",
"As our experiments suggest and also observed in BIBREF38 , using a batchwise function during training (such as INLINEFORM0 ) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector INLINEFORM1 of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time: DISPLAYFORM0 ",
"To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical. DISPLAYFORM0 "
],
[
"The attention mechanism described in GNMT BIBREF3 involves a learned “Attention Function\" INLINEFORM0 which takes a “source vector\" INLINEFORM1 and a “target vector\" INLINEFORM2 , and must be computed for every source time step INLINEFORM3 and target time step INLINEFORM4 . In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size INLINEFORM5 . It can be expressed as: DISPLAYFORM0 ",
"Where INLINEFORM0 and INLINEFORM1 are trainable weight matrices and INLINEFORM2 is a trainable weight vector.",
"For performance reasons, in our models, we used a slightly different attention function: DISPLAYFORM0 ",
"With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions."
]
],
"section_name": [
"Conditional Computation",
"Our Approach: The Sparsely-Gated Mixture-of-Experts Layer",
"Related work on Mixtures of Experts",
"The Structure of the Mixture-of-Experts layer",
"Gating Network",
"The Shrinking Batch Problem",
"Network Bandwidth",
"Balancing Expert Utilization",
"1 Billion Word Language Modeling Benchmark",
"100 Billion Word Google News Corpus",
"Machine Translation (Single Language Pair)",
"Multilingual Machine Translation",
"Conclusion",
"Appendices",
"Load-Balancing Loss",
"Hierachical Mixture of Experts",
"1 Billion Word Language Modeling Benchmark - Experimental Details",
"100 Billion Word Google News Corpus - Experimental Details",
"Machine Translation - Experimental Details",
"Strictly Balanced Gating",
"Attention Function"
]
} | {
"answers": [
{
"annotation_id": [
"c24cfd0839faf733f7671147bea2e508dc3f0869"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1aeb9c43d0169356e7c33c2abe1301084252deea"
],
"answer": [
{
"evidence": [
"Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time."
],
"extractive_spans": [
"1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3",
"perplexity scores are also better",
"On the Google Production dataset, our model achieved 1.01 higher test BLEU score"
],
"free_form_answer": "",
"highlighted_evidence": [
"As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"63a2c138011f68edde041195331abe0c5176e64e"
],
"answer": [
{
"evidence": [
"The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .",
"In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger and experts. Details can be found in Appendix UID77 . Results of these three models form the bottom line of Figure FIGREF32 -right. Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.",
"FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C."
],
"extractive_spans": [],
"free_form_answer": "Perpexity is improved from 34.7 to 28.0.",
"highlighted_evidence": [
"The two models achieved test perplexity of INLINEFORM0 and INLINEFORM1 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table TABREF76 . The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by INLINEFORM2 .",
" Table TABREF33 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.",
"FLOAT SELECTED: Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"79c6e303c769cf6b075f42fc27820b4a2f8ee791"
],
"answer": [
{
"evidence": [
"Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains INLINEFORM0 parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section UID14 ) with INLINEFORM1 for the ordinary MoE layers and INLINEFORM2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M."
],
"extractive_spans": [
"varied the number of experts between models"
],
"free_form_answer": "",
"highlighted_evidence": [
"We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c43b627b74d1c4b68aa374fa022b32080faf292f"
],
"answer": [
{
"evidence": [
"A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0",
"We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1"
],
"extractive_spans": [
"DISPLAYFORM0",
"DISPLAYFORM0 DISPLAYFORM1"
],
"free_form_answer": "",
"highlighted_evidence": [
"A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0\n\nWe add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1",
"A simple choice of non-sparse gating function BIBREF17 is to multiply the input by a trainable weight matrix INLINEFORM0 and then apply the INLINEFORM1 function. DISPLAYFORM0\n\nWe add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to INLINEFORM0 (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix SECREF51 . The amount of noise per component is controlled by a second trainable weight matrix INLINEFORM1 . DISPLAYFORM0 DISPLAYFORM1"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Approximately how much computational cost is saved by using this model?",
"What improvement does the MOE model make over the SOTA on machine translation?",
"What improvement does the MOE model make over the SOTA on language modelling?",
"How is the correct number of experts to use decided?",
"What equations are used for the trainable gating network?"
],
"question_id": [
"a85698f19a91ecd3cd3a90a93a453d2acebae1b7",
"af073d84b8a7c968e5822c79bef34a28655886de",
"e8fcfb1412c3b30da6cbc0766152b6e11e17196c",
"0cd90e5b79ea426ada0203177c28812a7fc86be5",
"f01a88e15ef518a68d8ca2bec992f27e7a3a6add"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.",
"Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.",
"Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.",
"Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep).",
"Table 2: Results on WMT’14 En→ Fr newstest2014 (bold values represent best results).",
"Table 4: Results on the Google Production En→ Fr dataset (bold values represent best results).",
"Table 5: Multilingual Machine Translation (bold values represent best results).",
"Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).",
"Table 8: Model comparison on 100 Billion Word Google News Dataset",
"Figure 4: Perplexity on WMT’14 En→ Fr (left) and Google Production En→ Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.",
"Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT’14 En→ Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)i, and show the words surrounding the corresponding positions in the input sentences."
],
"file": [
"2-Figure1-1.png",
"6-Figure2-1.png",
"7-Table1-1.png",
"7-Figure3-1.png",
"8-Table2-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"15-Table7-1.png",
"16-Table8-1.png",
"18-Figure4-1.png",
"18-Table9-1.png"
]
} | [
"What improvement does the MOE model make over the SOTA on language modelling?"
] | [
[
"1701.06538-1 Billion Word Language Modeling Benchmark - Experimental Details-11",
"1701.06538-1 Billion Word Language Modeling Benchmark-5",
"1701.06538-7-Table1-1.png"
]
] | [
"Perpexity is improved from 34.7 to 28.0."
] | 288 |
1905.10810 | Evaluation of basic modules for isolated spelling error correction in Polish texts | Spelling error correction is an important problem in natural language processing, as a prerequisite for good performance in downstream tasks as well as an important feature in user-facing applications. For texts in Polish language, there exist works on specific error correction solutions, often developed for dealing with specialized corpora, but not evaluations of many different approaches on big resources of errors. We begin to address this problem by testing some basic and promising methods on PlEWi, a corpus of annotated spelling extracted from Polish Wikipedia. These modules may be further combined with appropriate solutions for error detection and context awareness. Following our results, combining edit distance with cosine distance of semantic vectors may be suggested for interpretable systems, while an LSTM, particularly enhanced by ELMo embeddings, seems to offer the best raw performance. | {
"paragraphs": [
[
"Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in making spelling of their texts correct.",
"Because of the lack of tests of many common spelling correction methods for Polish, it is useful to establish how they perform in a simple scenario. We constrain ourselves to the pure task of isolated correction of non-word errors. They are traditionally separated in error correction literature BIBREF0 . Non-word errors are here incorrect word forms that not only differ from what was intended, but also do not constitute another, existing word themselves. Much of the initial research on error correction focused on this simple task, tackled without means of taking the context of the nearest words into account.",
"It is true that, especially in the case of neural networks, it is often possible and desirable to combine problems of error detection, correction and context awareness into one task trained with a supervised training procedure. In language correction research for English language also grammatical and regular spelling errors have been treated uniformly with much success BIBREF1 .",
"However, when more traditional methods are used, because of their predictability and interpretability for example, one can mix and match various approaches to dealing with the subproblems of detection, correction and context handling (often equivalent to employing some kind of a language model). We call it a modular approach to building spelling error correction systems. There is recent research where this paradigm was applied, interestingly, to convolutional networks trained separately for various subtasks BIBREF2 . In similar setups it is more useful to assess abilities of various solutions in isolation. The exact architecture of a spelling correction system should depend on characteristics of texts it will work on.",
"Similar considerations eliminated from our focus handcrafted solutions for the whole spelling correction pipeline, primarily the LanguageTool BIBREF3 . Its performance in fixing spelling of Polish tweets was already tested BIBREF4 . For our purposes it would be given an unfair advantage, since it is a rule-based system making heavy use of words in context of the error."
],
[
"Published work on language correction for Polish dates back at least to 1970s, when simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 .",
"These existing works pointed out more general, potentially useful qualities specific to spelling errors in Polish language texts. It is, primarily, the problem of leaving out diacritical signs, or, more rarely, adding them in wrong places. This phenomenon stems from using a variant of the US keyboard layout, where combinations of AltGr with some alphabetic keys produces characters unique to Polish. When the user forgets or neglects to press the AltGr key, typos such as writing *olowek instead of ołówek appear. In fact, BIBREF4 managed to get substantial performance on Twitter corpus by using this ”diacritical swapping” alone."
],
[
"The methods that we evaluated are baselines are the ones we consider to be basic and with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving a special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.",
"Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters."
],
[
"A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors. This is based on the observation that trained vectors models of distributional semantics contain also representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.",
"The distance between two tokens INLINEFORM0 and INLINEFORM1 is thus defined as INLINEFORM2 ",
"Here INLINEFORM0 is just Levenshtein distance between strings, and INLINEFORM1 – cosine distance between vectors. INLINEFORM2 denotes the word vector for INLINEFORM3 . Both distance metrics are in our case roughly in the range [0,1] thanks to the scaling of edit distance performed automatically by Apache Lucene. We used a pretrained set of word embeddings of Polish BIBREF12 , obtained with the flavor word2vec procedure using skipgrams and negative sampling BIBREF13 ."
],
[
"Another powerful approach, if conceptually simple in linguistic terms, is using a character-based recurrent neural network. Here, we test uni- and bidirectional Long Short-Term Memory networks BIBREF14 that are fed characters of the error as their input and are expected to output its correct form, character after character. This is similar to traditional solutions conceptualizing the spelling error as a chain of characters, which are used as evidence to predict the most likely chain of replacements (original characters). This was done with n-gram methods, Markov chains and other probabilistic models BIBREF15 . Since nowadays neural networks enjoy a large awareness as an element of software infrastructure, with actively maintained packages readily available, their evaluation seems to be the most practically useful. We used the PyTorch BIBREF16 implementation of LSTM in particular.",
"The bidirectional version BIBREF17 of LSTM reads the character chains forward and backwards at the same time. Predictions from networks running in both directions are averaged.",
"In order to provide the network an additional, broad picture peek at the whole error form we also evaluated a setup where the internal state of LSTM cells, instead of being initialized randomly, is computed from an ELMo embedding BIBREF18 of the token. The ELMo embedder is capable of integrating linguistic information carried by the whole form (probably often not much in case of errors), as well as the string as a character chain. The latter is processed with a convolutional neural network. How this representation is constructed is informed by the whole corpus on which the embedder was trained. The pretrained ELMo model that we used BIBREF19 was trained on Wikipedia and Common Crawl corpora of Polish.",
"The ELMo embedding network outputs three layers as matrices, which are supposed to reflect subsequent compositional layers of language, from phonetic phenomena at the bottom to lexical ones at the top. A weighted sum of these layers is computed, with weights trained along with the LSTM error-correcting network. Then we apply a trained linear transformation, followed by INLINEFORM0 non-linearity: INLINEFORM1 ",
"(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional."
],
[
"PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors.",
"The corpus features texts that are descriptive rather than conversational, contain relatively many proper names and are more likely to have been at least skimmed by the authors before submitting for online publication. Error cases provided by PlEWi are, therefore, not a balanced representation of spelling errors in written Polish language. PlEWi does have the advantage of scale in comparison to existing literature, such as BIBREF4 operating on a set of only 740 annotated errors in tweets.",
"All methods were tested on a test subset of 25% of cases, with 75% left for training (where needed) and 5% for development.",
"The methods that required training – namely recurrent neural networks – had their loss measured as cross-entropy loss measure between correct character labels and predictions. This value was minimized with Adam algorithm BIBREF22 . The networks were trained for 35 epochs."
],
[
"The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.",
"On the other hand, the vector distance method was able to bring a discernible improvement over pure Levenshtein distance, comparable even with the most basic LSTM. It is possible that assigning more fine-tuned weights to edit distance and semantic distance would make the quality of predictions even higher. The idea of using vector space measurements explicitly can be also expanded if we were to consider the problem of contextualizing corrections. For example, the semantic distance of proposed corrections to the nearest words is likely to carry much information about their appropriateness. Looking from another angle, searching for words that seem semantically off in context may be a good heuristic for detecting errors that are not nonword (that is, they lead to wrong forms appearing in text which are nevertheless in-vocabulary).",
"The good performance of recurrent network methods is hardly a surprise, given observed effectiveness of neural networks in many NLP tasks in the recent decade. It seems that bidirectional LSTM augmented with ELMo may already hit the limit for correcting Polish spelling errors without contextual information. While it improves accuracy in comparison to LSTM initialized withrandom noise, it makes the test cross-entropy slightly worse, which hints at overfitting. The perplexity measures actually increase sharply for more sophisticated architectures. Perplexity should show how little probability is assigned by the model to true answers. We measure it as INLINEFORM0 ",
"where INLINEFORM0 is a sequence of INLINEFORM1 characters, forming the correct version of the word, and INLINEFORM2 is the estimated probability of the INLINEFORM3 th character, given previous predicted characters and the incorrect form. The observed increase of perplexity for increasingly accurate models is most likely due to more refined predicted probability distributions, which go beyond just assigning the bulk of probability to the best answer.",
"Interesting insights can be gained from weights assigned by optimization to layers of ELMo network, which are taken as the word form embedding (Table TABREF5 ). The first layer, and the one that is nearest to input of the network, is given relatively the least importance, while the middle one dominates both others taken together. This suggests that in error correction, at least for Polish, the middle level of morphemes and other characteristic character chunks is more important than phenomena that are low-level or tied to some specific words. This observation should be taken into account in further research on practical solutions for spelling correction."
],
[
"Among the methods tested the bidirectional LSTM, especially initialized by ELMo embeddings, offers the best accuracy and raw performance. Adding ELMo to a straightforward PyTorch implementation of LSTM may be easier now than at the time of performing our tests, as since then the authors of ELMoForManyLangs package BIBREF19 improved their programmatic interface. However, if a more interpretable and explainable output is required, some version of vector distance combined with edit distance may be the best direction. It should be noted that this method produces multiple candidate corrections with their similarity scores, as opposed to only one “best guess“ correction that can be obtained from a character-based LSTM. This is important in applications where it is up to humans to the make the final decision, and they are only to be aided by a machine.",
"It is desirable for further reasearch to expand the corpus material into a wider and more representative set of texts. Nevertheless, the solution for any practical case has to be tailored to its characteristic error patterns. Works on language correction for English show that available corpora can be ”boosted” BIBREF1 , i.e. expanded by generating new errors consistent with a generative model inferred from the data. This may greatly aid in developing models that are dependent on learning from error corpora.",
"A deliberate omission in this paper are the elements accompanying most real-word error correction solutions. Some fairly obvious approaches to integrating evidence from context include n-grams and Markov chains, although the possibility of using measurements in spaces of semantic vectors was already mentioned in this article. Similarly, non-word errors can be easily detected with comparing tokens against reference vocabulary, but in practice one should have ways of detecting mistakes masquerading as real words and fixing bad segmentation (tokens that are glued together or improperly separated). Testing how performant are various methods for dealing with these problems in Polish language is left for future research."
]
],
"section_name": [
"Introduction",
"Problems of spelling correction for Polish",
"Baseline methods",
"Vector distance",
"Recurrent neural networks",
"Experimental setup",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"91f989a06bf11f012960b7cdad07de1c33d7d969"
],
"answer": [
{
"evidence": [
"The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in existing literature. This might be explained by the already mentioned feature of Wikipedia edits, which can be expected to be to some degree self-reviewed before submission. This can very well limit the number of most trivial mistakes.",
"FLOAT SELECTED: Table 1: Test results for all the methods used. The loss measure is cross-entropy."
],
"extractive_spans": [],
"free_form_answer": "Accuracy of best interpretible system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818.",
"highlighted_evidence": [
"The experimental results are presented in Table TABREF4 .",
"FLOAT SELECTED: Table 1: Test results for all the methods used. The loss measure is cross-entropy."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"645cb2f15db2bd0a712c0159a71fd64f152c98d3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1afa01b50f65043288ee2dc5ca7f521c49bf4694"
],
"answer": [
{
"evidence": [
"PlEWi BIBREF20 is an early version of WikEd BIBREF21 error corpus, containing error type annotations allowing us to select only non-word errors for evaluation. Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique. The corpus contains data extracted from histories of page versions of Polish Wikipedia. An algorithm designed by the corpus author determined where the changes were correcting spelling errors, as opposed to expanding content and disagreements among Wikipedia editors."
],
"extractive_spans": [
"[error, correction] pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"abc39352a914939a293c4c3a9ea06fc6ee432add"
],
"answer": [
{
"evidence": [
"The methods that we evaluated are baselines are the ones we consider to be basic and with moderate potential of yielding particularly good results. Probably the most straightforward approach to error correction is selecting known words from a dictionary that are within the smallest edit distance from the error. We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 . It is a version of edit distance that treats deletions, insertions and replacements as adding one unit distance, without giving a special treatment to character swaps. The SGJP – Grammatical Dictionary of Polish BIBREF10 was used as the reference vocabulary.",
"Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 . Namely, from the incorrect form we try to produce all strings obtainable by either adding or removing diacritical marks from characters. We then exclude options that are not present in SGJP, and select as the correction the one within the smallest edit distance from the error. It is possible for the number of such diacritically-swapped options to become very big. For example, the token Modlin-Zegrze-Pultusk-Różan-Ostrołęka-Łomża-Osowiec (taken from PlEWi corpus of spelling errors, see below) can yield over INLINEFORM0 states with this method, such as Módłiń-Żęgrzę-Pułtuśk-Roźąń-Óśtróleką-Lómzą-Óśówięć. The actual correction here is just fixing the ł in Pułtusk. Hence we only try to correct in this way tokens that are shorter than 17 characters.",
"A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors. This is based on the observation that trained vectors models of distributional semantics contain also representations of spelling errors, if they were not pruned. Their representations tend to be similar to those of their correct counterparts. For example, the token enginir will appear in similar contexts as engineer, and therefore will be assigned a similar vector embedding.",
"(applied cellwise) in order to obtain the initial setting of parameters for the main LSTM. Our ELMo-augmented LSTM is bidirectional."
],
"extractive_spans": [
"Levenshtein distance metric BIBREF8",
"diacritical swapping",
"Levenshtein distance is used in a weighted sum to cosine distance between word vectors",
"ELMo-augmented LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the Levenshtein distance metric BIBREF8 implemented in Apache Lucene library BIBREF9 .",
"Another simple approach is the aforementioned diacritical swapping, which is a term that we introduce here for referring to a solution inspired by the work of BIBREF4 .",
"A promising method, adapted from work on correcting texts by English language learners BIBREF11 , expands on the concept of selecting a correction nearest to the spelling error according to some notion of distance. Here, the Levenshtein distance is used in a weighted sum to cosine distance between word vectors.",
"Our ELMo-augmented LSTM is bidirectional."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b632e06c7bb1119cf80527670e985d1f07f6e97d"
],
"answer": [
{
"evidence": [
"Published work on language correction for Polish dates back at least to 1970s, when simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 ."
],
"extractive_spans": [
"spellchecking mammography reports and tweets BIBREF7 , BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"Spelling correction tests described in literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to specific problems of corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is a repetitive work done in relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application was inspired by problems with morphology of Polish language BIBREF3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?",
"What solutions are proposed for error detection and context awareness?",
"How is PIEWi annotated?",
"What methods are tested in PIEWi?",
"Which specific error correction solutions have been proposed for specialized corpora in the past?"
],
"question_id": [
"44104668796a6ca10e2ea3ecf706541da1cec2cf",
"bbcd77aac74989f820e84488c52f3767d0405d51",
"6a31bd676054222faf46229fc1d283322478a020",
"e4d16050f0b457c93e590261732a20401def9cde",
"b25e7137f49f77e7e67ee2f40ca585d3a377f8b5"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Test results for all the methods used. The loss measure is cross-entropy.",
"Table 2: Discovered optimal weights for summing layers of ELMo embedding for initializing an error-correcting LSTM. The layers are numbered from the one that directly processes character and word input to the most abstract one."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png"
]
} | [
"What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?"
] | [
[
"1905.10810-3-Table1-1.png",
"1905.10810-Results-0"
]
] | [
"Accuracy of best interpretible system was 0.3945 while accuracy of LSTM-ELMo net was 0.6818."
] | 289 |
1910.07481 | Using Whole Document Context in Neural Machine Translation | In Machine Translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a simple yet promising approach to add contextual information in Neural Machine Translation. We present a method to add source context that capture the whole document with accurate boundaries, taking every word into account. We provide this additional information to a Transformer model and study the impact of our method on three language pairs. The proposed approach obtains promising results in the English-German, English-French and French-English document-level translation tasks. We observe interesting cross-sentential behaviors where the model learns to use document-level information to improve translation coherence. | {
"paragraphs": [
[
"Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target sentences by the decoder. NMT has outperformed conventional statistical machine translation (SMT) by a significant margin over the past years, benefiting from gating and attention techniques. Various models have been proposed based on different architectures such as RNN BIBREF0, CNN BIBREF2 and Transformer BIBREF1, the latter having achieved state-of-the-art performances while significantly reducing training time.",
"However, by considering sentence pairs separately and ignoring broader context, these models suffer from the lack of valuable contextual information, sometimes leading to inconsistency in a translated document. Adding document-level context helps to improve translation of context-dependent parts. Previous study BIBREF3 showed that such context gives substantial improvement in the handling of discourse phenomena like lexical disambiguation or co-reference resolution.",
"Most document-level NMT approaches focus on adding contextual information by taking into account a set of sentences surrounding the current pair BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. While giving significant improvement over the context-agnostic versions, none of these studies consider the whole document with well delimited boundaries. The majority of these approaches also rely on structural modification of the NMT model BIBREF6, BIBREF7, BIBREF8, BIBREF9. To the best of our knowledge, there is no existing work considering whole documents without structural modifications.",
"Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment source data by adding document information to each sentence of a source corpus. This document information corresponds to the belonging document of a sentence and is computed prior to training, it takes every document word into account. Our approach focuses on pre-processing and consider whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. We obtain important improvements over the baseline and present evidences that this approach helps to resolve cross-sentence ambiguities."
],
[
"Interest in considering the whole document instead of a set of sentences preceding the current pair lies in the necessity for a human translator to account for broader context in order to keep a coherent translation. The idea of representing and using documents for a model is interesting, since the model could benefit from information located before or after the current processed sentence.",
"Previous work on document-level SMT started with cache based approaches, BIBREF11 suggest a conjunction of dynamic, static and topic-centered cache. More recent work tend to focus on strategies to capture context at the encoder level. Authors of BIBREF5 propose an auxiliary context source with a RNN dedicated to encode contextual information in addition to a warm-start of encoder and decoder states. They obtain significant gains over the baseline.",
"A first extension to attention-based neural architectures is proposed by BIBREF6, they add an encoder devoted to capture the preceding source sentence. Authors of BIBREF7 introduce a hierarchical attention network to model contextual information from previous sentences. Here the attention allows dynamic access to the context by focusing on different sentences and words. They show significant improvements over a strong NMT baseline. More recently, BIBREF9 extend Transformer architecture with an additional encoder to capture context and selectively merge sentence and context representations. They focus on co-reference resolution and obtain improvements in overall performances.",
"The closest approach to ours is presented by BIBREF4, they simply concatenate the previous source sentence to the one being translated. While they do not make any structural modification to the model, their method still does not take the whole document into account."
],
[
"We propose to use the simplest method to estimate document embeddings. The approach is called SWEM-aver (Simple Word Embedding Model – average) BIBREF12. The embedding of a document $k$ is computed by taking the average of all its $N$ word vectors (see Eq. DISPLAY_FORM2) and therefore has the same dimension. Out of vocabulary words are ignored.",
"Despite being straightforward, our approach raises the need of already computed word vectors to keep consistency between word and document embeddings. Otherwise, fine-tuning embeddings as the model is training would shift them in a way that totally wipes off the connection between document and word vectors.",
"To address this problem, we adopt the following approach: First, we train a baseline Transformer model (noted Baseline model) from which we extract word embeddings. Then, we estimate document embeddings using the SWEM-aver method and train an enhanced model (noted Document model) benefiting from these document embeddings and the extracted word embeddings. During training, the Document model does not fine-tune its embeddings to preserve the relation between words and document vectors. It should be noted that we could directly use word embeddings extracted from another model such as Word2Vec BIBREF13, in practice we obtain better results when we get these vectors from a Transformer model. In our case, we simply extract them from the Baseline after it has been trained.",
"Using domain adaptation ideas BIBREF14, BIBREF15, BIBREF16, we associate a tag to each sentence of the source corpus, which represents the document information. This tag takes the form of an additional token placed at the first position in the sentence and corresponds to the belonging document of the sentence (see Table TABREF1). The model considers the tag as an additional word and replace it with the corresponding document embedding. The Baseline model is trained on a standard corpus that does not contain document tags, while the Document model is trained on corpus that contains document tags.",
"The proposed approach requires strong hypotheses about train and test data. The first downfall is the need for well defined document boundaries that allow to mark each sentence with its document tag. The second major downfall is the need to compute an embedding vector for each new document fed in the model, adding a preprocessing step before inference time."
],
[
"We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment.",
"Translation tasks are English to German, proposed in the first document-level translation task at WMT 2019 BIBREF17, English to French and French to English, following the IWSLT translation task BIBREF18."
],
[
"Table TABREF4 describes the data used for the English-German language pair. These corpora correspond to the WMT 2019 document-level translation task. Table TABREF5 describes corpora for the English-French language pair, the same data is used for both translation directions.",
"For the English-German pair, only 10.4% (3.638M lines) of training data contains document boundaries. For English-French pair, we restricted the total amount of training data in order to keep 16.1% (602K lines) of document delimited corpora. To achieve this we randomly sampled 10% of the ParaCrawl V3. It means that only a fraction of the source training data contains document context. The enhanced model learns to use document information only when it is available.",
"All test sets contain well delimited documents, Baseline models are evaluated on standard corpora while Document models are evaluated on the same standard corpora that have been augmented with document context. We evaluate the English-German systems on newstest2017, newstest2018 and newstest2019 where documents consist of newspaper articles to keep consistency with the training data. English to French and French to English systems are evaluated over IWSLT TED tst2013, tst2014 and tst2015 where documents are transcriptions of TED conferences (see Table TABREF5).",
"Prior to experiments, corpora are tokenized using Moses tokenizer BIBREF19. To limit vocabulary size, we adopt the BPE subword unit approach BIBREF20, through the SentencePiece toolkit BIBREF21, with 32K rules."
],
[
"We use the OpenNMT framework BIBREF22 in its TensorFlow version to create and train our models. All experiments are run on a single NVIDIA V100 GPU. Since the proposed approach relies on a preprocessing step and not on structural enhancement of the model, we keep the same Transformer architecture in all experiments. Our Transformer configuration is similar to the baseline of BIBREF1 except for the size of word and document vectors that we set to $d_{model} = 1024$, these vectors are fixed during training. We use $N = 6$ as the number of encoder layers, $d_{ff} = 2048$ as the inner-layer dimensionality, $h = 8$ attention heads, $d_k = 64$ as queries and keys dimension and $Pdrop = 0.1$ as dropout probability. All experiments, including baselines, are run over 600k training steps with a batch size of approximately 3000 tokens.",
"For all language pairs we trained a Baseline and a Document model. The Baseline is trained on a standard parallel corpus and is not aware of document embeddings, it is blind to the context and cannot link the sentences of a document. The Document model uses extracted word embeddings from the Baseline as initialization for its word vectors and also benefits from document embeddings that are computed from the extracted word embeddings. It is trained on the same corpus as the Baseline one, but the training corpus is augmented with (see Table TABREF1) and learns to make use of the document context.",
"The Document model does not consider its embeddings as tunable parameters, we hypothesize that fine-tuning word and document vectors breaks the relation between them, leading to poorer results. We provide evidence of this phenomena with an additional system for the French-English language pair, noted Document+tuning (see Table TABREF7) that is identical to the Document model except that it adjusts its embeddings during training.",
"The evaluated models are obtained by taking the average of their last 6 checkpoints, which were written at 5000 steps intervals. All experiments are run 8 times with different seeds to ensure the statistical robustness of our results. We provide p-values that indicate the probability of observing similar or more extreme results if the Document model is actually not superior to the Baseline."
],
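The training-details block above states that every configuration is run 8 times with different seeds and that p-values test whether the Document model is truly superior to the Baseline. The paper does not say which statistical test is used, so the sketch below is only one plausible choice (a one-sided Mann-Whitney U test over per-seed BLEU scores); the score lists are purely illustrative numbers, not results from the paper.

```python
from scipy.stats import mannwhitneyu

# Illustrative BLEU scores from 8 seeds per system on one test set (made-up values).
baseline_bleu = [39.5, 39.7, 39.4, 39.6, 39.8, 39.5, 39.6, 39.7]
document_bleu = [40.3, 40.5, 40.2, 40.6, 40.4, 40.3, 40.5, 40.4]

# One-sided test: H0 = "the Document model is not superior to the Baseline".
stat, p_value = mannwhitneyu(document_bleu, baseline_bleu, alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")
```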
[
"Table TABREF6 presents results associated to the experiments for the English to German translation task, models are evaluated on the newstest2017, neswtest2018 and newstest2019 test sets. Table TABREF7 contains results for both English to French and French to English translation tasks, models are evaluated on the tst2013, tst2014 and tst2015 test sets.",
"En$\\rightarrow $De: The Baseline model obtained State-of-The-Art BLEU and TER results according to BIBREF23, BIBREF24. The Document system shows best results, up to 0.85 BLEU points over the Baseline on the newstest2019 corpus. It also surpassed the Baselinee by 0.18 points on the newstest2017 with strong statistical significance, and by 0.15 BLEU points on the newstest2018 but this time with no statistical evidence. These encouraging results prompted us to extend experiments to another language pair: English-French.",
"En$\\rightarrow $Fr: The Document system obtained the best results considering all metrics on all test sets with strong statistical evidence. It surpassed the Baseline by 1.09 BLEU points and 0.85 TER points on tst2015, 0.75 BLEU points and 0.76 TER points on tst2014, and 0.48 BLEU points and 0.68 TER points on tst2013.",
"Fr$\\rightarrow $En: Of all experiments, this language pair shows the most important improvements over the Baseline. The Document model obtained substantial gains with very strong statistical evidence on all test sets. It surpassed the Baseline model by 1.81 BLEU points and 1.02 TER points on tst2015, 1.50 BLEU points and 0.96 TER points on tst2014, and 1.29 BLEU points and 0.83 TER points on tst2013.",
"The Document+tuning system, which only differs from the fact that it tunes its embeddings, shows little or no improvement over the Baseline, leading us to the conclusion that the relation between word and document embeddings described by Eq. DISPLAY_FORM2 must be preserved for the model to fully benefit from document context."
],
[
"In this analysis we present some of the many cases that suggest the Document model can handle ambiguous situations. These examples are often isolated sentences where even a human translator could not predict the good translation without looking at the document, making it almost impossible for the Baseline model which is blind to the context. Table TABREF10 contains an extract of these interesting cases for the French-English language pair.",
"Translation from French to English is challenging and often requires to take the context into account. The personal pronoun \"lui\" can refer to a person of feminine gender, masculine gender or even an object and can therefore be translated into \"her\", \"him\" or \"it\". The first example in Table TABREF10 perfectly illustrate this ambiguity: the context clearly indicates that \"lui\" in the source sentence refers to \"ma fille\", which is located three sentences above, and should be translated into \"her\". In this case, the Baseline model predict the personal pronoun \"him\" while the Document model correctly predicts \"her\". It seems that the Baseline model does not benefit from any valuable information in the source sentence. Some might argue that the source sentence actually contains clues about the correct translation, considering that \"robe à paillettes\" (\"sparkly dress\") and \"baguette magique\" (\"magic wand\") probably refer to a little girl, but we will see that the model makes similar choices in more restricted contexts. This example is relevant mainly because the actual reference to the subject \"ma fille\" is made long before the source sentence.",
"The second example in Table TABREF10 is interesting because none of our models correctly translate the source sentence. However, we observe that the Baseline model opts for a literal translation of \"je peux faire le poirier\" (\"I can stand on my head\") into \"I can do the pear\" while the Document model predicts \"I can wring\". Even though these translations are both incorrect, we observe that the Document model makes a prediction that somehow relates to the context: a woman talking about her past disability, who has become more flexible thanks to yoga and can now twist her body.",
"The third case in table TABREF10 is a perfect example of isolated sentence that cannot be translated correctly with no contextual information. This example is tricky because the word \"Elle\" would be translated into \"She\" in most cases if no additional information were provided, but here it refers to \"la conscience\" (\"consciousness\") from the previous sentence and must be translated into \"It\". As expected the Baseline model does not make the correct guess and predicts the personal pronoun \"She\" while the Document model correctly predicts \"It\". This example present a second difficult part, the word \"son\" from the source sentence is ambiguous and does not, in itself, inform the translator if it must be translated into \"her\", \"his\" or \"its\". With contextual information we know that it refers to \"[le] monde physique\" (\"[the] physical world\") and that the correct choice is the word \"its\". Here the Baseline incorrectly predicts \"her\", possibly because of its earlier choice for \"She\" as the subject. The Document model makes again the correct translation.",
"According to our results (see Table TABREF7), the English-French language pair also benefits from document-level information but to a lesser extent. For this language pair, ambiguities about personal pronouns are less frequent. Other ambiguous phenomena like the formal mode (use of \"vous\" instead of \"tu\") appear. TableTABREF11 presents an example of this kind of situation where the word \"You\" from the source sentence does not indicate if the correct translation is \"Vous\" or \"Tu\". However it refers to the narrator of the story who is an old police officer. In this case, it is very likely that the use of formal mode is the correct translation. The Baseline model incorrectly predicts \"Tu\" and the Document model predicts \"Vous\".",
""
],
[
"In this work, we presented a preliminary study of a simple approach for document-level translation. The method allows to benefit from the whole document context at the sentence level, leading to encouraging results. In our experimental setup, we observed improvement of translation outcomes up to 0.85 BLEU points in the English to German translation task and exceeding 1 BLEU point in the English to French and French to English translation tasks. Looking at the translation outputs, we provided evidence that the approach allows NMT models to disambiguate complex situations where the context is absolutely necessary, even for a human translator.",
"The next step is to go further by investigating more elaborate document embedding approaches and to bring these experiments to other languages (e.g.: Asian, Arabic, Italian, Spanish, etc.). To consider a training corpus with a majority of document delimited data is also very promising."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Experiments",
"Experiments ::: Training and test sets",
"Experiments ::: Training details",
"Experiments ::: Results",
"Experiments ::: Manual Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"408e8c7aa8047ab454e61244dddecc43adcd7511"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001."
],
"extractive_spans": [],
"free_form_answer": "French-English",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1bacdb1587b2b671bbe431b57f4662320224f95a"
],
"answer": [
{
"evidence": [
"Contribution: We propose a preliminary study of a generic approach allowing any model to benefit from document-level information while translating sentence pairs. The core idea is to augment source data by adding document information to each sentence of a source corpus. This document information corresponds to the belonging document of a sentence and is computed prior to training, it takes every document word into account. Our approach focuses on pre-processing and consider whole documents as long as they have defined boundaries. We conduct experiments using the Transformer base model BIBREF1. For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. We obtain important improvements over the baseline and present evidences that this approach helps to resolve cross-sentence ambiguities."
],
"extractive_spans": [
"WMT 2019 parallel dataset",
"a restricted dataset containing the full TED corpus from MUST-C BIBREF10",
"sampled sentences from WMT 2019 dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the English-German language pair we use the full WMT 2019 parallel dataset. For the English-French language pair we use a restricted dataset containing the full TED corpus from MUST-C BIBREF10 and sampled sentences from WMT 2019 dataset. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3f35eaf73310dbf6df624b004fe5e620d4ed1432"
],
"answer": [
{
"evidence": [
"We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment."
],
"extractive_spans": [
"BLEU and TER scores"
],
"free_form_answer": "",
"highlighted_evidence": [
"We consider two different models for each language pair: the Baseline and the Document model. We evaluate them on 3 test sets and report BLEU and TER scores. All experiments are run 8 times with different seeds, we report averaged results and p-values for each experiment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"Which language-pair had the better performance?",
"Which datasets were used in the experiment?",
"What evaluation metrics did they use?"
],
"question_id": [
"c1f4d632da78714308dc502fe4e7b16ea6f76f81",
"749a307c3736c5b06d7b605dc228d80de36cbabe",
"102de97c123bb1e247efec0f1d958f8a3a86e2f6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Example of augmented parallel data used to train theDocumentmodel. The source corpus contains document tags while the target corpus remains unchanged.",
"Table 2: Detail of training and evaluation sets for the English-German pair, showing the number of lines, words in English (EN) and words in German (DE). Corpora with document boundaries are denoted by †.",
"Table 3: Detail of training and evaluation sets for the English-French pair in both directions, showing the number of lines, words in English (EN) and words in French (FR). Corpora with document boundaries are denoted by †.",
"Table 4: Results obtained for the English-German translation task, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001.",
"Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER metrics. p-values are denoted by * and correspond to the following values: ∗< .05, ∗∗< .01, ∗∗∗< .001.",
"Table 6: Translation examples for the French-English pair. We took the best models of all runs for both the Baseline and the Document enhanced model",
"Table 7: Translation example for the English-French pair."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png"
]
} | [
"Which language-pair had the better performance?"
] | [
[
"1910.07481-4-Table5-1.png"
]
] | [
"French-English"
] | 291 |
2001.05493 | A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts | Wide usage of social media platforms has increased the risk of aggression, which results in mental stress and affects the lives of people negatively like psychological agony, fighting behavior, and disrespect to others. Majority of such conversations contains code-mixed languages[28]. Additionally, the way used to express thought or communication style also changes from one social media plat-form to another platform (e.g., communication styles are different in twitter and Facebook). These all have increased the complexity of the problem. To solve these problems, we have introduced a unified and robust multi-modal deep learning architecture which works for English code-mixed dataset and uni-lingual English dataset both.The devised system, uses psycho-linguistic features and very ba-sic linguistic features. Our multi-modal deep learning architecture contains, Deep Pyramid CNN, Pooled BiLSTM, and Disconnected RNN(with Glove and FastText embedding, both). Finally, the system takes the decision based on model averaging. We evaluated our system on English Code-Mixed TRAC 2018 dataset and uni-lingual English dataset obtained from Kaggle. Experimental results show that our proposed system outperforms all the previous approaches on English code-mixed dataset and uni-lingual English dataset. | {
"paragraphs": [
[
"The exponential increase of interactions on the various social media platforms has generated the huge amount of data on social media platforms like Facebook and Twitter, etc. These interactions resulted not only positive effect but also negative effect over billions of people owing to the fact that there are lots of aggressive comments (like hate, anger, and bullying). These cause not only mental and psychological stress but also account deactivation and even suicideBIBREF1. In this paper we concentrate on problems related to aggressiveness.",
"The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:",
"Overtly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".",
"Covertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. For example, \"Dear India, stop playing with the emotions of your people for votes.\"",
"Non-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive.",
"The additional discussion on aggressiveness task can be found in Kaggle task , which just divided the task into two classes - i.e., presence or absence of aggression in tweets.",
"The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset.",
"The massive increase of the social media data rendered the manual methods of content moderation difficult and costly. Machine Learning and Deep Learning methods to identify such phenomena have attracted more attention to the research community in recent yearsBIBREF4.",
"Based on the current context, we can divide the problem into three sub-problems: (a) detection of aggression levels, (b) handling code-mixed data and (c) handling styles (due to differences in social media platforms and text entry rules/restrictions).",
"A lot of the previous approachesBIBREF5 have used an ensemble model for the task. For example, some of them uses ensemble of statistical modelsBIBREF6, BIBREF7, BIBREF8, BIBREF9 some used ensemble of statistical and deep learning modelsBIBREF10, BIBREF11, BIBREF12 some used ensemble of deep learning models BIBREF13. There are approaches which proposed unified architecture based on deep learningBIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 while some proposed unified statistical modelBIBREF7. Additionally, there are some approaches uses data augmentation either through translation or labeling external data to make the model generalize across domainsBIBREF14, BIBREF10, BIBREF7.",
"Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets. So, we concentrated to develop a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:",
"Deep-Text Learning. The goal is to learn long range associations, dependencies between regions of text, N-grams, key-patterns, topical information, and sequential dependencies.",
"Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.",
"Dual embedding based on FastText and Glove. This dual embedding helps in high vocabulary coverage and to capture the rare and partially incorrect words in the text (specially by FastText BIBREF20).",
"Our \"Deep-text architecture\" uses model averaging strategy with three different deep learning architectures. Model averaging belongs to the family of ensemble learning techniques that uses multiple models for the same problem and combines their predictions to produce a more reliable and consistent prediction accuracy BIBREF21. This is the simplest form of weighted average ensemble based predictionBIBREF22 where, each ensemble member contribute equally to predictions. Specifically in our case, three different models have been used. The following contains the intuition behind the selection of these three models:",
"Deep Pyramid CNN BIBREF23 being deeper helps to learn long range associations between temporal regions of text using two-view embeddings.",
"Disconnected RNN BIBREF24 is very helpful in encoding the sequential information with temporal key patterns in the text.",
"Pooled BiLSTM In this architecture the last hidden state of BiLSTM is concatenated with mean and max-pooled representation of the hidden states obtained over all the time steps of Bi-LSTM. The idea of using mean and max pooling layers together is taken from BIBREF25 to avoid the loss of information in longer sequences of texts and max-pooling is taken to capture the topical informationBIBREF26.",
"NLP Features In each of the individual models, the NLP features are concatenated with last hidden state before the softmax classification layer as meta-data. The main aim is to provide additional information to the deep learning network.",
"The intuition behind the NLP features are the following:",
"Emotion Sensor Dataset We have introduced to use of emotion sensor features, as a meta-data information. We have obtained the word sensor dataset from Kaggle. In this dataset each word is statistically classified into 7 distinct classes (Disgust, Surprise, Neutral, Anger, Sad, Happy and Fear) using Naive Bayes, based on sentences collected from twitter and blogs.",
"Controlled Topical Signals from Empath. Empath can analyse the text across 200 gold standard topics and emotions. Additionally, it uses neural embedding to draw connotation among words across more than 1.8 billion words. We have used only selected categories like violence, hate, anger, aggression, social media and dispute from 200 Empath categories useful for us unlikeBIBREF12 which takes 194 categories.",
"Emoticons frequently used on social media indicates the sense of sentenceBIBREF17, BIBREF19, BIBREF9.",
"Normalized frequency of POS tags According to BIBREF12, BIBREF11, BIBREF7, BIBREF15 POS Tags provide the degree of target aggressiveness. LikeBIBREF12, we have used only four tags (a) adjective (JJ, JJR, JJS), (b) adverb (RB, RBR, RBS), (c) verb (VB, VBD, VBG, VBN, VBP, VBZ) and (d) noun (NN, NNS, NNP, NNPS) (See Penn-Treebank POS Tags for abbreviations and the full list). The main reason behind the selection of these four tags is to just identify words related to persons, activities, quality, etc, in the text.",
"Sentiment polarity obtained from VADER Sentiment Analysis BIBREF27 (positive, negative and neutral) like used in BIBREF15, BIBREF10, BIBREF11, BIBREF7. It helps to demarcate aggressiveness with non-aggressiveness in the text.",
"The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (in English code-mixed Facebook data). This means the performance achieved by our system totally depends on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper."
],
[
"There are several works for aggression identification submitted at TRAC 2018 among them some approaches use the ensemble of multiple statistical modelsBIBREF6, BIBREF7, BIBREF8, BIBREF9. Similarly, some of the models likeBIBREF10, BIBREF11, BIBREF12 have used ensemble of statistical and deep learning models. In these models the statistical part of the model uses additional features from text analysis like parts-of-speech tags, punctuation, emotion, emoticon etc. Model like: BIBREF13 has used the ensemble of deep learning models based on majority voting.",
"Some other models like: BIBREF28, BIBREF12, BIBREF9 have used different models for Facebook and twitter. While approaches like:BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 have proposed unified architecture based on deep learning. Systems likeBIBREF14, BIBREF10, BIBREF7 have used data augmentation either through translation or labelling external data to make the model generalize across domains. While BIBREF7 has proposed a unified statistical model.",
"Among approaches likeBIBREF6 extracted features from TF-IDF of character n-grams whileBIBREF28 uses LSTM with pre-trained embeddings from FastText. BIBREF15 have used the BiLSTM based model and the SVM metaclassifier model for the Facebook and Twitter test sets, respectively. While BIBREF13 tried ensembling of CNN, LSTM, and BILSTM.",
"Some approaches like:BIBREF12 has used emotions frequency as one of the features, while some others use sentiment emotion as featureBIBREF11. Also,BIBREF17, BIBREF19 have converted emoticons to their description. BIBREF9 have used TF-IDF of emoticons per-class as one of the features. Compared to all these approaches, we have concentrated to capture multiple linguistic/pattern based relations, key-terms and key-patters (with their association in text) through a combination of deep learning architectures with model averaging. We have also used NLP features as additional features with our deep learning architecture, obtained from psycho-linguistic and basic linguistic features."
],
[
"In this section, we describe our system architecture for aggressiveness classifier. In section SECREF23 we describe data preprocessing applied on the input text before feeding it to each of the classification models. Section SECREF26 describes the computation of NLP features. In Sections SECREF30, SECREF34 and SECREF45 we have described the architecture of different deep learning models like Deep Pyramid CNN, Disconnected RNN and Pooled BiLSTM respectively. Finally, in Section SECREF49, we describe model averaging based classification model which combines the prediction probabilities from three deep learninig architectures discussed above. (see Figure FIGREF22. for block diagram of system architecture)."
],
[
"We consider the text to be well formatted before applying the text to the embedding layer. First, we detect non-English text(which are few) and translate all of them to English using Google Translate. Still, there is some code mixed words like \"mc\", \"bc\" and other English abbreviations and spelling errors like \"nd\" in place of \"and\", \"u\" in place of \"you\" causes deep learning model to confuse with sentences of the same meaning. We follow the strategy of preprocessor as inBIBREF17 to normalize the abbreviations and remove spelling errors, URLs and punctuation marks, converting emojis to their description.",
"https://spacy.io/usage/linguistic-features#pos-tagging"
],
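The preprocessing paragraph above normalizes abbreviations, removes spelling errors, URLs and punctuation, and converts emojis to their descriptions. A rough sketch of such a cleaner is given below; the abbreviation map is a tiny hypothetical sample, and the exact rules of the preprocessor the authors follow (BIBREF17) may differ.

```python
import re
import emoji  # pip install emoji

# Hypothetical, partial normalization map; the real list would be much larger.
ABBREVIATIONS = {"u": "you", "nd": "and", "plz": "please"}

def preprocess(text: str) -> str:
    text = emoji.demojize(text, delimiters=(" ", " "))   # emojis -> textual descriptions
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # strip URLs
    text = re.sub(r"[^\w\s]", " ", text)                 # strip punctuation marks
    tokens = [ABBREVIATIONS.get(tok.lower(), tok) for tok in text.split()]
    return " ".join(tokens)

print(preprocess("u r funny nd kind :) https://example.com"))
```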
[
"We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).",
"Different from previous approachesBIBREF8, BIBREF12 where BIBREF12 have used Emotion features in the form of frequency while BIBREF8 have used emotion feature vector obtained from LIWC 2007BIBREF30. UnlikeBIBREF12 we have used only 6 topical signals from EmapthBIBREF29. We have borrowed the idea of using other features like punctuation features and parts-of-speech tags from BIBREF12. The Table 1. lists and describes features, tools used to obtain them and the number of features resulted from each type."
],
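The NLP-feature section above combines topical signals (Empath), sentiment polarity (VADER) and normalized POS-tag frequencies. The sketch below assembles a comparable feature dictionary; it is an approximation (spaCy's coarse universal POS tags stand in for the Penn tag groups listed earlier, and the Empath category names are assumed to match the ones the authors selected).

```python
import spacy
from empath import Empath
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

nlp = spacy.load("en_core_web_sm")
empath_lexicon = Empath()
vader = SentimentIntensityAnalyzer()
EMPATH_TOPICS = ["violence", "hate", "anger", "aggression", "social_media", "dispute"]

def nlp_features(text: str) -> dict:
    feats = {}
    # Selected topical signals from Empath, normalized by text length.
    feats.update(empath_lexicon.analyze(text, categories=EMPATH_TOPICS, normalize=True))
    # VADER sentiment polarity scores (neg / neu / pos / compound).
    feats.update(vader.polarity_scores(text))
    # Normalized frequency of coarse POS tags (approximating the Penn tag groups).
    doc = nlp(text)
    n_tokens = max(len(doc), 1)
    for pos in ("ADJ", "ADV", "VERB", "NOUN"):
        feats[f"pos_{pos}"] = sum(tok.pos_ == pos for tok in doc) / n_tokens
    return feats

print(nlp_features("Dear India, stop playing with the emotions of your people for votes."))
```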
[
"Since it has been proved that CNNs are great feature extractors for text classificationBIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF23 while deeper networks(whether RNNs or CNN's) has been proven for learning long-range association like deeper character level CNN'sBIBREF36, BIBREF37, and complex combination of RNN and CNNBIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42. Deep Pyramid CNN (DPCNN)BIBREF23 has 15 layers of word-level CNN's and contains similar pre-activation as proposed in improved ResnetBIBREF43. DPCNN outperforms the 32-layer character CNNBIBREF37 and Hierarchical attention networksBIBREF42 it has added advantage that due to its pyramid structure it does not require dimension matching in shortcut connections defined as z + h(z) as inBIBREF43 where h(z) represents the skipped layers essentially contains two convolutional layers with pre-activation. It uses enhanced region embedding which consumes pre-trained embeddings (in our case it is FastText+Glove based dual embedding).",
"Enhanced Region Embedding. The current DPCNNBIBREF23, uses two view type enhanced region embedding. For the text categorization, it defines a region of text as view-1 and its adjacent regions as view-2. Then using unlabeled data, it trains a neural network of one hidden layer with an artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. The detailed architecture has been shown in Figure FIGREF29.",
"Let each word input $x_j \\in R^d$ be the d-dimensional vector for the $j^{th}$ word $w_{j}$ and the sentence $s_i$ contains sequence of $n$ words $\\lbrace w_{1},w_{2},w_{3},......,w_{n}\\rbrace $ as shown in Figure FIGREF29. In comparision to conventional convolution layer, DPCNN proposes to use pre-activation, thus essentially the convolutional layer of DPCNN is $\\textbf {W}\\sigma (\\textbf {x})+\\textbf {b}$, where $\\textbf {W}$ and $\\textbf {b}$(unique to each layer) are the weights matrix and bias respectively, we use $\\sigma $ as PReLUBIBREF44. During implementation we use kernel size of 3(represented by $\\textbf {x}$ to denote the small overlapping regions of text.), The number of filters(number of feature maps denoted by the number of rows of $\\textbf {W}$) is 128 as depicted in Figure FIGREF29. With the number of filters same in each convolution layer and max-pooling with stride 2 makes the computation time halved, and doubles the net coverage of convolution kernel. Thus the deeper layers cause to learn long-range associations between regions of text. Let's say $h_{dpcnn} \\in R^{p_1}$ be the hidden state obtained from DPCNN just before the classification layer and $f_{nlp} \\in R^{24}$ be the NLP features computed from the text. Lets $z_1 \\in R^{p_1 + 24}$ be another hidden state obtained as",
"where, $\\oplus $ denotes concatenation. The vector $z_1$ obtained, then fed to the fully connected layer with softmax activation. Let $y_{i1}^*$ be the softmax probabilities, specifically for class label $k$ is given as:",
"where $K$ is the number of classes, $W_{dpcnn}$ and $b_{dpcnn}$ are the weight matrix and bias respectively."
],
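As a rough PyTorch sketch of the DPCNN variant described above: 128-filter pre-activation convolutions with shortcut connections z + h(z), max-pooling with stride 2 after each block to halve the sequence length, and the 24 NLP features concatenated before the softmax layer. This simplification omits the unsupervised two-view region embedding and the exact 15-layer depth, so it illustrates the structure rather than reproducing the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Two pre-activation convolutions with a shortcut connection: z + h(z)."""
    def __init__(self, channels: int = 128):
        super().__init__()
        self.act1, self.act2 = nn.PReLU(), nn.PReLU()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, z):
        h = self.conv1(self.act1(z))
        h = self.conv2(self.act2(h))
        return z + h

class DPCNN(nn.Module):
    def __init__(self, emb_dim=400, channels=128, n_classes=3, n_nlp=24, n_blocks=6):
        super().__init__()
        self.region = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)  # simple region embedding
        self.blocks = nn.ModuleList(ConvBlock(channels) for _ in range(n_blocks))
        self.fc = nn.Linear(channels + n_nlp, n_classes)

    def forward(self, emb, f_nlp):
        # emb: (batch, seq_len, emb_dim), f_nlp: (batch, n_nlp)
        x = self.region(emb.transpose(1, 2))              # (batch, channels, seq_len)
        for block in self.blocks:
            x = block(x)
            if x.size(-1) > 2:                            # pyramid: halve the length after each block
                x = F.max_pool1d(x, kernel_size=3, stride=2, padding=1)
        h_dpcnn = x.max(dim=-1).values                    # (batch, channels)
        z1 = torch.cat([h_dpcnn, f_nlp], dim=-1)          # concatenate NLP features
        return F.log_softmax(self.fc(z1), dim=-1)

model = DPCNN()
log_probs = model(torch.randn(2, 200, 400), torch.randn(2, 24))  # toy inputs
```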
[
"Given a sequence $s_i = [x_{1}, x_{2}, x_{3},....x_{n}]$ where $x_{j} \\in R^d$ represents the d-dimensional word vector for word $w_{j}$ and $n$ is the length of input text applied to a variant of RNN called Long Short-Term Memory (LSTM)BIBREF45 as shown in Figure FIGREF33. It is widely used for sequential modelling with long-term dependencies. For sequence modelling it keeps on updating the memory cell with current input using an adaptive gating mechanism. At time step $t$ the memory $c_t$ and the hidden state $h_t$ are updated as follows:",
"where $\\hat{c}_t$ is the current cell state obtained from current input $x_t$ and previous hidden state $h_{t-1}$, $i_t$, $f_t$ and $o_t$ are the activation corresponding to input gate, forget gate and output gate respectively, $\\sigma $ denotes the logistic sigmoid function and $\\odot $ denotes the element-wise multiplication. Hence the hidden state representation at time step $t$ depends on all the previous input vectors given as",
"Specifically we have used Bi-directional LSTM BIBREF45 to capture both past and future context. It provides $h_t$ from both directions(forward & backward). The forward LSTM takes the natural order of words from $x_{1}$ to $x_{n}$ to obtain $\\overrightarrow{h_t}$, while backward-LSTM $x_{n}$ to $x_{1}$ to obtain $\\overleftarrow{h_t}$. then $h_t$ is calculated as",
"where $\\oplus $ is the concatenation and $L$ is the size for one-directional LSTM. Therefore we denote the hidden state in equation DISPLAY_FORM37 with BiLSTM as",
"To avoid handling of long sequence and to capture local information for each word we define the window size $k$ for each word such that the BiLSTM only sees the the previous $k-1$ words with the current word, where $k$ is a hyperparameterBIBREF24. We use padding <PAD> to make the slices of fixed size k(as shown in Figure FIGREF33). It provides each hidden state $h_t$ with sequence of $k$ previous words. Since the phrase of $k$ words can lie anywhere in the text it helps to model the position invariant phrase representation due to which the it identifies key phrases important for identifying particular category. In this case, the equation of $h_t$ is given as",
"The output hidden vectors, $H = [h_1, h_2, h_3, ...... h_n] \\in R^{n \\times 2L}$ are converted to fixed-length vector $h_{drnn} \\in R^{2L}$ with max pooling over time:",
"Let's say $f_{nlp} \\in R^{24}$ be the NLP features computed from the text. Let's $z_2 \\in R^{2L + 24}$ be another hidden state obtained as",
"where $\\oplus $ denotes concatenation. The vector $z_2$ obtained, then fed to the fully connected layer with softmax activation. Let $y_{i2}^*$ be the softmax probabilities, specifically for class label $k$ is given as:",
"where $K$ is the number of classes, $W_{drnn}$ is the weight matrix, and $b_{drnn}$ is the bias."
],
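A compact PyTorch sketch of the disconnected-RNN idea from this section: a BiLSTM is applied to the window of k words ending at each position (with left padding), and the resulting position-wise states are max-pooled over time before the NLP features are concatenated. It is a simplification for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisconnectedRNN(nn.Module):
    def __init__(self, emb_dim=400, hidden=128, k=8, n_classes=3, n_nlp=24):
        super().__init__()
        self.k = k
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden + n_nlp, n_classes)

    def forward(self, emb, f_nlp):
        # emb: (batch, n, d); pad on the left so position t sees its k-1 predecessors.
        b, n, d = emb.shape
        padded = F.pad(emb, (0, 0, self.k - 1, 0))          # (b, n + k - 1, d)
        windows = padded.unfold(1, self.k, 1)               # (b, n, d, k)
        windows = windows.permute(0, 1, 3, 2).reshape(b * n, self.k, d)
        _, (h_n, _) = self.lstm(windows)                    # h_n: (2, b * n, hidden)
        h_t = torch.cat([h_n[0], h_n[1]], dim=-1).view(b, n, -1)
        h_drnn = h_t.max(dim=1).values                      # max pooling over time steps
        z2 = torch.cat([h_drnn, f_nlp], dim=-1)
        return F.log_softmax(self.fc(z2), dim=-1)

model = DisconnectedRNN()
log_probs = model(torch.randn(2, 200, 400), torch.randn(2, 24))  # toy inputs
```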
[
"The architecture has been shown in Figure FIGREF44. Given a sequence $s_i = [x_{1}, x_{2}, x_{3}, ..... x_{j}]$, where $x_j \\in R^d$ is the d-dimensional word vector for word $w_j$, the hidden state obtained after BiLSTM is given as",
"To avoid the loss of information because of modelling the entire sequence, we have concatenated the max-pooled($c_{max}$) and mean-pooled($c_{mean}$) representation of hidden states calculated over all time steps BIBREF25. We have also concatenated the nlp features, $f_{nlp} \\in R^{24}$ the final feature vector $z_{3}$ is given as",
"where $\\oplus $ denotes concatenation. The final feature $z_3$ vector is fed to the fully connected layer with softmax activation. Let $y_{i3}^*$ be the softmax probablities, specifically for class label $k$ given as:",
"where $K$ is the number of classes and $W_{bilstm}$ and $b_{bilstm}$ are the weight matrix and bias respectively."
],
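Following the description above, a short PyTorch sketch of the Pooled BiLSTM head: the last forward/backward hidden states are concatenated with the mean- and max-pooled hidden states over all time steps and with the NLP features, then passed through a softmax classification layer. The hidden size of 256 follows the experimental-setup section; the rest is a minimal illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PooledBiLSTM(nn.Module):
    def __init__(self, emb_dim=400, hidden=256, n_classes=3, n_nlp=24):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(3 * 2 * hidden + n_nlp, n_classes)

    def forward(self, emb, f_nlp):
        out, (h_n, _) = self.lstm(emb)                    # out: (batch, n, 2 * hidden)
        last = torch.cat([h_n[0], h_n[1]], dim=-1)        # last forward and backward states
        mean_pool = out.mean(dim=1)                       # c_mean over all time steps
        max_pool = out.max(dim=1).values                  # c_max over all time steps
        z3 = torch.cat([last, mean_pool, max_pool, f_nlp], dim=-1)
        return F.log_softmax(self.fc(z3), dim=-1)

model = PooledBiLSTM()
log_probs = model(torch.randn(2, 200, 400), torch.randn(2, 24))  # toy inputs
```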
[
"According to deep learning literature BIBREF46, BIBREF47, BIBREF48, unweighted averaging might be a reasonable ensemble for similar base learners of comparable performance. Now, similar to the information discussed in BIBREF21, we can compute the model averaging (unweighted) by combining the softmax probabilities of three different classification models obtained from equations DISPLAY_FORM32, DISPLAY_FORM43, DISPLAY_FORM48. The averaged class probabilities are computed as:",
"where K is the number of classes, and $\\hat{y_i}$ is the predicted label for sentence $s_i$."
],
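The unweighted model averaging described above reduces to averaging the three models' class-probability vectors and taking the argmax. A minimal sketch is given below, assuming each model returns log-probabilities as in the earlier sketches; the model list is hypothetical.

```python
import torch

def model_average_predict(models, emb, f_nlp):
    """Unweighted average of per-model softmax probabilities, then argmax."""
    with torch.no_grad():
        probs = [m(emb, f_nlp).exp() for m in models]   # exp() turns log-probs into probabilities
        avg = torch.stack(probs).mean(dim=0)
    return avg.argmax(dim=-1)
```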
[
"We have used two datasets in our experimental evaluations: (1) TRAC 2018 Dataset and (2) Kaggle Dataset.",
"TRAC 2018 Dataset: We have used the English code-mixed dataset provided by TRAC 2018. This dataset contains three labels, (a) Non-Aggressive(NAG), (b) Overtly-Aggressive (OAG) and (c) Covertly-Aggressive(CAG). The distribution of training, validation and test sets are described in Table TABREF56.",
"Kaggle Dataset: This dataset contains 20001 tweets which are manually labeled. The labels are divided into two categories (indicating presence or absence of aggression in tweets) AGG(Aggressive) or NAG(Non-Aggressive). We have used the same test split available in the baseline code. The distribution for each of the training and test is given in Table TABREF56."
],
[
"We have used Glove EmbeddingsBIBREF49 concatenated with FastText EmbeddingsBIBREF20 in all the three classification models presented in this paper. Specifically, we used Glove pre-trained vectors obtained from Twitter corpus containing 27 billion tokens and 1.2 million vocabulary entries where each word is represented using 100-dimensional vector. In the case of FastText the word is represented using 300-dimensional vector. Also, we have applied spatial dropoutBIBREF50 of 0.3 at embedding layer for DPCNN(in section SECREF30) and Pooled BiLSTM(in section SECREF45). For DPCNN model(in SECREF30) we have learnt 128-dimensional vector representation for unsupervised embeddings implicitly for task specific representation as in BIBREF23. Additionally, for DPCNN all the convolutional layers used 128 filters, kernel size of 3 and max-pooling stride 2. Additionally, in the case of DPCNN we have used kernel and bias regularizer of value 0.00001 for all convolutional kernels. The pre-activation function used in DPCNN is Parametric ReLU (PReLU) proposed in BIBREF44 while the activation at each of the convolutional kernel is linear. For, DRNN(in section SECREF34) we have used the window size of 8 and rest of the parameters related to LSTM units are same as given inBIBREF24. For, Pooled BiLSTM(in section SECREF45) we have used LSTM hidden units size as 256. The maximum sequence length is 200 in all three models. In each of the classification model the classification layer contains the fully connected layer with softmax activation with output size of 3 equal to number of classes in case of TRAC 2018 dataset and its 2 in case of Kaggle dataset. Training has been done using ADAM optimizerBIBREF51 for DPCNN and RMSPROPBIBREF52 for DRNN and Pooled Bi-LSTM models. All the models are trained end-to-end using softmax cross entropy lossBIBREF53 for TRAC 2018 dataset and binary cross entropy lossBIBREF53 for Kaggle dataset.",
"To train our model for TRAC 2018 dataset, we merged the training and validation dataset and then used 10% split from shuffled dataset to save the best model, for all classifiers. We have used only 20 NLP features (except TF-IDF Emoticon feature and Punctuation feature as given in Table TABREF25) for Kaggle dataset (as these are not present in the Kaggle dataset)."
],
[
"To compare our experimental results we have used top-5 systems from the published results of TRAC-2018BIBREF5. To compare our results on Kaggle dataset, we have used the last & the best published result on Kaggle website as a baseline. We have conducted the separate experiments, to properly investigate the performance of (a) each of the classifiers (used in our model averaging based system), (b) impact of the NLP features on each of these classifiers and finally, (c) the performance of our proposed system. In Table TABREF57, TABREF57 and TABREF57, models, named as DPCNN(ref SECREF30), DRNN (ref SECREF34) and Pooled BiLSTM(ref SECREF45) are corresponding models without NLP features. Similarly, DPCNN+NLP Features, DRNN + NLP Features and Pooled BiLSTM + NLP Features are corresponding models with NLP features. The Model Averaging (A+B+C) is the ensemble of three models (i.e., model averaging of DPCNN, DRNN and Pooled BiLSTM) without NLP features. Finally, Our Proposed Method, which represents the model averaging of three models with NLP features."
],
[
"In this paper, we have evaluated our model using weighted macro-averaged F-score. The measure is defined as in (See BIBREF5, BIBREF2). It weights the F-score computed per class based on the class composition in the test set and then takes the average of these per-class F-score gives the final F-score. Table TABREF57, TABREF57 and TABREF57. presents the comparative experimental results for the proposed method in this paper with respect to the state-of-the-art. The top 5 modelsBIBREF5 given in Table TABREF57 and TABREF57. are the best performing models for Facebook and Twitter test dataset respectively on TRAC 2018. We have followed all the experimental guidelines as discussed in TRAC contest guideline paperBIBREF2, BIBREF5. From the results given in Table TABREF57, TABREF57 and TABREF57 it is clear that our proposed model shows the best performance among all of the approaches. These results also state that all the deep learning architectures with NLP features, perform better than individual corresponding deep learning architectures. This means NLP features, adds some value to the architectures, even if it is not very high.",
""
],
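The weighted macro-averaged F-score described above corresponds directly to scikit-learn's "weighted" averaging mode; a short sketch with toy labels (not data from the paper):

```python
from sklearn.metrics import f1_score

y_true = ["NAG", "OAG", "CAG", "NAG", "NAG", "CAG"]   # toy gold labels
y_pred = ["NAG", "OAG", "NAG", "NAG", "OAG", "CAG"]   # toy predictions
print(f1_score(y_true, y_pred, average="weighted"))    # per-class F1 weighted by test-set support
```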
[
"In this paper, we have briefly described the approach we have taken to solve the aggressive identification on online social media texts which is very challenging since the dataset is noisy and code-mixed. We presented an ensemble of deep learning models which outperform previous approaches by sufficient margin while having the ability to generalize across domains.",
"In future, we will explore other methods to increase the understanding of deep learning models on group targeted text, although the categories are well defined we will look after if we further fine-tune the categories with more data. In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources)."
]
],
"section_name": [
"Introduction",
"Related work",
"Methodology",
"Methodology ::: Data Preprocessing",
"Methodology ::: NLP Features",
"Methodology ::: Deep Pyramid CNN(DPCNN)",
"Methodology ::: Disconnected RNN(DRNN)",
"Methodology ::: Pooled BiLSTM",
"Methodology ::: Classification Model",
"Experiment and Evaluation ::: Dataset Description",
"Experiment and Evaluation ::: Experimental Setup",
"Experiment and Evaluation ::: Evaluation Strategy",
"Experiment and Evaluation ::: Results and Discussion",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"bf48a718d94133ed24e7ea54cb050ffaa688cf7b"
],
"answer": [
{
"evidence": [
"In future, we will explore other methods to increase the understanding of deep learning models on group targeted text, although the categories are well defined we will look after if we further fine-tune the categories with more data. In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources).",
"The block diagram of the proposed system is shown in Figure FIGREF22. The proposed system does not use any data augmentation techniques like BIBREF14, which is the top performer in TRAC (in English code-mixed Facebook data). This means the performance achieved by our system totally depends on the training dataset provided by TRAC. This also proves the effectiveness of our approach. Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set. The remaining part of this paper is organized as follows: Section SECREF2 is an overview of related work. Section SECREF3 presents the methodology and algorithmic details. Section SECREF4 discusses the experimental evaluation of the system, and Section SECREF5 concludes this paper.",
"The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:",
"Overtly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".",
"Covertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. For example, \"Dear India, stop playing with the emotions of your people for votes.\"",
"Non-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive."
],
"extractive_spans": [
"Hindi"
],
"free_form_answer": "",
"highlighted_evidence": [
" In the future, we are planning to pay attention on a generalized language model for code-mixed texts which can also handle Hindi-code-mixed and other multi-lingual code-mixed datasets (i.e., trying to reduce the dependencies on language-specific code-mixed resources).",
"Our system outperforms all the previous state of the art approaches used for aggression identification on English code-mixed TRAC data, while being trained only from Facebook comments the system outperforms other approaches on the additional Twitter test set.",
"The fine-grained definition of the aggressiveness/aggression identification is provided by the organizers of TRAC-2018 BIBREF0, BIBREF2. They have classified the aggressiveness into three labels (Overtly aggressive(OAG), Covertly aggressive(CAG), Non-aggressive(NAG)). The detailed description for each of the three labels is described as follows:\n\nOvertly Aggressive(OAG) - This type of aggression shows direct verbal attack pointing to the particular individual or group. For example, \"Well said sonu..you have courage to stand against dadagiri of Muslims\".\n\nCovertly Aggressive(CAG) - This type of aggression the attack is not direct but hidden, subtle and more indirect while being stated politely most of the times. For example, \"Dear India, stop playing with the emotions of your people for votes.\"\n\nNon-Aggressive(NAG) - Generally these type of text lack any kind of aggression it is basically used to state facts, wishing on occasions and polite and supportive."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"1e6b29722f6026e7890d7fb1ebfae4bc024cf62c"
],
"answer": [
{
"evidence": [
"Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.",
"We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).",
"FLOAT SELECTED: Table 1: Details of NLP features"
],
"extractive_spans": [],
"free_form_answer": "Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features",
"highlighted_evidence": [
"Exploiting psycho-linguistic features with basic linguistic features as meta-data. The main aim is to minimize the direct dependencies on in-depth grammatical structure of the language (i.e., to support code-mixed data). We have also included emoticons, and punctuation features with it. We use the term \"NLP Features\" to represent it in the entire paper.",
"We have identified a novel combination of features which are highly effective in aggression classification when applied in addition to the features obtained from the deep learning classifier at the classification layer. We have introduced two new features in addition to the previously available features. The first one is the Emotion Sensor Feature which use a statistical model to classify the words into 7 different classes based on the sentences obtained from twitter and blogs which contain total 1,185,540 words. The second one is the collection of selected topical signal from text collected using Empath (see Table 1.).",
"FLOAT SELECTED: Table 1: Details of NLP features"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"1bd72739177cfb9c85bfecdba849231b7893062f"
],
"answer": [
{
"evidence": [
"Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets. So, we concentrated to develop a robust system for English code-mixed texts, and uni-lingual texts, which can also handle different writing styles. Our approach is based on three main ideas:"
],
"extractive_spans": [],
"free_form_answer": "Systems do not perform well both in Facebook and Twitter texts",
"highlighted_evidence": [
"Most of the above-discussed systems either shows high performance on (a) Twitter dataset or (b) Facebook dataset (given in the TRAC-2018), but not on both English code-mixed datasets. This may be due to the text style or level of complexities of both datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"4bd6100879ff88f69bd3197930b3035fe4463808"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"ee7f69ecf994d51d3535bf22d0004da1740e43cc"
],
"answer": [
{
"evidence": [
"The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset."
],
"extractive_spans": [],
"free_form_answer": "None",
"highlighted_evidence": [
"The informal setting/environment of social media often encourage multilingual speakers to switch back and forth between languages when speaking or writing. These all resulted in code-mixing and code-switching. Code-mixing refers to the use of linguistic units from different languages in a single utterance or sentence, whereas code-switching refers to the co-occurrence of speech extracts belonging to two different grammatical systemsBIBREF3. This language interchange makes the grammar more complex and thus it becomes tough to handle it by traditional algorithms. Thus the presence of high percentage of code-mixed content in social media text has increased the complexity of the aggression detection task. For example, the dataset provided by the organizers of TRAC-2018 BIBREF0, BIBREF2 is actually a code-mixed dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is English mixed with in the TRAC dataset?",
"Which psycholinguistic and basic linguistic features are used?",
"How have the differences in communication styles between Twitter and Facebook increase the complexity of the problem?",
"What are the key differences in communication styles between Twitter and Facebook?",
"What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?"
],
"question_id": [
"5845d1db7f819dbadb72e7df69d49c3f424b5730",
"e829f008d62312357e0354a9ed3b0827c91c9401",
"54fe8f05595f2d1d4a4fd77f4562eac519711fa6",
"61404466cf86a21f0c1783ce535eb39a01528ce8",
"fbe5e513745d723aad711ceb91ce0c3c2ceb669e"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Block diagram of the proposed system",
"Table 1: Details of NLP features",
"Figure 2: DPCNN",
"Figure 3: DRNN",
"Figure 4: Pooled BiLSTM",
"Table 2: TRAC 2018, Details of English Code-Mixed Dataset",
"Table 6: Results on Kaggle Test Dataset",
"Figure 5: Confusion Matrix for Facebook, Twitter and Kaggle Datasets."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Table2-1.png",
"7-Table6-1.png",
"8-Figure5-1.png"
]
} | [
"Which psycholinguistic and basic linguistic features are used?",
"How have the differences in communication styles between Twitter and Facebook increase the complexity of the problem?",
"What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code-mixed languages?"
] | [
[
"2001.05493-Introduction-12",
"2001.05493-Methodology ::: NLP Features-0",
"2001.05493-4-Table1-1.png"
],
[
"2001.05493-Introduction-10"
],
[
"2001.05493-Introduction-6"
]
] | [
"Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features",
"Systems do not perform well both in Facebook and Twitter texts",
"None"
] | 293 |
1606.08140 | STransE: a novel embedding model of entities and relationships in knowledge bases | Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction or knowledge base completion, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a low-dimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task. | {
"paragraphs": [
[
"Knowledge bases (KBs), such as WordNet BIBREF0 , YAGO BIBREF1 , Freebase BIBREF2 and DBpedia BIBREF3 , represent relationships between entities as triples $(\\mathrm {head\\ entity, relation, tail\\ entity})$ . Even very large knowledge bases are still far from complete BIBREF4 , BIBREF5 . Link prediction or knowledge base completion systems BIBREF6 predict which triples not in a knowledge base are likely to be true BIBREF7 , BIBREF8 . A variety of different kinds of information is potentially useful here, including information extracted from external corpora BIBREF9 , BIBREF10 and the other relationships that hold between the entities BIBREF11 , BIBREF12 . For example, toutanova-EtAl:2015:EMNLP used information from the external ClueWeb-12 corpus to significantly enhance performance.",
"While integrating a wide variety of information sources can produce excellent results BIBREF13 , there are several reasons for studying simpler models that directly optimize a score function for the triples in a knowledge base, such as the one presented here. First, additional information sources might not be available, e.g., for knowledge bases for specialized domains. Second, models that don't exploit external resources are simpler and thus typically much faster to train than the more complex models using additional information. Third, the more complex models that exploit external information are typically extensions of these simpler models, and are often initialized with parameters estimated by such simpler models, so improvements to the simpler models should yield corresponding improvements to the more complex models as well.",
"Embedding models for KB completion associate entities and/or relations with dense feature vectors or matrices. Such models obtain state-of-the-art performance BIBREF14 , BIBREF8 , BIBREF15 , BIBREF16 , BIBREF4 , BIBREF17 , BIBREF18 and generalize to large KBs BIBREF19 . Table 1 summarizes a number of prominent embedding models for KB completion.",
"Let $(h, r, t)$ represent a triple. In all of the models discussed here, the head entity $h$ and the tail entity $t$ are represented by vectors $\\textbf {h}$ and $\\textbf {t}\\in \\mathbb {R}^{k}$ respectively. The Unstructured model BIBREF15 assumes that $\\textbf {h} \\approx \\textbf {t}$ . As the Unstructured model does not take the relationship $r$ into account, it cannot distinguish different relation types. The Structured Embedding (SE) model BIBREF8 extends the unstructured model by assuming that $h$ and $t$ are similar only in a relation-dependent subspace. It represents each relation $r$ with two matrices $h$0 and $h$1 , which are chosen so that $h$2 . The TransE model BIBREF16 is inspired by models such as Word2Vec BIBREF20 where relationships between words often correspond to translations in latent feature space. The TransE model represents each relation $h$3 by a translation vector r $h$4 , which is chosen so that $h$5 .",
"The primary contribution of this paper is that two very simple relation-prediction models, SE and TransE, can be combined into a single model, which we call STransE. Specifically, we use relation-specific matrices $\\textbf {W}_{r,1}$ and $\\textbf {W}_{r,2}$ as in the SE model to identify the relation-dependent aspects of both $h$ and $t$ , and use a vector $\\textbf {r}$ as in the TransE model to describe the relationship between $h$ and $t$ in this subspace. Specifically, our new KB completion model STransE chooses $\\textbf {W}_{r,1}$ , $\\textbf {W}_{r,2}$ and $\\textbf {r}$ so that $\\textbf {W}_{r,2}$0 . That is, a TransE-style relationship holds in some relation-dependent subspace, and crucially, this subspace may involve very different projections of the head $\\textbf {W}_{r,2}$1 and tail $\\textbf {W}_{r,2}$2 . So $\\textbf {W}_{r,2}$3 and $\\textbf {W}_{r,2}$4 can highlight, suppress, or even change the sign of, relation-specific attributes of $\\textbf {W}_{r,2}$5 and $\\textbf {W}_{r,2}$6 . For example, for the “purchases” relationship, certain attributes of individuals $\\textbf {W}_{r,2}$7 (e.g., age, gender, marital status) are presumably strongly correlated with very different attributes of objects $\\textbf {W}_{r,2}$8 (e.g., sports car, washing machine and the like).",
"As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion. We expect that the STransE will also be able to serve as the basis for extended models that exploit a wider variety of information sources, just as TransE does."
],
[
"Let $\\mathcal {E}$ denote the set of entities and $\\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \\in \\mathcal {E}$ and $r \\in \\mathcal {R}$ , the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\\prime }}(h^{\\prime },t^{\\prime })$ of an implausible triple $\\mathcal {R}$0 . We define the STransE score function $\\mathcal {R}$1 as follows:",
" $\nf_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}}\n$ ",
"using either the $\\ell _1$ or the $\\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\\ell _1$ norm gave slightly better results). To learn the vectors and matrices we minimize the following margin-based objective function: $\n\\mathcal {L} & = & \\sum _{\\begin{array}{c}(h,r,t) \\in \\mathcal {G} \\\\ (h^{\\prime },r,t^{\\prime }) \\in \\mathcal {G}^{\\prime }_{(h, r, t)}\\end{array}} [\\gamma + f_r(h, t) - f_r(h^{\\prime }, t^{\\prime })]_+\n$ ",
"where $[x]_+ = \\max (0, x)$ , $\\gamma $ is the margin hyper-parameter, $\\mathcal {G}$ is the training set consisting of correct triples, and $\\mathcal {G}^{\\prime }_{(h, r, t)} = \\lbrace (h^{\\prime }, r, t) \\mid h^{\\prime } \\in \\mathcal {E}, (h^{\\prime }, r, t) \\notin \\mathcal {G} \\rbrace \\cup \\lbrace (h, r,\nt^{\\prime }) \\mid t^{\\prime } \\in \\mathcal {E}, (h, r, t^{\\prime }) \\notin \\mathcal {G} \\rbrace $ is the set of incorrect triples generated by corrupting a correct triple $(h, r, t)\\in \\mathcal {G}$ .",
"We use Stochastic Gradient Descent (SGD) to minimize $\\mathcal {L}$ , and impose the following constraints during training: $\\Vert \\textbf {h}\\Vert _2 \\leqslant 1$ , $\\Vert \\textbf {r}\\Vert _2 \\leqslant 1$ , $\\Vert \\textbf {t}\\Vert _2 \\leqslant 1$ , $\\Vert \\textbf {W}_{r,1}\\textbf {h}\\Vert _2\n\\leqslant 1$ and $\\Vert \\textbf {W}_{r,2}\\textbf {t}\\Vert _2 \\leqslant 1$ ."
],
[
"Table 1 summarizes related embedding models for link prediction and KB completion. The models differ in the score functions $f_r(h, t)$ and the algorithms used to optimize the margin-based objective function, e.g., SGD, AdaGrad BIBREF21 , AdaDelta BIBREF22 and L-BFGS BIBREF23 .",
"DISTMULT BIBREF24 is based on a Bilinear model BIBREF14 , BIBREF15 , BIBREF25 where each relation is represented by a diagonal rather than a full matrix. The neural tensor network (NTN) model BIBREF4 uses a bilinear tensor operator to represent each relation while ProjE BIBREF26 could be viewed as a simplified version of NTN with diagonal matrices. Similar quadratic forms are used to model entities and relations in KG2E BIBREF27 , ComplEx BIBREF28 , TATEC BIBREF29 and RSTE BIBREF30 . In addition, HolE BIBREF31 uses circular correlation—a compositional operator—which could be interpreted as a compression of the tensor product.",
"The TransH model BIBREF17 associates each relation with a relation-specific hyperplane and uses a projection vector to project entity vectors onto that hyperplane. TransD BIBREF32 and TransR/CTransR BIBREF33 extend the TransH model using two projection vectors and a matrix to project entity vectors into a relation-specific space, respectively. TransD learns a relation-role specific mapping just as STransE, but represents this mapping by projection vectors rather than full matrices, as in STransE. The lppTransD model BIBREF34 extends TransD to additionally use two projection vectors for representing each relation. In fact, our STransE model and TranSparse BIBREF35 can be viewed as direct extensions of the TransR model, where head and tail entities are associated with their own projection matrices, rather than using the same matrix for both, as in TransR and CTransR.",
"Recently, several authors have shown that relation paths between entities in KBs provide richer information and improve the relationship prediction BIBREF36 , BIBREF37 , BIBREF18 , BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , BIBREF44 . In addition, NickelMTG15 reviews other approaches for learning from KBs and multi-relational data."
],
[
"For link prediction evaluation, we conduct experiments and compare the performance of our STransE model with published results on the benchmark WN18 and FB15k datasets BIBREF16 . Information about these datasets is given in Table 2 ."
],
[
"The link prediction task BIBREF8 , BIBREF15 , BIBREF16 predicts the head or tail entity given the relation type and the other entity, i.e. predicting $h$ given $(?, r, t)$ or predicting $t$ given $(h, r, ?)$ where $?$ denotes the missing element. The results are evaluated using the ranking induced by the score function $f_r(h,t)$ on test triples.",
"For each test triple $(h, r, t)$ , we corrupted it by replacing either $h$ or $t$ by each of the possible entities in turn, and then rank these candidates in ascending order of their implausibility value computed by the score function. This is called as the “Raw” setting protocol. For the “Filtered” setting protocol described in BIBREF16 , we removed any corrupted triples that appear in the knowledge base, to avoid cases where a correct corrupted triple might be ranked higher than the test triple. The “Filtered” setting thus provides a clearer view on the ranking performance. Following BIBREF16 , we report the mean rank and the Hits@10 (i.e., the proportion of test triples in which the target entity was ranked in the top 10 predictions) for each model. In addition, we report the mean reciprocal rank, which is commonly used in information retrieval. In both “Raw” and “Filtered” settings, lower mean rank, higher mean reciprocal rank or higher Hits@10 indicates better link prediction performance.",
"Following TransR BIBREF33 , TransD BIBREF32 , rTransE BIBREF37 , PTransE BIBREF36 , TATEC BIBREF29 and TranSparse BIBREF35 , we used the entity and relation vectors produced by TransE BIBREF16 to initialize the entity and relation vectors in STransE, and we initialized the relation matrices with identity matrices. We applied the “Bernoulli” trick used also in previous work for generating head or tail entities when sampling incorrect triples BIBREF17 , BIBREF33 , BIBREF27 , BIBREF32 , BIBREF36 , BIBREF34 , BIBREF35 . We ran SGD for 2,000 epochs to estimate the model parameters. Following NIPS20135071 we used a grid search on validation set to choose either the $l_1$ or $l_2$ norm in the score function $f$ , as well as to set the SGD learning rate $\\lambda \\in \\lbrace 0.0001, 0.0005, 0.001, 0.005, 0.01 \\rbrace $ , the margin hyper-parameter $\\gamma \\in \\lbrace 1, 3, 5 \\rbrace $ and the vector size $k\\in \\lbrace 50, 100 \\rbrace $ . The lowest filtered mean rank on the validation set was obtained when using the $l_1$ norm in $f$ on both WN18 and FB15k, and when $\\lambda = 0.0005, \\gamma = 5,\n\\text{ and } k = 50$ for WN18, and $\\lambda = 0.0001, \\gamma = 1,\n\\text{ and } k = 100$ for FB15k."
],
[
"Table 3 compares the link prediction results of our STransE model with results reported in prior work, using the same experimental setup. The first 15 rows report the performance of the models that do not exploit information about alternative paths between head and tail entities. The next 5 rows report results of the models that exploit information about relation paths. The last 3 rows present results for the models which make use of textual mentions derived from a large external corpus.",
"It is clear that the models with the additional external corpus information obtained best results. In future work we plan to extend the STransE model to incorporate such additional information. Table 3 also shows that the models employing path information generally achieve better results than models that do not use such information. In terms of models not exploiting path information or external information, the STransE model produces the highest filtered mean rank on WN18 and the highest filtered Hits@10 and mean reciprocal rank on FB15k. Compared to the closely related models SE, TransE, TransR, CTransR, TransD and TranSparse, our STransE model does better than these models on both WN18 and FB15k.",
"Following NIPS20135071, Table 4 analyzes Hits@10 results on FB15k with respect to the relation categories defined as follows: for each relation type $r$ , we computed the averaged number $a_h$ of heads $h$ for a pair $(r, t)$ and the averaged number $a_t$ of tails $t$ for a pair $(h, r)$ . If $a_h < 1.5$ and $a_t\n< 1.5$ , then $r$ is labeled 1-1. If $a_h$0 and $a_h$1 , then $a_h$2 is labeled M-1. If $a_h$3 and $a_h$4 , then $a_h$5 is labeled as 1-M. If $a_h$6 and $a_h$7 , then $a_h$8 is labeled as M-M. 1.4%, 8.9%, 14.6% and 75.1% of the test triples belong to a relation type classified as 1-1, 1-M, M-1 and M-M, respectively.",
"Table 4 shows that in comparison to prior models not using path information, STransE obtains the second highest Hits@10 result for M-M relation category at $(80.1\\% + 83.1\\%) / 2 = 81.6\\%$ which is 0.5% smaller than the Hits@10 result of TranSparse for M-M. However, STransE obtains 2.5% higher Hits@10 result than TranSparse for M-1. In addition, STransE also performs better than TransD for 1-M and M-1 relation categories. We believe the improved performance of the STransE model is due to its use of full matrices, rather than just projection vectors as in TransD. This permits STransE to model diverse and complex relation categories (such as 1-M, M-1 and especially M-M) better than TransD and other similiar models. However, STransE is not as good as TransD for the 1-1 relations. Perhaps the extra parameters in STransE hurt performance in this case (note that 1-1 relations are relatively rare, so STransE does better overall)."
],
[
"This paper presented a new embedding model for link prediction and KB completion. Our STransE combines insights from several simpler embedding models, specifically the Structured Embedding model BIBREF8 and the TransE model BIBREF16 , by using a low-dimensional vector and two projection matrices to represent each relation. STransE, while being conceptually simple, produces highly competitive results on standard link prediction evaluations, and scores better than the embedding-based models it builds on. Thus it is a suitable candidate for serving as future baseline for more complex models in the link prediction task.",
"In future work we plan to extend STransE to exploit relation path information in knowledge bases, in a manner similar to lin-EtAl:2015:EMNLP1, guu-miller-liang:2015:EMNLP or NguyenCoNLL2016."
],
[
"This research was supported by a Google award through the Natural Language Understanding Focused Program, and under the Australian Research Council's Discovery Projects funding scheme (project number DP160102156).",
"NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. The first author is supported by an International Postgraduate Research Scholarship and a NICTA NRPA Top-Up Scholarship."
]
],
"section_name": [
"Introduction",
"Our approach",
"Related work",
"Experiments",
"Task and evaluation protocol",
"Main results",
"Conclusion and future work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1c1dfad3a62e0b5a77ea7279312f43e2b0f155c0"
],
"answer": [
{
"evidence": [
"Let $\\mathcal {E}$ denote the set of entities and $\\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \\in \\mathcal {E}$ and $r \\in \\mathcal {R}$ , the STransE model defines a score function $f_r(h, t)$ of its implausibility. Our goal is to choose $f$ such that the score $f_r(h,t)$ of a plausible triple $(h,r,t)$ is smaller than the score $f_{r^{\\prime }}(h^{\\prime },t^{\\prime })$ of an implausible triple $\\mathcal {R}$0 . We define the STransE score function $\\mathcal {R}$1 as follows:",
"$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $",
"using either the $\\ell _1$ or the $\\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\\ell _1$ norm gave slightly better results). To learn the vectors and matrices we minimize the following margin-based objective function: $ \\mathcal {L} & = & \\sum _{\\begin{array}{c}(h,r,t) \\in \\mathcal {G} \\\\ (h^{\\prime },r,t^{\\prime }) \\in \\mathcal {G}^{\\prime }_{(h, r, t)}\\end{array}} [\\gamma + f_r(h, t) - f_r(h^{\\prime }, t^{\\prime })]_+ $"
],
"extractive_spans": [
"$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $"
],
"free_form_answer": "",
"highlighted_evidence": [
"We define the STransE score function $\\mathcal {R}$1 as follows:\n\n$ f_r(h, t) & = & \\Vert \\textbf {W}_{r,1}\\textbf {h} + \\textbf {r} - \\textbf {W}_{r,2}\\textbf {t}\\Vert _{\\ell _{1/2}} $\n\nusing either the $\\ell _1$ or the $\\ell _2$ -norm (the choice is made using validation data; in our experiments we found that the $\\ell _1$ norm gave slightly better results)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"97bb27301de49b9136971207ffed30e1f9e2e8eb"
],
"answer": [
{
"evidence": [
"As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion. We expect that the STransE will also be able to serve as the basis for extended models that exploit a wider variety of information sources, just as TransE does."
],
"extractive_spans": [],
"free_form_answer": "WN18, FB15k",
"highlighted_evidence": [
"As we show below, STransE performs better than the SE and TransE models and other state-of-the-art link prediction models on two standard link prediction datasets WN18 and FB15k, so it can serve as a new baseline for KB completion."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What scoring function does the model use to score triples?",
"What datasets are used to evaluate the model?"
],
"question_id": [
"8d258899e36326183899ebc67aeb4188a86f682c",
"955ca31999309685c1daa5cb03867971ca99ec52"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction",
"link prediction"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: The score functions fr(h, t) and the optimization methods (Opt.) of several prominent embedding models for KB completion. In all of these the entities h and t are represented by vectors h and t ∈ Rk respectively.",
"Table 2: Statistics of the experimental datasets used in this study (and previous works). #E is the number of entities, #R is the number of relation types, and #Train, #Valid and #Test are the numbers of triples in the training, validation and test sets, respectively.",
"Table 3: Link prediction results. MR and H10 denote evaluation metrics of mean rank and Hits@10 (in %), respectively. “NLFeat” abbreviates Node+LinkFeat. The results for NTN (Socher et al., 2013) listed in this table are taken from Yang et al. (2015) since NTN was originally evaluated on different datasets. The results marked with + are obtained using the optimal hyper-parameters chosen to optimize Hits@10 on the validation set; trained in this manner, STransE obtains a mean rank of 244 and Hits@10 of 94.7% on WN18, while producing the same results on FB15k.",
"Table 4: Hits@10 (in %) by the relation category on FB15k. “Unstr.” abbreviates Unstructured."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png"
]
} | [
"What datasets are used to evaluate the model?"
] | [
[
"1606.08140-Introduction-5"
]
] | [
"WN18, FB15k"
] | 295 |
1901.02257 | Multi-Perspective Fusion Network for Commonsense Reading Comprehension | Commonsense Reading Comprehension (CRC) is a significantly challenging task, aiming at choosing the right answer for the question referring to a narrative passage, which may require commonsense knowledge inference. Most of the existing approaches only fuse the interaction information of choice, passage, and question in a simple combination manner from a \emph{union} perspective, which lacks the comparison information on a deeper level. Instead, we propose a Multi-Perspective Fusion Network (MPFN), extending the single fusion method with multiple perspectives by introducing the \emph{difference} and \emph{similarity} fusion\deleted{along with the \emph{union}}. More comprehensive and accurate information can be captured through the three types of fusion. We design several groups of experiments on MCScript dataset \cite{Ostermann:LREC18:MCScript} to evaluate the effectiveness of the three types of fusion respectively. From the experimental results, we can conclude that the difference fusion is comparable with union fusion, and the similarity fusion needs to be activated by the union fusion. The experimental result also shows that our MPFN model achieves the state-of-the-art with an accuracy of 83.52\% on the official test set. | {
"paragraphs": [
[
"Content: Task Definition",
"1. Describe the task of commonsense reading comprehension(CRC) belongs to which filed and how important it is.",
"2. Define the task of CRC",
"3. Data feature of CRC",
"4. Figure 1 shows an example.",
"Machine Reading Comprehension (MRC) is an extremely challenging topic in natural language processing field. It requires a system to answer the question referring to a given passage.no matter whether the answer is mentioned in the passage. MRC consists of several sub-tasks, such as cloze-style reading comprehension, span-extraction reading comprehension, and open-domain reading comprehension. Most of existing datasets emphasize the question whose answer is mentioned in the passage since it does not need any commonsense. In real reading comprehension, the human reader can fully understand the passage with the prior knowledge to answer the question. To directly relate commonsense knowledge to reading comprehension, SemEval2018 Task 11 defines a new sub-task called Commonsense Reading Comprehension, aiming at answering the questions that requires both commonsense knowledge and the understanding of the passage. The challenge of this task is how tolies in answer questions requires a system to draw inferences from multiple sentences from the passage and requireswith the commonsense knowledge that does not appear in the passage explicitly. Table 1 shows an example of CRC."
],
[
"Content: Previous Research",
"1. Category the methods in SemEval2018 task 11",
"2. Describe the first method",
"3. Describe the second method",
"4. State that your work is belong to which method",
"Most studies on CRC task are neural network based (NN-based) models, which typically have the following characteristics. Firstly, word representations are augmented by additional lexical information. , such as pre-trained embedding, POS and NER embedding, Relation embedding and some other handcraft features. Secondly, the interaction process is usually implemented by the attention mechanism, which can provide the interaction representations like choice-aware passage, choice-aware question, and question-aware passage. Thirdly, the original representations and interaction representations are fused together and then aggregated by a Bidirectional Long Short-Term Memory Network (BiLSTM) BIBREF1 to get high-order semantic information. Fourthly, the final output based on their bilinear interactions. is the sum scores of passage to choice and question to choice.",
"The NN-based models have shown powerfulness on this task. However, there are still some limitations. Firstly, the two fusion processes of passage and question to choice are implemented separately, until producing the final output. Secondly, the existing fusion method used in reading comprehension task is usually implemented by concatenation BIBREF2 , BIBREF3 , which is monotonous and cannot capture the partial comparison information between two parts. Studies on Natural Language Inference (NLI) have explored more functions BIBREF4 , BIBREF5 , such as element-wise subtraction and element-wise multiplication, to capture more comparison information, which have been proved to be effective.",
"In this paper, we introduce a Muti-Perspective Fusion Network (MPFN) to tackle these limitations. The model can fuse the choice with passage and question simultaneously to get a multi-perspective fusion representation. Furthermore, inspired by the element-wise subtraction and element-wise multiplication function used in BIBREF5 , we define three kinds of fusion functions from multiple perspectives to fuse choice, choice-aware passage, and choice-aware question. The three fusions are union fusion, difference fusion, and similarity fusion. Note that, we name the concatenation fusion method as union fusion in this paper, which collects the global information. The difference fusion and the similarity fusion can discover the different parts and similar parts among choice, choice-aware passage, and choice-aware question respectively.",
"MPFN comprises an encoding layer, a context fusion layer, and an output layer. In the encoding layer, we employ a BiLSTM as the encoder to obtain context representations. to convert the embeddings of passage, question, and choice to their corresponding context embeddings. To acquire better semantic representations, we apply union fusion in the word level. to choice, choice-aware passage embedding, and choice-aware question embedding. In the context fusion layer, we apply union fusion, difference fusion, and similarity fusion to obtain a multi-perspective fusion representation. In the output layer, a self-attention and a feed-forward neural network are used to make the final prediction.",
"We conduct experiments on MRScript dataset released by BIBREF0 . Our single and ensemble model achieve the accuracy of 83.52% and 84.84% on the official test set respectively. Our main contributions are as follows:",
"We propose a general fusion framework with two-layer fusion, which can fuse the passage, question, and choice simultaneously.",
"To collect multi-perspective fusion representations, we define three types of fusions, consisting of union fusion, difference fusion, and similarity fusion.",
"We extend the fusion method to multi-perspective to obtain deeper understanding of the passage, question, and choice.",
"We design several groups of experiments to evaluate the effectiveness of the three types of fusion and prove that our MPFN model outperforms all the other models. with an accuracy of 83.52%."
],
[
"MRC has gained significant popularity over the past few years. Several datasets have been constructed for testing the comprehension ability of a system, such as MCTest BIBREF6 , SQuAD BIBREF7 , BAbI BIBREF8 , TriviaQA BIBREF9 , RACE BIBREF10 , and NewsQA BIBREF11 . The types of passage, question and answer of these datasets are various. Each dataset focuses on one specific aspect of reading comprehension. Particularly, the MCScript BIBREF0 dataset concerns answering the question which requires using commonsense knowledge.",
"including Wikipedia articles, examinations, narrative stories, news articles. Answering questions in these datasets. Meanwhile, the question types and answer types vary differently. The answer type multiple choice, span-answer, exact match",
"Many architectures on MRC follow the process of representation, attention, fusion, and aggregation BIBREF12 , BIBREF2 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . BiDAF BIBREF12 fuses the passage-aware question, the question-aware passage, and the original passage in context layer by concatenation, and then uses a BiLSTM for aggregation. The fusion levels in current advanced models are categorized into three types by BIBREF14 , including word-level fusion, high-level fusion, and self-boosted fusion. They further propose a FusionNet to fuse the attention information from bottom to top to obtain a fully-aware representation for answer span prediction.",
" BIBREF16 present a DFN model to fuse the passage, question, and choice by dynamically determine the attention strategy.",
"On SemEval2018 Task 11, most of the models use the attention mechanism to build interactions among the passage, the question, and the choice BIBREF17 , BIBREF3 , BIBREF18 , BIBREF19 . The most competitive models are BIBREF17 , BIBREF3 , and both of them employ concatenation fusion to integrate the information. BIBREF17 utilizes choice-aware passage and choice-aware question to fuse the choice in word level. In addition, they apply the question-aware passage to fuse the passage in context level. Different from BIBREF17 , both the choice-aware passage and choice-aware question are fused into choice in the context level in BIBREF3 , which is the current state-of-the-art result on the MCSript dataset.",
"On NLI task, fusing the premise-aware hypothesis into the hypothesis is an effective and commonly-used method. BIBREF20 , BIBREF21 leverage the concatenation of the hypothesis and the hypothesis-aware premise to help improve the performance of their model. The element-wise subtraction and element-wise multiplication between the hypothesis and the hypothesis-aware premise are employed in BIBREF5 to enhance the concatenation. and further achieved the state-of-the-art results on Stanford Natural Language Inference BIBREF22 benchmark.",
"Almost all the models on CRC only use the union fusion. In our MPFN model, we design another two fusion methods to extend the perspective of fusion. We evaluate the MPFN model on MRC task and achieve the state-of-the-art result."
],
[
"The overview of our Multi-Perspective Fusion Network (MPFN) is shown in Fig. 1 . Given a narrative passage about a series of daily activities and several corresponding questions, a system requires to select a correct choice from two options for each question. In this paper, we denote $\\bf {p=\\lbrace p_1,p_2,...,p_{|p|}\\rbrace }$ as the passage, $\\bf {q=\\lbrace q_1,q_2,...,q_{|q|}\\rbrace }$ as a question, $\\bf {c=\\lbrace c_1,c_2,...,c_{|c|}\\rbrace }$ as one of the candidate choice, and a true label $y^{*} \\in \\lbrace 0,1\\rbrace $ . Our model aims to compute a probability for each choice and take the one with higher probability as the prediction label. Our model consists of three layers: an encoding layer, a context fusion layer, and an output layer. The details of each layer are described in the following subsections."
],
[
"This layer aims to encode the passage embedding $p$ , the question embedding $q$ , and the choice embedding $c$ into context embeddings. Specially, we use a one-layer BiLSTM as the context encoder. ",
"$$&\\bar{c}_i = \\text{BiLSTM}(c, i) , & i \\in [1,2, \\cdots ,|c|] \\\\\n&\\bar{p}_j = \\text{BiLSTM}(p, j) , & j \\in [1,2, \\cdots ,|p|] \\\\\n&\\bar{q}_k = \\text{BiLSTM}(q, k) , & k \\in [1,2, \\cdots ,|q|] $$ (Eq. 18) ",
"The embeddings of $p$ , $q$ and $c$ are semantically rich word representations consisting of several kinds of embeddings. Specifically, the embeddings of passage and question are the concatenation of the Golve word embedding, POS embedding, NER embedding, Relation embedding and Term Frequency feature. And the embeddings of choice comprise the Golve word embedding, the choice-aware passage embedding, $c^p$ and choice-aware question embedding $c^q$ . The details about each embedding are follows:",
"Glove word embedding We use the 300-dimensional Glove word embeddings trained from 840B Web crawl data BIBREF23 . The out-of-vocabulary words are initialized randomly. The embedding matrix are fixed during training.",
"POS&NER embedding We leverage the Part-of-Speech (POS) embeddings and Named-Entity Recognition(NER) embeddings. The two embeddings $c_i^{pos} \\text{and} c_i^{ner}$ are randomly initialized to 12d and 8d respectively, and updated during training.",
"Relation embedding Relations are extracted form ConceptNet. For each word in the choice, if it satisfies any relation with another word in the passage or the question, the corresponding relation will be taken out. If the relations between two words are multiple, we just randomly choose one. The relation embeddings $c_i^{rel}$ are generated in the similar way of POS embeddings. randomly initialized and updated during training as well.",
"Term Frequency Following BIBREF17 , we introduce the term frequency feature to enrich the embedding of each word. The calculation is based on English Wikipedia.",
"Choice-aware passage embedding The information in the passage that is relevant to the choice can help encode the choice BIBREF24 . To acquire the choice-aware passage embedding $c_i^p$ , we utilize dot product between non-linear mappings of word embeddings to compute the attention scores for the passage BIBREF25 . ",
"$$& c_i^p = Attn(c_i,\\lbrace p_j\\rbrace _1^{|p|}) = \\sum _{j=1}^{|p|} {\\alpha }_{ij} p_j \\\\\n& {\\alpha }_{ij} \\propto exp(S(c_i, p_j)), \\quad S(c_i, p_j) = {ReLU(W{c_i})}^{T} ReLU(W {p_j})$$ (Eq. 19) ",
"Choice-aware question embedding The choice relevant question information is also important for the choice. Therefore, we adopt the similar attention way as above to get the choice-aware question embedding $c_i^q=Attn(c_i, \\lbrace q_k\\rbrace _{1}^{|q|})$ .",
"The embeddings delivered to the BiLSTM are the concatenation the above components, where $p_j = [p_j^{glove}, p_j^{pos},p_j^{ner},p_j^{rel}, p_j^{tf} ]$ , $c_i = [c_i^{glove}, c_i^{p},c_i^{q}]$ , and $q_k = [q_k^{glove}, q_k^{pos}, q_k^{ner}, q_k^{rel},q_k^{tf} ]$ ."
],
[
"This is the core layer of our MPFN model. To take the union, different and similar information of the choice, passage, and question into consideration, three fusion functions are defined in this layer. In this layer, we define three fusion functions, which consider the union information, the different information, and the similar information of the choice, passage, and question.",
"Since we have obtained the choice context $\\bar{c}_i$ , the passage context $\\bar{p}_j$ , and the question context $\\bar{q}_k$ in the encoding layer, we can calculate the choice-aware passage contexts $\\tilde{c}^p_i$ and choice-aware question contexts $\\tilde{c}^q_i$ . Then we deliver them together with the choice contexts $\\bar{c}_i$ to the three fusion functions.",
"In this layer, we define three fusion functions to fuse the $\\bar{c}_i$ , $\\tilde{c}^p_j$ , and $\\bar{c}^q_k$ simultaneously and multi-perspectively. The three fusion functions take the union information, the different information, and the similar information of the choice, passage, and question into consideration. To better integrate this information, we feed the three fusion outputs to FNN for aggregation.",
"Choice-aware passage context In this part, we calculate the choice-aware passage representations $\\tilde{c}_i^p= \\sum _{j}{\\beta }_{ij} \\bar{p}_j$ . For model simplification, here we use dot product between choice contexts and passage contexts to compute the attention scores ${\\beta }_{ij}$ : ",
"$$&{\\beta }_{ij}= \\frac{exp({\\bar{c}_i^T \\bar{p}_j)}}{\\sum \\limits {_{j^\\prime =1}^{|p|}exp(\\bar{c}_i^T \\bar{p}_{j^\\prime })}}$$ (Eq. 21) ",
"Choice-aware question context In a similar way as above, we get the choice-aware question context $\\tilde{c}_i^q= \\sum _{j}{\\beta }_{ik} \\bar{q}_k$ . The ${\\beta }_{ik}$ is the dot product of the choice context $\\bar{c}_i$ and question context $\\bar{q}_k$ .",
"Multi-perspective Fusion This is the key module in our MPFN model. The goal of this part is to produce multi-perspective fusion representation for the choice $\\bar{c}_i$ , the choice-aware passage $\\tilde{c}^p_i$ , and the choice-aware question $\\tilde{c}^q_i$ . In this paper, we define fusion in three perspectives: union, difference, and similarity. Accordingly, we define three fusion functions to describe the three perspectives. The outputs and calculation of the three functions are as follows: : concatenation $;$ , element-wise dot product and element-wise subtraction. $f^u$ , $f^d$ , and $f^s$ All of the three fusion functions take the choice context, the choice-aware passage, and the choice-aware question as input. ",
"$$&u_i = [\\bar{c}_i \\, ; \\tilde{c}_i^p \\,; \\tilde{c}^q_i] ,\\\\\n&d_i = ( \\bar{c}_i - \\tilde{c}_i^p)\\odot (\\bar{c_i} - \\tilde{c}_i^q) ,\\\\\n&s_i = \\bar{c}_i \\odot \\tilde{c}_i^p \\odot \\tilde{c}_i^q ,$$ (Eq. 22) ",
" where $; \\,$ , $-$ , and $\\odot $ represent concatenation, element-wise subtraction, and element-wise multiplication respectively. And $u_i$ , $d_i$ , and $s_i$ are the representations from the union, difference and similarity perspective respectively.",
"The union perspective is commonly used in a large bulk of tasks BIBREF21 , BIBREF14 , BIBREF2 . It can see the whole picture of the passage, the question, and the choice by concatenating the $\\tilde{c}^p_i$ and $\\tilde{c}^q_i$ together with $c_i$ . While the difference perspective captures the different parts between choice and passage, and the difference parts between choice and question by $\\bar{c_i} - \\tilde{c}_i^p$ and $\\bar{c_i} - \\tilde{c}_i^q$ respectively. The $\\odot $ in difference perspective can detect the two different parts at the same time and emphasize them. In addition, the similarity perspective is capable of discovering the similar parts among the passage, the question, and the choice.",
"To map the three fusion representations to lower and same dimension, we apply three different FNNs with the ReLU activation to $u_i$ , $d_i$ , and $s_i$ . The final output $g_i$ is the concatenation of the results of the three FNNs, which represents a global perspective representation. ",
"$$g_i=[f^u(u_i),f^d(d_i),f^s(s_i)] $$ (Eq. 23) "
],
[
" The output layer includes a self-attention layer and a prediction layer. Following BIBREF26 , we summarize the global perspective representation $\\lbrace g_i\\rbrace _1^{|c|}$ to a fixed length vector $r$ . We compute the $r= \\sum _{i=1}^{|c|} b_i g_i$ , where $b_j$ is the self-weighted attention score : ",
"$$&b_i = \\frac{exp(W{g}_i)}{\\sum \\limits {_{i^\\prime =1}^{|c|}exp(W {g}_{i^\\prime })}}$$ (Eq. 25) ",
"In the prediction layer, we utilize the output of self-attention $r$ to make the final prediction.",
"The final output y is obtained by transforming the $\\mathbf {v}$ to a scalar and then apply a sigmoid activation to map it to a probability."
],
[
"Data We conduct experiments on the MCScript BIBREF0 , which is used as the official dataset of SemEval2018 Task11. This dataset constructs a collection of text passages about daily life activities and a series of questions referring to each passage, and each question is equipped with two answer choices. The MCScript comprises 9731, 1411, and 2797 questions in training, development, and test set respectively. For data preprocessing, we use spaCy for sentence tokenization, Part-of-Speech tagging, and Name Entity Recognization. The relations between two words are generated by ConceptNet. The MCScript is a recently released dataset, which collects 2,119 narrative texts about daily events along with 13,939 questions. In this dataset, 27.4% questions require commonsense inference.",
"Parameters We use the standard cross-entropy function as the loss function. We choose Adam BIBREF27 with initial momentums for parameter optimization. As for hyper-parameters, we set the batch size as 32, the learning rate as 0.001, the dimension of BiLSTM and the hidden layer of FNN as 123. The embedding size of Glove, NER, POS, Relation are 300, 8, 12, 10 respectively. The dropout rate of the word embedding and BiLSTM output are 0.386 and 0.40 respectively."
],
[
"Table 2 shows the results of our MPFN model along with the competitive models on the MCScript dataset. The TriAN achieves 81.94% in terms of test accuracy, which is the best result of the single model. The best performing ensemble result is 84.13%, provided by HMA, which is the voting results of 7 single systems.",
"Our single MPFN model achieves 83.52% in terms of accuracy, outperforming all the previous models. The model exceeds the HMA and TriAN by approximately 2.58% and 1.58% absolute respectively. Our ensemble model surpasses the current state-of-the-art model with an accuracy of 84.84%. We got the final ensemble result by voting on 4 single models. Every single model uses the same architecture but different parameters."
],
[
"To study the effectiveness of each perspective, we conduct several experiments on the three single perspectives and their combination perspective. Table 3 presents their comparison results. The first group of models are based on the three single perspectives, and we can observe that the union perspective performs best compared with the difference and similarity perspective. Moreover, the union perspective achieves 82.73% in accuracy, exceeding the TriAN by 0.79% absolute. We can also see that the similarity perspective is inferior to the other two perspectives.",
"The second group of models in the Table 3 are formed from two perspectives. Compared with the single union perspective, combining the difference perspective with the union perspective can improve 0.11%. Composing union and similarity fusion together doesn't help the training. To our surprise, the combination of similarity perspective and difference perspective obtains 83.09% accuracy score.",
"The last model is our MPFN model, which performing best. The final result indicates that composing the union perspective, difference perspective, and similarity perspective together to train is helpful.",
"Many advanced models employ a BiLSTM to further aggregate the fusion results. To investigate whether a BiLSTM can assist the model, we apply another BiLSTM to the three fusion representations in Formula 23 respectively and then put them together. The results are shown in the second column in Table 3 , which indicate that the BiLSTM does not help improve the performance of the models."
],
[
"In the section, we conduct ablation study on the encoding inputs to examine the effectiveness each component. The experiment results are listed in Table 3 . In Section \"Encoding Layer\" , we describe that our encoding inputs comprise six components: POS embedding, NER embedding, Relation embedding, Term Frequency, choice-aware passage embedding $C^p$ and choice-aware question embedding $C^q$ .",
"From the best model, if we remove the POS embedding and NER embedding, the accuracy drops by 0.82% and 0.9%. Without Relation embedding, the accuracy drops to 81.98%, revealing that the external relations are helpful to the context fusions. Without Term Frequency, the accuracy drops by approximately 1.61%. This behavior suggests that the Term Frequency feature has a powerful capability to guide the model.",
"After removing the $C^p$ , we find the performance degrades to 81.62%. This demonstrates that information in the passage is significantly important to final performance. If we remove $C^q$ from the MPFN, the accuracy drops to 82.16%. If we remove the word level fusion completely, we will obtain an 81.66% accuracy score. These results demonstrate that each component is indispensable and the bottom embeddings are the basic foundations of the top layer fusions."
],
[
"In this section, we explore the influence of word-level interaction to each perspective. Fig 2 reports the overall results of how each perspective can be affected by the lower level interaction. The $C^p$ and the $C^q$ represent the choice-aware passage embedding and the choice-aware question embedding respectively. We can observe that the results of $[C;C^p]$ , $[C;C^q]$ , and $[C;C^p;C^q]$ are all higher than the result of $C$ alone, indicating the effectiveness of word embedding interaction.",
"Both the union fusion and difference fusion can achieve more than 80% accuracy, while the similarity fusion is very unstable. We also observe that the difference fusion is comparable with the union fusion, which even works better than the union fusion when the information of $C^p$ is not introduced into the input of encoding. The similarity fusion performs poorly in $C$ and $[C;C^q]$ , while yielding a huge increase in the remaining two groups of experiments, which is an interesting phenomenon. We infer that the similarity fusion needs to be activated by the union fusion.",
"In summary, we can conclude that integrate the information of $C^p$ into $C$ can greatly improve the performance of the model. Combining $C^q$ together with $C^p$ can further increase the accuracy. The information in the passage is richer than the question The overall conclusion"
],
[
"In this section, we visualize the union and difference fusion representations and show them in Fig 3 . And, we try to analyze their characteristics and compare them to discover some connections. The values of similarity fusion are too small to observe useful information intuitively, so we do not show it here. We use the example presented in Table 1 for visualization, where the question is Why didn't the child go to bed by themselves? and the corresponding True choice is The child wanted to continue playing.",
"The left region in Fig 3 is the union fusion. The most intuitive observation is that it captures comprehensive information. The values of child, wanted, playing are obvious higher than other words. This is consistent with our prior cognition, because the concatenation operation adopted in union fusion does not lose any content. While the difference union shows in the right region in Fig 3 focuses on some specific words. By further comparison, we find that the difference fusion can pay attention to the content ignored by the union fusion. What's more, the content acquired by the union would not be focused by the difference again. In other words, the union fusion and difference fusion indeed can emphasize information from the different perspective. Due to space limitation and"
],
[
"In this paper, we propose the Multi-Perspective Fusion Network (MPFN) for the Commonsense Reading Comprehension (CMC) task. We propose a more general framework for CRC by designing the difference and similarity fusion to assist the union fusion. Our MPFN model achieves an accuracy of 83.52% on MCScript, outperforming the previous models. The experimental results show that union fusion based on the choice-aware passage, the choice-aware question, and the choice can surpass the TriAN and HMA model. The difference fusion performs stably, which is comparable with the union fusion. We find that the word-level union fusion can significantly influence the context-level fusion. The choice-aware passage word embedding can activate the similarity fusion. We find that combining the similar parts and the difference parts together can obtain the best performance among the two-perspective models. By taking the three types of fusion methods into consideration, our MPFN model achieves a state-of-the-art result."
],
[
"This work is funded by Beijing Advanced Innovation for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (17PT05), the Natural Science Foundation of China (61300081), and the Graduate Innovation Fund of BLCU (No.18YCX010)."
]
],
"section_name": [
"paragraph 1",
"paragraph 2",
"Related Work",
"Model",
"Encoding Layer",
"Context Fusion Layer",
"Output Layer",
"Experimental Settings",
"Experimental Results",
"Discussion of Multi-Perspective",
"Encoding Inputs Ablation",
"Influence of Word-level Interaction",
"Visualization",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"1cbbd80eee1c4870bf7827e2e3bb278186731b7d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Experimental Results of Models"
],
"extractive_spans": [],
"free_form_answer": "SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Experimental Results of Models"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"What baseline models do they compare against?"
],
"question_id": [
"3aa7173612995223a904cc0f8eef4ff203cbb860"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"reading comprehension"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Fig. 1: Architecture of our MPFN Model.",
"Table 2: Experimental Results of Models",
"Table 3: Test Accuracy of Multi-Perspective",
"Fig. 2: Influence of Word-level Interaction.",
"Fig. 3: Visualization of Fusions"
],
"file": [
"4-Figure1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Figure2-1.png",
"10-Figure3-1.png"
]
} | [
"What baseline models do they compare against?"
] | [
[
"1901.02257-7-Table2-1.png"
]
] | [
"SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)"
] | 297 |
1710.01507 | Identifying Clickbait: A Multi-Strategy Approach Using Neural Networks | Online media outlets, in a bid to expand their reach and subsequently increase revenue through ad monetisation, have begun adopting clickbait techniques to lure readers to click on articles. The article fails to fulfill the promise made by the headline. Traditional methods for clickbait detection have relied heavily on feature engineering which, in turn, is dependent on the dataset it is built for. The application of neural networks for this task has only been explored partially. We propose a novel approach considering all information found in a social media post. We train a bidirectional LSTM with an attention mechanism to learn the extent to which a word contributes to the post's clickbait score in a differential manner. We also employ a Siamese net to capture the similarity between source and target information. Information gleaned from images has not been considered in previous approaches. We learn image embeddings from large amounts of data using Convolutional Neural Networks to add another layer of complexity to our model. Finally, we concatenate the outputs from the three separate components, serving it as input to a fully connected layer. We conduct experiments over a test corpus of 19538 social media posts, attaining an F1 score of 65.37% on the dataset bettering the previous state-of-the-art, as well as other proposed approaches, feature engineering or otherwise. | {
"paragraphs": [
[
"The Internet provides instant access to a wide variety of online content, news included. Formerly, users had static preferences, gravitating towards their trusted sources, incurring an unwavering sense of loyalty. The same cannot be said for current trends since users are likely to go with any source readily available to them.",
"In order to stay in business, news agencies have switched, in part, to a digital front. Usually, they generate revenue by (1) advertisements on their websites, or (2) a subscription based model for articles that might interest users. However, since the same information is available via multiple sources, no comment can be made on the preference of the reader. To lure in more readers and increase the number of clicks on their content, subsequently increasing their agency's revenue, writers have begun adopting a new technique - clickbait.",
"The concept of clickbait is formalised as something to encourage readers to click on hyperlinks based on snippets of information accompanying it, especially when those links lead to content of dubious value or interest. Clickbaiting is the intentional act of over-promising or purposely misrepresenting - in a headline, on social media, in an image, or some combination - what can be expected while reading a story on the web. It is designed to create and, consequently, capitalise on the Loewenstein information gap BIBREF0 . Sometimes, especially in cases where such headlines are found on social media, the links can redirect to a page with an unoriginal story which contains repeated or distorted facts from the original article itself.",
"Our engine is built on three components. The first leverages neural networks for sequential modeling of text. Article title is represented as a sequence of word vectors and each word of the title is further converted into character level embeddings. These features serve as input to a bidirectional LSTM model. An affixed attention layer allows the network to treat each word in the title in a differential manner. The next component focuses on the similarity between the article title and its actual content. For this, we generate Doc2Vec embeddings for the pair and act as input for a Siamese net, projecting them into a highly structured space whose geometry reflects complex semantic relationships. The last part of this system attempts to quantify the similarity of the attached image, if any, to the article title. Finally, the output of each component is concatenated and sent as input to a fully connected layer to generate a score for the task."
],
[
"The task of automating clickbait detection has risen to prominence fairly recently. Initial attempts for the same have worked on (1) news headlines, and (2) heavy feature engineering for the particular dataset. BIBREF1 's work is one of the earliest pieces of literature available in the field, focusing on an aggregation of news headlines from previously categorised clickbait and non-clickbait sources. Apart from defining different types of clickbait, they emphasise on the presence of language peculiarities exploited by writers for this purpose. These include qualitative informality metrics and use of forward references in the title to keep the reader on the hook. The first instance of detecting clickbait across social media can be traced to BIBREF2 , hand-crafting linguistic features, including a reference dictionary of clickbait phrases, over a dataset of crowdsourced tweets BIBREF3 . However, BIBREF4 argued that work done specifically for Twitter had to be expanded since clickbait was available throughout the Internet, and not just social networks.",
"It was not until BIBREF5 that neural networks were tried out for the task as the authors used the same news dataset as BIBREF4 to develop a deep learning based model to detect clickbait. They used distributional semantics to represent article titles, and BiLSTM to model sequential data and its dependencies. Since then, BIBREF6 has also experimented with Twitter data BIBREF3 deploying a BiLSTM for each of the textual features (post-text, target-title, target-paragraphs, target-description, target-keywords, post-time) available in the corpus, and finally concatenating the dense output layers of the network before forwarding it to a fully connected layer. Since it was proposed in BIBREF7 , the attention mechanism has been used for a variety of text-classification tasks, such as fake news detection and aspect-based sentiment analysis. BIBREF8 used a self-attentive BiGRU to infer the importance of tweet tokens in predicting the annotation distribution of the task.",
"One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task."
],
[
"In this section, we present our hybrid approach to clickbait detection. We first explain the three individual components followed by their fusion, which is our proposed model. These components are (1) BiLSTM with attention, (2) Siamese Network on Text Embeddings, and (3) Siamese Network on Visual Embeddings. An overview of the architecture can be seen in Figure 1.",
"We start with an explanation of the features used in the first component of the model.",
"Distributed Word Embeddings",
"Considering the effectiveness of distributional semantics in modeling language data, we use a pre-trained 300 dimensional Word2Vec BIBREF9 model trained over 100 billion words in the Google News corpus using the Continuous Bag of Words architecture. These map the words in a language to a high dimensional real-valued vectors to capture hidden semantic and syntactic properties of words, and are typically learned from large, unannotated text corpora. For each word in the title, we obtain its equivalent Word2Vec embeddings using the model described above.",
"Character Level Word Embeddings",
"Character level word embeddings BIBREF10 capture the orthographic and morphological features of a word. Apart from this, using them is a step toward mitigating the problem of out-of-vocabulary (OoV) words. In such a case, the word can be embedded by its characters using character level embedding. We follow BIBREF5 and first initialize a vector for every character in the corpus. The vector representation of each word is learned by applying 3 layers of a 1-dimensional Convolutional Neural Network BIBREF11 with ReLU non-linearity on each vector of character sequence of that word and finally max-pooling the sequence for each convolutional feature.",
"Document Embeddings",
"Doc2Vec BIBREF12 is an unsupervised approach to generate vector representations for slightly larger bodies of text, such as sentences, paragraphs and documents. It has been adapted from Word2Vec BIBREF9 which is used to generate vectors for words in large unlabeled corpora. The vectors generated by this approach come handy in tasks like calculating similarity metrics for sentences, paragraphs and documents. In sequential models like RNNs, the word sequence is captured in the generated sentence vectors. However, in Doc2Vec, the representations are order independent. We use GenSim BIBREF13 to learn 300 dimensional Doc2Vec embeddings for each target description and post title available.",
"Pre-trained CNN Features",
"As seen in various visual understanding problems recently, image descriptors trained using Convolutional Neural Networks over large amounts of data such as ImageNet have proven to be very effective. The implicit learning of spatial layout and object semantics in the later layers of the network from very large datasets has contributed to the success of these features. We use a pre-trained network of VGG-19 architecture BIBREF14 trained over the ImageNet database (ILSVRC-2012) and extract CNN features. We use the output of the fully-connected layer (FC7), which has 4096 dimensions, as feature representations for our architecture.",
"We now go into detail about the components of the model, individual and combined, and how the parameters are learned."
],
[
"Recurrent Neural Network (RNN) is a class of artificial neural networks which utilizes sequential information and maintains history through its intermediate layers. A standard RNN has an internal state whose output at every time-step which can be expressed in terms of that of previous time-steps. However, it has been seen that standard RNNs suffer from a problem of vanishing gradients BIBREF15 . This means it will not be able to efficiently model dependencies and interactions between words that are a few steps apart. LSTMs are able to tackle this issue by their use of gating mechanisms. For each record in the dataset, the content of the post as well as the content of the related web page is available. We convert the words from the title of both attributes into the previously mentioned types of embeddings to act as input to our bidirectional LSTMs.",
" $(\\overrightarrow{h}_1, \\overrightarrow{h}_2, \\dots , \\overrightarrow{h}_R)$ represent forward states of the LSTM and its state updates satisfy the following equations: ",
"$$\\big [\\overrightarrow{f_t},\\overrightarrow{i_t},\\overrightarrow{o_t}\\big ] = \\sigma \\big [ \\overrightarrow{W} \\big [\\overrightarrow{h}_{t-1},\\overrightarrow{r_t}\\big ] + \\overrightarrow{b}\\big ]$$ (Eq. 3) ",
"$$\\overrightarrow{l_t} = \\tanh \\big [\\overrightarrow{V} \\big [\\overrightarrow{h}_{t-1}, \\overrightarrow{r_t}\\big ] + \\overrightarrow{d}\\big ]$$ (Eq. 4) ",
"here $\\sigma $ is the logistic sigmoid function, $\\overrightarrow{f_t}$ , $\\overrightarrow{i_t}$ , $\\overrightarrow{o_t}$ represent the forget, input and output gates respectively. $\\overrightarrow{r_t}$ denotes the input at time $t$ and $\\overrightarrow{h_t}$ denotes the latent state, $\\overrightarrow{b_t}$ and $\\overrightarrow{d_t}$ represent the bias terms. The forget, input and output gates control the flow of information throughout the sequence. $\\overrightarrow{W}$ and $\\overrightarrow{f_t}$0 are matrices which represent the weights associated with the connections.",
" $(\\overleftarrow{h}_1, \\overleftarrow{h}_2, \\dots , \\overleftarrow{h}_R)$ denote the backward states and its updates can be computed similarly.",
"The number of bidirectional LSTM units is set to a constant K, which is the maximum length of all title lengths of records used in training. The forward and backward states are then concatenated to obtain $(h_1, h_2, \\dots , h_K)$ , where ",
"$$h_i = \\begin{bmatrix}\n\\overrightarrow{h}_i \\\\\n\\overleftarrow{h}_i\n\\end{bmatrix}$$ (Eq. 7) ",
"Finally, we are left with the task of figuring out the significance of each word in the sequence i.e. how much a particular word influences the clickbait-y nature of the post. The effectiveness of attention mechanisms have been proven for the task of neural machine translation BIBREF7 and it has the same effect in this case. The goal of attention mechanisms in such tasks is to derive context vectors which capture relevant source side information and help predict the current target word. The sequence of annotations generated by the encoder to come up with a context vector capturing how each word contributes to the record's clickbait quotient is of paramount importance to this model. In a typical RNN encoder-decoder framework BIBREF7 , a context vector is generated at each time-step to predict the target word. However, we only need it for calculation of context vector for a single time-step. ",
"$$c_{attention} = \\sum _{j=1}^{K}\\alpha _jh_j$$ (Eq. 8) ",
"where, $h_1$ ,..., $h_K$ represents the sequence of annotations to which the encoder maps the post title vector and each $\\alpha _j$ represents the respective weight corresponding to each annotation $h_j$ . This component is represented on the leftmost in Figure 1."
],
[
"The second component of our model is a Siamese net BIBREF16 over two textual features in the dataset. Siamese networks are designed around having symmetry and it is important because it's required for learning a distance metric. We use them to find the similarity between the title of the record and its target description. The words in the title and in the target description are converted into their respective Doc2Vec embeddings and concatenated, after which they are considered as input into a Siamese network. A visual representation of this can be found in the middle of Figure 1."
],
[
"The final component of our hybrid model is also a Siamese net. However, it considers visual information available in the dataset, and sets our model apart from other approaches in this field. The relevance of the image attached to the post can be quantified by capturing its similarity with the target description. The VGG-19 architecture outputs a 4096 dimensional vector for each image which, in turn, is fed as input into a dense layer to convert each representation to a 300 dimensional vector. This serves as one input to the visual Siamese net. The target description is converted into its 300 dimensional vector representation by passing it through the pre-trained Doc2Vec model, which acts as the second input for the network. It is the rightmost part of Figure 1."
],
[
"To combine the components and complete our hybrid model, the output from each of the three parts is concatenated and subsequently acts as input for a fully connected layer. This layer finally gives as its output the probability/extent that a post, together with its related information, can be considered clickbait."
],
[
"We use binary cross-entropy as the loss optimization function for our model. The cross-entropy method BIBREF17 is an iterative procedure where each iteration can be divided into two stages:",
"(1) Generate a random data sample (vectors, trajectories etc.) according to a specified mechanism.",
"(2) Update the parameters of the random mechanism based on the data to produce a \"better\" sample in the next iteration."
],
[
"The model was evaluated over a collection of 19538 social media posts BIBREF3 , each containing supplementary information like target description, target keywords and linked images. We performed our experiments with the aim of increasing the accuracy and F1 score of the model. Other metrics like mean squared error (MSE) were also considered."
],
[
"We randomly partition the training set into training and validation set in a 4:1 ratio. This ensures that the two sets do not overlap. The model hyperparameters are tuned over the validation set. We initialise the fully connected network weights with the uniform distribution in the range $-\\sqrt{{6}/{(fanin + fanout)}}$ and $\\sqrt{{6}/{(fanin + fanout)}}$ BIBREF18 . We used a batch size of 256 and adadelta BIBREF19 as a gradient based optimizer for learning the parameters of the model."
],
[
"In Table 1, we compare our model with the existing state-of-the-art for the dataset used and other models which have employed similar techniques to accomplish the task. Calculation and comparison across these metrics was conducted on TIRA BIBREF2 , a platform that offers evaluation as a service. It is clear that our proposed model outperforms the previous feature engineering benchmark and other work done in the field both in terms of F1 score and accuracy of detection."
],
[
"In this work, we have come up with a multi-strategy approach to tackle the problem of clickbait detection across the Internet. Our model takes into account both textual and image features, a multimedia approach, to score the classify headlines. A neural attention mechanism is utilised over BIBREF5 to improve its performance, simultaneously adding Siamese nets for scoring similarity between different attributes of the post. To build on this approach, we would like to explore better image embedding techniques to better relate it to the article."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model Architecture",
"Bidirectional LSTM with Attention",
"Siamese Net with Text Embeddings",
"Siamese Neural Network with Visual Embeddings",
"Fusion of the components",
"Learning the Parameters",
"Evaluation Results",
"Training",
"Comparison with other models",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1cbfdce25dfdc7c55ded63bbade870a96b66c848"
],
"answer": [
{
"evidence": [
"One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task."
],
"extractive_spans": [],
"free_form_answer": "This approach considers related images",
"highlighted_evidence": [
"One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates textual features, modeled using BiLSTM and augmented with an attention mechanism, but also considers related images for the task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"What are the differences with previous applications of neural networks for this task?"
],
"question_id": [
"acc8d9918d19c212ec256181e51292f2957b37d7"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Model Architecture",
"Table 1: Comparison of our model with existing methods"
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What are the differences with previous applications of neural networks for this task?"
] | [
[
"1710.01507-Related Work-2"
]
] | [
"This approach considers related images"
] | 298 |
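
The clickbait-detection record above pools the BiLSTM annotations h_1, ..., h_K into a single context vector c_attention = sum_j alpha_j * h_j through an attention layer (Eq. 8), but it does not spell out how the weights alpha_j are computed. The sketch below is a minimal NumPy illustration that assumes a standard single-layer tanh scorer followed by a softmax; the function name, the matrices W and v, and the toy dimensions are illustrative assumptions rather than the authors' implementation.

import numpy as np

def attention_pool(H, W, v):
    # H: K x d matrix of concatenated forward/backward LSTM states (annotations).
    # W: d x d projection and v: d-dimensional scoring vector of the assumed tanh scorer.
    scores = np.tanh(H @ W) @ v              # one unnormalized score per time step
    scores = scores - scores.max()           # numerical stability for the softmax
    alphas = np.exp(scores) / np.exp(scores).sum()
    context = alphas @ H                     # c_attention = sum_j alpha_j * h_j (Eq. 8)
    return alphas, context

# Toy usage: K = 5 annotations of dimension d = 8.
rng = np.random.default_rng(0)
K, d = 5, 8
H = rng.normal(size=(K, d))
W = 0.1 * rng.normal(size=(d, d))
v = rng.normal(size=d)
alphas, context = attention_pool(H, W, v)
print(alphas.sum(), context.shape)           # weights sum to 1; context has shape (8,)
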
2002.02492 | Consistency of a Recurrent Language Model With Respect to Incomplete Decoding | Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition. We study the related issue of receiving infinite-length sequences from a recurrent language model when using common decoding algorithms. To analyze this issue, we first define inconsistency of a decoding algorithm, meaning that the algorithm can yield an infinite-length sequence that has zero probability under the model. We prove that commonly used incomplete decoding algorithms - greedy search, beam search, top-k sampling, and nucleus sampling - are inconsistent, despite the fact that recurrent language models are trained to produce sequences of finite length. Based on these insights, we propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model. Empirical results show that inconsistency occurs in practice, and that the proposed methods prevent inconsistency. | {
"paragraphs": [
[
"Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.",
"We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences.",
"Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution.",
"Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding.",
"To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality.",
"The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning."
],
[
"We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation.",
"Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\\left<\\text{eos}\\right>\\in V$ that only appears at the end of a sequence.",
"Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution.",
"Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\\mathcal {C}$. An element $C\\in \\mathcal {C}$ is called a context."
],
[
"A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.",
"Definition 2.3 (Recurrent language model) A recurrent language model $p_\\theta $ is a neural network that computes the following conditional probability at each time step",
"where $h_t = f_{\\theta }(y_t, h_{t-1})$ and $h_0 = g_{\\theta }(C)$, and $u,c,\\theta $ are parameters. A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \\ldots , y_T)$ by",
"where $y_{<t}=(y_1,\\ldots ,y_{t-1})$. This distribution satisfies",
"Practical variants of the recurrent language model differ by the choice of transition function $f_{\\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence.",
"Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\\in V$ is assigned a positive probability. This implies that $0 < p_\\theta (v\\,|\\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\\theta }(Y\\,|\\,C) > 0$ for any sequence $Y$ of finite length."
],
[
"Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm.",
"Definition 2.4 (Decoding algorithm) A decoding algorithm $\\mathcal {F}(p_{\\theta }, C)$ is a function that generates a sequence $\\tilde{Y}$ given a recurrent language model $p_{\\theta }$ and context $C$. Let $q_{\\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\\mathcal {F}$.",
"We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens."
],
[
"The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate.",
"Definition 2.5 (Ancestral sampling) Ancestral sampling $\\mathcal {F}_{\\text{anc}}$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively sampling from $p_{\\theta }(y_t\\,|\\,\\tilde{y}_{<t}, C)$ until $\\tilde{y}_t = \\left<\\text{eos}\\right>$:",
"In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold.",
"Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\\mathcal {F}_{\\text{top-k}}$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively sampling from the following proposal distribution:",
"Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\\mathcal {F}_{\\text{nuc-}\\mu }$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\\theta }(v_i\\,|\\,y_{<t},C) \\ge p_{\\theta }(v_j\\,|\\,y_{<t},C)$ for all $i < j$, and define",
"where $V_{\\mu } = \\left\\lbrace v_1, \\cdots , v_{k_\\mu } \\right\\rbrace $ with"
],
[
"The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step.",
"Definition 2.8 (Greedy decoding) Greedy decoding $\\mathcal {F}_{\\text{greedy}}$ generates a sequence from a recurrent language model $p_{\\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\\theta }(y_t | \\tilde{y}_{<t}, C)$ until $\\tilde{y}_t = \\left<\\text{eos}\\right>$:",
"In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes.",
"Definition 2.9 (Prefix) A prefix $\\rho _t$ is an ordered collection of items from $V$. The score of a prefix is",
"where $\\rho _t[\\tau ]$ is a token at time $\\tau $ from $\\rho _t$.",
"Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes.",
"Definition 2.10 (Beam search) Beam search with width $k$, $\\mathcal {F}_{\\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\\theta }$ by maintaining a size-$k$ prefix set $\\mathrm {P}_t^{\\text{top}}$. Starting with $P_0^{top}=\\varnothing $, at each iteration $t\\in \\lbrace 1,2,\\ldots \\rbrace $ beam search forms a new prefix set $\\mathrm {P}_t^{\\text{top}}$ by expanding the current set, $\\mathrm {P}_t = \\bigcup _{\\rho \\in \\mathrm {P}_{t-1}^{\\text{top}}} \\lbrace \\rho \\circ v\\, |\\, v\\in V\\rbrace $ (where $\\rho \\circ v$ is concatenation), then choosing the $k$ highest scoring elements,",
"Any $\\rho \\in \\mathrm {P}_t^{\\text{top}}$ ending with $\\left<\\text{eos}\\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$."
],
[
"Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$.",
"Definition 2.11 (Incomplete Decoding) A decoding algorithm $\\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\\prime }_t\\subsetneq V$ such that"
],
[
"A recurrent language model $p_{\\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$.",
"Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\\theta }(|Y|=\\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent.",
"Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate.",
"Lemma 3.1 If a recurrent language model $p_{\\theta }$ is consistent, $p_{\\theta }(|Y|=\\infty \\,|\\,C)=0$ for any probable context $C$.",
"Next, we establish a practical condition under which a recurrent language model is consistent.",
"Lemma 3.2 A recurrent language model $p_{\\theta }$ is consistent if $\\Vert h_t\\Vert _p$ is uniformly bounded for some $p\\ge 1$.",
"[Proof sketch] If $\\Vert h_t\\Vert _p$ is bounded, then each $u_v^\\top h_t$ is bounded, hence $p_{\\theta }(\\left<\\text{eos}\\right>| y_{<t}, C)>\\xi >0$ for a constant $\\xi $. Thus $p_{\\theta }(|Y|=\\infty ) \\le \\lim _{t\\rightarrow \\infty } (1 - \\xi )^t = 0$, meaning that $p_{\\theta }$ is consistent.",
"Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm.",
"Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\\mathcal {F}$ preserves the consistency of the model $p_{\\theta }$, that is, $q_{\\mathcal {F}}(|Y|=\\infty )=0$.",
"When a consistent recurrent language model $p_{\\theta }$ and a decoding algorithm $\\mathcal {F}$ induce a consistent distribution $q_{\\mathcal {F}}$, we say that $p_{\\theta }$ paired with $\\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\\mathcal {F}_{\\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21.",
"Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\\mathcal {F}}(Y\\,|\\,C)>0$, then $p_{\\theta }(Y\\,|\\,C)>0$ for any probable context $C$."
],
[
"Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\\left<\\text{eos}\\right>$ outside of $V^{\\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent.",
"Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\\theta }$ from which an incomplete decoding algorithm $\\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\\theta }(y_t\\,|\\,y_{<t},C)$ at each step $t$, finds a sequence $\\tilde{Y}$ whose probability under $p_{\\theta }$ is 0 for any context distribution.",
"We prove this theorem by constructing a $\\tanh $ recurrent network. We define the recurrent function $f_{\\theta }$ as",
"where $e(y_{t}) \\in \\mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \\in \\mathbb {R}^{d \\times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \\times |V|$. $h_0 = g_{\\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\\theta }$ is consistent by Lemma UNKREF23.",
"For $v \\ne \\left<\\text{eos}\\right>$, we set $u_v$ (see Definition UNKREF4) to be",
"where all elements of $\\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let",
"where all elements of $\\bar{u}_{\\left<\\text{eos}\\right>}$ are negative.",
"This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\\sum _{t^{\\prime }=1}^t {1}(y_{t^{\\prime }} = v)$, where 1 is an indicator function.",
"This recurrent language model always outputs positive logits for non-$\\left<\\text{eos}\\right>$ tokens, and outputs negative logits for the $\\left<\\text{eos}\\right>$ token. This implies $p(\\left<\\text{eos}\\right>|\\,y_{<t}, C) < p(v\\,|\\,y_{<t}, C)$ for all $v \\in V \\backslash \\left\\lbrace \\left<\\text{eos}\\right>\\right\\rbrace $. This means that $\\left<\\text{eos}\\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\\theta }(y_t\\,|\\,y_{<t}, C)$ cannot decode $\\left<\\text{eos}\\right>$ and thus always decodes an infinitely long sequence.",
"The log-probability of this infinitely long sequence $\\hat{Y}$ is",
"For any $v\\in V$,",
"where $b_v = \\sum _{v^{\\prime }\\ne v} \\exp (-\\Vert u_{v^{\\prime }}\\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\\log p_{\\theta }(\\hat{Y}\\,|\\,C)$ diverges as $|\\hat{Y}| \\rightarrow \\infty $, and thus $p_{\\theta }(\\hat{Y}\\,|\\,C) = 0$, which implies the decoding algorithm $\\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\\theta }$ that induce inconsistent distributions when paired with these decoding algorithms."
],
[
"In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper."
],
[
"The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\\left<\\text{eos}\\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\\left<\\text{eos}\\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\\theta }$ paired with a decoding algorithm $\\mathcal {F}$ is consistent.",
"Theorem 4.1 Let $p_{\\theta }$ be a consistent recurrent language model. If a decoding algorithm $\\mathcal {F}$ satisfies $q_{\\mathcal {F}}(\\left<\\text{eos}\\right>|\\,y_{<t}, C) \\ge p_{\\theta }(\\left<\\text{eos}\\right>|\\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\\mathcal {F}$ is consistent with respect to the model $p_{\\theta }$.",
"Let $P^{\\prime }_{t-1}$ denote a set of all prefixes $y_{<t}$ of length $t-1$. For $t\\ge 1$,",
"Taking the limit $t\\rightarrow \\infty $ and expectation over $C$ on both sides, we have",
"from which the decoding algorithm is consistent.",
"We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition.",
"Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution:",
"where $V^{\\prime } = \\left\\lbrace \\left<\\text{eos}\\right>\\right\\rbrace \\cup \\underset{v^{\\prime }}{\\arg \\text{top-k}}\\ p_{\\theta }(v^{\\prime }\\,|\\,y_{<t}, C)$.",
"Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution:",
"The induced probability of $\\left<\\text{eos}\\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model."
],
[
"Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM).",
"Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step:",
"where",
"with $\\sigma : \\mathbb {R} \\rightarrow [0,1-\\epsilon ]$ and $\\epsilon \\in (0,1)$. $h_t$ is computed as in the original recurrent language model.",
"The underlying idea is that the probability of $\\left<\\text{eos}\\right>$ increases monotonically. The model is consistent when paired with greedy decoding.",
"Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model.",
"Let $p_{t}^{\\left<\\text{eos}\\right>}$ denote $p_{\\theta }(\\left<\\text{eos}\\right>|\\,y_{<t}, C)$ and $a_{t}^{\\left<\\text{eos}\\right>}$ denote $u_{\\left<\\text{eos}\\right>}^\\top h_t + c_{\\left<\\text{eos}\\right>}$. By Definition UNKREF33 we have",
"Take $B=-\\log 2 / \\log (1-\\epsilon )$. We then have $p_{t}^{\\left<\\text{eos}\\right>}>1/2$ for all $t > B$, which implies that $\\left<\\text{eos}\\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof."
],
[
"The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches."
],
[
"We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. The task consists of decoding a continuation $\\hat{Y}\\sim \\mathcal {F}(p_{\\theta }, C)$ given a length-$k$ prefix $C=(c_1,\\ldots ,c_k)$, resulting in a completion $(c_1,\\ldots ,c_k,\\hat{y}_1\\ldots ,\\hat{y}_T)$."
],
[
"We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\\left<\\text{bos}\\right>$ and $\\left<\\text{eos}\\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137."
],
[
"We define empirical context distributions with prefixes from the train, valid, and test sets,",
"where $\\mathcal {D}=\\lbrace (C^{(n)},Y^{(n)})\\rbrace _{n=1}^{N}$ is a dataset split."
],
[
"We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit,",
"where $\\hat{Y}^{(n)}\\sim \\mathcal {F}(p_{\\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length.",
"In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results."
],
[
"We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\\ldots ,c_k,y_1,\\ldots ,y_T)$:",
"This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs."
],
[
"We consider recurrent neural networks with hyperbolic tangent activations ($\\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\\tanh $-RNN and LSTM-RNN based on the validation perplexities. With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \\pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details.",
"Additionally, we train self-terminating $\\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\\epsilon $, which controls a lower bound on the termination probability at each step. We use $\\sigma (x)=(1-\\epsilon )\\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search."
],
[
"In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27).",
"Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27.",
"In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\\mu $ increased (see Definition UNKREF11), likely due to $\\left<\\text{eos}\\right>$ having a higher chance of being included in $V_{\\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\\tanh $-RNN, but not with the LSTM, implying that $\\left<\\text{eos}\\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\\left<\\text{eos}\\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\\left<\\text{eos}\\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination."
],
[
"In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality."
],
[
"Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate."
],
[
"As seen in Table TABREF50, the self-terminating recurrent language models with $\\epsilon \\in \\lbrace 10^{-2},10^{-3}\\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\\epsilon $ is too large ($\\epsilon =10^{-2}$), perplexity degrades. When $\\epsilon $ is too small ($\\epsilon =10^{-4}$), the lower-bound grows slowly, so $\\left<\\text{eos}\\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline.",
"For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition."
],
[
"The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase.",
"One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\\left\\lbrace (C^{(n)}, Y^{(n)}) \\right\\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves:",
"where $\\Omega (\\theta )$ is a regularizer and $\\lambda $ is a regularization weight.",
"Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding."
],
[
"We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future."
],
[
"We thank Chris Dyer, Noah Smith and Kevin Knight for valuable discussions. This work was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). KC thanks eBay and NVIDIA for their support."
]
],
"section_name": [
"Introduction",
"Background",
"Background ::: Recurrent Language Models",
"Background ::: Decoding Algorithms",
"Background ::: Decoding Algorithms ::: Stochastic decoding.",
"Background ::: Decoding Algorithms ::: Deterministic decoding.",
"Background ::: Decoding Algorithms ::: Incompleteness.",
"Consistency of a Decoding Algorithm ::: Definition of consistency.",
"Consistency of a Decoding Algorithm ::: Inconsistency of incomplete decoding.",
"Fixing the inconsistency",
"Fixing the inconsistency ::: Consistent Sampling Algorithms",
"Fixing the inconsistency ::: A Self-Terminating Recurrent Language Model",
"Empirical Validation",
"Empirical Validation ::: Sequence completion.",
"Empirical Validation ::: Dataset.",
"Empirical Validation ::: Context distribution.",
"Empirical Validation ::: Evaluation metrics.",
"Empirical Validation ::: Training.",
"Empirical Validation ::: Models.",
"Empirical Validation ::: Inconsistency of Recurrent Language Models",
"Empirical Validation ::: Consistency of the Proposed Methods",
"Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.",
"Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.",
"Future Directions",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"cfd1e076d4a9b5356e4b4202f216399e66547e50"
],
"answer": [
{
"evidence": [
"Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.",
"For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.",
"FLOAT SELECTED: Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.",
"FLOAT SELECTED: Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods."
],
"extractive_spans": [],
"free_form_answer": "It eliminates non-termination in some models fixing for some models up to 6% of non-termination ratio.",
"highlighted_evidence": [
"Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\\left<\\text{eos}\\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.",
" This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.",
"FLOAT SELECTED: Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.",
"FLOAT SELECTED: Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1cffe76ed7d5f8f9ba0dd6ee3592f71b0cf46488"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a830540b9688e3ea11b8b8b9185415022c4f3fb1"
],
"answer": [
{
"evidence": [
"We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.",
"Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding."
],
"extractive_spans": [],
"free_form_answer": "There are is a strong conjecture that it might be the reason but it is not proven.",
"highlighted_evidence": [
"We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.",
"Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much improvement is gained from the proposed approaches?",
"Is the problem of determining whether a given model would generate an infinite sequence is a decidable problem? ",
"Is infinite-length sequence generation a result of training with maximum likelihood?"
],
"question_id": [
"6f2f304ef292d8bcd521936f93afeec917cbe28a",
"82fa2b99daa981fc42a882bb6db8481bdbbb9675",
"61fb982b2c67541725d6db76b9c710dd169b533d"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods.",
"Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.",
"Table 3. Example continuations using nucleus and consistent nucleus (µ = 0.4) sampling with the LSTM-RNN.",
"Table 4. Example continuations with the LSTM-RNN and a self-terminating LSTM-RNN ( = 10−3).",
"Table 5. Non-termination ratio (rL (%)) of greedy-decoded sequences and test perplexity for self-terminating recurrent models.",
"Table 6. More example continuations from the LSTM-RNN and a self-terminating LSTM-RNN ( = 10−3).",
"Table 7. Grid search specification. The values selected for the LSTM-RNN and tanh-RNN models are shown in bold and italics, respectively.",
"Table 8. Perplexities of trained recurrent language models."
],
"file": [
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png",
"12-Table6-1.png",
"12-Table7-1.png",
"12-Table8-1.png"
]
} | [
"How much improvement is gained from the proposed approaches?",
"Is infinite-length sequence generation a result of training with maximum likelihood?"
] | [
[
"2002.02492-7-Table2-1.png",
"2002.02492-7-Table1-1.png",
"2002.02492-Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.-0",
"2002.02492-Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.-1"
],
[
"2002.02492-Conclusion-0",
"2002.02492-Future Directions-3"
]
] | [
"It eliminates non-termination in some models fixing for some models up to 6% of non-termination ratio.",
"There are is a strong conjecture that it might be the reason but it is not proven."
] | 299 |
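
The record above fixes the inconsistency of top-k sampling by forcing the <eos> token into the candidate set before renormalizing (consistent top-k sampling, Definition 4.1). The snippet below is a minimal NumPy sketch of that rule for a single decoding step; the toy probability vector, the vocabulary indexing, and the choice of eos_id = 0 are assumptions made for illustration and are not taken from the paper's code.

import numpy as np

def consistent_top_k_step(probs, k, eos_id, rng):
    # probs: the model's next-token distribution p(. | y_<t, C) over the vocabulary.
    # Restrict to the k most probable tokens, force <eos> into the set, renormalize, sample.
    top_k = np.argsort(probs)[::-1][:k]            # ids of the k most probable tokens
    candidates = np.union1d(top_k, [eos_id])       # V' = {<eos>} union top-k (Def. 4.1)
    restricted = np.zeros_like(probs)
    restricted[candidates] = probs[candidates]     # zero out tokens outside V'
    restricted /= restricted.sum()                 # renormalize over V'
    return rng.choice(len(probs), p=restricted)

# Toy usage with a 6-token vocabulary where id 0 is <eos>.
rng = np.random.default_rng(0)
probs = np.array([0.01, 0.40, 0.30, 0.15, 0.10, 0.04])
token = consistent_top_k_step(probs, k=2, eos_id=0, rng=rng)
print(token)  # <eos> now always has nonzero probability of being drawn, so decoding can terminate
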
2001.06354 | Modality-Balanced Models for Visual Dialogue | The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history, while others still need the conversation context to predict the correct answers. We demonstrate that due to this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns in the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better at the primary normalized discounted cumulative gain (NDCG) task metric which allows multiple correct answers. Hence, this observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities for a more balanced multimodal model. We present multiple methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics. | {
"paragraphs": [
[
"When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.",
"We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.",
"Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics)."
],
[
"Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features."
],
[
"The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions."
],
[
"In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\\textrm {HISTORY}_t = \\lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \\rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective."
],
[
"Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \\in \\mathbb {R}^{k \\times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone).",
"Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \\lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\\rbrace $ is encoded via an LSTM-RNN BIBREF17,",
"and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.",
"History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that",
"where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,",
"We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$."
],
[
"We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \\in \\mathbb {R}^{k \\times d_{m}}$ by applying MFB:",
"where $\\textrm {Linear}_{d_v\\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.",
"where $M$, $N$ $\\in \\mathbb {R}^{d_{m} \\times d \\times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\\in \\mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\\ell _2$ normalization to obtain $\\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\\alpha $: $\\alpha _{r} = \\textrm {softmax}(L\\hat{z}_{r}^{\\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \\sum _{i=1}^k \\alpha _{ri}V_i$, where $L \\in \\mathbb {R}^{1 \\times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers.",
"where $\\textrm {fc}_*$ is an fully-connected layer."
],
[
"For each round, there are 100 candidate answers. The $l$-th answer at round $r$,",
"is encoded in the same way as question and history.",
"where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\\cdot a_{rl}$."
],
[
"We calculate the similarity matrix, $S_r \\in \\mathbb {R}^{k \\times r} $ between visual and history features following BIBREF15.",
"where $w_s \\in \\mathbb {R}^{3d}$ is trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. From the similarity matrix, the new fused history representation is:",
"Similarly, the new fused visual representation is:",
"These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:",
"where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section."
],
[
"To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from entire history except image caption feature and throw them away.",
"where $N_h^r$ is number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$."
],
[
"Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time."
],
[
"In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23)."
],
[
"We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.",
"where $L_{I}$ and $L_{J}$ are the logit from image-only model and image-hitory joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits."
],
[
"To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$,",
"where ${1}_{(N\\times R)} \\in \\mathbb {R}^{(N\\times R)}$ and ${1}_{d} \\in \\mathbb {R}^{d}$ are all-ones vectors of $(N\\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\\xi $, is calculated following BIBREF20's work."
],
[
"We also integrate our 2 models via an ensemble. We train each model separately and combine them at test time. To be specific, we take logits from the pre-trained models and select the answer with the highest sum of logits."
],
[
"We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context."
],
[
"For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values."
],
[
"In our models, the size of word vectors is 300, the dimension of visual feature is 2048, and hidden size of LSTM units which are used for encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001 and decrease it by 0.0001 per epoch until 8th epoch and decay by 0.5 from 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3 and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss."
],
[
"In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model."
],
[
"We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy)."
],
[
"We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score."
],
[
"If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list up the correct answers from each model and count answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is not one single correct answer. So we roughly calculate the intersection by taking minimum values between the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not take the entire score of either model for both metrics. This could mean image-only and joint models have room to be improved by combining them together."
],
[
"Considering the complementary relation between image-only model and joint model, combining the two models would be a good approach to take the best from the both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26)."
],
[
"As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters."
],
[
"As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation."
],
[
"For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average."
],
[
"We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected."
],
[
"Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout could help the model avoid from over-fitting to some patterns in the history features by intentionally dropping some of the features in the training session.",
"Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rates affect the performance of the model. As shown in Table.TABREF53, as the dropout rate increases the NDCG score is also increased while scores of non-NDCG metrics are decreased. By changing the dropout rate, we can modulate the influence of each model (image-only and joint models) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics.",
"Ensemble Combination: We try different combinations from image-only and joint models to build ensemble models. The total number of models amounts to 3, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, scores of the I+J ensemble model are comparable to same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model are from the complementary relation between image-only and image-history joint model.",
"Output Examples: Due to space constraints and no supplementary allowed in AAAI rules, we provide detailed examples in this arxiv version's appendix, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models are also provided."
],
[
"We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type."
],
[
"We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency."
]
],
"section_name": [
"Introduction",
"Related Work ::: Visual Question Answering (VQA)",
"Related Work ::: Visual Dialog",
"Models",
"Models ::: Features",
"Models ::: Image-Only Model",
"Models ::: Image-Only Model ::: Answer Selection",
"Models ::: Image-History Joint Model",
"Models ::: Image-History Joint Model ::: Round Dropout",
"Models ::: Combining Image-Only & Image-History Joint Models",
"Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion",
"Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Consensus",
"Models ::: Combining Image-Only & Image-History Joint Models ::: Consensus Dropout Fusion ::: Instance Dropout",
"Models ::: Combining Image-Only & Image-History Joint Models ::: Ensemble",
"Experimental Setup ::: Dataset",
"Experimental Setup ::: Metrics",
"Experimental Setup ::: Training Details",
"Analysis and Results",
"Analysis and Results ::: Human Evaluation: Is Image Alone Enough?",
"Analysis and Results ::: Reduced Question-Answer Rounds",
"Analysis and Results ::: Complementary Relation",
"Analysis and Results ::: Model Combination Results",
"Analysis and Results ::: Model Combination Results ::: Consensus Dropout Fusion Results",
"Analysis and Results ::: Model Combination Results ::: Ensemble Model Results",
"Analysis and Results ::: Final Visual Dialog Test Results",
"Analysis and Results ::: Final Visual Dialog Test Results ::: Ensemble on More Models",
"Ablation Study",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2c59967f5430dfaddbed7aeaa01ebf35e6afc767"
],
"answer": [
{
"evidence": [
"For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values."
],
"extractive_spans": [
"NDCG",
"MRR",
"recall@k",
"mean rank"
],
"free_form_answer": "",
"highlighted_evidence": [
"For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"dbf531411464d218699f4b4109b194241493975b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"e98014faaebd6968cd2b0205cdbfd63719c3c6b3"
],
"answer": [
{
"evidence": [
"For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average."
],
"extractive_spans": [
"DL-61"
],
"free_form_answer": "",
"highlighted_evidence": [
"As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1d8faba3c04980db4626613dacbeac1c6120f447"
],
"answer": [
{
"evidence": [
"As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters.",
"As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation."
],
"extractive_spans": [
"ensemble model"
],
"free_form_answer": "",
"highlighted_evidence": [
"As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics.",
"As also shown in Table TABREF46, the ensemble model seems to take the best results from each model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"fc54c32c3f036dd106b335309d7820b0204c5e58"
],
"answer": [
{
"evidence": [
"We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context."
],
"extractive_spans": [],
"free_form_answer": "133,287 images",
"highlighted_evidence": [
"We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What metrics are used in challenge?",
"What model was winner of the Visual Dialog challenge 2019?",
"What model was winner of the Visual Dialog challenge 2018?",
"Which method for integration peforms better ensemble or consensus dropout fusion with shared parameters?",
"How big is dataset for this challenge?"
],
"question_id": [
"68edb6a483cdec669c9130c928994654f1c19839",
"f64531e460e0ac09b58584047b7616fdb7dd5b3f",
"cee29acec4da1b247795daa4e2e82ef8a7b25a64",
"7e54c7751dbd50d9d14b9f8b13dc94947a46e42f",
"d3bcfcea00dec99fa26283cdd74ba565bc907632"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Examples of Visual Dialog Task. Some questions only need an image to be answered (Q8-A8 and Q3-A3 pairs in blue from each example, respectively), but others need conversation history (Q9-A9 and Q4-A4 pairs in orange from each example, respectively).",
"Figure 2: The architecture of the image-history joint model. The visual features are obtained from Faster R-CNN and the history features are encoded via LSTM. They are fused together via the similarity matrix calculated using cross-attention. The fused features are combined with a question feature and dot products are calculated between the combined feature and candidate answers to rank the answers.",
"Figure 3: Round Dropout: history features are dropped randomly. H0 is the image caption, Hr is the history feature at round r. Dropout is not applied to the image caption feature.",
"Figure 4: Consensus Dropout Fusion. Logits from both image-only model and joint model are added to produce combined one. Instance dropout is applied to the logit from joint model to prevent strong coupling. The two models share many portions of parameters and are trained together.",
"Table 1: Human evaluation on questions of VisDial v1.0 val set. Percentage of questions which can be answered only from image or need help from conversation history is calculated by the manual investigation.",
"Table 2: Performance of models with the different amount of history on validation dataset of VisDial v1.0 (Round dropout is not applied to the joint model in these experiments. FULL: full image-history joint model, H-k: imagehistory joint model with k history, Img-only: image-only model. For H-k models we include image caption feature for a fair comparison with the full joint model).",
"Table 4: Performance of the consensus dropout fusion model and the ensemble model between our image-only model and joint model on the validation dataset of VisDial v1.0 (ImgOnly: image-only model, Joint: image-history joint model, CDF: consensus dropout fusion model).",
"Table 3: Intersection and Union of the answers from imageonly model and joint model which contribute to scoring for R@1 and NDCG metrics.",
"Table 5: Performance comparison between our models and other models on the test-standard dataset of VisDial v1.0. We run two ensemble models each from 6 image-only models and 6 consensus dropout fusion models.",
"Table 6: The effect of round dropout: applying round dropout improves model’s performance on NDCG by around 1.2 while also improving other metrics. (CA: crossattention model (base model), RD: round dropout).",
"Table 7: Consensus dropout fusion and different dropout rates. With different dropout rates, consensus dropout fusion model yields different scores of all metrics. (CDF: consensus dropout fusion model).",
"Table 8: Performance of ensemble models with different combinations. Img+Img model (3 Img models) has highest value of NDCG while Joint+Joint (3 Joint models) model highest values for other metrics. Img+Joint model (3 Img + 3 Joint models) has more balanced results (Img: image-only model, Joint: image-history joint model).",
"Figure 5: Coreference and memorization examples of the image-history joint model (a darker blue square indicates a higher score and a lighter blue square indicates a lower score): the left example shows that the model attends to the last QA pair to resolve the coreference (i.e., ‘it’ to ‘convenience store’), and the middle example shows that the model might memorize keywords/phrases to answer questions (‘3 persons’). Note that attention scores for captions are always high since they have more general information than others. On the right, we show two examples for answer prediction of the image-only model.",
"Figure 6: Examples of only-image questions in blue+italics which only need an image to be answered.",
"Figure 7: An example of the ranking list of the image-history joint and image-only model (the numbers next to the answers are the scores which indicate the relevance of the corresponding answers to the question)."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table4-1.png",
"6-Table3-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png",
"7-Table8-1.png",
"10-Figure5-1.png",
"10-Figure6-1.png",
"11-Figure7-1.png"
]
} | [
"How big is dataset for this challenge?"
] | [
[
"2001.06354-Experimental Setup ::: Dataset-0"
]
] | [
"133,287 images"
] | 300 |
1910.08210 | RTFM: Generalising to Novel Environment Dynamics via Reading | Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps. | {
"paragraphs": [
[
"Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.",
"Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.",
"Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate .",
"Second, we propose to model the joint reasoning problem in . We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future."
],
[
"A growing body of research is learning policies that follow imperative instructions. The granularity of instructions vary from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal."
],
[
"Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but new environments dynamics."
],
[
"We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.",
"To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on.",
"In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document and as well as in the observations.",
"During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).",
"In order to win the game (e.g. Figure FIGREF3), the agent must",
"",
"identify the target team from the goal (e.g. Order of the Forest)",
"identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx)",
"identify which monster is in the world (e.g. goblin), and its element (e.g. fire)",
"identify the modifiers that are effective against this element (e.g. fanatical, shimmering)",
"find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)",
"pick up the correct item (e.g. fanatical sword)",
"engage the correct monster in combat (e.g. fire goblin).",
"If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise.",
"presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.",
"We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.",
"In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given an set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of ."
],
[
"We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model."
],
[
"Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer.",
"We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:",
"Unlike FiLM, we additionally modulate text features using visual features:",
"The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions."
],
[
"We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model.",
"Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20.",
"We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21.",
"We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have",
"$_{\\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as",
"where $_{\\rm policy}$ and $_{\\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details."
],
[
"We consider variants of by varying the size of the grid-world ($6\\times 6$ vs $10\\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information.",
"We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluated on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for model details on our model and baselines, and appendix SECREF10 for training details."
],
[
"We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks—it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details)."
],
[
"Due to the long sequence of co-references the agent must perform in order to solve the full ($10\\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents) we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions result in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\\times 6$ versions of the full and in which the model was trained on $10\\times 10$ versions of the full . We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, performance of the final model trail that of human players, who can consistently solve . This highlights the difficulties of the problem and suggests that there is significant room for improvement in developing better language grounded policy learners."
],
[
"Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggests that attention mechanisms in help identify relevant information in the document.",
""
],
[
"We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, it occasionally gets stuck in evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials."
],
[
"We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25."
],
[
"These figures shows key snapshots from a trained policy on randomly sampled environments.",
""
],
[
"Let $_\\in {_}$ denote a fixed-length $_$-dimensional representation of the text and $_\\in {_\\times H \\times W}$ denote the representation of visual inputs with"
],
[
"The used in our experiments consists of 5 consecutive layers, each with 3x3 convolutions and padding and stride sizes of 1. The layers have channels of 16, 32, 64, 64, and 64, with residual connections from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure FIGREF18) shares weight with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100. We use a word embedding dimension of 30."
],
[
"Like , the CNN baseline consists of 5 layers of convolutions with channels of 16, 32, 64, 64, and 64. There are residual connections from the 3rd layer to the 5th layer. The input to each layer consists of the output of the previous layer, concatenated with positional features.",
"The input to the network is the concatenation of the observations $^{(0)}$ and text representations. The text representations consist of self-attention over bidirectional LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure FIGREF46 illustrates the CNN baseline."
],
[
"The FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from BIBREF6. Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states."
],
[
"We train using an implementation of IMPALA BIBREF22. In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp BIBREF26 with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set $\\alpha = 0.99$ and $\\epsilon = 0.01$.",
"During training, we apply a small negative reward for each time step of $-0.02$ and a discount factor of 0.99 to facilitate convergence. We additionally include a entropy cost to encourage exploration. Let $$ denote the policy. The entropy loss is calculated as",
"In addition to policy gradient, we add in the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages BIBREF22.",
"When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation."
],
[
"We split environment dynamics by permuting 3-character dependency graphs from an alphabet, which we randomly split into training and held-out sets. This corresponds to the “permutations” setting in Table TABREF50.",
"We train models on the $10\\times 10$ worlds from the training set and evaluate them on both seen and not seen during training. The left of Figure FIGREF51 shows the performance of models on worlds of varying sizes with training environment dynamics. In this case, the dynamics (e.g. dependency graphs) were seen during training. For $9\\times 9$ and $11\\times 11$ worlds, the world configuration not seen during training. For $10\\times 10$ worlds, there is a 5% chance that the initial frame was seen during training. Figure FIGREF51 shows the performance on held-out environments not seen during training. We see that all models generalise to environments not seen during training, both when the world configuration is not seen (left) and when the environment dynamics are not seen (right)."
],
[
"In addition to splitting via permutations, we devise two additional ways of splitting environment dynamics by introducing new edges and nodes into the held-out set. Table TABREF50 shows the three different settings. For each, we study the transfer behaviour of models on new environments. Figure FIGREF52 shows the learning curve when training a model on the held-out environments directly and when transferring the model trained on train environments to held-out environments. We observe that all models are significantly more sample-efficient when transferring from training environments, despite the introduction of new edges and new nodes."
],
[
"In Figure FIGREF51, we see that the FiLM model outperforms the CNN model on both training environment dynamics and held-out environment dynamics. further outperforms FiLM, and does so more consistently in that the final performance has less variance. This behaviour is also observed in the in Figure FIGREF52. When training on the held-out set without transferring, is more sample efficient than FiLM and the CNN model, and achieves higher win-rate. When transferring to the held-out set, remains more sample efficient than the other models."
],
[
"We collect human-written natural language templates for the goal and the dynamics. The goal statements in describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to with team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collection 10 language templates for each type of statements. The entire document is composed from statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type and modifiers and element for the second type."
]
],
"section_name": [
"Introduction",
"Related Work ::: Language-conditioned policy learning.",
"Related Work ::: Language grounding.",
"",
"Model",
"Model ::: () layer",
"Model ::: The model",
"Experiments",
"Experiments ::: Comparison to baselines and ablations",
"Experiments ::: Curriculum learning for complex environments",
"Experiments ::: Curriculum learning for complex environments ::: Attention maps.",
"Experiments ::: Curriculum learning for complex environments ::: Analysis of trajectories and failure modes.",
"Conclusion",
"Playthrough examples",
"Variable dimensions",
"Model details ::: ::: Hyperparameters.",
"Model details ::: CNN with residual connections",
"Model details ::: FiLM baseline",
"Training procedure",
" ::: Reading models generalise to new environments.",
" ::: Reading models generalise to new concepts.",
" ::: is more sample-efficient and learns better policies.",
"Language templates"
]
} | {
"answers": [
{
"annotation_id": [
"e1baf02533ffcfb9d0e56ff82dd5c96ec07e8198"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"bb8e5fe4e6a558de4e92fb2cf8af5c0136069224"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). “Train” and “Eval” show final win rates on training and eval environments."
],
"extractive_spans": [],
"free_form_answer": "Proposed model achive 66+-22 win rate, baseline CNN 13+-1 and baseline FiLM 32+-3 .",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). “Train” and “Eval” show final win rates on training and eval environments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1dc49ec6ef7bc7ba6566d986bbd2f5879f009fe1"
],
"answer": [
{
"evidence": [
"We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model."
],
"extractive_spans": [
" We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation."
],
"free_form_answer": "",
"highlighted_evidence": [
"We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is data for RTFM collected?",
"How better is performance of proposed model compared to baselines?",
"How does propose model model that capture three-way interactions?"
],
"question_id": [
"8816333fbed2bfb1838407df9d6c084ead89751c",
"37e8f5851133a748c4e3e0beeef0d83883117a98",
"c9e9c5f443649593632656a5934026ad8ccc1712"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: RTFM requires jointly reasoning over the goal, a document describing environment dynamics, and environment observations. This figure shows key snapshots from a trained policy on one randomly sampled environment. Frame 1 shows the initial world. In 4, the agent approaches “fanatical sword”, which beats the target “fire goblin”. In 5, the agent acquires the sword. In 10, the agent evades the distractor “poison bat” while chasing the target. In 11, the agent engages the target and defeats it, thereby winning the episode. Sprites are used for visualisation — the agent observes cell content in text (shown in white). More examples are in appendix A.",
"Figure 2: The FiLM2 layer.",
"Figure 3: txt2π models interactions between the goal, document, and observations.",
"Table 1: Final win rate on simplest variant of RTFM. The models are trained on one set of dynamics (e.g. training set) and evaluated on another set of dynamics (e.g. evaluation set). “Train” and “Eval” show final win rates on training and eval environments.",
"Table 2: Curriculum training results. We keep 5 randomly initialised models through the entire curriculum. A cell in row i and column j shows transfer from the best-performing setting in the previous stage (bolded in row i− 1) to the new setting in column j. Each cell shows final mean and standard deviation of win rate on the training environments. Each experiment trains for 50 million frames, except for the initial stage (first row, 100 million instead). For the last stage (row 4), we also transfer to a 10× 10 + dyna+ group+ nl variant and obtain 61 ± 18 win rate.",
"Table 3: Win rate when evaluating on new dynamics and world configurations for txt2π on the full RTFM problem.",
"Figure 5: txt2π attention on the full RTFM. These include the document attention conditioned on the goal (top) as well as those conditioned on summaries produced by intermediate FiLM2 layers. Weights are normalised across words (e.g. horizontally). Darker means higher attention weight.",
"Figure 6: The initial world is shown in 1. In 4, the agent avoids the target “lightning shaman” because it does not yet have “arcane spear”, which beats the target. In 7 and 8, the agent is cornered by monsters. In 9, the agent is forced to engage in combat and loses.",
"Figure 7: The initial world is shown in 1. In 5 the agent evades the target “cold ghost” because it does not yet have “soldier’s knife”, which beats the target. In 11 and 13, the agent obtains “soldier’s knife” while evading monsters. In 14, the agent defeats the target and wins.",
"Table 4: Variable dimensions",
"Figure 8: The convolutional network baseline. The FiLM baseline has the same structure, but with convolutional layers replaced by FiLM layers.",
"Table 5: Statistics of the three variations of the Rock-paper-scissors task",
"Figure 10: Performance on the Rock-paper-scissors task across models. Left shows final performance on environments whose goals and dynamics were seen during training. Right shows performance on the environments whose goals and dynamics were not seen during training.",
"Figure 9: The Rock-paper-scissors task requires jointly reasoning over the game observations and a document describing environment dynamics. The agent observes cell content in the form of text (shown in white).",
"Figure 11: Learning curve while transferring to the development environments. Win rates of individual runs are shown in light colours. Average win rates are shown in bold, dark lines.",
"Figure 12: Ablation training curves. Win rates of individual runs are shown in light colours. Average win rates are shown in bold, dark lines.",
"Figure 13: Curriculum learning results for txt2π on RTFM. Win rates of individual runs are shown in light colours. Average win rates are shown in bold, dark lines."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure5-1.png",
"11-Figure6-1.png",
"11-Figure7-1.png",
"12-Table4-1.png",
"13-Figure8-1.png",
"15-Table5-1.png",
"15-Figure10-1.png",
"15-Figure9-1.png",
"16-Figure11-1.png",
"16-Figure12-1.png",
"17-Figure13-1.png"
]
} | [
"How better is performance of proposed model compared to baselines?"
] | [
[
"1910.08210-6-Table1-1.png"
]
] | [
"Proposed model achive 66+-22 win rate, baseline CNN 13+-1 and baseline FiLM 32+-3 ."
] | 302 |
2001.05672 | AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion | I introduce a simple but efficient method to solve one of the critical aspects of English grammar which is the relationship between active sentence and passive sentence. In fact, an active sentence and its corresponding passive sentence express the same meaning, but their structure is different. I utilized Prolog [4] along with Definite Clause Grammars (DCG) [5] for doing the conversion between active sentence and passive sentence. Some advanced techniques were also used such as Extra Arguments, Extra Goals, Lexicon, etc. I tried to solve a variety of cases of active and passive sentences such as 12 English tenses, modal verbs, negative form, etc. More details and my contributions will be presented in the following sections. The source code is available at https://github.com/tqtrunghnvn/ActiveAndPassive. | {
"paragraphs": [
[
"Language plays a vital role in the human life. A language is a structured system of communication BIBREF2. There are various language systems in the world with the estimated number being between 5,000 and 7,000 BIBREF3. Natural Language Processing (NLP) which we commonly hear is a subfield of linguistics. NLP aims to provide interactions between computers and human languages. The performance of NLP is evaluated by how computers can process and analyze large amounts of natural language data BIBREF4. In terms of language processing, we cannot but mention Computational Linguistics BIBREF5. Computational Linguistics is the scientific study of language from a computational perspective, and thus an interdisciplinary field, involving linguistics, computer science, mathematics, logic, cognitive science, and cognitive psychology.",
"One of the most useful tools for studying computational linguistics is Prolog programming language BIBREF0. Prolog is a logic programming language associated with artificial intelligence and computational linguistics. Prolog can help deal with issues related to not only logic puzzle (Cryptoarithmetic puzzles, Zebra Puzzle, etc.) but also natural language processing. In this work, I utilized Prolog along with Definite Clause Grammars (DCG) BIBREF1 to solve one of the critical aspects of English grammar, active sentence and passive sentence. DCG proves the efficiency in handling the grammar of the sentence. Basically, a sentence is built out of noun phrase and verb phrase, so the structure of sentence, noun phrase, and verb phrase will be both covered in this work.",
"In terms of English grammar, we have lots of content to solve as shown in Figure FIGREF1. For example, there are 12 tenses in English such as the simple past tense, the simple present tense, the perfect present tense, etc. We also have more than three types of conditional clause, more than three types of comparative clause, and so on. This work covers the contents of active sentence and passive sentence. For instance, if an active sentence is “a man buys an apple in the supermarket\", its corresponding passive sentence will be “an apple is bought by a man in the supermarket\". The basic rules for rewriting an active sentence to passive sentence are shown clearly in Figure FIGREF2.",
"As shown in Figure FIGREF2, basic rules are:",
"The object of the active sentence becomes the subject of the passive sentence;",
"The subject of the active sentence becomes the object of the passive sentence;",
"The finite form of the verb is changed to “to be + past participle\".",
"As my best understanding so far, there are only a few works mentioning the problem of active sentence and passive sentence in terms of language processing and computational linguistics. The conversion between active sentence and passive sentence was early mentioned in BIBREF6 by using a transformation rule to express the relationship between active and passive sentences. According to this rule, a parse tree is produced to represent the deep structure and determine whether the given sentence is active or passive. Similarly, BIBREF7 also used a tree-to-tree mapping to represent the active/passive transformation rule. However, these works just stopped in introducing how to transform an active sentence to passive sentence and did not solve many cases of them. Actually, there are many cases of active and passive sentences, leading to extra rules for converting between them. It is not easy to handle all these cases, and this is the main challenge of this work. My contributions are shown as follows:",
"As far as I know, this may be the first work utilizing Prolog and DCG to solve a variety of cases of converting between active sentence and passive sentence such as 12 English tenses, modal verbs, negative form, etc.",
"I proposed a compact version of the representation of the sentence as shown in Figure FIGREF48 and Figure FIGREF50.",
"In order to deal with 12 tenses in English, I proposed an auxiliary-based solution (is presented in Section SECREF67) for dividing 12 tenses into 4 groups. This is a very nice solution that reduces the workload of defining DCG rules.",
"I also proposed a three-steps conversion (is presented in Section SECREF73) for doing the conversion between active sentence and passive sentence."
],
[
"The main challenge of this work is how much it can handle cases. There are a variety of cases in terms of active sentence and passive sentence. The cases that I solved in this work are shown as follows.",
"The possibility of the conversion: the prerequisite to convert an active sentence to a passive sentence is that the active sentence must have the object. For instance:",
"The sentence “the man buys an apple\" is converted to the passive form being “an apple is bought by the man\";",
"However, the sentence “the man goes to school\" cannot be converted to the passive form because of the lack of object.",
"The tenses of the sentence: there are 12 tenses in English such as simple present tense, continuous past tense, perfect present tense, perfect continuous future tense, etc. With each tense, there is a specific way for converting between active sentence and passive sentence. For example (from active form to passive form):",
"In the simple present tense: “the man buys an apple\" is converted to “an apple is bought by the man\";",
"In the perfect present tense: “the man has bought an apple\" is converted to “an apple has been bought by the man\".",
"This work handles all these 12 tenses.",
"The form of past participle: commonly, a verb is converted to past participle form by adding “ed\" at the end (example: “add\" becomes “added\", “look\" becomes “looked\"). However, there are some exceptions such as “buy\" becomes “bought\", “see\" becomes “seen\", etc.",
"The case of negative sentence. For example, the negative form of “the man buys an apple\" is “the man does not buy an apple\", and the corresponding passive sentence is “an apple is not bought by the man\".",
"The case of modal verb: modal verbs (also called modals, modal auxiliary verbs, modal auxiliaries) are special verbs which behave irregularly in English. They are different from normal verbs like “work\", “play\", “visit\", etc. Modal verbs are always followed by an infinitive without “to\". For example, the sentence “the boy should bring a pen to the class\" is converted to the passive form being “a pen should be brought by the boy to the class\" (Figure FIGREF2).",
"Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun “he\" is used for the subject as “he\" but is used for the object as “him\"."
],
[
"The objective of this work is sentences: active sentence and passive sentence, so I need to determine the representation of both active sentence and passive sentence.",
"An active sentence is built out of a noun phrase and a verb phrase. Therefore basically, the representation of an active sentence is s(NP,VP). The noun phrase or verb phrase is built out of fundamental elements such as determiner, noun, adjective, verb, etc. Simply, the representation of fundamental elements are shown as follows:",
"Determiner: det(X). Example: det(a), det(an), det(the), etc.",
"Noun: n(X). Example: n(man), n(woman), n(apple), etc.",
"Pronoun: pro(X). Example: pro(he), pro(she), pro(him), etc.",
"Adjective: adj(X). Example: adj(small), adj(big), adj(beautiful), etc.",
"Verb: v(X). Example: v(play), v(like), v(love), etc.",
"Preposition: pre(X). Example: pre(on), pre(in), pre(by), etc.",
"Auxiliary verb: aux(X). Example: aux(do), aux(does), aux(is), aux(be), etc. Actually, there are three types of auxiliary verbs are used in this work. For example, the sentence “you will have been loving them\" (perfect continuous future tense) has three auxiliary verbs are “will\", “have\", “been\" which are determined by three predicates aux/5, aux1/4, aux2/4 as shown in the source code (convertible.pl), respectively.",
"Auxiliary verb for tense in the passive form: auxTense(X). There are three groups of auxTense:",
"Group 1: including only simple future tense: auxTense(be). Example: “an apple will be bought buy the man\".",
"Group 2: consisting of continuous past tense, continuous present tense, continuous future tense, perfect continuous past tense, perfect continuous present tense, and perfect continuous future tense: auxTense(being). Example: “an apple was being bought by a man\", “an apple will be being bought by him\".",
"Group 3: including perfect past tense, perfect present tense, and perfect future tense: auxTense(been). Example: “an apple has been bought by the man\", “an apple will have been bought by the man\".",
"Modal verb: modal(X). Example: modal(should), modal(can), modal(may), etc.",
"Moreover, this work also uses pol(not) for the negative form and agent(by) for the passive form.",
"With a noun phrase, there are some ways to build the noun phrase such as:",
"A noun phrase is built out of a determiner and a noun, so its representation is np(DET,N). Example: noun phrase “the man\" has the representation is np(det(the),n(man)).",
"A noun phrase is built out of pronoun such as “he\", “she\", “we\", etc. In this case, the representation of the noun phrase is simply np(PRO). For example: np(pro(he)).",
"A noun phrase is built out of a determiner, adjectives, and a noun. In this case, the representation of the noun phrase is np(DET,ADJ,N). For example, the noun phrase “a small beautiful girl\" has the representation is np(det(a),adi([small, beautiful]), n(girl)).",
"A noun phrase is built out of a noun phrase and a prepositional phrase. The representation of the noun phrase in this case is np(DET,N,PP), np(PRO,PP), or np(DET,ADJ,N,PP). For example, the noun phrase “a cat on the big table\" has the representation is",
"np(det(a),n(cat),pp(pre(on),det(the),adj([big]),n(table))).",
"With a verb phrase, there are two ways to build the verb phrase:",
"A verb phrase is built out of a verb and a noun phrase. In this case, the presentation of the verb phrase is vp(V,NP). For example, the verb phrase “love a beautiful woman\" has the representation is vp(v(love), np(det(a), adj([beautiful]), n(woman))).",
"A verb phrase is built out of only a verb, so its representation is simply vp(V). Example: vp(v(love)) or vp(v(eat)). In fact, as presented above, in order to be able to convert from an active sentence to a passive sentence, the active sentence has to have the object. Therefore, the case of verb phrase vp(V) will not be considered in this work.",
"After having the representation of noun phrase and verb phrase, the representation of the sentence could be obtained.",
"Originally, the active sentence “he buys an apple\" has the representation is",
"s(np(pro(he)),vp(v(buys),np(det(an),n(apple)))).",
"However, as presented above, this work only considers the case of verb phrase vp(V,NP), so I proposed a compact version of the representation of the sentence as shown in Figure FIGREF48.",
"Therefore, the active sentence “he buys an apple\" has the representation is",
"s(np(pro(he)), v(buys), np(det(an), n(apple))).",
"The passive sentence “an apple is bought by him\" has the representation is",
"s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(",
"him))).",
"As introduced in the DCG BIBREF1, the representation of the sentence is represented by “parse tree\" as illustrated in Figure FIGREF48 (active sentence) and Figure FIGREF50 (passive sentence). Parse tree could be found with the help of advanced techniques like extra arguments and extra goals.",
"“Inference\" is the conversion between a sentence and its representation, or even the conversion between an active sentence and a passive sentence:",
"Given a sentence, “inference\" is the process of getting the representation of that sentence;",
"Given a representation of a sentence, “inference\" is the process of getting that sentence.",
"The final purpose of this work is that:",
"Given an active sentence, we will get the respective passive sentence; and vice versa,",
"Given a passive sentence, we will get the respective active sentence."
],
[
"User interacts with the program by posing the query with the form (Figure FIGREF56):",
"convert(ActiveS, ActiveRe, PassiveS, PassiveRe).",
"Where:",
"ActiveS: the active sentence represented by a list where each element of the list corresponds to each word of the sentence. Example: [he,buys,an,apple].",
"ActiveRe: the representation of the active sentence ActiveS.",
"Example: s(np(pro(he)),v(buys),np(det(an),n(apple))).",
"PassiveS: the passive sentence represented by a list where each element of the list corresponds to each word of the sentence. Example: [an,apple,is,bought,by,him].",
"PassiveRe: the representation of the passive sentence PassiveS. Example:",
"s(np(det(an),n(apple)),aux(is),v(bought),agent(by),np(pro(him))).",
"Input will be either ActiveS or PassiveS for the case of converting from an active sentence to a passive sentence and the case of converting from a passive sentence to an active sentence, respectively.",
"There are several cases of output:",
"If the input is ActiveS and it is able to convert to the passive sentence, the outputs will be ActiveRe, PassiveS, and PassiveRe.",
"If the input is PassiveS and it is able to convert to the active sentence, the outputs will be ActiveS, ActiveRe, and PassiveRe.",
"If the input is either ActiveS or PassiveS but it is not able to convert to passive/active sentence, the output will be ‘false’. There are some cases which cannot be converted:",
"ActiveS is the active sentence but is typed as a passive sentence;",
"PassiveS is the passive sentence but is typed as an active sentence;",
"ActiveS is an active sentence having no object. Example: the sentence “he goes\" cannot be converted to the passive sentence.",
"Especially, we can pose the query with no input, and the program will generate all possible cases of the active sentence and passive sentence. Some examples to make user interaction more clear will be presented in Section SECREF4."
],
[
"There are 12 tenses in English. Each tense has a specific structure for the sentence. If each tense is handled individually, it will be quite long and be not an optimal solution. Therefore, as my best observation, I found a solution which divides 12 English tenses into 4 groups (same color means same group) based on the number of auxiliary verbs in the active sentence. This solution is summarized in Figure FIGREF72, consisting of:",
"Group 1: the number of auxiliary verbs in the active sentence is equal to 0. This group consists of the simple past tense and the simple present tense;",
"Group 2: the number of auxiliary verbs in the active sentence is equal to 1. We have 5 tenses in this group, those are the simple future tense, the continuous past tense, the continuous present tense, the perfect past tense, and the perfect present tense;",
"Group 3: the number of auxiliary verbs in the active sentence is equal to 2. This group consists of the continuous future tense, the perfect future tense, the perfect continuous past tense, and the perfect continuous present tense;",
"Group 4: the number of auxiliary verbs in the active sentence is equal to 3. This group has only one tense which is the perfect continuous future tense.",
"As we can easily see in Figure FIGREF72, tenses in the same group has the same structure of representation. For example, DCG rules for active sentence and passive sentence of group 3 are implemented as follows."
],
[
"The three-steps conversion consists of three steps:",
"From the input sentence fed as a list, the program first finds the representation of the sentence.",
"From the representation of active or passive sentence, the program then finds the representation of passive or active sentence, respectively.",
"From the representation achieved in the 2nd step, the program returns the converted sentence as a list.",
"The implementation of the three-steps conversion (written in convert.pl) is shown as follows.",
"The 1st and 3rd steps are done by using DCG rules (implemented in convertible.pl). The 2nd step is easily done by the rule like:",
"As you can see above, the 2nd step is easily done by doing the conversion between corresponding elements. More details for other groups are shown in convert.pl."
],
[
"All implementations above are for the positive form of the sentence. The negative form of the sentence can be easily done by inheriting the rules that are defined for the positive form. DCG rule for the negative form is implemented as follows.",
"DCG rules for the negative form is almost similar to those of the positive form, except from pol/1 predicate. However, in the 2nd step for the negative form, it completely utilizes the rule for the positive form as follows.",
"However, there is an exception of the 2nd step for group 1, it needs an extra rule like:",
"As we can see above, the negative form of group 1 needs the extra rule lex(AUX_POL,pol,Tense",
",Qs) because, in this negative form, an extra auxiliary verb is needed. For example, the positive sentence is “he buys an apple\", but the corresponding negative sentence is “he does not buy an apple\". Other implementations such as lexicon, modal verbs, etc. are carefully written in the source code."
],
[
"This work has been already done with three files:",
"convertible.pl: implementing DCG rules for 1st and 3rd steps in the three-steps conversion, as well as other rules including lexicon.",
"convert.pl: implementing the three-steps conversion and its 2nd step.",
"testSuite.pl: providing commands for user interaction. Users do not need to type the input sentence as a list (like [the, man, buys, an, apple]) but can type the sentence in the common way (directly type: the man buys an apple) by using two commands: active and passive. Moreover, users can easily check the correctness of the program by using two test suite commands: activeTestSuite and passiveTestSuite.",
"Some execution examples are shown as follows.",
"It should be noted that if users use active or passive commands, everything they type has to be defined in the lexicon or users have to define them in the lexicon (implemented in convertible.pl)."
],
[
"I introduced an effort to solve the problem of active and passive sentences using Prolog in terms of computation linguistics. By observing the possibility of converting an active sentence to passive sentence, I proposed a compact version of the representation of the sentence (Figure FIGREF48 and Figure FIGREF50). I also introduced a solution called auxiliary-based solution (Section SECREF67) to deal with 12 tenses in English. The auxiliary-based solution helps to reduce the workload of defining DCG rules. Finally, I proposed the three-steps conversion (Section SECREF73) for converting between active sentence and passive sentence. In the future, this work should consider solving other cases of active and passive sentences as much as possible."
]
],
"section_name": [
"Introduction",
"Analysis and Discussion ::: Cases to be solved",
"Analysis and Discussion ::: Representation and Inference",
"Design and Implementation ::: Scenario for user interaction",
"Design and Implementation ::: Auxiliary-based solution to handle 12 English tenses",
"Design and Implementation ::: Three-steps conversion",
"Design and Implementation ::: Others",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1e0fa4309a720cb4870ddfe8e6f05744cf596f7c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"613c3ff63db7539b7b74501e6bfed9d59bfcb86e"
],
"answer": [
{
"evidence": [
"convertible.pl: implementing DCG rules for 1st and 3rd steps in the three-steps conversion, as well as other rules including lexicon."
],
"extractive_spans": [],
"free_form_answer": "Author's own DCG rules are defined from scratch.",
"highlighted_evidence": [
"convertible.pl: implementing DCG rules for 1st and 3rd steps in the three-steps conversion, as well as other rules including lexicon."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a79aebf738b6fb3e5a060b5871e2a9e2e09ef29f"
],
"answer": [
{
"evidence": [
"Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun “he\" is used for the subject as “he\" but is used for the object as “him\"."
],
"extractive_spans": [
"cases of singular/plural, subject pronoun/object pronoun, etc."
],
"free_form_answer": "",
"highlighted_evidence": [
"Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun “he\" is used for the subject as “he\" but is used for the object as “him\"."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c0d9e1b6df1087c71eb9d10a05d43f4a1107db3b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3fb6e6abf5c06c14ac05ffae10f6da541d34d956"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4618e36aebd3a0e7e2bbb3e55e52571e080974ac"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Is there a machine learning approach that tries to solve same problem?",
"What DCGs are used?",
"What else is tried to be solved other than 12 tenses, model verbs and negative form?",
"What is used for evaluation of this approach?",
"Is there information about performance of these conversion methods?",
"Are there some experiments performed in the paper?"
],
"question_id": [
"bee74e96f2445900e7220bc27795bfe23accd0a7",
"a56fbe90d5d349336f94ef034ba0d46450525d19",
"b1f2db88a6f89d0f048803e38a0a568f5ba38fc5",
"cf3af2b68648fa8695e7234b6928d014e3b141f1",
"7883a52f008f3c4aabfc9f71ce05d7c4107e79bb",
"cd9776d03fe48903e43e916385df12e1e798070a"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A variety of stuff in English grammar",
"Figure 2: Basic rules for converting an active sentence to passive sentence",
"Figure 3: The compact version of the representation of the active sentence",
"Figure 4: The representation of the passive sentence",
"Figure 5: The scenario for user interaction",
"Figure 6: Auxiliary-based solution: the division of 12 tenses into 4 groups"
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"7-Figure6-1.png"
]
} | [
"What DCGs are used?"
] | [
[
"2001.05672-Results-1"
]
] | [
"Author's own DCG rules are defined from scratch."
] | 305 |
1911.02711 | Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis | Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of review and summary information in an implicit manner, which limits their performance to some extent. In this paper, we propose a hierarchically-refined attention network for better exploiting multi-interaction between a review and its summary for sentiment analysis. In particular, the representation of a review is layer-wise refined by attention over the summary representation. Empirical results show that our model can better make use of user-written summaries for review sentiment analysis, and is also more effective compared to existing methods when the user summary is replaced with summary generated by an automatic summarization system. | {
"paragraphs": [
[
"Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification.",
"To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time.",
"One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders.",
"To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification.",
"We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark."
],
[
"The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11.",
"In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words.",
"Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification.",
"There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent."
],
[
"In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods."
],
[
"The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a summary and $X^s = x^s_1, x^s_2,...,x^s_m$ is a review, the task is to predict the sentiment label $y \\in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\\lbrace (X^w_i, X^s_i, y_i)\\rbrace |_{i=1}^M$ where $M$ is the total number of training examples."
],
[
"Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input calculating dot-product attention weights for original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connection are also adopted. The output layer predicts the potential sentiment label according to hidden states from the previous layer."
],
[
"Input for the summary encoder is a sequence of summary word representations $\\mathbf {x}^s = \\mathbf {x}^s_1, \\mathbf {x}^s_2, ..., \\mathbf {x}^s_m = \\lbrace emb(x_1^s), ..., emb(x_m^s)\\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\\mathbf {h}_t$ are calculated from a sequence of $\\mathbf {x}_t$($t \\in [1,...,m]$).",
"A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\\lbrace {\\stackrel{\\rightarrow }{\\mathbf {h}_1^s}},...,{\\stackrel{\\rightarrow }{\\mathbf {h}_n^s}}\\rbrace $ and a sequence of backward hidden states $\\lbrace {\\stackrel{\\leftarrow }{\\mathbf {h}_1^s}},...,{\\stackrel{\\leftarrow }{\\mathbf {h}_n^s}}\\rbrace $, respectively. The two hidden states are concatenated to form a final representation:",
"We then apply an average-pooling operation over the hidden and take $\\mathbf {h}^s = avg\\_pooling(\\mathbf {h}^s_1, \\mathbf {h}^s_2,...,\\mathbf {h}^s_n)$ as the final representation of summary text."
],
[
"The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer."
],
[
"Given a review $\\mathbf {x}^w = \\lbrace emb(x_1^w),...,emb(x_n^w)\\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\\mathbf {H}^w=\\lbrace \\mathbf {h}^w_1, \\mathbf {h}^w_2,...,\\mathbf {h}^s_n \\rbrace $."
],
[
"In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention.Each head produces an attention matrix $\\mathbf {\\alpha } \\in \\mathbb {R}^{d_h \\times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated by",
"where $\\mathbf {W}_i^Q \\in \\mathbb {R}^{d_{h} \\times \\frac{d_{h}}{k}}$, $\\mathbf {W}_i^K \\in \\mathbb {R}^{d_{h} \\times \\frac{d_{h}}{k}}$ and $\\mathbf {W}_i^V \\in \\mathbb {R}^{d_{h} \\times \\frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \\in [1,k]$ indicates which head is being processed.",
"Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19 :",
"$\\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any.",
"According to the equations of standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting summary representation. The hidden states $\\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompass key features of summary representation. Multi-head attention mechanism ensures that multi-faced semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review.",
"Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness."
],
[
"Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer:",
"where $\\hat{y}$ is the predicted sentiment label; $\\mathbf {W}$ and $\\mathbf {b}$ are parameters to be learned."
],
[
"Given a dataset $D={\\lbrace (X^w_t,X^s_t,y_t)\\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between",
"where $\\mathbf {p}^{y_t}$ denotes the value of the label in $\\mathbf {p}$ that corresponds to $y_t$."
],
[
"We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects."
],
[
"We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set."
],
[
"We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\\beta _1 = 0.9$, $\\beta _2 = 0.999$, and $\\epsilon = 1 \\times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV.",
"We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model."
],
[
"This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder."
],
[
"This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations."
],
[
"For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem."
],
[
"This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours."
],
[
"To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary.",
"For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder."
],
[
"We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29."
],
[
"We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either."
],
[
"We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments."
],
[
"As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance."
],
[
"Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary.",
"A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents.",
"With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states.",
"Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries."
],
[
"Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information."
],
[
"Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\\ge 50$.",
"As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !\", which suggests that the game is (1) fun (from word “fun\") and (2) not difficult to learn (from phrase “all ages\"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun\", which is relevant to the word “fun\" in the summary. In comparisons the second layer attends to the phrase “much easier\", which is relevant to the phrase “in all ages\" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information.",
"Figure FIGREF38 illustrates a 5-star-rating example with golden summary. The summary text is “Favorite Game to Teach to Newbies\". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard\", “fun\", “immensely\" and “most\", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach\", which is a perfect match of the phrase “teach to newbies\" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone\", which links to “easy to teach\" and “Teach to Newbies\", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary."
],
[
"We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset."
]
],
"section_name": [
"Introduction",
"Related Work",
"Method",
"Method ::: Problem Formulation",
"Method ::: Model Overview",
"Method ::: Summary Encoder",
"Method ::: Hierarchically-Refined Review Encoder",
"Method ::: Hierarchically-Refined Review Encoder ::: Sequence Encoding Layer",
"Method ::: Hierarchically-Refined Review Encoder ::: Attention Inference Layer",
"Method ::: Output Layer",
"Method ::: Training",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Experimental Settings",
"Experiments ::: Baselines ::: HSSC @!START@BIBREF6@!END@.",
"Experiments ::: Baselines ::: SAHSSC @!START@BIBREF7@!END@.",
"Experiments ::: Baselines ::: BiLSTM+Pooling.",
"Experiments ::: Baselines ::: BiLSTM+Self-attention @!START@BIBREF13@!END@.",
"Experiments ::: Baselines ::: BiLSTM+Hard Attention",
"Experiments ::: Development Experiments",
"Experiments ::: Development Experiments ::: Self-attention Baseline",
"Experiments ::: Development Experiments ::: Hidden Size",
"Experiments ::: Development Experiments ::: Number of Layers",
"Experiments ::: Results",
"Experiments ::: Results ::: Review Length",
"Experiments ::: Results ::: Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1e38f91f9d12f8b3d01c46b7b00f139d81e5df0e"
],
"answer": [
{
"evidence": [
"To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification.",
"FLOAT SELECTED: Figure 2: Three model structures for incorporating summary into sentiment classification"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer.",
"FLOAT SELECTED: Figure 2: Three model structures for incorporating summary into sentiment classification"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"6c214a7258a5f324981240fb40ca2593e335e037"
],
"answer": [
{
"evidence": [
"Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary.",
"FLOAT SELECTED: Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference.",
"FLOAT SELECTED: Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Noted that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table.",
"Experiments ::: Datasets",
"We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set."
],
"extractive_spans": [],
"free_form_answer": "2.7 accuracy points",
"highlighted_evidence": [
"Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets.",
"FLOAT SELECTED: Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference.",
"FLOAT SELECTED: Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Noted that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table.",
"Experiments ::: Datasets\nWe empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"4d4d702ae019bc7e349389e242e7516e8d927ca7"
],
"answer": [
{
"evidence": [
"We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark.",
"We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set."
],
"extractive_spans": [
"SNAP (Stanford Network Analysis Project)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries.",
"We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project.",
"For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they predict the sentiment of the review summary?",
"What is the performance difference of using a generated summary vs. a user-written one?",
"Which review dataset do they use?"
],
"question_id": [
"b2c8c90041064183159cc825847c142b1309a849",
"68e3f3908687505cb63b538e521756390c321a1c",
"2f9d30e10323cf3a6c9804ecdc7d5872d8ae35e4"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 2: Three model structures for incorporating summary into sentiment classification",
"Figure 3: Architecture of proposed model (Xw = xw1 , x w 2 , ..., x w n : review; X s = xs1, x s 2, ..., x s m: summary).",
"Table 1: Data statistics. Size: number of samples, #Review: the average length of review, #Summary: the average length of summary.",
"Table 2: Statistics of generated summary. #Recall refers to the percentage of words in a summary that occur in the corresponding review. #ROUGE (Lin, 2004) indicates the abstractive summarization experimental result reported in HSSC (Ma et al., 2018), including ROUGE-1, ROUGE-2, ROUGE-L, respectively.",
"Table 3: Results (with golden summary) on the development set of Toys&Games. #Hidden: LSTM hidden size, # Layer: number of layers, Acc: accuracy, # Param: number of parameters",
"Table 4: Experimental results. Predicted indicates the use of system-predicted summaries. Star (*) indicates that hard attention model is trained with golden summaries but does not require golden summaries during inference.",
"Table 5: Experimental results. Golden indicates the use of user-written (golden) summaries. Noted that joint modeling methods, such as HSSC (Ma et al., 2018) and SAHSSC (Wang and Ren, 2018), cannot make use of golden summaries during inference time, so their results are excluded in this table.",
"Figure 4: Accuracy against the review length",
"Figure 5: Visualizations of self-attention and hierarchically-refined attention, one with generated summary and the other with golden summary. (1) . . . . . . . . . . . . . . . . . . . . . . . . .BiLSTM+self-attention:. . . . . . .dot . . . . .line. . ./ . . . . . .blue . . . . . . .color; (2) First layer of our model: straight line / pink color; (3) Second layer of our model: dash line / yellow color. Deeper"
],
"file": [
"2-Figure2-1.png",
"3-Figure3-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"What is the performance difference of using a generated summary vs. a user-written one?"
] | [
[
"1911.02711-7-Table4-1.png",
"1911.02711-7-Table5-1.png",
"1911.02711-Experiments ::: Datasets-0",
"1911.02711-Experiments ::: Results-0"
]
] | [
"2.7 accuracy points"
] | 307 |
2001.11381 | Generación automática de frases literarias en español | In this work we present a state of the art in the area of Computational Creativity (CC). In particular, we address the automatic generation of literary sentences in Spanish. We propose three models of text generation based mainly on statistical algorithms and shallow parsing analysis. We also present some rather encouraging preliminary results. | {
"paragraphs": [
[
"Los investigadores en Procesamiento de Lenguaje Natural (PLN) durante mucho tiempo han utilizado corpus constituidos por documentos enciclopédicos (notablemente Wikipedia), periodísticos (periódicos o revistas) o especializados (documentos legales, científicos o técnicos) para el desarrollo y pruebas de sus modelos BIBREF0, BIBREF1, BIBREF2.",
"La utilización y estudios de corpora literarios sistemáticamente han sido dejados a un lado por varias razones. En primer lugar, el nivel de discurso literario es más complejo que los otros géneros. En segundo lugar, a menudo, los documentos literarios hacen referencia a mundos o situaciones imaginarias o alegóricas, a diferencia de los otros géneros que describen sobre todo situaciones o hechos factuales. Estas y otras características presentes en los textos literarios, vuelven sumamente compleja la tarea de análisis automático de este tipo de textos. En este trabajo nos proponemos utilizar corpora literarios, a fin de generar realizaciones literarias (frases nuevas) no presentes en dichos corpora.",
"La producción de textos literarios es el resultado de un proceso donde una persona hace uso de aptitudes creativas. Este proceso, denominado “proceso creativo”, ha sido analizado por BIBREF3, quien propone tres tipos básicos de creatividad: la primera, Creatividad Combinatoria (CCO), donde se fusionan elementos conocidos para la generación de nuevos elementos. La segunda, Creatividad Exploratoria (CE), donde la generación ocurre a partir de la observación o exploración. La tercera, Creatividad Transformacional (CT), donde los elementos generados son producto de alteraciones o experimentaciones aplicadas al dominio de la CE.",
"Sin embargo, cuando se pretende automatizar el proceso creativo, la tarea debe ser adaptada a métodos formales que puedan ser realizados en un algoritmo. Este proceso automatizado da lugar a un nuevo concepto denominado Creatividad Computacional (CC), introducido por BIBREF4, quien retoma para ello la CT y la CE propuestas por BIBREF3.",
"La definición de literatura no tiene un consenso universal, y muchas variantes de la definición pueden ser encontradas. En este trabajo optaremos por introducir una definición pragmática de frase literaria, que servirá para nuestros modelos y experimentos.",
"Definición. Una frase literaria es una frase que se diferencia de las frases en lengua general, porque contiene elementos (nombres, verbos, adjetivos, adverbios) que son percibidos como elegantes o menos coloquiales que sus equivalentes en lengua general.",
"En particular, proponemos crear artificialmente frases literarias utilizando modelos generativos y aproximaciones semánticas basados en corpus de lengua literaria. La combinación de esos modelos da lugar a una homosintaxis, es decir, la producción de texto nuevo a partir de formas de discurso de diversos autores. La homosintaxis no tiene el mismo contenido semántico, ni siquiera las mismas palabras, aunque guarda la misma estructura sintáctica.",
"En este trabajo proponemos estudiar el problema de la generación de texto literario original en forma de frases aisladas, no a nivel de párrafos. La generación de párrafos puede ser objeto de trabajos futuros. Una evaluación de la calidad de las frases generadas por nuestro sistema será presentada.",
"Este artículo está estructurado como sigue. En la Sección SECREF2 presentamos un estado del arte de la creatividad computacional. En la Sección SECREF3 describimos los corpus utilizados. Nuestros modelos son descritos en la Sección SECREF4. Los resultados y su interpretación se encuentran en la Sección SECREF5. Finalmente la Sección SECREF6 presenta algunas ideas de trabajos futuros antes de concluir."
],
[
"La generación de texto es una tarea relativamente clásica, que ha sido estudiada en diversos trabajos. Por ejemplo, BIBREF5 presentan un modelo basado en cadenas de Markov para la generación de texto en idioma polaco. Los autores definen un conjunto de estados actuales y calculan la probabilidad de pasar al estado siguiente. La ecuación (DISPLAY_FORM1) calcula la probabilidad de pasar al estado $X_{i}$ a partir de $X_{j}$,",
"Para ello, se utiliza una matriz de transición, la cual contiene las probabilidades de transición de un estado actual $X_i$ a los posibles estados futuros $X_{i+1}$. Cada estado puede estar definido por $n$-gramas de letras o de palabras.",
"La tarea inicia en un estado $X_i$ dado por el usuario. Posteriormente, usando la matriz de transición, se calcula la probabilidad de pasar al estado siguiente $X_{i+1}$. En ese momento el estado predicho $X_{i+1}$ se convierte en el estado actual $X_i$, repitiendo este proceso hasta satisfacer una condición. Este método tiene un buen comportamiento al generar palabras de 4 o 5 letras. En polaco esta longitud corresponde a la longitud media de la mayor parte de las palabras BIBREF6.",
"También hay trabajos que realizan análisis más profundos para generar no solamente palabras, sino párrafos completos. BIBREF7 presentan un algoritmo que genera automáticamente comentarios descriptivos para bloques de código (métodos) en Java. Para ello, se toma el nombre del método y se usa como la acción o idea central de la descripción a generar. Posteriormente se usan un conjunto de heurísticas, para seleccionar las líneas de código del método que puedan aportar mayor información, y se procesan para generar la descripción. La tarea consiste en construir sintagmas, a partir de la idea central dada por el nombre del método, y enriquecerlos con la información de los elementos extraídos. Por ejemplo, si hay un método removeWall(Wall x) y se encuentra la llamada al método removeWall(oldWall), la descripción generada podría ser: “Remove old Wall”. Obteniéndose la acción (verbo) y el objeto (sustantivo) directamente del nombre del método y el adjetivo a partir de la llamada. Estas ideas permiten a los autores la generación de comentarios extensos sin perder la coherencia y la gramaticalidad.",
"También se encuentran trabajos de generación textual que se proponen como meta resultados con un valor más artístico. BIBREF8 presentan un conjunto de algoritmos para la generación de una guía narrativa basada en la idea de Creatividad Exploratoria BIBREF3. El modelo establece i/ un conjunto universal U de conceptos relevantes relacionados a un dominio; ii/ un modelo generador de texto; iii/ un subconjunto de conceptos S que pertenecen al conjunto universal U; y iv/ algoritmos encargados de establecer las relaciones entre U y S para generar nuevos conceptos. Estos nuevos conceptos serán posteriormente comparados con los conceptos ya existentes en U para verificar la coherencia y relación con la idea principal. Si los resultados son adecuados, estos nuevos conceptos se utilizan para dar continuación a la narrativa.",
"Son diversos los trabajos que están orientados a la generación de una narrativa ficticia como cuentos o historias. BIBREF9 proponen un modelo de generación de texto narrativo a partir del análisis de entidades. Dichas entidades son palabras (verbos, sustantivos o adjetivos) dentro de un texto que serán usados para generar la frase siguiente. El modelo recupera las entidades obtenidas de tres fuentes principales: la frase actual, la frase previa y el documento completo (contexto), y las procesa con una red neuronal para seleccionar las mejores de acuerdo a diversos criterios. A partir de un conjunto de heurísticas, se analizaron las frases generadas para separar aquellas que expresaran una misma idea (paráfrasis), de aquellas que tuvieran una relación entre sus entidades pero con ideas diferentes.",
"La generación de texto literario es un proceso muy diferente a la generación de texto aleatorio BIBREF10, BIBREF11 y tampoco se limita a una idea o concepto general. El texto literario está destinado a ser un documento elegante y agradable a la lectura, haciendo uso de figuras literarias y un vocabulario distinto al empleado en la lengua general. Esto da a la obra una autenticidad y define el estilo del autor. El texto literario también debe diferenciarse de las estructuras rígidas o estereotipadas de los géneros periodístico, enciclopédico o científico.",
"BIBREF12 proponen un modelo para la generación de poemas y se basa en dos premisas básicas: ¿qué decir? y ¿cómo decirlo? La propuesta parte de la selección de un conjunto de frases tomando como guía una lista de palabras dadas por el usuario. Las frases son procesadas por un modelo de red neuronal BIBREF13, para construir combinaciones coherentes y formular un contexto. Este contexto es analizado para identificar sus principales elementos y generar las líneas del poema, que también pasarán a formar parte del contexto. El modelo fue evaluado manualmente por 30 expertos en una escala de 1 a 5, analizando legibilidad, coherencia y significatividad en frases de 5 palabras, obteniendo una precisión de 0.75. Sin embargo, la coherencia entre frases resultó ser muy pobre.",
"BIBREF14, BIBREF15 proponen un modelo de generación de poemas a base de plantillas. El algoritmo inicia con un conjunto de frases relacionadas a partir de palabras clave. Las palabras clave sirven para generar un contexto. Las frases son procesadas usando el sistema PEN para obtener su información gramatical. Esta información es empleada para la generación de nuevas platillas gramaticales y finalmente la construcción de las líneas del poema, tratando de mantener la coherencia y la gramaticalidad.",
"El modelo sentiGAN BIBREF16 pretende generar texto con un contexto emocional. Se trata de una actualización del modelo GAN (Generative Adversarial Net) BIBREF17 que ha producido resultados alentadores en la generación textual, aunque con ciertos problemas de calidad y coherencia. Se utiliza el análisis semántico de una entrada proporcionada por el usuario que sirve para la creación del contexto. La propuesta principal de SentiGAN sugiere establecer un número definido de generadores textuales que deberán producir texto relacionado a una emoción definida. Los generadores son entrenados bajo dos esquemas: i/ una serie de elementos lingüísticos que deben ser evitados para la generación del texto; y ii/ un conjunto de elementos relacionados con la emoción ligada al generador. A través de cálculos de distancia, heurísticas y modelos probabilísticos, el generador crea un texto lo más alejado del primer esquema y lo más cercano al segundo.",
"También existen trabajos con un alcance más corto pero de mayor precisión. BIBREF18 proponen la evaluación de un conjunto de datos con un modelo basado en redes neuronales para la generación de subconjuntos de multi-palabras. Este mismo análisis, se considera en BIBREF19, en donde se busca establecer o detectar la relación hiperónimo-hipónimo con la ayuda del modelo de Deep Learning Word2vec BIBREF20. La propuesta de BIBREF19 reporta una precisión de 0.70 al ser evaluado sobre un corpus manualmente etiquetado.",
"La literatura es una actividad artística que exige capacidades creativas importantes y que ha llamado la atención de científicos desde hace cierto tiempo. BIBREF4 realiza un estado del arte interesante donde menciona algunos trabajos que tuvieron un primer acercamiento a la obra literaria desde una perspectiva superficial. Por ejemplo, el modelo “Through the park” BIBREF21, es capaz de generar narraciones históricas empleando la elipsis. Esta técnica es empleada para manipular, entre otras cosas, el ritmo de la narración. En los trabajos “About So Many Things” BIBREF22 y “Taroko Gorge” BIBREF23 se muestran textos generados automáticamente. El primero de ellos genera estrofas de 4 líneas estrechamente relacionadas entre ellas. Eso se logra a través de un análisis gramatical que establece conexiones entre entidades de distintas líneas. El segundo trabajo muestra algunos poemas cortos generados automáticamente con una estructura más compleja que la de las estrofas. El inconveniente de ambos enfoques es el uso de una estructura inflexible, lo que genera textos repetitivos con una gramaticalidad limitada.",
"El proyecto MEXICA modela la generación colaborativa de narraciones BIBREF4. El propósito es la generación de narraciones completas utilizando obras de la época Precolombina. MEXICA genera narraciones simulando el proceso creativo de E-R (Engaged y Reflexive) BIBREF24. Este proceso se describe como la acción, donde el autor trae a su mente un conjunto de ideas y contextos y establece una conexión coherente entre estas (E). Posteriormente se reflexiona sobre las conexiones establecidas y se evalúa el resultado final para considerar si este realmente satisface lo esperado (R). El proceso itera hasta que el autor lo considera concluido."
],
[
"Este corpus fue constituido con aproximadamente 5 000 documentos (en su mayor parte libros) en español. Los documentos originales, en formatos heterogéneos, fueron procesados para crear un único documento codificado en utf8. Las frases fueron segmentadas automáticamente, usando un programa en PERL 5.0 y expresiones regulares, para obtener una frase por línea.",
"Las características del corpus 5KL se encuentran en la Tabla TABREF4. Este corpus es empleado para el entrenamiento de los modelos de aprendizaje profundo (Deep Learning, Sección SECREF4).",
"El corpus literario 5KL posee la ventaja de ser muy extenso y adecuado para el aprendizaje automático. Tiene sin embargo, la desventaja de que no todas las frases son necesariamente “frases literarias”. Muchas de ellas son frases de lengua general: estas frases a menudo otorgan una fluidez a la lectura y proporcionan los enlaces necesarios a las ideas expresadas en las frases literarias.",
"Otra desventaja de este corpus es el ruido que contiene. El proceso de segmentación puede producir errores en la detección de fronteras de frases. También los números de página, capítulos, secciones o índices producen errores. No se realizó ningún proceso manual de verificación, por lo que a veces se introducen informaciones indeseables: copyrights, datos de la edición u otros. Estas son, sin embargo, las condiciones que presenta un corpus literario real."
],
[
"Un corpus heterogéneo de casi 8 000 frases literarias fue constituido manualmente a partir de poemas, discursos, citas, cuentos y otras obras. Se evitaron cuidadosamente las frases de lengua general, y también aquellas demasiado cortas ($N \\le 3$ palabras) o demasiado largas ($N \\ge 30$ palabras). El vocabulario empleado es complejo y estético, además que el uso de ciertas figuras literarias como la rima, la anáfora, la metáfora y otras pueden ser observadas en estas frases.",
"Las características del corpus 8KF se muestran en la Tabla TABREF6. Este corpus fue utilizado principalmente en los dos modelos generativos: modelo basado en cadenas de Markov (Sección SECREF13) y modelo basado en la generación de Texto enlatado (Canned Text, Sección SECREF15)."
],
[
"En este trabajo proponemos tres modelos híbridos (combinaciones de modelos generativos clásicos y aproximaciones semánticas) para la producción de frases literarias. Hemos adaptado dos modelos generativos, usando análisis sintáctico superficial (shallow parsing) y un modelo de aprendizaje profundo (Deep Learning) BIBREF25, combinados con tres modelos desarrollados de aproximación semántica.",
"En una primera fase, los modelos generativos recuperan la información gramatical de cada palabra del corpus 8KF (ver Sección SECREF3), en forma de etiquetas POS (Part of Speech), a través de un análisis morfosintáctico. Utilizamos Freeling BIBREF26 que permite análisis lingüísticos en varios idiomas. Por ejemplo, para la palabra “Profesor” Freeling genera la etiqueta POS [NCMS000]. La primera letra indica un sustantivo (Noun), la segunda un sustantivo común (Common); la tercera indica el género masculino (Male) y la cuarta da información de número (Singular). Los 3 últimos caracteres dan información detallada del campo semántico, entidades nombradas, etc. En nuestro caso usaremos solamente los 4 primeros niveles de las etiquetas.",
"Con los resultados del análisis morfosintáctico, se genera una salida que llamaremos Estructura gramatical vacía (EGV): compuesta exclusivamente de una secuencia de etiquetas POS; o Estructura gramatical parcialmente vacía (EGP), compuesta de etiquetas POS y de palabras funcionales (artículos, pronombres, conjunciones, etc.).",
"En la segunda fase, las etiquetas POS (en la EGV y la EGP) serán reemplazadas por un vocabulario adecuado usando ciertas aproximaciones semánticas.",
"La producción de una frase $f(Q,N)$ es guiada por dos parámetros: un contexto representado por un término $Q$ (o query) y una longitud $3 \\le N \\le 15$, dados por el usuario. Los corpus 5KL y 8KF son utilizados en varias fases de la producción de las frases $f$.",
"El Modelo 1 está compuesto por: i/ un modelo generativo estocástico basado en cadenas de Markov para la selección de la próxima etiqueta POS usando el algoritmo de Viterbi; y ii/ un modelo de aprendizaje profundo (Word2vec), para recuperar el vocabulario que reemplazará la secuencia de etiquetas POS.",
"El Modelo 2 es una combinación de: i/ el modelo generativo de Texto enlatado; y ii/ un modelo Word2vec, con un cálculo de distancias entre diversos vocabularios que han sido constituidos manualmente.",
"El Modelo 3 utiliza: i/ la generación de Texto enlatado; y ii/ una interpretación geométrica del aprendizaje profundo. Esta interpretación está basada en una búsqueda de información iterativa (Information Retrieval, IR), que realiza simultáneamente un alejamiento de la semántica original y un acercamiento al query $Q$ del usuario."
],
[
"Este modelo generativo, que llamaremos Modelo de Markov, está basado en el algoritmo de Viterbi y las cadenas de Markov BIBREF27, donde se selecciona una etiqueta POS con la máxima probabilidad de ocurrencia, para ser agregada al final de la secuencia actual.",
"Utilizamos el corpus de frases literarias 8KF (ver Sección SECREF5), que fue convenientemente filtrado para eliminar tokens indeseables: números, siglas, horas y fechas. El corpus filtrado se analizó usando Freeling, que recibe en entrada una cadena de texto y entrega el texto con una etiqueta POS para cada palabra. El corpus es analizado frase a frase, reemplazando cada palabra por su respectiva etiqueta POS. Al final del análisis, se obtiene un nuevo corpus 8KPOS con $s = 7~679$ secuencias de etiquetas POS, correspondientes al mismo número de frases del corpus 8KF. Las secuencias del corpus 8KPOS sirven como conjunto de entrenamiento para el algoritmo de Viterbi, que calcula las probabilidades de transición, que serán usadas para generar cadenas de Markov.",
"Las $s$ estructuras del corpus 8KPOS procesadas con el algoritmo de Viterbi son representadas en una matriz de transición $P_{[s \\times s]}$. $P$ será utilizada para crear nuevas secuencias de etiquetas POS no existentes en el corpus 8KPOS, simulando un proceso creativo. Nosotros hemos propuesto el algoritmo Creativo-Markov que describe este procedimiento.",
"En este algoritmo, $X_i$ representa el estado de una etapa de la creación de una frase, en el instante $i$, que corresponde a una secuencia de etiquetas POS. Siguiendo un procedimiento de Markov, en un instante $i$ se selecciona la próxima etiqueta POS$_{i+1}$, con máxima probabilidad de ocurrencia, dada la última etiqueta POS$_i$ de la secuencia $X_{i}$. La etiqueta POS$_{i+1}$ será agregada al final de $X_{i}$ para generar el estado $X_{i+1}$. $P(X_{i+1}=Y|X_{i}=Z)$ es la probabilidad de transición de un estado a otro, obtenido con el algoritmo de Viterbi. Se repiten las transiciones, hasta alcanzar una longitud deseada.",
"El resultado es una EGV, donde cada cuadro vacío representa una etiqueta POS que será remplazada por una palabra en la etapa final de generación de la nueva frase. El remplazo se realiza usando un modelo de aprendizaje profundo (Sección SECREF19). La arquitectura general de este modelo se muestra en la Figura FIGREF14."
],
[
"El algoritmo creativo-Markov del Modelo de Markov logra reproducir patrones lingüísticos (secuencias POS) detectados en el corpus 8KPOS, pero de corta longitud. Cuando se intentó extender la longitud de las frases a $N>6$ palabras, no fue posible mantener la coherencia y legibilidad (como se verá en la Sección SECREF19). Decidimos entonces utilizar métodos de generación textual guiados por estructuras morfosintácticas fijas: el Texto enlatado. BIBREF28 argumentan que el uso de estas estructuras ahorran tiempo de análisis sintáctico y permite concentrarse directamente en el vocabulario.",
"La técnica de Texto enlatado ha sido empleada también en varios trabajos, con objetivos específicos. BIBREF29, BIBREF30 desarrollaron modelos para la generación de diálogos y frases simples. Esta técnica es llamada “Generación basada en plantillas” (Template-based Generation) o de manera intuitiva, Texto enlatado.",
"Decidimos emplear texto enlatado para la generación textual usando un corpus de plantillas (templates), construido a partir del corpus 8KF (Sección SECREF3). Este corpus contiene estructuras gramaticales flexibles que pueden ser manipuladas para crear nuevas frases. Estas plantillas pueden ser seleccionadas aleatoriamente o a través de heurísticas, según un objetivo predefinido.",
"Una plantilla es construida a partir de las palabras de una frase $f$, donde se reemplazan únicamente las palabras llenas de las clases verbo, sustantivo o adjetivo $\\lbrace V, S, A \\rbrace $, por sus respectivas etiquetas POS. Las otras palabras, en particular las palabras funcionales, son conservadas. Esto producirá una estructura gramatical parcialmente vacía, EGP. Posteriormente las etiquetas podrán ser reemplazadas por palabras (términos), relacionadas con el contexto definido por el query $Q$ del usuario.",
"El proceso inicia con la selección aleatoria de una frase original $f_{o} \\in $ corpus 8KF de longitud $|f_{o}|=N$. $f_{o}$ será analizada con Freeling para identificar los sintagmas. Los elementos $\\lbrace V, S, A \\rbrace $ de los sintagmas de $f_{o}$ serán reemplazados por sus respectivas etiquetas POS. Estos elementos son los que mayor información aportan en cualquier texto, independientemente de su longitud o género BIBREF31. Nuestra hipótesis es que al cambiar solamente estos elementos, simulamos la generación de frases por homosintaxis: semántica diferente, misma estructura.",
"La salida de este proceso es una estructura híbrida parcialmente vacía (EGP) con palabras funcionales que dan un soporte gramatical y las etiquetas POS. La arquitectura general de este modelo se ilustra en la Figura FIGREF18. Los cuadros llenos representan palabras funcionales y los cuadros vacíos etiquetas POS a ser reemplazadas."
],
[
"Los modelos generativos generan estructuras gramaticales vacías (EGV) o parcialmente vacías (EGP) que pueden ser manipuladas para generar nuevas frases $f(Q,N)$. La idea es que las frases $f$ sean generadas por homosintaxis. En esta sección, proponemos un modelo de aproximación semántica que utiliza el algoritmo Word2vec (basado en aprendizaje profundo), combinado con el modelo generativo de Markov descrito en la Sección SECREF13. El proceso se describe a continuación.",
"El corpus 5KL es pre-procesado para uniformizar el formato del texto, eliminando caracteres que no son importantes para el análisis semántico: puntuación, números, etc. Esta etapa prepara los datos de entrenamiento del algoritmo de aprendizaje profundo que utiliza una representación vectorial del corpus 5KL. Para el aprendizaje profundo utilizamos la biblioteca Gensim, la versión en Python de Word2vec. Con este algoritmo se obtiene un conjunto de palabras asociadas (embeddings) a un contexto definido por un query $Q$. Word2vec recibe un término $Q$ y devuelve un léxico $L(Q)=(w_1,w_2,...,w_m)$ que representa un conjunto de $m$ palabras semánticamente próximas a $Q$. Formalmente, Word2vec: $Q \\rightarrow L(Q)$.",
"El próximo paso consiste en procesar la EGV producida por Markov. Las etiquetas POS serán identificadas y clasificadas como POS$_{\\Phi }$ funcionales (correspondientes a puntuación y palabras funcionales) y POS$_\\lambda $ llenas $\\in \\lbrace V, S, A \\rbrace $ (verbos, sustantivos, adjetivos).",
"Las etiquetas POS$_\\Phi $ serán reemplazadas por palabras obtenidas de recursos lingüísticos (diccionarios) construídos con la ayuda de Freeling. Los diccionarios consisten en entradas de pares: POS$_\\Phi $ y una lista de palabras y signos asociados, formalmente POS$_\\Phi $ $\\rightarrow $ $l$(POS$_\\Phi )=(l_1,l_2,...,l_j)$. Se reemplaza aleatoriamente cada POS$_\\Phi $ por una palabra de $l$ que corresponda a la misma clase gramatical.",
"Las etiquetas POS$_\\lambda $ serán reemplazadas por las palabras producidas por Word2vec $L(Q)$. Si ninguna de las palabras de $L(Q)$ tiene la forma sintáctica exigida por POS$_\\lambda $, empleamos la biblioteca PATTERN para realizar conjugaciones o conversiones de género y/o número y reemplazar correctamente POS$_\\lambda $.",
"Si el conjunto de palabras $L(Q)$, no contiene ningún tipo de palabra llena, que sea adecuada o que pueda manipularse con la biblioteca PATTERN, para reemplazar las etiquetas POS$_\\lambda $, se toma otra palabra, $w_i \\in L(Q)$, lo más cercana a $Q$ (en función de la distancia producida por Word2vec). Se define un nuevo $Q*=w_i$ que será utilizado para generar un nuevo conjunto de palabras $L(Q*)$. Este procedimiento se repite hasta que $L(Q*)$ contenga una palabra que pueda reemplazar la POS$_{\\lambda }$ en cuestión. El resultado de este procedimiento es una nueva frase $f$ que no existe en los corpora 5KL y 8KF. La Figura FIGREF23 muestra el proceso descrito."
],
[
"En este modelo proponemos una combinación entre el modelo de Texto enlatado (Sección SECREF15) y un algoritmo de aprendizaje profundo con Word2vec entrenado sobre el corpus 5KL. El objetivo es eliminar las iteraciones del Modelo 1, que son necesarias cuando las etiquetas POS no pueden ser reemplazadas con el léxico $L(Q)$.",
"Se efectúa un análisis morfosintáctico del corpus 5KL usando Freeling y se usan las etiquetas POS para crear conjuntos de palabras que posean la misma información gramatical (etiquetas POS idénticas). Una Tabla Asociativa (TA) es generada como resultado de este proceso. La TA consiste en $k$ entradas de pares POS$_k$ y una lista de palabras asociadas. Formalmente POS$_k \\rightarrow V_k =\\lbrace v_{k,1},v_{k,2},...,v_{k,i}\\rbrace $.",
"El Modelo 2 es ejecutado una sola vez para cada etiqueta POS$_k$. La EGP no será reemplazada completamente: las palabras funcionales y los signos de puntuación son conservados.",
"Para generar una nueva frase se reemplaza cada etiqueta POS$_k \\in $ EGP, $k=1,2,...$, por una palabra adecuada. Para cada etiqueta POS$_k$, se recupera el léxico $V_k$ a partir de TA.",
"El vocabulario es procesado por el algoritmo Word2vec, que calcula el valor de proximidad (distancia) entre cada palabra del vocabulario $v_{k,i}$ y el query $Q$ del usuario, $dist(Q,v_{k,i})$. Después se ordena el vocabulario $V_k$ en forma descendente según los valores de proximidad $dist(Q,v_{k,i})$ y se escoge aleatoriamente uno de los primeros tres elementos para reemplazar la etiqueta POS$_k$ de la EGP.",
"El resultado es una nueva frase $f_2(Q,N)$ que no existe en los corpora 5KL y 8KF. El proceso se ilustra en la figura FIGREF26."
],
[
"El Modelo 3 reutiliza varios de los recursos anteriores: el algoritmo Word2vec, la Tabla Asociativa TA y la estructura gramatical parcialmente vacía (EGP) obtenida del modelo de Texto enlatado. El modelo utiliza distancias vectoriales para determinar las palabras más adecuadas que sustituirán las etiquetas POS de una EGP y así generar una nueva frase. Para cada etiqueta POS$_k$, $k=1,2,...$ $\\in $ EGP, que se desea sustituir, usamos el algoritmo descrito a continuación.",
"Se construye un vector para cada una de las tres palabras siguientes:",
"$o$: es la palabra $k$ de la frase $f_{o}$ (Sección SECREF15), correspondiente a la etiqueta POS$_k$. Esta palabra permite recrear un contexto del cual la nueva frase debe alejarse, evitando producir una paráfrasis.",
"$Q$: palabra que define al query proporcionado por el usuario.",
"$w$: palabra candidata que podría reemplazar POS$_k$, $w \\in V_k$. El vocabulario posee un tamaño $|V_k| = m$ palabras y es recuperado de la TA correspondiente a la POS$_k$.",
"Las 10 palabras $o_i$ más próximas a $o$, las 10 palabras $Q_i$ más próximas a $Q$ y las 10 palabras $w_i$ más próximas a $w$ (en este orden y obtenidas con Word2vec), son concatenadas y representadas en un vector simbólico $\\vec{U}$ de 30 dimensiones. El número de dimensiones fue fijado a 30 de manera empírica, como un compromiso razonable entre diversidad léxica y tiempo de procesamiento. El vector $\\vec{U}$ puede ser escrito como:",
"donde cada elemento $u_j, j=1,...,10$, representa una palabra próxima a $o$; $u_j, j=11,...,20$, representa una palabra próxima a $Q$; y $u_j, j=21,...,30$, es una palabra próxima a $w$. $\\vec{U}$ puede ser re-escrito de la siguiente manera (ecuación DISPLAY_FORM32):",
"$o$, $Q$ y $w$ generan respectivamente tres vectores numéricos de 30 dimensiones:",
"donde los valores de $\\vec{X}$ son obtenidos tomando la distancia entre la palabra $o$ y cada palabra $u_j \\in \\vec{U}, j=1,...,30$. La distancia, $x_j=dist(o,u_j)$ es proporcionada por Word2vec y además $x_j \\in [0,1]$. Evidentemente la palabra $o$ estará más próxima a las 10 primeras palabras $u_j$ que a las restantes.",
"Un proceso similar permite obtener los valores de $\\vec{Q}$ y $\\vec{W}$ a partir de $Q$ y $w$, respectivamente. En estos casos, el $query$ $Q$ estará más próximo a las palabras $u_j$ en las posiciones $j=11,...,20$ y la palabra candidata $w$ estará más próxima a las palabras $u_j$ en las posiciones $j=21,...30$.",
"Enseguida, se calculan las similitudes coseno entre $\\vec{Q}$ y $\\vec{W}$ (ecuación DISPLAY_FORM34) y entre $\\vec{X}$ y $\\vec{W}$ (ecuación DISPLAY_FORM35). Estos valores también están normalizados entre [0,1].",
"El proceso se repite para todas las palabras $w$ del léxico $V_k$. Esto genera otro conjunto de vectores $\\vec{X}, \\vec{Q}$ y $\\vec{W}$ para los cuales se deberán calcular nuevamente las similitudes. Al final se obtienen $m$ valores de similitudes $\\theta _i$ y $\\beta _i$, $ i= 1,..., m$, y se calculan los promedios $\\langle \\theta \\rangle $ y $\\langle \\beta \\rangle $.",
"El cociente normalizado $\\left( \\frac{\\langle \\theta \\rangle }{\\theta _i} \\right)$ indica qué tan grande es la similitud de $\\theta _i$ con respecto al promedio $\\langle \\theta \\rangle $ (interpretación de tipo maximización); es decir, que tan próxima se encuentra la palabra candidata $w$ al query $Q$.",
"El cociente normalizado $\\left( \\frac{\\beta _i}{\\langle \\beta \\rangle } \\right)$ indica qué tan reducida es la similitud de $\\beta _i$ con respecto a $\\langle \\beta \\rangle $ (interpretación de tipo minimización); es decir, qué tan lejos se encuentra la palabra candidata $w$ de la palabra $o$ de $f_{o}$.",
"Estas fracciones se obtienen en cada par $(\\theta _i, \\beta _i)$ y se combinan (minimización-maximización) para calcular un score $S_i$, según la ecuación (DISPLAY_FORM36):",
"Mientras más elevado sea el valor $S_i$, mejor obedece a nuestros objetivos: acercarse al $query$ y alejarse de la semántica original.",
"Finalmente ordenamos en forma decreciente la lista de valores de $S_i$ y se escoge, de manera aleatoria, entre los 3 primeros, la palabra candidata $w$ que reemplazará la etiqueta POS$_k$ en cuestión. El resultado es una nueva frase $f_3(Q,N)$ que no existe en los corpora utilizados para construir el modelo.",
"En la Figura FIGREF37 se muestra una representación del modelo descrito."
],
[
"Dado la especificidad de nuestros experimentos (idioma, corpora disponibles, homosintaxis), no es posible compararse directamente con otros métodos.",
"Tampoco consideramos la utilización de un baseline de tipo aleatorio, porque los resultados carecerían de la homosintaxis y sería sumamente fácil obtener mejores resultados. Dicho lo anterior, el Modelo 1 podría ser considerado como nuestro propio baseline."
],
[
"A continuación presentamos un protocolo de evaluación manual de los resultados obtenidos. El experimento consistió en la generación de 15 frases por cada uno de los tres modelos propuestos. Para cada modelo, se consideraron tres queries: $Q=$ {AMOR, GUERRA, SOL}, generando 5 frases con cada uno. Las 15 frases fueron mezcladas entre sí y reagrupadas por queries, antes de presentarlas a los evaluadores.",
"Para la evaluación, se pidió a 7 personas leer cuidadosamente las 45 frases (15 frases por query). Todos los evaluadores poseen estudios universitarios y son hispanohablantes nativos. Se les pidió anotar en una escala de [0,1,2] (donde 0=mal, 1=aceptable y 2=correcto) los criterios siguientes:",
"Gramaticalidad: ortografía, conjugaciones correctas, concordancia en género y número.",
"Coherencia: legibilidad, percepción de una idea general.",
"Contexto: relación de la frase con respecto al query.",
"Los resultados de la evaluación se presentan en la Tabla TABREF42, en la forma de promedios normalizados entre [0,1] y de su desviación estándar $\\sigma $.",
"Las frases generadas por los modelos propuestos presentan características particulares.",
"El Modelo 1 produce generalmente frases con un contexto estrechamente relacionado con el query del usuario, pero a menudo carecen de coherencia y gramaticalidad. Este modelo presenta el valor más alto para el contexto, pero también la desviación estándar más elevada. Se puede inferir que existe cierta discrepancia entre los evaluadores. Los valores altos para el contexto se explican por el grado de libertad de la EGV generada por el modelo de Markov. La EGV permite que todos los elementos de la estructura puedan ser sustituidos por un léxico guiado únicamente por los resultados del algoritmo Word2vec.",
"El Modelo 2 genera frases razonablemente coherentes y gramaticalmente correctas, pero en ocasiones el contexto se encuentra más próximo a la frase original que al query. Esto puede ser interpretado como una paráfrasis elemental, que no es lo que deseamos.",
"Finalmente, el Modelo 3 genera frases coherentes, gramaticalmente correctas y mejor relacionadas al query que el Modelo 2. Esto se logra siguiendo una intuición opuesta a la paráfrasis: buscamos conservar la estructura sintáctica de la frase original, generando una semántica completamente diferente.",
"Por otro lado, la mínima dispersión se observa en el Modelo 1, es decir, hay una gran concordancia entre las percepciones de los evaluadores para este criterio."
],
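As a rough illustration of how the manual ratings could be turned into the normalized means and standard deviations reported in Tabla TABREF42, here is a small numpy sketch; the 7 x 15 rating matrix is randomly generated and purely hypothetical.

import numpy as np

# Hypothetical ratings: 7 evaluators x 15 frases for one model and one
# criterion, each on the 0 (mal) / 1 (aceptable) / 2 (correcto) scale.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 3, size=(7, 15))

normalized = ratings / 2.0                  # map the [0, 2] scale onto [0, 1]
print("promedio:", round(float(normalized.mean()), 2))
print("sigma   :", round(float(normalized.std()), 2))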
[
"En este artículo hemos presentado tres modelos de producción de frases literarias. La generación de este género textual necesita sistemas específicos que deben considerar el estilo, la sintaxis y una semántica que no necesariamente respeta la lógica de los documentos de géneros factuales, como el periodístico, enciclopédico o científico. Los resultados obtenidos son alentadores para el Modelo 3, utilizando Texto enlatado, aprendizaje profundo y una interpretación del tipo IR. El trabajo a futuro necesita la implementación de módulos para procesar los $queries$ multi-término del usuario. También se tiene contemplada la generación de frases retóricas utilizando los modelos aquí propuestos u otros con un enfoque probabilístico BIBREF32. Los modelos aquí presentados pueden ser enriquecidos a través de la integración de otros componentes, como características de una personalidad y/o las emociones BIBREF33, BIBREF34, BIBREF35, BIBREF36. Finalmente, un protocolo de evaluación semi-automático (y a gran escala) está igualmente previsto."
],
[
"Los autores agradecen a Eric SanJuan respecto a las ideas y el concepto de la homosintaxis."
]
],
"section_name": [
"Introducción",
"Trabajos previos",
"Corpus utilizados ::: Corpus 5KL",
"Corpus utilizados ::: Corpus 8KF",
"Modelos propuestos",
"Modelos propuestos ::: Modelo generativo estocástico usando cadenas de Markov",
"Modelos propuestos ::: Modelo generativo basado en Texto enlatado",
"Modelos propuestos ::: Modelo 1: Markov y aprendizaje profundo",
"Modelos propuestos ::: Modelo 2: Texto enlatado, aprendizaje profundo y análisis morfosintáctico",
"Modelos propuestos ::: Modelo 3: Texto enlatado, aprendizaje profundo e interpretación geométrica",
"Experimentos y resultados",
"Experimentos y resultados ::: Resultados",
"Conclusión y trabajo futuro",
"Agradecimientos"
]
} | {
"answers": [
{
"annotation_id": [
"1e3d5d6820e7f433376363f1e349bd66a4aa7b53"
],
"answer": [
{
"evidence": [
"Los resultados de la evaluación se presentan en la Tabla TABREF42, en la forma de promedios normalizados entre [0,1] y de su desviación estándar $\\sigma $."
],
"extractive_spans": [],
"free_form_answer": "accuracy with standard deviation",
"highlighted_evidence": [
"Los resultados de la evaluación se presentan en la Tabla TABREF42, en la forma de promedios normalizados entre [0,1] y de su desviación estándar $\\sigma $."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"72bd72f782c91bb0688b21150a294c244cd997a7"
],
"answer": [
{
"evidence": [
"Corpus utilizados ::: Corpus 5KL",
"Este corpus fue constituido con aproximadamente 5 000 documentos (en su mayor parte libros) en español. Los documentos originales, en formatos heterogéneos, fueron procesados para crear un único documento codificado en utf8. Las frases fueron segmentadas automáticamente, usando un programa en PERL 5.0 y expresiones regulares, para obtener una frase por línea.",
"Las características del corpus 5KL se encuentran en la Tabla TABREF4. Este corpus es empleado para el entrenamiento de los modelos de aprendizaje profundo (Deep Learning, Sección SECREF4).",
"El corpus literario 5KL posee la ventaja de ser muy extenso y adecuado para el aprendizaje automático. Tiene sin embargo, la desventaja de que no todas las frases son necesariamente “frases literarias”. Muchas de ellas son frases de lengua general: estas frases a menudo otorgan una fluidez a la lectura y proporcionan los enlaces necesarios a las ideas expresadas en las frases literarias.",
"Otra desventaja de este corpus es el ruido que contiene. El proceso de segmentación puede producir errores en la detección de fronteras de frases. También los números de página, capítulos, secciones o índices producen errores. No se realizó ningún proceso manual de verificación, por lo que a veces se introducen informaciones indeseables: copyrights, datos de la edición u otros. Estas son, sin embargo, las condiciones que presenta un corpus literario real.",
"Corpus utilizados ::: Corpus 8KF",
"Un corpus heterogéneo de casi 8 000 frases literarias fue constituido manualmente a partir de poemas, discursos, citas, cuentos y otras obras. Se evitaron cuidadosamente las frases de lengua general, y también aquellas demasiado cortas ($N \\le 3$ palabras) o demasiado largas ($N \\ge 30$ palabras). El vocabulario empleado es complejo y estético, además que el uso de ciertas figuras literarias como la rima, la anáfora, la metáfora y otras pueden ser observadas en estas frases.",
"Las características del corpus 8KF se muestran en la Tabla TABREF6. Este corpus fue utilizado principalmente en los dos modelos generativos: modelo basado en cadenas de Markov (Sección SECREF13) y modelo basado en la generación de Texto enlatado (Canned Text, Sección SECREF15)."
],
"extractive_spans": [
"Corpus 5KL",
"Corpus 8KF"
],
"free_form_answer": "",
"highlighted_evidence": [
"Corpus utilizados ::: Corpus 5KL\nEste corpus fue constituido con aproximadamente 5 000 documentos (en su mayor parte libros) en español. Los documentos originales, en formatos heterogéneos, fueron procesados para crear un único documento codificado en utf8. Las frases fueron segmentadas automáticamente, usando un programa en PERL 5.0 y expresiones regulares, para obtener una frase por línea.\n\nLas características del corpus 5KL se encuentran en la Tabla TABREF4. Este corpus es empleado para el entrenamiento de los modelos de aprendizaje profundo (Deep Learning, Sección SECREF4).\n\nEl corpus literario 5KL posee la ventaja de ser muy extenso y adecuado para el aprendizaje automático. Tiene sin embargo, la desventaja de que no todas las frases son necesariamente “frases literarias”. Muchas de ellas son frases de lengua general: estas frases a menudo otorgan una fluidez a la lectura y proporcionan los enlaces necesarios a las ideas expresadas en las frases literarias.\n\nOtra desventaja de este corpus es el ruido que contiene. El proceso de segmentación puede producir errores en la detección de fronteras de frases. También los números de página, capítulos, secciones o índices producen errores. No se realizó ningún proceso manual de verificación, por lo que a veces se introducen informaciones indeseables: copyrights, datos de la edición u otros. Estas son, sin embargo, las condiciones que presenta un corpus literario real.\n\nCorpus utilizados ::: Corpus 8KF\nUn corpus heterogéneo de casi 8 000 frases literarias fue constituido manualmente a partir de poemas, discursos, citas, cuentos y otras obras. Se evitaron cuidadosamente las frases de lengua general, y también aquellas demasiado cortas ($N \\le 3$ palabras) o demasiado largas ($N \\ge 30$ palabras). El vocabulario empleado es complejo y estético, además que el uso de ciertas figuras literarias como la rima, la anáfora, la metáfora y otras pueden ser observadas en estas frases.\n\nLas características del corpus 8KF se muestran en la Tabla TABREF6. Este corpus fue utilizado principalmente en los dos modelos generativos: modelo basado en cadenas de Markov (Sección SECREF13) y modelo basado en la generación de Texto enlatado (Canned Text, Sección SECREF15)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"What evaluation metrics did they look at?",
"What datasets are used?"
],
"question_id": [
"327e06e2ce09cf4c6cc521101d0aecfc745b1738",
"40b9f502f15e955ba8615822e6fa08cb5fd29c81"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"spanish",
"spanish"
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1: Corpus 5KL compuesto de 4 839 obras literarias.",
"Table 2: Corpus 8KF compuesto de 7 679 frases literarias.",
"Figure 1: Arquitectura general de los modelos.",
"Figure 2: Modelo generativo estocástico (Markov) que produce una estructura gramatical vacía EGV.",
"Figure 3: Modelo generativo de Texto enlatado que produce una estructura parcialmente vacía.",
"Figure 4: Modelo 1: Aproximación semántica usando Markov y aprendizaje profundo.",
"Figure 5: Modelo 2: Aproximación semántica basada en Deep Learning y análisis morfosintáctico.",
"Figure 6: Modelo 3: Aproximación semántica basada en interpretación geométrica min-max.",
"Table 3: Resultados de la evaluación manual."
],
"file": [
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"9-Figure5-1.png",
"10-Figure6-1.png",
"11-Table3-1.png"
]
} | [
"What evaluation metrics did they look at?"
] | [
[
"2001.11381-Experimentos y resultados ::: Resultados-5"
]
] | [
"accuracy with standard deviation"
] | 308 |
1909.13362 | Language-Agnostic Syllabification with Neural Sequence Labeling | The identification of syllables within phonetic sequences is known as syllabification. This task is thought to play an important role in natural language understanding, speech production, and the development of speech recognition systems. The concept of the syllable is cross-linguistic, though formal definitions are rarely agreed upon, even within a language. In response, data-driven syllabification methods have been developed to learn from syllabified examples. These methods often employ classical machine learning sequence labeling models. In recent years, recurrence-based neural networks have been shown to perform increasingly well for sequence labeling tasks such as named entity recognition (NER), part of speech (POS) tagging, and chunking. We present a novel approach to the syllabification problem which leverages modern neural network techniques. Our network is constructed with long short-term memory (LSTM) cells, a convolutional component, and a conditional random field (CRF) output layer. Existing syllabification approaches are rarely evaluated across multiple language families. To demonstrate cross-linguistic generalizability, we show that the network is competitive with state of the art systems in syllabifying English, Dutch, Italian, French, Manipuri, and Basque datasets. | {
"paragraphs": [
[
"Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production, and speech recognition systems. Text-to-speech (TTS) systems can rely heavily on automatically syllabified phone sequences BIBREF0. One prominent example is Festival, an open source TTS system that relies on a syllabification algorithm to organize speech production BIBREF1.",
"Linguists have recognized since the late 1940s that the syllable is a hierarchical structure, present in most, if not all, languages (though there is some disagreement on this score. See, for example, BIBREF2). An optional consonant onset is followed by a rime, which may be further decomposed into a high sonority vowel nucleus followed by an optional consonant coda. All languages appear to have at least the single syllable vowel ($V$) and the two syllable vowel-consonant ($VC$) forms in their syllable inventories. For example, oh and so in English. Most languages supplement these with codas to form the $\\lbrace V, CV, VC, CVC\\rbrace $ syllable inventory. Sonority rises from the consonant onset to the vowel nucleus and falls toward the consonant coda, as in the English pig.",
"The components of the syllable obey the phonotactic constraints of the language in which they occur, and therein lies the question that motivates this research. Phonologists agree that the human vocal apparatus produces speech sounds that form a sonority hierarchy, from highest to lowest: vowels, glides, liquids, nasals, and obstruents. Examples are, come, twist, lack, ring, and cat, respectively. English, and other languages with complex syllable inventories, supplement the basic forms in ways that are usually consistent with the sonority hierarchy, where usually is the operative word. Thus, English permits double consonant onsets, as in twist with a consonant lower in the hierarchy (t, an obstruent) followed by a consonant one higher in the hierarchy (w, a glide). So sonority rises to the vowel, i, falls to the fricative, s, an obstruent, and falls further to another obstruent, t, still lower in the hierarchy. Yet p and w do not form a double consonant onset in English, probably because English avoids grouping sounds that use the same articulators, the lips, in this instance. Constructing an automatic syllabifier could be the process of encoding all rules such as these in the language under investigation. Another approach, one more congenial to the rising tide of so-called usage-based linguists (e.g, BIBREF3), is to recognize that the regularities of language formulated as rules can be usefully expressed as probabilities BIBREF4, BIBREF5, BIBREF6.",
"An automatic syllabifier is a computer program that, given a word as a sequence of phones, divides the word into its component syllables, where the syllables are legal in the language under investigation. Approaches take the form of dictionary-based look-up procedures, rule-based systems, data-driven systems, and hybrids thereof BIBREF7. Dictionary look-ups are limited to phone sequences previously seen and thus cannot handle new vocabulary BIBREF8. Rule-based approaches can process previously unseen phone sequences by encoding linguistic knowledge. Formalized language-specific rules are developed by hand, necessarily accompanied by many exceptions, such as the one noted in the previous paragraph. An important example is the syllabification package tsylb, developed at the National Institute of Standards and Technology (NIST), which is based on Daniel Kahn's 1979 MIT dissertation BIBREF9, BIBREF10. Language particularity is a stumbling block for rule-based and other formal approaches to language such as Optimality Theory (OT), however much they strive for universality. Thus, T.A. Hall argues that the OT approach to syllabification found in BIBREF11 is superior to previous OT research as well as to Kahn's rule-based work, because both postulate language-specific structures without cross-linguistic motivation. From Hall's perspective, previous systems do not capture important cross-linguistic features of the syllable. In a word, the earlier systems require kludges, an issue for both builders of automatic, language-agnostic syllabifiers and theoretical linguists like Hall.",
"Data-driven syllabification methods, like the one to be presented in this paper, have the potential to function across languages and to process new, out of dictionary words. For languages that have transcribed syllable data, data-driven approaches often outperform rule-based ones. BIBREF12 used a combined support vector machine (SVM) and hidden Markov model (HMM) to maximize the classification margin between a correct and incorrect syllable boundary. BIBREF13 used segmental conditional random fields (SCRF). The SCRF hybrid method statistically leveraged general principles of syllabification such as legality, sonority and maximal onset. Many other HMM-based labeling structures exist, such as evolved phonetic categorization and high order n-gram models with back-off BIBREF14, BIBREF15.",
"Data-driven models are evaluated by word accuracy against transcribed datasets. Commonly, only one language or languages of the same family are used. The CELEX lexical database from BIBREF16 contains syllabifications of phone sequences for English, Dutch, and German. These three languages fall into the West Germanic language family, so the phonologies of each are closely related. Evaluating a model solely on these three languages, the approach taken in BIBREF13 and others, does not adequately test a model's generalized ability to learn diverse syllable structures.",
"In this paper, we present a neural network that can syllabify phone sequences without introducing any fixed principles or rules of syllabification. We show that this novel approach to syllabification is language-agnostic by evaluating it on datasets of six languages, five from two major language families, and one that appears to be unrelated to any existing language."
],
[
"Syllabification can be considered a sequence labeling task where each label delineates the existence or absence of a syllable boundary. As such, syllabification has much in common with well-researched topics such as part-of-speech tagging, named-entity recognition, and chunking BIBREF17. Neural networks have recently outpaced more traditional methods in sequence labeling tasks. These neural-based approaches are taking the place of HMMs, maximum entropy Markov models (MEMM), and conditional random fields (CRF) BIBREF18.",
"In the following section and in Fig. FIGREF1, we present a neural network architecture that leverages both recurrence and one-dimensional convolutions. Recurrence enables our model to read a sequence much like a human would; a sequence with elements $abcd$ would be read one element at a time, updating a latent understanding after reading each $a$, $b$, $c$, and finally $d$. One-dimensional convolutions extract a spatial relationship between sequential elements. The $abcd$ example sequence may then be read as $ab$, $bc$, $cd$. Explicitly recognizing this spatial relationship is beneficial in syllabification because a syllable is a local sub-sequence of phones within a word. The input to the model is a sequence of phones that together represent a word. We pad each phone sequence to a length of $n$ where $n$ is the length of the longest phone sequence. All inputs then take the form",
"Each phone $p_i$ is mapped to a $d$-dimensional embedding vector $x_i$ resulting in",
"where $x$ has a dimension of $d\\times n$. Taken together, the phone embeddings represent the relationships between phones in a real-valued vector space. The embedding dimension $d$ is optimized as a model hyperparameter and has a large impact on overall model performance BIBREF19. As such, we carefully tune $d$ for the proposed Base model and reduce it for our Small model as described in Section SECREF24.",
"The vector values of the phone embeddings are learned during each model training. Using learned embeddings enables the model to have a custom embedding space for each language that it is trained on. This is desirable because phonetic patterns differ from language to language. Also, learned embeddings allow the model to be trained using the input of any phonetic transcription. For example, one training of the model can use IPA and one can use SAMPA without needing to specify a mapping of one alphabet to another."
],
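A minimal tf.keras sketch of the input stage described above: phone indices padded to a fixed length n and mapped to learned d-dimensional embeddings. The phone-vocabulary size and n are invented, d = 300 follows the hyperparameters reported later, and this is not the authors' released code.

import tensorflow as tf

NUM_PHONES = 60   # assumed phone-vocabulary size (language dependent)
N = 20            # n: length of the longest (padded) phone sequence
D = 300           # d: learned phone-embedding dimension

phone_ids = tf.keras.Input(shape=(N,), dtype="int32", name="phone_ids")
# Index 0 is reserved for padding; mask_zero lets recurrent layers ignore it.
x = tf.keras.layers.Embedding(input_dim=NUM_PHONES + 1, output_dim=D,
                              mask_zero=True)(phone_ids)
print(x.shape)    # (None, N, D): one d-dimensional vector per phone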
[
"Recurrent neural networks (RNNs) differ from standard feed-forward neural networks in their treatment of input order; each element is processed given the context of the input that came before. RNNs operate on sequential data and can take many forms. Our network leverages the long short-term memory (LSTM) cell which is a prominent RNN variant capable of capturing long-term sequential dependencies BIBREF20. The gated memory cells of LSTM are an improvement over the standard RNN because the standard RNN is often biased toward short-term dependencies BIBREF21, BIBREF22. At each time step, the LSTM cell determines what information is important to introduce, to keep, and to output. This is done using an input gate, a forget gate, and an output gate shown in Fig. FIGREF5. LSTM operates in a single direction through time. This can be a limitation when a time step has both past dependency and future dependency. For example, a consonant sound may be the coda of a syllable earlier in the sequence or the onset of a syllable later in the sequence. Thus, processing a phonetic sequence in both the forward and backwards directions provides an improved context for assigning syllable boundaries. A bidirectional LSTM (BiLSTM) is formed when an LSTM moving forward through time is concatenated with an LSTM moving backward through time BIBREF23.",
"We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. Finally, we concatenate the backward LSTM with the forward LSTM:",
"Both $\\overrightarrow{h_i}$ and $\\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\\times n$."
],
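The forward/backward concatenation described above can be sketched in tf.keras as follows; l = 300 matches the Base model and the other shapes are assumptions.

import tensorflow as tf

N, D, L = 20, 300, 300   # sequence length, embedding size d, LSTM size l

x = tf.keras.Input(shape=(N, D), name="phone_embeddings")
# return_sequences=True keeps one output per time step; merge_mode="concat"
# stacks the forward and backward hidden states into a 2l-dimensional vector.
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(L, return_sequences=True), merge_mode="concat")(x)
print(h.shape)   # (None, N, 2 * L)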
[
"Convolutional neural networks (CNNs) are traditionally used in computer vision, but perform well in many text processing tasks that benefit from position-invariant abstractions BIBREF24, BIBREF25. These abstractions depend exclusively on local neighboring features rather than the position of features in a global structure. According to a comparative study by BIBREF26, BiLSTMs tend to outperform CNNs in sequential tasks such as POS tagging, but CNNs tend to outperform BiLSTMs in global relation detection tasks such as keyphrase matching for question answering. We use both the BiLSTM and the CNN in our network so that the strengths of each are incorporated. CNNs have been combined with BiLSTMs to perform state-of-the-art sequence tagging in both POS tagging and NER. BIBREF27 used BiLSTMs to process the word sequence while each word's character sequence was processed with CNNs to provide a second representation. In textual syllabification, the only input is the phone sequence.",
"Both our BiLSTM and CNN components process the same input: the $x$ vector. We pad $x$ with $w-1$ $d$-dimensional zero vectors before $x_0$. A 1-dimensional convolutional filter of width $w$ processes a window $x_{i-w+1},...,x_i$ for all $i$ from 0 to $n-1$. To determine the output vector $c$, the convolutional filter performs a nonlinear weight and bias computation. Due to the padding of $x$, the resulting dimension of $c$ is $f\\times n$ where $f$ is the number of filters used. A 1-dimensional max pooling is performed over $c$ with a stride of 1 which keeps the dimensionality unaltered. The pool size is an optimized hyperparameter that determines how many adjacent elements are used in the $max$ operation. The convolutional and max pooling components can be repeated to compute higher-level abstractions. As the convolutional and max pooling output is conformant to the BiLSTM output, we can concatenate them to create a combined vector with dimension $(2l+f)\\times n$:"
],
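A sketch of the convolutional branch and its concatenation with the BiLSTM output, following the description above (f = 200 filters of width w = 3, max pooling with stride 1, two repetitions). Keras' "causal" padding plays the role of the w-1 zero vectors prepended to x in the text; the pool size of 2 is an assumption, since the text leaves it as a tuned hyperparameter.

import tensorflow as tf

N, D, L, F, W = 20, 300, 300, 200, 3   # length n, dims d and l, filters f, width w

x = tf.keras.Input(shape=(N, D))
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(L, return_sequences=True))(x)                       # (None, N, 2L)

c = tf.keras.layers.Conv1D(F, W, padding="causal", activation="relu")(x)
c = tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(c)
c = tf.keras.layers.Conv1D(F, W, padding="causal", activation="relu")(c)
c = tf.keras.layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(c)  # (None, N, F)

o = tf.keras.layers.Concatenate(axis=-1)([h, c])                             # (None, N, 2L + F)
print(o.shape)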
[
"We introduce a time-distributed fully connected layer over vector $o$, taking $o$ from a dimension of $(2l+f)\\times n$ down to a dimension of $2\\times n$. We do this because there are two class labels: either a syllable boundary or no syllable boundary. The output of the model is a sequence",
"When $y_i\\equiv 0$, there is no syllable boundary predicted to follow the phone $p_i$. When $y_i\\equiv 1$, there is a syllable boundary predicted to follow $p_i$. Intuitively, we seek an output sequence $y$ that gives the highest $p(y|o)$. One approach calculates the softmax for each $o_i$:",
"The softmax normalizes each $o_i$ to a probability distribution over the two discrete class labels. We can then model $p(y|o)$ by multiplying the maximum of each $s_i$ together:",
"When using the softmax, $p(y|o)$ is calculated under the limiting assumption that each $o_i$ is independent. To more accurately model $p(y|o)$, we replace the softmax classifier with a conditional random field (CRF) BIBREF28. Specifically, we use a linear-chain CRF which is a sequential model that leverages both past and future output tags to model the output probability. The linear-chain CRF can be considered a sequential generalization of logistic regression classifiers as well as a discriminative analogue of hidden Markov models because it models $p(y|o)$ directly instead of modeling $p(o|y)$ BIBREF29. Using sequence-level tag information with a CRF has been shown to improve tag accuracy in the related tasks of POS tagging, chunking, and NER BIBREF30, BIBREF31. We use a linear-chain CRF to model the conditional distribution directly:",
"where $Z(o)$ is the normalization function",
"and $\\theta $ is a learned parameter vector scaled by the set of transition feature functions $f$."
],
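The difference between the independent per-step softmax and the linear-chain CRF scoring discussed above can be made concrete with a small numpy sketch. It only computes the scoring quantities (no training and no partition function Z(o), which would normally be obtained with the forward algorithm), and the emission and transition values are made up.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def independent_prob(emissions, tags):
    # p(y|o) under the per-step softmax: every position treated independently.
    probs = softmax(emissions, axis=-1)
    return float(np.prod(probs[np.arange(len(tags)), tags]))

def crf_unnormalized_score(emissions, tags, transitions):
    # Linear-chain score: emissions plus learned tag-to-tag transition weights.
    # exp(score) / Z(o) would give the CRF's p(y|o).
    score = emissions[0, tags[0]]
    for i in range(1, len(tags)):
        score += transitions[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    return float(score)

# Toy example: 4 phones, 2 labels (0 = no boundary after the phone, 1 = boundary).
emissions = np.array([[1.2, -0.3], [0.1, 0.8], [0.9, -0.2], [0.4, 0.3]])
transitions = np.array([[0.2, 0.1], [0.3, -0.4]])
tags = [0, 1, 0, 0]
print(independent_prob(emissions, tags))
print(crf_unnormalized_score(emissions, tags, transitions))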
[
"Training of the network parameters is performed using backpropagation. Using Keras, the backpropagation is automatically defined given the forward definition of the network. The defined loss function is sparse categorical cross entropy, in accordance with the real-valued probabilities given by the CRF output layer. Loss optimization is performed with the Adam optimizer BIBREF32. Adam was chosen because it adapts the learning rate on a parameter-to-parameter basis; strong convergence occurs at the end of optimization. Training is performed to a set number of epochs. Early stopping allows the network to conclude training if convergence is reached prior to reaching the epoch training limit BIBREF33."
],
[
"The materials for this research comprise the software described above and several syllabified datasets."
],
[
"The implementation of our model was adapted from an open source code library designed for general-purpose sequence tagging and made available by BIBREF37. The modifications to this code include adding data preparation scripts and changing the model architecture to reflect the network architecture described above. Our code is made publicly available for future research at https://github.com/jacobkrantz/lstm-syllabify."
],
[
"To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset.",
"Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. DISC is different than the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand.",
"For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well known and state of the art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in . The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35."
],
[
"Each dataset used to evaluate the model was split into three groups: training, development, and test. Each training epoch iterated over the training set to optimize the model parameters. The development set was used to tune the hyperparameters of the model, such as the batch size and the phone embedding dimension. The test set was exclusively used for reporting the model accuracy. The datasets were split randomly by percentages 80 (training), 10 (development), and 10 (test). For the English CELEX dataset of $89,402$ words, this resulted in $71,522$ words for training and $8,940$ words each for development and testing.",
"For each experiment, models were initialized with a random set of parameter weights. BIBREF37 showed that differences in random number generation produce statistically significant variances in the accuracy of LSTM-based models. Due to the stochastic nature of neural network training, we performed each experiment 20 times. We report model accuracy as a mean and standard deviation of these experiment repetitions."
],
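A sketch of the 80/10/10 split and the 20-repetition protocol behind the reported mean and standard deviation; train_and_score is a placeholder that stands in for a full training run, and the dataset here is a dummy list.

import random
from statistics import mean, stdev

def split_80_10_10(entries, seed=0):
    shuffled = entries[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    return (shuffled[:int(0.8 * n)],
            shuffled[int(0.8 * n):int(0.9 * n)],
            shuffled[int(0.9 * n):])

def train_and_score(train, dev, test, seed):
    # Placeholder: a real run would train the BiLSTM-CNN-CRF on `train`,
    # tune hyperparameters on `dev`, and return word accuracy on `test`.
    return 0.98 + random.Random(seed).uniform(-0.005, 0.005)

entries = [f"word_{i}" for i in range(1000)]   # stand-in dataset
train, dev, test = split_80_10_10(entries)
scores = [train_and_score(train, dev, test, seed=s) for s in range(20)]
print(f"accuracy: {mean(scores):.4f} +/- {stdev(scores):.4f}")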
[
"Prior to splitting each dataset, a simple cleaning process had to be performed to remove unwanted entries. This cleaning involved removing all entries that had at least one other entry with the same word. It is important to note that two words being different does not necessitate a different pronunciation or syllabification. These entries with different words but same pronunciations were kept in the dataset. No other cleaning was needed for the datasets other than mapping the syllabified phone sequence to an input-target pair usable by our model for training and evaluation. This cleaning process contributes to the language-agnostic nature of this research. The simplicity of the cleaning process is enabled by the fact that the model is end to end; no external phonetic features are gathered, and any phonetic transcription can be accommodated in the training process."
],
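The cleaning rule described above (drop every entry whose orthographic word appears more than once, while keeping different words that happen to share a pronunciation) can be sketched as follows; the example entries and their transcriptions are invented.

from collections import Counter

entries = [
    ("lead", "l-i-d"),   # the word "lead" occurs twice, so both entries go
    ("lead", "l-E-d"),
    ("led", "l-E-d"),    # same pronunciation as another word, but kept
    ("pig", "p-I-g"),
]
counts = Counter(word for word, _ in entries)
cleaned = [(w, p) for w, p in entries if counts[w] == 1]
print(cleaned)           # [('led', 'l-E-d'), ('pig', 'p-I-g')]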
[
"For all experiments, models were trained with a batch size of 64. A limit of 120 epochs was imposed with early stopping after 10 unimproved epochs. Dropout was used for the input connection to the BiLSTM layer at $25\\%$ BIBREF41. The learned embeddings layer had dimension $d=300$. The LSTM outputs, $\\overrightarrow{h_i}$ and $\\overleftarrow{h_i}$, both had dimension $l=300$. The convolutional to max pooling component was repeated twice before concatenation with the BiLSTM output. 200 convolutional filters were used and each had a dimension of 3. Finally, when using the Adam optimizer, we scaled the gradient norm when it exceeded $1.0$ using the Keras clipnorm parameter. All training was performed on single GPU machines on Amazon Web Services (AWS) servers which provided more than enough compute power. The average training of a model on the English CELEX dataset took approximately 45 minutes to reach convergence."
],
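The optimizer and stopping settings listed above can be wired up in tf.keras roughly as shown below; model and the data arrays are placeholders for the BiLSTM-CNN-CRF network and the prepared inputs, not the authors' implementation.

import tensorflow as tf

# Settings quoted above: batch size 64, at most 120 epochs, early stopping
# after 10 unimproved epochs, gradient-norm clipping at 1.0.
optimizer = tf.keras.optimizers.Adam(clipnorm=1.0)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)

# `model`, `x_train`, `y_train`, `x_dev`, `y_dev` are assumed to exist:
# model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, validation_data=(x_dev, y_dev),
#           batch_size=64, epochs=120, callbacks=[early_stop])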
[
"We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets.",
"When comparing our model with previous syllabifiers, we consider the Base model exclusively. In Table TABREF26, a side-by-side comparison of our Base model to a selection of published syllabifiers shows that Base is near state-of-the art performance on English CELEX. For the Dutch dataset, we report an accuracy of $99.47 \\pm 0.04\\%$, which improves on the previously best-known accuracy of $99.16\\%$ from the HMM-SVM of BIBREF12. Best-known results are also obtained on the Italian, French, and Basque datasets. Our reported accuracy of $94.9 \\pm 0.3\\%$ on the Manipuri dataset is furthest from state of the art. We suspect this to be due to having limited amounts of training data; the $97.5\\%$ accurate system from BIBREF35 supplemented their data-driven approach with rules of syllabification."
],
[
"Examples from the outputs of the Base model can give us insight into what the model does well and what types of words it struggles with. The total number of sounds across languages is vast, but not infinite, as Ladefoged and Maddieson's The Sounds of the World's Languages demonstrates BIBREF42. Different languages choose different inventories from the total producible by the human vocal apparatus. Within a language, sounds and patterns of sound vary widely in frequency, though with considerable regularity. This regularity has led a generation of linguists to attempt to uncover rules that describe not only syntax, but sound as well. Chomsky and Halle's The Sound Pattern of English is the classic effort, first appearing in 1968 BIBREF43. It is not surprising that the earliest attempts to produce automatic syllabifiers were based on just such rule collections. Nor is it surprising that the best-known rule-based syllabifier was inspired by a doctoral dissertation at MIT, Noam Chomsky's home institution for five decades. An alternative approach is to recognize that 1) rules can be reconceptualized as probabilities and 2) native speakers of a language have internalized those very probabilities. Nevertheless, where there is probability, there is ambiguity. With all of these caveats in mind, a few examples have been selected from our results to showcase the model as shown in Table TABREF27.",
"The syllabification of misinterpretation illustrates the model's ability to process longer words. Containing 14 phones and 5 syllables, this word demonstrates that the model's pattern finding technique works well regardless of the location of phonetic and syllabic patterns in the word. The model can accurately handle prefixes, correctly syllabifying mis- as Table TABREF27 shows. Another word is achieved. Inflected languages, such as English, use morphemes to distinguish mood, tense, case, and number, among others. Thus, the verb achieve has several forms, or conjugates. The syllabifier correctly detected the stem and the past tense morpheme, ed. An odd aspect of the English CELEX dataset is the occurrence of $22,393$ entries that either have hyphens or are multiple entirely separate words, such as public-address systems. Because the phonetic representation does not denote hyphens or whitespace, the model has difficulties processing these words."
],
[
"We proposed a sequential neural network model that is capable of syllabifying phonetic sequences. This model is independent of any hand-crafted linguistic knowledge. We showed that this model performs at or near state of the art levels on a variety of datasets sampled from two Indo-European, one Sino-Tibetan, and an apparently family-less language. Specifically, the proposed model achieved accuracies higher than any other we could find on datasets from Dutch, Italian, French, and Basque languages and close to the best-reported accuracy for English and Manipuri. Evaluating the performance of the syllabifier across diverse languages provides strong evidence that the proposed model is language-agnostic."
],
[
"With a language-agnostic syllabification system, any language can be syllabified given enough labeled training data. A problem is that many languages do not have large, labeled syllabification datasets. For example, we failed to find available and sufficient datasets in the Slavic languages of Russian and Serbian. This problem can be addressed either in a concentrated effort to create more labeled data or in the development of systems that require limited data."
],
[
"This research was supported in part by a Gonzaga University McDonald Work Award by Robert and Claire McDonald and an Amazon Web Services (AWS) grant through the Cloud Credits for Research program."
]
],
"section_name": [
"Introduction",
"Method",
"Method ::: Bidirectional LSTM",
"Method ::: CNN",
"Method ::: Output: Conditional Random Field",
"Method ::: Training",
"Materials",
"Materials ::: Software",
"Materials ::: Datasets",
"Experiments",
"Experiments ::: Data Cleaning",
"Experiments ::: Hyperparameter Specification",
"Experiments ::: Results",
"Discussion",
"Conclusion",
"Conclusion ::: Future Work",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"db21ebb540520b9df2e5841a9b8f9947372f7cff"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE I DATASETS AND LANGUAGES USED FOR EVALUATION. AVERAGE PHONE AND SYLLABLE COUNTS ARE PER WORD.",
"To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset.",
"Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. DISC is different than the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand."
],
"extractive_spans": [],
"free_form_answer": "Datasets used are Celex (English, Dutch), Festival (Italian), OpenLexuque (French), IIT-Guwahati (Manipuri), E-Hitz (Basque)",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE I DATASETS AND LANGUAGES USED FOR EVALUATION. AVERAGE PHONE AND SYLLABLE COUNTS ARE PER WORD.",
"We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset.",
" These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16.",
"The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"578c09d9c48ae44789c3daed655f563b07680978"
],
"answer": [
{
"evidence": [
"We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets.",
"FLOAT SELECTED: TABLE III THE ACCURACY OF OUR PROPOSED MODEL ON EACH EVALUATION DATASET. MODEL ACCURACY (%± σ) IS REPORTED ON A WORD LEVEL WHICH MEANS THE ENTIRE WORD MUST BE SYLLABIFIED CORRECTLY."
],
"extractive_spans": [],
"free_form_answer": "Authors report their best models have following accuracy: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahat (95.4%), E-Hitz (99.83%)",
"highlighted_evidence": [
"A comparison of the results of these three models can be seen in Table TABREF25.",
"FLOAT SELECTED: TABLE III THE ACCURACY OF OUR PROPOSED MODEL ON EACH EVALUATION DATASET. MODEL ACCURACY (%± σ) IS REPORTED ON A WORD LEVEL WHICH MEANS THE ENTIRE WORD MUST BE SYLLABIFIED CORRECTLY."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"96aa576aa4fabc980fc5fae39e636d0ba1302db3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE II REPORTED ACCURACIES OF STATE OF THE ART AND SELECTED HIGH PERFORMING SYLLABIFIERS ON EACH EVALUATION DATASET.",
"For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well known and state of the art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in . The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35."
],
"extractive_spans": [],
"free_form_answer": "CELEX (Dutch and English) - SVM-HMM\nFestival, E-Hitz and OpenLexique - Liang hyphenation\nIIT-Guwahat - Entropy CRF",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE II REPORTED ACCURACIES OF STATE OF THE ART AND SELECTED HIGH PERFORMING SYLLABIFIERS ON EACH EVALUATION DATASET.",
"For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well known and state of the art syllabifiers for each dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1e47a36c5fe3891315ef86dca350f95a4f22122b"
],
"answer": [
{
"evidence": [
"We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. Finally, we concatenate the backward LSTM with the forward LSTM:",
"Both $\\overrightarrow{h_i}$ and $\\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\\times n$."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. Finally, we concatenate the backward LSTM with the forward LSTM:\n\nBoth $\\overrightarrow{h_i}$ and $\\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\\times n$."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are the datasets used for the task?",
"What is the accuracy of the model for the six languages tested?",
"Which models achieve state-of-the-art performances?",
"Is the LSTM bidirectional?"
],
"question_id": [
"ba56afe426906c4cfc414bca4c66ceb4a0a68121",
"14634943d96ea036725898ab2e652c2948bd33eb",
"d71cb7f3aa585e256ca14eebdc358edfc3a9539c",
"f6556d2a8b42b133eaa361f562745edbe56c0b51"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"italian",
"italian",
"italian",
"italian"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Network diagram detailing the concatenation of the forward and backward LSTMs with the convolutional component.",
"Fig. 2. Diagram of the LSTM cell. ci and hi are the cell states and hidden states that propagate through time, respectively. xi is the input at time i and is concatenated with the previous hidden state. X represents element-wise multiplication and + is element-wise addition.",
"TABLE I DATASETS AND LANGUAGES USED FOR EVALUATION. AVERAGE PHONE AND SYLLABLE COUNTS ARE PER WORD.",
"TABLE II REPORTED ACCURACIES OF STATE OF THE ART AND SELECTED HIGH PERFORMING SYLLABIFIERS ON EACH EVALUATION DATASET.",
"TABLE III THE ACCURACY OF OUR PROPOSED MODEL ON EACH EVALUATION DATASET. MODEL ACCURACY (%± σ) IS REPORTED ON A WORD LEVEL WHICH MEANS THE ENTIRE WORD MUST BE SYLLABIFIED CORRECTLY.",
"TABLE IV COMPARISON OF REPORTED ACCURACIES AGAINST THE ENGLISH CELEX DATASET. NOTE THAT HMM-SVM TRAINED ON 30K EXAMPLES, LEARNED EBG TRAINED ON 60K, AND HMM-GA TRAINED ON 54K.",
"TABLE V EXAMPLES OF GENERATED SYLLABIFICATIONS WHEN THE Base BILSTM-CNN-CRF MODEL IS TRAINED ON ENGLISH CELEX. Target IS THE SYLLABIFICATION GIVEN IN ENGLISH CELEX. PHONES ARE REPRESENTED IN THE DISC FORMAT AND CORRECT SYLLABIFICATIONS ARE IN BOLD."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-TableI-1.png",
"5-TableII-1.png",
"6-TableIII-1.png",
"6-TableIV-1.png",
"7-TableV-1.png"
]
} | [
"What are the datasets used for the task?",
"What is the accuracy of the model for the six languages tested?",
"Which models achieve state-of-the-art performances?"
] | [
[
"1909.13362-Materials ::: Datasets-1",
"1909.13362-5-TableI-1.png",
"1909.13362-Materials ::: Datasets-0"
],
[
"1909.13362-6-TableIII-1.png",
"1909.13362-Experiments ::: Results-0"
],
[
"1909.13362-5-TableII-1.png",
"1909.13362-Materials ::: Datasets-2"
]
] | [
"Datasets used are Celex (English, Dutch), Festival (Italian), OpenLexuque (French), IIT-Guwahati (Manipuri), E-Hitz (Basque)",
"Authors report their best models have following accuracy: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahat (95.4%), E-Hitz (99.83%)",
"CELEX (Dutch and English) - SVM-HMM\nFestival, E-Hitz and OpenLexique - Liang hyphenation\nIIT-Guwahat - Entropy CRF"
] | 309 |
1907.08937 | Quantifying Similarity between Relations with Fact Distribution | We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural network. Although computing the exact similarity is in-tractable, we provide a sampling-based method to get a good approximation. We empirically show the outputs of our approach significantly correlate with human judgments. By applying our method to various tasks, we also find that (1) our approach could effectively detect redundant relations extracted by open information extraction (Open IE) models, that (2) even the most competitive models for relational classification still make mistakes among very similar relations, and that (3) our approach could be incorporated into negative sampling and softmax classification to alleviate these mistakes. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/relation-similarity. | {
"paragraphs": [
[
"Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author.",
"Relations, representing various types of connections between entities or arguments, are the core of expressing relational facts in most general knowledge bases (KBs) BIBREF0 , BIBREF1 . Hence, identifying relations is a crucial problem for several information extraction tasks. Although considerable effort has been devoted to these tasks, some nuances between similar relations are still overlooked, (tab:similarityexample shows an example); on the other hand, some distinct surface forms carrying the same relational semantics are mistaken as different relations. These severe problems motivate us to quantify the similarity between relations in a more effective and robust method.",
"In this paper, we introduce an adaptive and general framework for measuring similarity of the pairs of relations. Suppose for each relation INLINEFORM0 , we have obtained a conditional distribution, INLINEFORM1 ( INLINEFORM2 are head and tail entities, and INLINEFORM3 is a relation), over all head-tail entity pairs given INLINEFORM4 . We could quantify similarity between a pair of relations by the divergence between the conditional probability distributions given these relations. In this paper, this conditional probability is given by a simple feed-forward neural network, which can capture the dependencies between entities conditioned on specific relations. Despite its simplicity, the proposed network is expected to cover various facts, even if the facts are not used for training, owing to the good generalizability of neural networks. For example, our network will assign a fact a higher probability if it is “logical”: e.g., the network might prefer an athlete has the same nationality as same as his/her national team rather than other nations.",
"Intuitively, two similar relations should have similar conditional distributions over head-tail entity pairs INLINEFORM0 , e.g., the entity pairs associated with be trade to and play for are most likely to be athletes and their clubs, whereas those associated with live in are often people and locations. In this paper, we evaluate the similarity between relations based on their conditional distributions over entity pairs. Specifically, we adopt Kullback–Leibler (KL) divergence of both directions as the metric. However, computing exact KL requires iterating over the whole entity pair space INLINEFORM1 , which is quite intractable. Therefore, we further provide a sampling-based method to approximate the similarity score over the entity pair space for computational efficiency.",
"Besides developing a framework for assessing the similarity between relations, our second contribution is that we have done a survey of applications. We present experiments and analysis aimed at answering five questions:",
"(1) How well does the computed similarity score correlate with human judgment about the similarity between relations? How does our approach compare to other possible approaches based on other kinds of relation embeddings to define a similarity? (sec:relationship and sec:human-judgment)",
"(2) Open IE models inevitably extract many redundant relations. How can our approach help reduce such redundancy? (sec:openie)",
"(3) To which extent, quantitatively, does best relational classification models make errors among similar relations? (sec:error-analysis)",
"(4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (sec:training-guidance-relation-prediction)",
"(5) Could similarity be used as an adaptive margin in softmax-margin training method for relation extraction? (sec:training-guidance-relation-extraction)",
"Finally, we conclude with a discussion of valid extensions to our method and other possible applications."
],
[
"Just as introduced in sec:introduction, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case that we have got numbers of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution."
],
[
"A fact is a triple INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are called head and tail entities, INLINEFORM3 is the relation connecting them, INLINEFORM4 and INLINEFORM5 are the sets of entities and relations respectively. We consider a score function INLINEFORM6 maps all triples to a scalar value. As a special case, the function can be factorized into the sum of two parts: INLINEFORM7 . We use INLINEFORM8 to define the unnormalized probability. DISPLAYFORM0 ",
"for every triple INLINEFORM0 . The real parameter INLINEFORM1 can be adjusted to obtain difference distributions over facts.",
"In this paper, we only consider locally normalized version of INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are directly parameterized by feed-forward neural networks. Through local normalization, INLINEFORM2 is naturally a valid probability distribution, as the partition function INLINEFORM3 . Therefore, INLINEFORM4 ."
],
[
"Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in eq:local-normalization as DISPLAYFORM0 ",
"where each INLINEFORM0 represents a multi-layer perceptron composed of layers like INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are embeddings of INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 includes weights and biases in all layers."
],
[
"Now we discuss the method to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 is a set of triples. The whole set of parameters, INLINEFORM1 . We train these parameters by Adam optimizer BIBREF2 . Training details are shown in sec:trainingdetail."
],
[
"So far, we have talked about how to use neural networks to approximate the natural distribution of facts. The center topic of our paper, quantifying similarity, will be discussed in detail in this section."
],
[
"In this paper, we provide a probability view of relations by representing relation INLINEFORM0 as a probability distribution INLINEFORM1 . After training the neural network on a given set of triples, the model is expected to generalize well on the whole INLINEFORM2 space.",
"Note that it is very easy to calculate INLINEFORM0 in our model thanks to local normalization (eq:local-normalization). Therefore, we can compute it by DISPLAYFORM0 "
],
[
"As the basis of our definition, we hypothesize that the similarity between INLINEFORM0 reflects the similarity between relations. For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning.",
"Formally, we define the similarity between two relations as a function of the divergence between the distributions of corresponding head-tail entity pairs: DISPLAYFORM0 ",
"where INLINEFORM0 denotes Kullback–Leibler divergence, DISPLAYFORM0 ",
"vice versa, and function INLINEFORM0 is a symmetrical function. To keep the coherence between semantic meaning of “similarity” and our definition, INLINEFORM1 should be a monotonically decreasing function. Through this paper, we choose to use an exponential family composed with max function, i.e., INLINEFORM2 . Note that by taking both sides of KL divergence into account, our definition incorporates both the entity pairs with high probability in INLINEFORM3 and INLINEFORM4 . Intuitively, if INLINEFORM5 mainly distributes on a proportion of entities pairs that INLINEFORM6 emphasizes, INLINEFORM7 is only hyponymy of INLINEFORM8 . Considering both sides of KL divergence could help model yield more comprehensive consideration. We will talk about the advantage of this method in detail in sec:relationship."
],
[
"Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0 ",
"where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 ."
],
[
"Previous work proposed various methods for representing relations as vectors BIBREF3 , BIBREF4 , as matrices BIBREF5 , even as angles BIBREF6 , etc. Based on each of these representations, one could easily define various similarity quantification methods. We show in tab:other-similarity the best one of them in each category of relation presentation.",
"Here we provide two intuitive reasons for using our proposed probability-based similarity: (1) the capacity of a single fixed-size representation is limited — some details about the fact distribution is lost during embedding; (2) directly comparing distributions yields a better interpretability — you can not know about how two relations are different given two relation embeddings, but our model helps you study the detailed differences between probabilities on every entity pair. fig:head-tail-distribution provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as vectors the closest to each other, while our model can capture the distinction between the distributions corresponds to the two relations, which could be directly noticed from the figure.",
"Embeddings used in this graph are from a trained TransE model."
],
[
"We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section."
],
[
"In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations."
],
[
"ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset."
],
[
"FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied."
],
[
"Following BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by leaving-one-out method BIBREF13 , is r = INLINEFORM0 , standard deviation = INLINEFORM1 . This important reference value (marked in fig:correlation) could be seen as the highest expected performance for machines BIBREF12 .",
"To get baselines for comparison, we consider other possible methods to define similarity functions, as shown in tab:other-similarity. We compute the correlation between these methods and human judgment scores. As the models we have chosen are the ones work best in knowledge base completion, we do expect the similarity quantification approaches based on them could measure some degree of similarity. As shown in fig:correlation, the three baseline models could achieve moderate ( INLINEFORM0 ) positive correlation. On the other hand, our model shows a stronger correlation ( INLINEFORM1 ) with human judgment, indicating that considering the probability over whole entity pair space helps to gain a similarity closer to human judgments. These results provide evidence for our claim raised in sec:defining-similarity."
],
[
"Open IE extracts concise token patterns from plain text to represent various relations between entities, e.g.,, (Mark Twain, was born in, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as Text-Runner BIBREF14 , ReVerb BIBREF9 , and Standford Open IE BIBREF15 . However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, there are a fair amount of redundant relation patterns after extracting those relation patterns. Furthermore, the redundant patterns lead to some redundant relations in KBs.",
"Recently, some efforts are devoted to Open Relation Extraction (Open RE) BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , aiming to cluster relation patterns into several relation types instead of redundant relation patterns. Whenas, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand. On the other hand, these methods still rely on the conventional similarity metrics mentioned above.",
"In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (sec:real-experiment). Considering the existing evaluation metric for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision evaluation based on operable human annotations for balancing both efficiency and accuracy."
],
[
"In this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata and implement Chinese restaurant process to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing less than 50 times to eventually get 1165 relations. All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those sub-relations into one relation. As Figure FIGREF26 shown that the matrices-based approach is less effective than other approaches, we leave this approach out of this experiment. The results are shown in Table TABREF37 ."
],
[
"In this subsection, we evaluate various relation similarity metrics on the real-world Open IE patterns. The dataset are constructed by ReVerb. Different patterns will be regarded as different relations during training, and we also adopt various relation similarity metrics to merge similar relation patterns. Because it is nearly impossible to annotate all pattern pairs for their merging or not, meanwhile it is also inappropriate to take distantly supervised annotations as golden results. Hence, we propose a novel metric approximating recall and precision evaluation based on minimal human annotations for evaluation in this experiment.",
"Recall is defined as the yielding fraction of true positive instances over the total amount of real positive instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our settings, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs INLINEFORM0 in Open IE. A promising method is to use rejection sampling by uniform sampling from the whole space, and only keep the synonymous ones judged by crowdworkers. However, this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we could use normalized importance sampling as an alternative to get an unbiased estimation of recall.",
"Theorem 1 Suppose every sample INLINEFORM0 has a label INLINEFORM1 , and the model to be evaluated also gives its prediction INLINEFORM2 . The recall can be written as DISPLAYFORM0 ",
"where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . If we have a proposal distribution INLINEFORM2 satisfying INLINEFORM3 , we get an unbiased estimation of recall: DISPLAYFORM0 ",
"where INLINEFORM0 is a normalized version of INLINEFORM1 , where INLINEFORM2 is the unnormalized version of q, and INLINEFORM3 are i.i.d. drawn from INLINEFORM4 .",
"Similar to eq:recall-expectation, we can write the expectation form of precision: DISPLAYFORM0 ",
"where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . As these samples could be found out by performing models on it. We can simply approximate precision by Monte Carlo Sampling: DISPLAYFORM0 ",
"where INLINEFORM0 .",
"In our setting, INLINEFORM0 , INLINEFORM1 means INLINEFORM2 and INLINEFORM3 are the same relations, INLINEFORM4 means INLINEFORM5 is larger than a threshold INLINEFORM6 .",
"The results on the ReVerb Extractions dataset that we constructed are described in fig:precision-recall-openie. To approximate recall, we use the similarity scores as the proposal distribution INLINEFORM0 . 500 relation pairs are then drawn from INLINEFORM1 . To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge whether two relations in a relation pair have the same meaning. A relation pair is viewed valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with eq:recall and eq:precision. Apart from the confidential interval of precision shown in the figure, the largest INLINEFORM2 confidential interval among thresholds for recall is INLINEFORM3 . From the result, we could see that our model performs much better than other models' similarity by a very large margin."
],
[
"In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence."
],
[
"We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and train these embeddings by minimizing DISPLAYFORM0 ",
"where INLINEFORM0 is the set of training triples, INLINEFORM1 is the distance function, INLINEFORM2 is a negative sample with one element different from INLINEFORM4 uniformly sampled from INLINEFORM5 , and INLINEFORM6 is the margin.",
"During testing, for each entity pair INLINEFORM0 , TransE rank relations according to INLINEFORM1 . For each INLINEFORM2 in the test set, we call the relations with higher rank scores than INLINEFORM3 distracting relations. We then compare the similarity between the golden relation and distracting relations. Note that some entity pairs could correspond to more than one relations, in which case we just do not see them as distracting relations."
],
[
"For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculate an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best in TACRED dataset."
],
[
"fig:averank shows the distribution of similarity ranks of distracting relations of the above mentioned models' outputs on both relation prediction and relation extraction tasks. From fig:averankrp,fig:averankre, we could observe the most distracting relations are the most similar ones, which corroborate our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try to do the negative sampling with relation type constraints, but we see no improvement compared with uniform sampling. The details of negative sampling with relation type constraints are presented in sec:relation-type-constraints."
],
[
"Based on the observation presented in sec:erroranalysisresult, we find out that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as high-quality negative samples.",
"For a given valid triple INLINEFORM0 , we corrupt the triple by substituting INLINEFORM1 with INLINEFORM2 with the probability, DISPLAYFORM0 ",
"where INLINEFORM0 is the temperature of the exponential function, the bigger the INLINEFORM1 is, the flatter the probability distribution is. When the temperature approaches infinite, the sampling process reduces to uniform sampling.",
"In training, we set the initial temperature to a high level and gradually reduce the temperature. Intuitively, it enables the model to distinguish among those obviously different relations in the early stage and gives more and more confusing negative triples as the training processes to help the model distinguish the similar relations. This can be also viewed as a process of curriculum learning BIBREF21 , the data fed to the model gradually changes from simple negative triples to hard ones.",
"We perform relation prediction task on FB15K with TransE. Following BIBREF3 , we use the \"Filtered\" setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model's performance, especially on Hit@1 (fig:relationprediction). Training details are described in sec:trainingdetail."
],
[
"Similar to sec:training-guidance-relation-prediction, we find out that relation extraction models often make wrong preditions on similar relations. In this section, we use similarity as an adaptive margin in softmax-margin loss to improve the performance of relation extraction models.",
"As shown in BIBREF22 , Softmax-Margin Loss can be expressed as DISPLAYFORM0 ",
"where INLINEFORM0 denotes a structured output space for INLINEFORM1 , and INLINEFORM2 is INLINEFORM3 example in training data.",
"We can easily incorporate similarity into cost function INLINEFORM0 . In this task, we define the cost function as INLINEFORM1 , where INLINEFORM2 is a hyperparameter.",
"Intuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to Position-aware Attention LSTM (PA-LSTM) BIBREF10 , and tab:relationextraction shows our method improves the performance of PA-LSTM. Training details are described in sec:trainingdetail."
],
[
"As many early works devoted to psychology and linguistics, especially those works exploring semantic similarity BIBREF11 , BIBREF12 , researchers have empirically found there are various different categorizations of semantic relations among words and contexts. For promoting research on these different semantic relations, bejar1991cognitive explicitly defining these relations and miller1995wordnet further systematically organize rich semantic relations between words via a database. For identifying correlation and distinction between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 .",
"With the ongoing development of information extraction and effective construction of KBs BIBREF0 , BIBREF1 , BIBREF30 , relations are further defined as various types of latent connections between objects more than semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, there are accordingly various methods proposed for discovering more relations and their facts, including open information extraction BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 and relation extraction BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , and relation prediction BIBREF3 , BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 .",
"For both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distribution so that we can identify nuances between similar relations, and merge those distant surface forms of the same relations, benefitting the tasks mentioned above."
],
[
"In this paper, we introduce an effective method to quantify the relation similarity and provide analysis and a survey of applications. We note that there are a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even from audios; (3) by analyzing the distributions corresponding to different relations, one can also find some “meta-relations” between relations, such as hypernymy and hyponymy."
],
[
"This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010), the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu is supported by Tsinghua University Initiative Scientific Research Program, and Chen is also supported by DCST Student Academic Training Program. Han is also supported by 2018 Tencent Rhino-Bird Elite Training Program."
],
[
"If we have a proposal distribution INLINEFORM0 satisfying INLINEFORM1 , then eq:proofrecallfirstpart can be further written as DISPLAYFORM0 ",
"Sometimes, it's hard for us to compute normalized probability INLINEFORM0 . To tackle this problem, consider self-normalized importance sampling as an unbiased estimation BIBREF50 , DISPLAYFORM0 ",
"where INLINEFORM0 is the normalized version of INLINEFORM1 ."
],
[
"Specifically, for a relation INLINEFORM0 with currently INLINEFORM1 sub-relations, we turn it to a new sub-relation with probability DISPLAYFORM0 ",
"or to the INLINEFORM0 existing sub-relation with probability DISPLAYFORM0 ",
"where INLINEFORM0 is the size of INLINEFORM1 existing sub-relation, INLINEFORM2 is the sum of the number of all sub-relationships of INLINEFORM3 , and INLINEFORM4 is a hyperparameter, in which case we use INLINEFORM5 ."
],
[
"In Wikidata and ReVerb Extractions dataset, we manually split a validation set, assuring every entity and relation appears in validation set also appears in training set. While minimizing loss on the training set, we observe the loss on the validation set and stop training as validation loss stops to decrease. Before training our model on any dataset, we use the entity embeddings and relation embeddings produced by TransE on the dataset as the pretrained embeddings for our model."
],
[
"The sampling is launched with an initial temperature of 8192. The temperature drops to half every 200 epochs and remains stable once it hits 16. Optimization is performed using SGD, with a learning rate of 1e-3."
],
[
"The sampling is launching with an initial temperature of 64. The temperature drops by 20% per epoch, and remains stable once it hits 16. The alpha we use is 9. Optimization is performed using SGD, with a learning rate of 1."
],
[
"As is shown in fig:recallstd, the max recall standard deviation for our model is 0.4, and 0.11 for TransE."
],
[
"In FB15K, if two relations have same prefix, we regard them as belonging to a same type, e.g., both /film/film/starring./film/performance/actor and /film/actor/film./film/performance/film have prefix film, they belong to same type. Similar to what is mentioned in sec:training-guidance-relation-prediction, we expect the model first to learn to distinguish among obviously different relations, and gradually learn to distinguish similar relations. Therefore, we conduct negative sampling with relation type constraints in two ways."
],
[
"For each triple INLINEFORM0 , we have two uniform distribution INLINEFORM1 and INLINEFORM2 . INLINEFORM3 is the uniform distribution over all the relations except for those appear with INLINEFORM4 in the knowledge base, and INLINEFORM5 is the uniform distribution over the relations of the same type as INLINEFORM6 . When corrupting the triple, we sample INLINEFORM7 from the distribution: DISPLAYFORM0 ",
"where INLINEFORM0 is a hyperparameter. We set INLINEFORM1 to 1 at the beginning of training, and every INLINEFORM2 epochs, INLINEFORM3 will be multiplied by decrease rate INLINEFORM4 . We do grid search for INLINEFORM5 and INLINEFORM6 , but no improvement is observed."
],
[
"We speculate that the unsatisfactory result produced by adding up two uniform distribution is because that for those types with few relations in it, a small change of INLINEFORM0 will result in a significant change in INLINEFORM1 . Therefore, when sampling a negative INLINEFORM2 , we add weights to relations that are of the same type as INLINEFORM3 instead. Concretely, we substitute INLINEFORM4 with INLINEFORM5 with probability INLINEFORM6 , which can be calculated as: DISPLAYFORM0 ",
"where INLINEFORM0 denotes all the relations that are the same type as INLINEFORM1 , INLINEFORM2 is a hyperparameter and INLINEFORM3 is a normalizing constant. We set INLINEFORM4 to 0 at the beginning of training, and every INLINEFORM5 epochs, INLINEFORM6 will increase by INLINEFORM7 . We do grid search for INLINEFORM8 and INLINEFORM9 , still no improvement is observed."
],
[
"We show the guidance provided for the annotators here."
]
],
"section_name": [
"Introduction",
"Learning Head-Tail Distribution",
"Formal Definition of Fact Distribution",
"Neural architecture design",
"Training",
"Quantifying Similarity",
"Relations as Distributions",
"Defining Similarity",
"Calculating Similarity",
"Relationship with other metrics",
"Dataset Construction",
"Wikidata",
"ReVerb Extractions",
"FB15K and TACRED",
"Human Judgments",
"Redundant Relation Removal",
"Toy Experiment",
"Real World Experiment",
"Error Analysis for Relational Classification",
"Relation Prediction",
"Relation Extraction",
"Results",
"Similarity and Negative Sampling",
"Similarity and Softmax-Margin Loss",
"Related Works",
"Conclusion and Future Work",
"Acknowledgements",
"Proofs to theorems in the paper",
"Chinese Restaurant Process",
"Training Details",
"Training Details on Negative Sampling",
"Training Details on Softmax-Margin Loss",
"Recall Standard Deviation",
"Negative Samplilng with Relation Type Constraints",
"Add Up Two Uniform Distribution",
"Add Weight",
"Wikidata annotation guidance"
]
} | {
"answers": [
{
"annotation_id": [
"95bdaf3d6a7bda0b316622f894922a9e97014da0"
],
"answer": [
{
"evidence": [
"In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence.",
"We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and train these embeddings by minimizing DISPLAYFORM0",
"For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculate an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best in TACRED dataset."
],
"extractive_spans": [],
"free_form_answer": "For relation prediction they test TransE and for relation extraction they test position aware neural sequence model",
"highlighted_evidence": [
"In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. ",
"We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset.",
"For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1f884a8c66e2b03922b0982c043d139474a336a0"
],
"answer": [
{
"evidence": [
"In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (sec:real-experiment). Considering the existing evaluation metric for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision evaluation based on operable human annotations for balancing both efficiency and accuracy.",
"FLOAT SELECTED: Figure 3: Precision-recall curve on Open IE task comparing our similarity function with vector-based and angle-based similarity. Error bar represents 95% confidential interval. Bootstraping is used to calculate the confidential interval.",
"In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence."
],
"extractive_spans": [
"relation prediction",
"relation extraction",
"Open IE"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations.",
"FLOAT SELECTED: Figure 3: Precision-recall curve on Open IE task comparing our similarity function with vector-based and angle-based similarity. Error bar represents 95% confidential interval. Bootstraping is used to calculate the confidential interval.",
"In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2cd64b80bc3339eefa46c8cae56b6823b13e92f6"
],
"answer": [
{
"evidence": [
"We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section.",
"In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations.",
"ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset.",
"FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied."
],
"extractive_spans": [
"Wikidata",
"ReVerb",
"FB15K",
"TACRED"
],
"free_form_answer": "",
"highlighted_evidence": [
"We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section.",
"In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). ",
"We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data.",
"ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia.",
"FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ecdbad0b58727c3e230a18bd851ea3295cece946"
],
"answer": [
{
"evidence": [
"Human Judgments",
"Following BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by leaving-one-out method BIBREF13 , is r = INLINEFORM0 , standard deviation = INLINEFORM1 . This important reference value (marked in fig:correlation) could be seen as the highest expected performance for machines BIBREF12 ."
],
"extractive_spans": [],
"free_form_answer": "By assessing similarity of 360 pairs of relations from a subset of Wikidata using an integer similarity score from 0 to 4",
"highlighted_evidence": [
"Human Judgments\nFollowing BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"c2ffbf5dc43e19a73cd8ed56c0a1bb60344ca7c3"
],
"answer": [
{
"evidence": [
"Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0",
"where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 ."
],
"extractive_spans": [
"monte-carlo",
"sequential sampling"
],
"free_form_answer": "",
"highlighted_evidence": [
"Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0\n\nwhere INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which competitive relational classification models do they test?",
"Which tasks do they apply their method to?",
"Which knowledge bases do they use?",
"How do they gather human judgements for similarity between relations?",
"Which sampling method do they use to approximate similarity between the conditional probability distributions over entity pairs?"
],
"question_id": [
"10ddc5caf36fe9d7438eb5a3936e24580c4ffe6a",
"29571867fe00346418b1ec36c3b7685f035e22ce",
"1a678d081f97531d54b7122254301c20b3531198",
"b9f2a30f5ef664ff845d860cf4bfc2afb0a46e5a",
"3513682d4ee2e64725b956c489cd5b5995a6acf2"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: An illustration of the errors made by relation extraction models. The sentence contains obvious patterns indicating the two persons are siblings, but the model predicts it as parents. We introduce an approach to measure the similarity between relations. Our result shows “siblings” is the second most similar one to “parents”. By applying this approach, we could analyze the errors made by models, and help reduce errors.",
"Table 2: Methods to define a similarity function with different types of relation representations",
"Figure 1: Head-tail entity pairs of relation “be an unincorporated community in” (in blue) and “be a small city in” (in red) sampled from our fact distribution model. The coordinates of the points are computed by t-sne (Maaten and Hinton, 2008) on the concatenation of head and tail embeddings8. The two larger blue and red points indicate the embeddings of these two relations.",
"Figure 2: Spearman correlations between human judgment and models’ outputs. The inter-subject correlation is also shown as a horizontal line with its standard deviation as an error band. Our model shows the strongest positive correlation with human judgment, and, in other words, the smallest margin with human inter-subject agreement. Significance: ***/**/* := p < .001/.01/.05.",
"Table 3: Statistics of the triple sets used in this paper.",
"Table 4: The experiment results on the toy dataset show that our metric based on probability distribution significantly outperforms other relation similarity metrics.",
"Figure 3: Precision-recall curve on Open IE task comparing our similarity function with vector-based and angle-based similarity. Error bar represents 95% confidential interval. Bootstraping is used to calculate the confidential interval.",
"Figure 4: Similarity rank distributions of distracting relations on different tasks and datasets. Most of the distracting relations have top similarity rank. Distracting relations are, as defined previously, the relations have a higher rank in the relation classification result than the ground truth.",
"Table 5: Improvement of using similarity in softmaxmargin loss.",
"Figure 6: The recall standard deviation of different models."
],
"file": [
"1-Table1-1.png",
"4-Table2-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"8-Table5-1.png",
"12-Figure6-1.png"
]
} | [
"Which competitive relational classification models do they test?",
"How do they gather human judgements for similarity between relations?"
] | [
[
"1907.08937-Error Analysis for Relational Classification-0",
"1907.08937-Relation Extraction-0"
],
[
"1907.08937-Human Judgments-0"
]
] | [
"For relation prediction they test TransE and for relation extraction they test position aware neural sequence model",
"By assessing similarity of 360 pairs of relations from a subset of Wikidata using an integer similarity score from 0 to 4"
] | 314 |
1803.02839 | The emergent algebraic structure of RNNs and embeddings in NLP | We examine the algebraic and geometric properties of a uni-directional GRU and word embeddings trained end-to-end on a text classification task. A hyperparameter search over word embedding dimension, GRU hidden dimension, and a linear combination of the GRU outputs is performed. We conclude that words naturally embed themselves in a Lie group and that RNNs form a nonlinear representation of the group. Appealing to these results, we propose a novel class of recurrent-like neural networks and a word embedding scheme. | {
"paragraphs": [
[
"Tremendous advances in natural language processing (NLP) have been enabled by novel deep neural network architectures and word embeddings. Historically, convolutional neural network (CNN) BIBREF0 , BIBREF1 and recurrent neural network (RNN) BIBREF2 , BIBREF3 topologies have competed to provide state-of-the-art results for NLP tasks, ranging from text classification to reading comprehension. CNNs identify and aggregate patterns with increasing feature sizes, reflecting our common practice of identifying patterns, literal or idiomatic, for understanding language; they are thus adept at tasks involving key phrase identification. RNNs instead construct a representation of sentences by successively updating their understanding of the sentence as they read new words, appealing to the formally sequential and rule-based construction of language. While both networks display great efficacy at certain tasks BIBREF4 , RNNs tend to be the more versatile, have emerged as the clear victor in, e.g., language translation BIBREF5 , BIBREF6 , BIBREF7 , and are typically more capable of identifying important contextual points through attention mechanisms for, e.g., reading comprehension BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . With an interest in NLP, we thus turn to RNNs.",
"RNNs nominally aim to solve a general problem involving sequential inputs. For various more specified tasks, specialized and constrained implementations tend to perform better BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 , BIBREF8 , BIBREF9 . Often, the improvement simply mitigates the exploding/vanishing gradient problem BIBREF18 , BIBREF19 , but, for many tasks, the improvement is more capable of generalizing the network's training for that task. Understanding better how and why certain networks excel at certain NLP tasks can lead to more performant networks, and networks that solve new problems.",
"Advances in word embeddings have furnished the remainder of recent progress in NLP BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Although it is possible to train word embeddings end-to-end with the rest of a network, this is often either prohibitive due to exploding/vanishing gradients for long corpora, or results in poor embeddings for rare words BIBREF26 . Embeddings are thus typically constructed using powerful, but heuristically motivated, procedures to provide pre-trained vectors on top of which a network can be trained. As with the RNNs themselves, understanding better how and why optimal embeddings are constructed in, e.g., end-to-end training can provide the necessary insight to forge better embedding algorithms that can be deployed pre-network training.",
"Beyond improving technologies and ensuring deep learning advances at a breakneck pace, gaining a better understanding of how these systems function is crucial for allaying public concerns surrounding the often inscrutable nature of deep neural networks. This is particularly important for RNNs, since nothing comparable to DeepDream or Lucid exists for them BIBREF27 .",
"To these ends, the goal of this work is two fold. First, we wish to understand any emergent algebraic structure RNNs and word embeddings, trained end-to-end, may exhibit. Many algebraic structures are well understood, so any hints of structure would provide us with new perspectives from which and tools with which deep learning can be approached. Second, we wish to propose novel networks and word embedding schemes by appealing to any emergent structure, should it appear.",
"The paper is structured as follows. Methods and experimental results comprise the bulk of the paper, so, for faster reference, § SECREF2 provides a convenient summary and intrepretation of the results, and outlines a new class of neural network and new word embedding scheme leveraging the results. § SECREF3 motivates the investigation into algebraic structures and explains the experimental setup. § SECREF4 Discusses the findings from each of the experiments. § SECREF5 interprets the results, and motivates the proposed network class and word embeddings. § SECREF6 provides closing remarks and discusses followup work, and § SECREF7 gives acknowledgments.",
"To make a matter of notation clear going forward, we begin by referring to the space of words as INLINEFORM0 , and transition to INLINEFORM1 after analyzing the results in order to be consistent with notation in the literature on algebraic spaces."
],
[
"We embedded words as vectors and used a uni-directional GRU connected to a dense layer to classify the account from which tweets may have originated. The embeddings and simple network were trained end-to-end to avoid imposing any artificial or heuristic constraints on the system.",
"There are two primary takeaways from the work presented herein:",
"The first point follows since 1) words are embedded in a continuous space; 2) an identity word exists that causes the RNN to act trivially on a hidden state; 3) word inverses exist that cause the RNN to undo its action on a hidden state; 4) the successive action of the RNN using two words is equivalent to the action of the RNN with a single third word, implying the multiplicative closure of words; and 5) words are not manifestly closed under any other binary action.",
"The second point follows given that words embed on a manifold, sentences traces out paths on the manifold, and the difference equation the RNN solves bears a striking resemble to the first order equation for parallel transport, DISPLAYFORM0 ",
" where INLINEFORM0 is the INLINEFORM1 -th hidden state encountered when reading over a sentence and INLINEFORM2 is the RNN conditioned by the INLINEFORM3 -th word, INLINEFORM4 , acting on the hidden state. Since sentences trace out a path on the word manifold, and parallel transport operators for representations of the word manifold take values in the group, the RNN must parallel transport hidden states either on the group itself or on a base space, INLINEFORM5 , equipped with some word field, INLINEFORM6 , that connects the path in the base space to the path on the word manifold.",
"Leveraging these results, we propose two new technologies.",
"First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0 ",
"where DISPLAYFORM0 ",
" and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation.",
"Second, we propose embedding schemes that explicitly embed words as elements of a Lie group. In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation.",
"The proposals are only briefly discussed herein, as they are the focus of followup work; the focus of the present work is on the experimental evidence for the emergent algebraic structure of RNNs and embeddings in NLP."
],
[
"We provide two points to motivate examining the potential algebraic properties of RNNs and their space of inputs in the context of NLP.",
"First, a RNN provides a function, INLINEFORM0 , that successively updates a hidden memory vector, INLINEFORM1 , characterizing the information contained in a sequence of input vectors, INLINEFORM2 , as it reads over elements of the sequence. Explicitly, INLINEFORM3 . At face value, INLINEFORM4 takes the same form as a (nonlinear) representation of some general algebraic structure, INLINEFORM5 , with at least a binary action, INLINEFORM6 , on the vector space INLINEFORM7 . While demanding much structure on INLINEFORM8 generally places a strong constraint on the network's behavior, it would be fortuitous for such structure to emerge. Generally, constrained systems still capable of performing a required task will perform the task better, or, at least, generalize more reliably BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . To this end, the suggestive form RNNs assume invites further examination to determine if there exist any reasonable constraints that may be placed on the network. To highlight the suggestiveness of this form in what follows, we represent the INLINEFORM9 argument of INLINEFORM10 as a subscript and the INLINEFORM11 argument by treating INLINEFORM12 as a left action on INLINEFORM13 , adopting the notation INLINEFORM14 . Since, in this paper, we consider RNNs vis-à-vis NLP, we take INLINEFORM15 as the (continuous) set of words.",
"Second, in the massive exploration of hyperparameters presented in BIBREF5 , it was noted that, for a given word embedding dimension, the network's performance on a seq2seq task was largely insensitive to the hidden dimension of the RNN above a threshold ( INLINEFORM0 128). The dimension of admissible representations of a given algebraic structure is generally discrete and spaced out. Interpreting neurons as basis functions and the output of layers as elements of the span of the functions BIBREF34 , BIBREF35 , BIBREF36 , we would expect a network's performance to improve until an admissible dimension for the representation is found, after which the addition of hidden neurons would simply contribute to better learning the components of the proper representation by appearing in linear combinations with other neurons, and contribute minimally to improving the overall performance. In their hyperparameter search, a marginal improvement was found at a hidden dimension of 2024, suggesting a potentially better representation may have been found.",
"These motivating factors may hint at an underlying algebraic structure in language, at least when using RNNs, but they raise the question: what structures are worth investigating?",
"Groups present themselves as a candidate for consideration since they naturally appear in a variety of applications. Unitary weight matrices have already enjoyed much success in mitigating the exploding/vanishing gradients problem BIBREF13 , BIBREF14 , and RNNs even further constrained to act explicitly as nonlinear representations of unitary groups offer competitive results BIBREF15 . Moreover, intuitively, RNNs in NLP could plausibly behave as a group since: 1) the RNN must learn to ignore padding words used to square batches of training data, indicating an identity element of INLINEFORM0 must exist; 2) the existence of contractions, portmanteaus, and the Germanic tradition of representing sentences as singular words suggest INLINEFORM1 might be closed; and 3) the ability to backtrack and undo statements suggests language may admit natural inverses - that is, active, controlled “forgetting\" in language may be tied to inversion. Indeed, groups seem reasonably promising.",
"It is also possible portmanteaus only make sense for a finite subset of pairs of words, so INLINEFORM0 may take on the structure of a groupoid instead; moreover, it is possible, at least in classification tasks, that information is lost through successive applications of INLINEFORM1 , suggesting an inverse may not actually exist, leaving INLINEFORM2 as either a monoid or category. INLINEFORM3 may also actually admit additional structure, or an additional binary operation, rendering it a ring or algebra.",
"To determine what, if any, algebraic structure INLINEFORM0 possesses, we tested if the following axiomatic properties of faithful representations of INLINEFORM1 hold:",
"(Identity) INLINEFORM0 such that INLINEFORM1 , INLINEFORM2 ",
"(Closure under multiplication) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 ",
"(Inverse) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 ",
"(Closure under Lie bracket) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 ",
"Closure under Lie bracket simultaneously checks for ring and Lie algebra structures.",
"Whatever structure, if any, INLINEFORM0 possesses, it must additionally be continuous since words are typically embedded in continuous spaces. This implies Lie groups (manifolds), Lie semigroups with an identity (also manifolds), and Lie algebras (vector spaces with a Lie bracket) are all plausible algebraic candidates."
],
[
"We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 ).",
"Algebraic structures typically exhibit some relationship between the dimension of the structure and the dimension of admissible representations, so exploring the embedding and hidden dimensions for which certain algebraic properties hold is of interest. Additionally, beyond the present interest in algebraic properties, the network's insensitivity to the hidden dimension invites an investigation into its sensitivity to the word embedding dimension. To address both points of interest, we extend the hyperparameter search of BIBREF5 , and perform a comparative search over embedding dimensions and hidden dimensions to determine the impact of each on the network's performance and algebraic properties. Each dimension in the hyperparameter pair, INLINEFORM0 , runs from 20 to 280 by increments of 20.",
"After training the network for each hyperparameter pair, the GRU model parameters and embedding matrix were frozen to begin testing for emergent algebraic structure. To satisfy the common “ INLINEFORM0 \" requirement stated in § SECREF6 , real hidden states encountered in the testing data were saved to be randomly sampled when testing the actions of the GRU on states. 7 tests were conducted for each hyperparameter pair with randomly selected states:",
"Identity (“arbitrary identity\")",
"Inverse of all words in corpus (“arbitrary inverse\")",
"Closure under multiplication of arbitrary pairs of words in total corpus (“arbitrary closure\")",
"Closure under commutation of arbitrary pairs of words in total corpus (“arbitrary commutativity\")",
"Closure under multiplication of random pairs of words from within each tweet (“intra-sentence closure\")",
"Closure of composition of long sequences of words in each tweet (“composite closure\")",
"Inverse of composition of long sequences of words in each tweet (“composite inverse\")",
"Tests 6 and 7 were performed since, if closure is upheld, the composition of multiple words must also be upheld. These tests were done to ensure mathematical consistency.",
"To test for the existence of “words\" that satisfy these conditions, vectors were searched for that, when inserted into the GRU, minimized the ratio of the Euclidean norms of the difference between the “searched\" hidden vector and the correct hidden vector. For concreteness, the loss function for each algebraic property from § SECREF6 were defined as follows:",
"(Identity) DISPLAYFORM0 ",
"(Closure under multiplication) DISPLAYFORM0 ",
"(Inverse) DISPLAYFORM0 ",
"(Closure under Lie bracket) DISPLAYFORM0 ",
"where INLINEFORM0 are random, learned word vectors, INLINEFORM1 is a hidden state, and INLINEFORM2 is the model parameter trained to minimize the loss. We refer to Eqs.( SECREF12 ) as the “axiomatic losses.\" It is worth noting that the non-zero hidden state initialization was chosen to prevent the denominators from vanishing when the initial state is selected as a candidate INLINEFORM3 in Eqs.( EQREF22 )&( EQREF26 ). The reported losses below are the average across all INLINEFORM4 's and INLINEFORM5 's that were examined. Optimization over the losses in Eqs.( SECREF12 ) was performed over 5000 epochs. For the associated condition to be satisfied, there must exist a word vector INLINEFORM6 that sufficiently minimizes the axiomatic losses.",
"If it is indeed the case that the GRU attempts to learn a representation of an algebraic structure and each neuron serves as a basis function, it is not necessary that each neuron individually satisfies the above constraints. For clarity, recall the second motivating point that the addition of neurons, once a representation is found, simply contributes to learning the representation better. Instead, only a linear combination of the neurons must. We consider this possibility for the most task-performant hyperparameter pair, and two other capricious pairs. The target dimension of the linear combination, INLINEFORM0 , which we refer to as the “latent dimension,\" could generally be smaller than the hidden dimension, INLINEFORM1 . To compute the linear combination of the neurons, the outputs of the GRU were right-multiplied by a INLINEFORM2 matrix, INLINEFORM3 : DISPLAYFORM0 ",
" Since the linear combination is not à priori known, INLINEFORM0 is treated as a model parameter.",
"The minimization task previously described was repeated with this combinatorial modification while scanning over latent dimensions, INLINEFORM0 , in steps of 20. The test was performed 10 times and the reported results averaged for each value of INLINEFORM1 to reduce fluctuations in the loss from differing local minima. INLINEFORM2 was trained to optimize various combinations of the algebraic axioms, the results of which were largely found to be redundant. In § SECREF4 , we address the case in which INLINEFORM3 was only trained to assist in optimizing a single condition, and frozen in other axiomatic tests; the commutative closure condition, however, was given a separate linear combination matrix for reasons that will be discussed later.",
"Finally, the geometric structure of the resulting word vectors was explored, naively using the Euclidean metric. Sentences trace out (discrete) paths in the word embedding space, so it was natural to consider relationships between both word vectors and vectors “tangent\" to the sentences' paths. Explicitly, the angles and distances between",
"random pairs of words",
"all words and the global average word vector",
"random pairs of co-occurring words",
"all words with a co-occurring word vector average",
"adjacent tangent vectors",
"tangent vectors with a co-occurring tangent vector average",
"were computed to determine how word vectors are geometrically distributed. Intuitively, similar words are expected to affect hidden states similarly. To test this, and to gain insight into possible algebraic interpretations of word embeddings, the ratio of the Euclidean norm of the difference between hidden states produced by acting on a hidden state with two different words to the Euclidean norm of the original hidden state was computed as a function of the popular cosine similarity metric and distance between embeddings. This fractional difference, cosine similarity, and word distance were computed as, DISPLAYFORM0 ",
" where Einstein summation is applied to the (contravariant) vector indices.",
"High-level descriptions of the methods will be briefly revisited in each subsection of § SECREF4 so that they are more self-contained and pedagogical."
],
[
"We performed hyperparameter tuning over the word embedding dimension and the GRU hidden dimension to optimize the classifier's accuracy. Each dimension ran from 20 to 280 in increments of 20. A contour plot of the hyperparameter search is shown in Fig.( FIGREF39 ).",
"For comparison, using pretrained, 50 dimensional GloVe vectors with this network architecture typically yielded accuracies on the order of INLINEFORM0 on this data set, even for more performant hidden dimensions. Thus, training the embeddings end-to-end is clearly advantageous for short text classification. It is worth noting that training them end-to-end is viable primarily because of the short length of tweets; for longer documents, exploding/vanishing gradients typically prohibits such training.",
"The average Fisher information of each hyperparameter dimension over the searched region was computed to determine the relative sensitivities of the model to the hyperparameters. The Fisher information for the hidden dimension was INLINEFORM0 ; the Fisher information for the embedding dimension was INLINEFORM1 . Evidently, by this metric, the model was, on average in this region of parameter space, 1.76 times more sensitive to the hidden dimension than the embedding dimension. Nevertheless, a larger word embedding dimension was critical for the network to realize its full potential.",
"The model performance generally behaved as expected across the hyperparameter search. Indeed, higher embedding and hidden dimensions tended to yield better results. Given time and resource constraints, the results are not averaged over many search attempts. Consequently, it is unclear if the pockets of entropy are indicative of anything deeper, or merely incidental fluctuations. It would be worthwhile to revisit this search in future work."
],
[
"Seven tests were conducted for each hyperparameter pair to explore any emergent algebraic structure the GRU and word embeddings may exhibit. Specifically, the tests searched for 1) the existence of an identity element, 2) existence of an inverse word for each word, 3) multiplicative closure for arbitrary pairs of words, 4) commutative closure for arbitrary pairs of words, 5) multiplicative closure of pairs of words that co-occur within a tweet, 6) multiplicative closure of all sequences of words that appear in tweets, and 7) the existence of an inverse for all sequences of words that appear in tweets. The tests optimized the axiomatic losses defined in Eqs.( SECREF12 ).",
"In what follows, we have chosen INLINEFORM0 (or, INLINEFORM1 error) as the criterion by which we declare a condition “satisfied.\"",
"The tests can be broken roughly into two classes: 1) arbitrary solitary words and pairs of words, and 2) pairs and sequences of words co-occurring within a tweet. The results for class 1 are shown in Fig.( FIGREF41 ); the results for class 2 are shown in Fig.( FIGREF42 ).",
"The identity condition was clearly satisfied for virtually all embedding and hidden dimensions, with possible exceptions for small embedding dimensions and large hidden dimensions. Although we did not explicitly check, it is likely that even the possible exceptions would be viable in the linear combination search.",
"Arbitrary pairs of words were evidently not closed under multiplication without performing a linear combination search, with a minimum error of INLINEFORM0 across all dimensions. Moreover, the large entropy across the search does not suggest any fundamentally interesting or notable behavior, or any connections between the embedding dimension, hidden dimension, and closure property.",
"Arbitrary pairs of words were very badly not closed under commutation, and it is unfathomable that even a linear combination search could rescue the property. One might consider the possibility that specific pairs of words might have still closed under commutation, and that the exceptional error was due to a handful of words that commute outright since this would push the loss up with a near-vanishing denominator. As previously stated, the hidden states were not initialized to be zero states, and separate experiments confirm that the zero state was not in the orbit of any non-zero state, so there would have been no hope to negate the vanishing denominator. Thus, this concern is in principle possible. However, explicitly removing examples with exploding denominators (norm INLINEFORM0 ) from the loss when performing linear combination searches still resulted in unacceptable errors ( INLINEFORM1 ), so this possibility is not actually realized. We did not explicitly check for this closure in class 2 tests since class 2 is a subset of class 1, and such a flagrant violation of the condition would not be possible if successful closure in class 2 were averaged into class 1 results. Even though commutative closure is not satisfied, it is curious to note that the error exhibited a mostly well-behaved stratification.",
"The most interesting class 1 result was the arbitrary inverse. For embedding dimensions sufficiently large compared to the hidden dimension, inverses clearly existed even without a linear combination search. Even more remarkable was the well-behaved stratification of the axiomatic error, implying a very clear relationship between the embedding dimension, hidden dimension, and emergent algebraic structure of the model. It is not unreasonable to expect the inverse condition to be trivially satisfied in a linear combination search for a broad range of hyperparameter pairs.",
"The same behavior of the inverse property is immediately apparent in all class 2 results. The stratification of the error was virtually identical, and all of the tested properties have acceptable errors for sufficiently large embedding dimensions for given hidden dimensions, even without a linear combination search."
],
[
"The optimal hyperparameter pair for this single pass of tuning was INLINEFORM0 , which resulted in a model accuracy of INLINEFORM1 . This was not a statistically significant result since multiple searches were not averaged, so random variations in validation sets and optimization running to differing local minima may have lead to fluctuations in the test accuracies. However, the selection provided a reasonable injection point to investigate the algebraic properties of linear combinations of the output of the GRU's neurons. For comparison, we also considered INLINEFORM2 and INLINEFORM3 .",
"The tests were run with the linear combination matrix, INLINEFORM0 , trained to assist in optimizing the composite inverse. The learned INLINEFORM1 was then applied to the output hidden states for the other properties except for commutative closure, which was given its own linear combination matrix to determine if any existed that would render it an emergent property.",
"The combination was trained to optimize a single condition because, if there exists an optimal linear combination for one condition, and there indeed exists an underlying algebraic structure incorporating other conditions, the linear combination would be optimal for all other conditions.",
"Initial results for the INLINEFORM0 search is shown in Figs.( FIGREF45 )&( FIGREF46 ). Well-optimized properties are shown in Fig.( FIGREF45 ), while the expected poorly-optimized properties are shown in Fig.( FIGREF46 ).",
"The four conditions examined in Fig.( FIGREF45 ) are clearly satisfied for all latent dimensions. They all also reach a minimum error in the same region. Composite closure, intra-sentence closure, and arbitrary inverse are all optimized for INLINEFORM0 ; composite inverse is optimized for INLINEFORM1 , though the variation in the range INLINEFORM2 is small ( INLINEFORM3 variation around the mean, or an absolute variation of INLINEFORM4 in the error).",
"Arbitrary multiplicative closure and commutative closure are highly anti-correlated, and both conditions are badly violated. It is worth noting that the results in Fig.( FIGREF46 )(b) did not remove commutative pairs of words from the error, and yet the scale of the error in the linear combination search is virtually identical to what was separately observed with the commutative pairs removed. They both also exhibit a monotonic dependence on the latent dimension. Despite their violation, this dependence is well-behaved, and potentially indicative of some other structure.",
"Before discussing the linear combination searches for the other selected hyperparameter pairs, it is worthwhile noting that retraining the network and performing the linear combination search again can yield differing results. Figs.( FIGREF47 )&( FIGREF48 ) show the linear combination results after retraining the model for the same hyperparameter pair, with a different network performance of INLINEFORM0 .",
"Qualitatively, the results are mostly the same: there is a common minimizing region of INLINEFORM0 , and conditions are satisfied, at least in the common minimal region. However, the minimizing region starkly shifted down, and became sharper for composite closure, intra-sentence closure, and arbitrary inverse.",
"Once more, the results are mostly the same. Arbitrary closure error drastically increased, but both are still highly anti-correlated, and mostly monotonic, despite the erratic fluctuations in the arbitrary closure error.",
"Figs.( FIGREF49 )&( FIGREF50 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results.",
"Interestingly, the optimal latent dimension occurs significantly higher than for the other reported hyperparameter pairs. This result, however, is not true for all retrainings at this INLINEFORM0 pair.",
"The entropy in the arbitrary closure loss increased, and the commutative closure loss seemed to asymptote at higher latent dimension.",
"Figs.( FIGREF51 )&( FIGREF52 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results.",
"At lower dimensions, the optimal latent dimension was no longer shared between the satisfied conditions.",
"The unsatisfied conditions displayed mostly the same behavior at lower dimensions."
],
[
"To explore the geometric distribution of word vectors, the angles and distances between 1) random pairs of words, 2) all words and the global average word vector, 3) random pairs of co-occurring words, 4) all words with a co-occurring word vector average, 5) adjacent tangent vectors, 6) tangent vectors with a co-occurring tangent vector average were computed. The magnitudes of the average word vectors, average co-occurring word vectors, and average tangent vectors were also computed.",
"Additionally, the relative effect of words on states is computed verses their cosine similarities and relative distances, measured by Eqs.( EQREF37 )-().",
"In the figures that follow, there are, generally, three categories of word vectors explored: 1) random word vectors from the pool of all word vectors, 2) co-occurring word vectors, and 3) tangent vectors (the difference vector between adjacent words).",
"Fig.( FIGREF54 ) shows the distribution in the Euclidean norms of the average vectors that were investigated.",
"The tangent vectors and average word vectors had comparable norms. The non-zero value of the average word vector indicates that words do not perfectly distribute throughout space. The non-zero value of the average tangent vectors indicates that tweets in general progress in a preferred direction relative to the origin in embedding space; albeit, since the magnitudes are the smallest of the categories investigated, the preference is only slight. The norm of the average of co-occurring word vectors is significantly larger than the norms of others categories of vectors, indicating that the words in tweets typically occupy a more strongly preferred region of embedding space (e.g. in a cone, thus preventing component-wise cancellations when computing the average).",
"Fig.( FIGREF55 ) shows the distribution of the Euclidean cosine similarities of both pairs of vectors and vectors relative to the categorical averages.",
"The cosine similarity of pairs of random words and co-occurring words shared a very common distribution, albeit with the notable spikes are specific angles and a prominent spike at INLINEFORM0 for co-occurring pairs. The prominent spike could potentially be explained by the re-occurrence of punctuation within tweets, so it may not indicate anything of importance; the potential origin of the smaller spikes throughout the co-occurring distribution is unclear. Generally, the pairs strongly preferred to be orthogonal, which is unsurprising given recent investigations into the efficacy of orthogonal embeddings BIBREF37 . Adjacent pairs of tangent vectors, however, exhibited a very strong preference for obtuse relative angles, with a spike at INLINEFORM1 .",
"Words tended to have at most a very slightly positive cosine similarity to the global average, which is again indicative of the fact words did not spread out uniformly. Co-occurring words tended to form acute angles with respect to the co-occurring average. Meanwhile, tangent vectors strongly preferred to be orthogonal to the average.",
"The strong negative cosine similarity of adjacent tangent vectors, and the strong positive cosine similarity of words with their co-occurring average, indicate co-occurring words tended to form a grid structure in a cone. That is, adjacent words tended to be perpendicular to each other in the positive span of some set of word basis vectors. Of course, this was not strictly adhered to, but the preferred geometry is apparent.",
"Fig.( FIGREF56 ) shows the distribution of the Euclidean distances of both pairs of vectors and vectors relative to the categorical averages.",
"Distributions of random pairs of words and co-occurring words were virtually identical in both plots, indicating that most of the variation is attributable to the relative orientations of the vectors rather than the distances between them.",
"Fig.( FIGREF57 ) shows the correlation of the similarity of the action of pairs of words to their cosine similarity and distances apart.",
"Both plots confirm that the more similar words are, the more similar their actions on the hidden states are. The strongly linear, bi-modal dependence of the fractional difference on the distance between words indicates that word distance is a stronger predictor of the relative meaning of words than the popular cosine similarity."
],
[
"The important take-aways from the results are:",
"The GRU trivially learned an identity `word'.",
"The action of the GRU for any individual word admits an inverse for sufficiently large embedding dimension relative to the hidden dimension.",
"The successive action of the GRU for any arbitrary pair of words is not, generally, equivalent to the action of the GRU for any equivalent third `word'.",
"The commutation of successive actions of the GRU for any arbitrary pair of words is not equivalent to the action of the GRU for any equivalent third `word'.",
"The successive action of the GRU for any co-occurring pair of words is equivalent to the action of the GRU for an equivalent third `word' for sufficiently large embedding dimension relative to the hidden dimension.",
"The successive action of the GRU for any series of co-occuring words is equivalent to the action of the GRU for an equivalent `word' for sufficiently large embedding dimension relative to the hidden dimension.",
"The action of the GRU for any series of co-occurring words admits an inverse for sufficiently large embedding dimension relative to the hidden dimension.",
"Any condition satisfied for a sufficiently large embedding dimension relative to the hidden dimension is true for any pair of dimensions given an appropriate linear combination of the outputs of the GRU projected into an appropriate lower dimension (latent dimension).",
"The axiomatic errors for all satisfied conditions for the most performant models are minimized for specific, shared latent dimensions, and increases away from these latent dimensions; the optimal latent dimension is not shared for sufficiently small embedding dimensions.",
"Models with lower test performance tend to optimally satisfy these conditions for lower latent dimensions.",
"Co-occurring word vectors tend to be perpendicular to each other and occupy a cone in embedding space.",
"The difference of the action of two word vectors on a hidden state increases linearly with the distance between the two words, and follows a generally bi-modal trend.",
"Although there are still several outstanding points to consider, we offer an attempt to interpret these results in this section.",
"Identity, inverse, and closure properties for co-occurring words are satisfied, and in such a way that they are all related under some algebraic structure. Since closure is not satisfied for arbitrary pairs of words, there are, essentially, two possible explanations for the observed structure:",
"The union of all sets of co-occurring words is the Cartesian product of multiple Lie groups: DISPLAYFORM0 ",
"where INLINEFORM0 is the space of words, and INLINEFORM1 is a Lie group. Since multiplication between groups is not defined, the closure of arbitrary pairs of words is unsatisfied.",
"The GRU's inability to properly close pairs of words it has never encountered together is the result of the generalization problem, and all words consequently embed in a larger Lie group: DISPLAYFORM0 ",
"In either case, words can be considered elements of a Lie group. Since Lie groups are also manifolds, the word vector components can be interpreted as coordinates on this Lie group. Traditionally, Lie groups are practically handled by considering the Lie algebra that generates them, INLINEFORM0 . The components of the Lie vectors in INLINEFORM1 are then typically taken to be the coordinates on the Lie group. This hints at a connection between INLINEFORM2 and the word vectors, but this connection was not made clear by any of the experiments. Furthermore, RNNs learn a nonlinear representation of the group on some latent space spanned by the hidden layer.",
"Since sentences form paths on the embedding group, it's reasonable to attempt to form a more precise interpretation of the action of RNNs. We begin by considering their explicit action on hidden states as the path is traversed: DISPLAYFORM0 ",
" Eq.() takes the form of a difference equation. In particular, it looks very similar to the finite form of the differential equation governing the nonlinear parallel transport along a path, INLINEFORM0 , on a principal fibre bundle with base space INLINEFORM1 and group INLINEFORM2 . If the tangent vector at INLINEFORM3 is INLINEFORM4 , and the vector being transported at INLINEFORM5 is INLINEFORM6 then we have DISPLAYFORM0 ",
" where INLINEFORM0 is the (nonlinear) connection at INLINEFORM1 . If INLINEFORM2 were explicitly a function of INLINEFORM3 , Eq.( EQREF76 ) would take a more familiar form: DISPLAYFORM0 ",
" Given the striking resemblance between Eqs.( EQREF77 )&(), is it natural to consider either",
"The word embedding group serving as the base space, INLINEFORM0 , so that the path INLINEFORM1 corresponds explicitly to the sentence path.",
"A word field on the base space, INLINEFORM0 , so that there exists a mapping between INLINEFORM1 and the sentence path.",
"The second option is more general, but requires both a candidate for INLINEFORM0 and a compelling way to connect INLINEFORM1 and INLINEFORM2 . This is also more challenging, since, generally, parallel transport operators, while taking values in the group, are not closed. If the path were on INLINEFORM3 itself, closure would be guaranteed, since any parallel transport operator would be an element of the co-occurring subgroup, and closure arises from an equivalence class of paths.",
"To recapitulate the final interpretations of word embeddings and RNNs in NLP:",
"Words naturally embed as elements in a Lie group, INLINEFORM0 , and end-to-end word vectors may be related to the generating Lie algebra.",
"RNNs learn to parallel transport nonlinear representations of INLINEFORM0 either on the Lie group itself, or on a principal INLINEFORM1 -bundle."
],
[
"The geometric derivative along a path parameterized by INLINEFORM0 is defined as: DISPLAYFORM0 ",
"where INLINEFORM0 is the tangent vector at INLINEFORM1 , and INLINEFORM2 is the connection. This implies RNNs learn the solution of the first-order geometric differential equation: DISPLAYFORM0 ",
"It is natural, then, to consider neural network solutions to higher-order generalizations: DISPLAYFORM0 ",
"Networks that solve Eq.( EQREF85 ) are recurrent-like. Updates to a hidden state will generally depend on states beyond the immediately preceding one; often, this dependence can be captured by evolving on the phase space of the hidden states, rather than on the sequences of the hidden states themselves. The latter results in a nested RNN structure for the recurrent-like cell, similar to the structure proposed in BIBREF12 .",
"Applications of Eq.( EQREF85 ) are currently being explored. In particular, if no additional structure exists and RNNs parallel transport states along paths on the word embedding group itself (the first RNN interpretation), geodesics emerge as a natural candidate for sentence paths to lie on. Thus, sentence generation could potentially be modeled using the geodesic equation and a nonlinear adjoint representation: INLINEFORM0 , INLINEFORM1 in Eq.( EQREF85 ). This geodesic neural network (GeoNN) is the topic of a manuscript presently in preparation."
],
[
"The embeddings trained end-to-end in this work provided highly performant results. Unfortunately, training embeddings on end-tasks with longer documents is challenging, and the resulting embeddings are often poor for rare words. However, it would seem constructing pre-trained word embeddings by leveraging the emergent Lie group structure observed herein could provide competitive results without the need for end-to-end training.",
"Intuitively, it is unsurprising groups appear as a candidate to construct word embeddings. Evidently, the proximity of words is governed by their actions on hidden states, and groups are often the natural language to describe actions on vectors. Since groups are generally non-commutative, embedding words in a Lie group can additionally capture their order- and context-dependence. Lie groups are also generated by Lie algebras, so one group can act on the algebra of another group, and recursively form a hierarchical tower. Such an arrangement can explicitly capture the hierarchical structure language is expected to exhibit. E.g., the group structure in the first interpretation given by Eq.( EQREF72 ), DISPLAYFORM0 ",
"admits, for appropriately selected INLINEFORM0 , hierarchical representations of the form DISPLAYFORM0 ",
" where INLINEFORM0 . Such embedding schemes have the potential to generalize current attempts at capturing hierarchy, such as Poincaré embeddings BIBREF22 . Indeed, hyperbolic geometries, such as the Poincaré ball, owe their structure to their isometry groups. Indeed, it is well known that the hyperbolic INLINEFORM1 dimensional Minkowski space arises as a representation of INLINEFORM2 + translation symmetries.",
"In practice, Lie group embedding schemes would involve representing words as constrained matrices and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, dubbed “LieGr,\" in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation."
],
[
"The results presented herein offer insight into how RNNs and word embeddings naturally tend to structure themselves for text classification. Beyond elucidating the inner machinations of deep NLP, such results can be used to help construct novel network architectures and embeddings.",
"There is, however, much immediate followup work worth pursuing. In particular, the uniqueness of identities, inverses, and multiplicative closure was not addressed in this work, which is critical to better understand the observed emergent algebraic structure. The cause for the hyperparameter stratification of the error in, and a more complete exploration of, commutative closure remains outstanding. Additionally, the cause of the breakdown of the common optimal latent dimension for low embedding dimension is unclear, and the bi-model, linear relationship between the action of words on hidden states and the Euclidean distance between end-to-end word embeddings invites much investigation.",
"As a less critical, but still curious inquiry: is the additive relationship between words, e.g. “king - man + woman = queen,\" preserved, or is it replaced by something new? In light of the Lie group structure words trained on end tasks seem to exhibit, it would not be surprising if a new relationship, such as the Baker-Campbell-Hausdorff formula, applied."
],
[
"The author would like to thank Robin Tully, Dr. John H. Cantrell, and Mark Laczin for providing useful discussions, of both linguistic and mathematical natures, as the work unfolded. Robin in particular provided essential feedback throughout the work, and helped explore the potential use of free groups in computational linguistics at the outset. John furnished many essential conversations that ensured the scientific and mathematical consistency of the experiments, and provided useful insights into the results. Mark prompted the investigation into potential emergent monoid structures since they appear frequently in state machines."
]
],
"section_name": [
"Introduction",
"Summary of results",
"Intuition and motivation",
"Data and methods",
"Hyperparameters and model accuracy",
"Algebraic properties",
"Linear combination search",
"Embedding structure",
"Interpretation of results",
"Proposal for class of recurrent-like networks",
"Proposal for new word embeddings",
"Closing remarks",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"bf774ae56bda98520bf2f391d2cdac452d7de496"
],
"answer": [
{
"evidence": [
"We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 )."
],
"extractive_spans": [],
"free_form_answer": "To classify a text as belonging to one of the ten possible classes.",
"highlighted_evidence": [
"Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4cf33f4a57b1f413770469eff61c75f11e032edd"
],
"answer": [
{
"evidence": [
"First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0",
"where DISPLAYFORM0",
"and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation."
],
"extractive_spans": [],
"free_form_answer": "A network, whose learned functions satisfy a certain equation. The network contains RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state.",
"highlighted_evidence": [
"First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0\n\nwhere DISPLAYFORM0\n\nand where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1fd935fdd540ccac84ee98148411a1bc8e6e4d78"
],
"answer": [
{
"evidence": [
"Second, we propose embedding schemes that explicitly embed words as elements of a Lie group. In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What text classification task is considered?",
"What novel class of recurrent-like networks is proposed?",
"Is there a formal proof that the RNNs form a representation of the group?"
],
"question_id": [
"30b5e5293001f65d2fb9e4d1fdf4dc230e8cf320",
"993b896771c31f3478f28112a7335e7be9d03f21",
"dee116df92f9f92d9a67ac4d30e32822c22158a6"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The simple network trained as a classifier: GRU→Dense→Linear→Softmax. There are 10 nonlinear neurons dedicated to each of the final 10 energies that are combined through a linear layer before softmaxing. This is to capitalize on the universal approximation theorem’s implication that neurons serve as basis functions - i.e. each energy function is determined by 10 basis functions. The hidden dimension n of the GRU, and the word embedding dimension, are hyperparameters that are scanned over.",
"Figure 2: The range of the model accuracy is [50.1%, 89.7%].",
"Figure 3: The % axiomatic error as a function of the word embedding and GRU hidden dimensions. (a) The existence of an identity element for multiple hidden states. Note the log scale. (b) The existence of an inverse word for every word acting on random hidden states. Linear scale. (c) The existence of a third, ‘effective’ word performing the action of two randomly chosen words in succession, acting on random states. Linear scale. (d) The existence of a third word performing the action of the commutation of two randomly chosen words, acting on random states. Nonlinear scale.",
"Figure 4: The % axiomatic error as a function of the word embedding and GRU hidden dimensions. (a) The existence of a third word performing the action of all, ordered words comprising a tweet, acting on the initial state. Linear scale. (b) The existence of a word that reverses the action of the ordered words comprising a tweet that acted on the initial state. Nonlinear scale. (c) The existence of a third word performing the action of two random words co-occurring within a tweet, acting on random states. Linear scale. (d) The existence of an inverse word for every word acting on random hidden states. This is the same as in Fig.(3), and is simply provided for side-by-side comparison.",
"Figure 5: (m,n) = (280, 220). Graphs of % axiomatic error for the satisfied conditions after a linear combination search. The graphs are ordered as they were in Fig.(4)",
"Figure 6: (m,n) = (280, 220). Graphs of % axiomatic error for the unsatisfied conditions after a linear combination search.",
"Figure 7: (m,n) = (280, 220), retrained. Graphs of % axiomatic error for the satisfied conditions after a linear combination search. The graphs are ordered as they were in Fig.(4)",
"Figure 8: (m,n) = (280, 220), retrained. Graphs of % axiomatic error for the unsatisfied conditions after a linear combination search.",
"Figure 9: (m,n) = (180, 220). Graphs of % axiomatic error for the satisfied conditions after a linear combination search. The graphs are ordered as they were in Fig.(4)",
"Figure 10: (m,n) = (180, 220). Graphs of % axiomatic error for the unsatisfied conditions after a linear combination search.",
"Figure 11: (m,n) = (100, 180). Graphs of % axiomatic error for the satisfied conditions after a linear combination search. The graphs are ordered as they were in Fig.(4)",
"Figure 12: (m,n) = (100, 180). Graphs of % axiomatic error for the unsatisfied conditions after a linear combination search.",
"Figure 13: The frequency distribution of the norm of average vectors. There was one instance of a norm for the average of all word vectors, hence the singular spike for its distribution. The other vector distributions were over the average for different individual tweets.",
"Figure 14: Distributions of cosine similarities of vectors with respect to (a) other vectors (b) category average vectors. Averages were taken as they were in Fig.(13).",
"Figure 15: Distributions of the Euclidean distances of vectors to (a) other vectors (b) category average vectors. Averages were taken as they were in Fig.(13).",
"Figure 16: Plots of E with respect to (a) word cosine similarity, cos(θw) (b) distance between words, |∆w|. Eqs.(3.3)-(3.5)"
],
"file": [
"5-Figure1-1.png",
"8-Figure2-1.png",
"10-Figure3-1.png",
"11-Figure4-1.png",
"13-Figure5-1.png",
"13-Figure6-1.png",
"14-Figure7-1.png",
"14-Figure8-1.png",
"15-Figure9-1.png",
"15-Figure10-1.png",
"16-Figure11-1.png",
"16-Figure12-1.png",
"17-Figure13-1.png",
"18-Figure14-1.png",
"19-Figure15-1.png",
"20-Figure16-1.png"
]
} | [
"What text classification task is considered?"
] | [
[
"1803.02839-Data and methods-0"
]
] | [
"To classify a text as belonging to one of the ten possible classes."
] | 315 |
2001.07263 | Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard-300 | It is generally believed that direct sequence-to-sequence (seq2seq) speech recognition models are competitive with hybrid models only when a large amount of data, at least a thousand hours, is available for training. In this paper, we show that state-of-the-art recognition performance can be achieved on the Switchboard-300 database using a single headed attention, LSTM based model. Using a cross-utterance language model, our single-pass speaker independent system reaches 6.4% and 12.5% word error rate (WER) on the Switchboard and CallHome subsets of Hub5'00, without a pronunciation lexicon. While careful regularization and data augmentation are crucial in achieving this level of performance, experiments on Switchboard-2000 show that nothing is more useful than more data. | {
"paragraphs": [
[
"Powerful neural networks have enabled the use of “end-to-end” speech recognition models that directly map a sequence of acoustic features to a sequence of words without conditional independence assumptions. Typical examples are attention based encoder-decoder BIBREF0 and recurrent neural network transducer models BIBREF1. Due to training on full sequences, an utterance corresponds to a single observation from the view point of these models; thus, data sparsity is a general challenge for such approaches, and it is believed that these models are effective only when sufficient training data is available. Indeed, many end-to-end speech recognition papers focus on LibriSpeech, which has 960 hours of training audio. Nevertheless, the best performing systems follow the traditional hybrid approach BIBREF2, outperforming attention based encoder-decoder models BIBREF3, BIBREF4, BIBREF5, BIBREF6, and when less training data is used, the gap between “end-to-end” and hybrid models is more prominent BIBREF3, BIBREF7. Several methods have been proposed to tackle data sparsity and overfitting problems; a detailed list can be found in Sec. SECREF2. Recently, increasingly complex attention mechanisms have been proposed to improve seq2seq model performance, including stacking self and regular attention layers and using multiple attention heads in the encoder and decoder BIBREF4, BIBREF8.",
"We show that consistent application of various regularization techniques brings a simple, single-head LSTM attention based encoder-decoder model to state-of-the-art performance on Switchboard-300, a task where data sparsity is more severe than LibriSpeech. We also note that remarkable performance has been achieved with single-head LSTM models in a recent study on language modeling BIBREF9."
],
[
"In contrast to traditional hybrid models, where even recurrent networks are trained on randomized, aligned chunks of labels and features BIBREF10, BIBREF11, whole sequence models are more prone to memorizing the training samples. In order to improve generalization, many of the methods we investigate introduce additional noise, either directly or indirectly, to stochastic gradient descent (SGD) training to avoid narrow, local optima. The other techniques we study address the highly non-convex nature of training neural networks, ease the optimization process, and speed up convergence.",
"Weight decay adds the $l_2$ norm of the trainable parameters to the loss function, which encourages the weights to stay small unless necessary, and is one of the oldest techniques to improve neural network generalization. As shown in BIBREF12, weight decay can improve generalization by suppressing some of the effects of static noise on the targets.",
"Dropout randomly deactivates neurons with a predefined probability in every training step BIBREF13 to reduce co-adaptation of neurons.",
"DropConnect, which is similar in spirit to dropout, randomly deactivates connections between neurons by temporarily zeroing out weights BIBREF14.",
"Zoneout, which is also inspired by dropout and was especially developed for recurrent models BIBREF15, stochastically forces some hidden units to maintain their previous values. In LSTMs, the method is applied on the cell state or on the recurrent feedback of the output.",
"Label smoothing interpolates the hard label targets with a uniform distribution over targets, and improves generalization in many classification tasks BIBREF16.",
"Batch normalization (BN) accelerates training by standardizing the distribution of each layer's input BIBREF17. In order to reduce the normalization mismatch between training and testing, we modify the original approach by freezing the batch normalization layers in the middle of the training when the magnitude of parameter updates is small. After freezing, the running statistics are not updated, batch statistics are ignored, and BN layers approximately operate as global normalization.",
"Scheduled sampling stochastically uses the token produced by a sequence model instead of the true previous token during training to mitigate the effects of exposure bias BIBREF18.",
"Residual networks address the problem of vanishing and exploding gradients by including skip connections BIBREF19 in the model that force the neural network to learn a residual mapping function using a stack of layers. Optimization of this residual mapping is easier, allowing the use of much deeper structures.",
"Curriculum learning simplifies deep neural network training by presenting training examples in a meaningful order, usually by increasing order of difficulty BIBREF20. In seq2seq models, the input acoustic sequences are frequently sorted in order of increasing length BIBREF21.",
"Speed and tempo perturbation changes the rate of speech, typically by $\\pm $10%, with or without altering the pitch and timbre of the speech signal BIBREF22, BIBREF23. The goal of these methods is to increase the amount of training data for the model.",
"Sequence noise injection adds structured sequence level noise generated from speech utterances to training examples to improve the generalization of seq2seq models BIBREF24. As previously shown, input noise during neural network training encourages convergence to a local optimum with lower curvature, which indicates better generalization BIBREF25.",
"Weight noise adds noise directly to the network parameters to improve generalization BIBREF26. This form of noise can be interpreted as a simplified form of Bayesian inference that optimizes a minimum description length loss BIBREF27.",
"SpecAugment masks blocks of frequency channels and blocks of time steps BIBREF3 and also warps the spectrogram along the time axis to perform data augmentation. It is closely related to BIBREF28."
],
[
"This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32.",
"We extract 80-dimensional log-Mel filterbank features over 25ms frames every 10ms from the input speech signal. The input audio is speed and/or tempo perturbed with 56 probability. Following BIBREF24, sequence noise mixed from up to 4 utterances is injected with 40% probability and 0.3 weight. The filterbank output is mean-and-variance normalized at the speaker level, and first ($\\Delta $) and second ($\\Delta \\Delta $) derivatives are also calculated. The final features presented to the network are also processed through a SpecAugment block that uses the SM policy BIBREF3 with $p=0.3$ and no time warping.",
"The encoder network comprises 8 bidirectional LSTM layers with 1536 nodes per direction per layer BIBREF33, BIBREF34. As shown in Fig. FIGREF1, each LSTM block in the encoder includes a residual connection with a linear transformation that bypasses the LSTM, a 1024-dimensional linear reduction layer on the LSTM output, and batch-normalization (BN) of the block output. A pyramidal structure BIBREF31 in the first two LSTM layers reduces the frame rate by a factor of 4. The final dimension of the encoder output is 256, enforced by a linear bottleneck. We apply 30% dropout to the LSTM outputs and 30% drop-connect to the hidden-to-hidden matrices BIBREF14, BIBREF35. As suggested by BIBREF36, the weight dropout is fixed for a batch of sequences.",
"The attention based decoder model is illustrated in Fig. FIGREF1. The decoder models the sequence of 600 BPE units estimated on characters BIBREF37, where the BPE units are embedded in 256 dimensions. We use additive, location aware attention, without key/value transformations, and the attention is smoothed by 256, 5-dimensional kernels BIBREF38. The decoder block consists of 2 unidirectional LSTM layers: one is a dedicated language-model-like component with 512 nodes that operates only on the embedded predicted symbol sequence, and the other is a 768 unit layer processing acoustic and symbol information. The output of both LSTMs is reduced to 256 dimensions by a linear bottleneck BIBREF39. Fixed sequence-level weight dropout of 15% is applied in the decoder LSTMs, a dropout of 5% is applied to the embeddings, and a dropout of 15% is applied to the decoder LSTM outputs. The second LSTM in the decoder also uses zoneout, where the cell state update is deactivated with 15% probability and the recurrent feedback from the output maintains its previous value with 5% probability.",
"Overall, the model has 280M parameters, of which only 5.4M are in the decoder. Aiming at the best word error rate, this design choice is based on our observation that an external language model has significantly larger effect if the decoder is not over-parametrized BIBREF32. The model is trained for 250 epochs on 32 P100 GPUs in less than 4 days using a PyTorch BIBREF40 implementation of distributed synchronous SGD with up to 32 sequences per GPU per batch. Training uses a learning rate of 0.03 and Nesterov momentum BIBREF41 of 0.9. The weight decay parameter is 4e-6, the label smoothing parameter is 0.35, and teacher forcing is fixed to 0.8 throughout training. In the first 3 epochs the learning rate is warmed up and batch size is gradually increased from 8 to 32 BIBREF42. In the first 35 epochs, the neural network is trained on sequences sorted in ascending order of length of the input. Afterwards, batches are randomized within length buckets, ensuring that a batch always contains sequences with similar length. Weight noise from a normal distribution with mean 0.0 and variance 0.015 is switched on after 70 epochs. After 110 epochs, the updates of sufficient statistics in the batch-normalization layers are turned off, converting them into fixed affine transformations. The learning rate is annealed by 0.9 per epoch after 180 epochs of training, and simultaneously label smoothing is also switched off.",
"The external language model (LM) is built on the BPE segmentation of 24M words from the Switchboard and Fisher corpora. It is trained for 40 epochs using label smoothing of 0.15 in the first 20 epochs. The baseline LM has 57M parameters and consists of 2 unidirectional LSTM layers with 2048 nodes BIBREF43 trained with drop-connect and dropout probabilities of 15%. The embedding layer has 512 nodes, and the output of the last LSTM is projected to 128 dimensions. When the LM is trained and evaluated across utterances, consecutive segments of a single-channel recording are grouped together up to 40 seconds. Perplexities (PPL) are measured at the word level on the concatenation of ground truth transcripts, while the WER is obtained by retaining the LM state of the single-best hypothesis of the preceding utterance.",
"Decoding uses simple beam search with a beam width of 60 hypotheses and no lexical prefix tree constraint BIBREF44. The search performs shallow fusion of the encoder-decoder score, the external language model score, a length normalization term, and a coverage term BIBREF45, BIBREF46, BIBREF47. For more details, please refer to BIBREF32. Hub5'00 is used as a development set to optimize decoding hyperparameters, while Hub5'01 and RT03 are used as final test sets."
],
[
"Our current setup is the result of incremental development. Keeping in mind that several other equally powerful setups probably exist, the focus of the following experiments is to investigate ours around the current optimum."
],
[
"We first investigate the importance of different data processing steps. The s5c Kaldi recipe includes a duplicate filtering step, in which the maximum number of occurrences of utterances with the same content is limited. We measure the impact of duplicate filtering and also the effect of filtering out word fragments and noise tokens from the training transcripts. Since the LM is trained on identically filtered transcripts from Fisher+Switchboard data, word fragment and noise token filters were applied consistently. The results are summarized in Table TABREF5. Deactivating the duplicate filter is never harmful when an external LM is used, and the gains on CallHome can be substantial. Considering performance on the complete Hub5'00 data, the best systems either explicitly handle both word fragments and noise tokens or filter them all out. When an external LM is used, the best results are obtained when word fragment and noise token filters are activated and the duplicate filter is deactivated. This setting is also appealing in cases where the external LM may be trained on text data that will not contain word fragments or noise; thus, the remaining experiments are carried out with this system setting."
],
[
"In a second set of experiments, we characterize the importance of each of the regularization methods described in Sec. SECREF2 for our model performance by switching off one training method at a time without re-optimizing the remaining settings. In these experiments, decoding is performed without an external language model. Curriculum learning is evaluated by either switching to randomized batches after 35 epochs or leaving the sorting on throughout training. We also test the importance of $\\Delta $ and $\\Delta \\Delta $ features BIBREF48. Sorting the results by decreasing number of absolute errors on Hub5'00, Table TABREF7 indicates that each regularization method contributes to the improved WER. SpecAugment is by far the most important method, while using $\\Delta $ and $\\Delta \\Delta $ features or switching off the curriculum learning in the later stage of training have marginal but positive effects. Other direct input level perturbation steps (speed/tempo perturbation and sequence noise injection) are also key techniques that can be found in the upper half of the table. If we compare the worst and baseline models, we find that the relative performance difference between them is nearly unchanged by including the external LM in decoding. Without the LM, the gap is 18% relative, while with the LM the gap is 17% relative. This clearly underlines the importance of the regularization techniques."
],
[
"The following experiments summarize our optimization of the LM. Compared to our previous LM BIBREF24, we measure better perplexity and WER if no bottleneck is used before the softmax layer (rows 1 and 3 in Table TABREF9). Increasing the model capacity to 122M parameters results in a significant gain in PPL only after the dropout rates are tuned (rows 3, 5 and 6). Similar to BIBREF49, BIBREF50, significant PPL gain is observed if the LM was trained across utterances. However, this PPL improvement does not translate into reduced WER with a bigger model when cross utterance modeling is used (rows 4 and 7). Thus, in all other experiments we use the smaller, 57M-parameter model."
],
[
"A 280M-parameter model may be larger than is practical in many applications. Thus, we also conduct experiments to see if this model size is necessary for reasonable ASR performance. Models are trained without changing the training configuration, except that the size or number of LSTM layers is reduced. As Table TABREF11 shows, although our smallest attention based model achieves reasonable results on this task, a significant loss is indeed observed with decreasing model size, especially on CallHome. Nevertheless, an external language model reduces the performance gap. A small, 57M-parameter model together with a similar size language model is only 5% relative worse than our largest model. We note that this model already outperforms the best published attention based seq2seq model BIBREF3, with roughly 66% fewer parameters.",
"Additional experiments are carried out to characterize the search and modeling errors in decoding. The results of tuning the beam size and keeping the other search hyperparameters unchanged are shown in Fig. FIGREF12. “Small” denotes the 57M model, while “large” denotes the 280M model. When greedy search (beam 1) is used, the external language model increases WER, an effect that might be mitigated with re-optimized hyperparameters. Nevertheless, if a beam of at least 2 hypotheses is used, the positive effect of the language model is clear. We also observe that without the language model the search saturates much earlier, around beam 8, fluctuating within only a few absolute errors afterwards. On the contrary, decoding with the language model, we measure consistent but small gains with larger beams. The minimum number of word errors was measured with a relatively large beam of 240. The figure also shows that the effect of a cross-utterance language model grows with larger beams. Lastly, if the model is trained on 2000 hours of speech data (see next section), the extremely fast greedy decoding gives remarkably good performance. Although the importance of beam search decreases with an increased amount of training data, we still measure 10% relative degradation compared to a system with a cross-utterance LM and wide (240) beam search."
],
[
"As a contrast to our best results on Switchboard-300, we also train a seq2seq model on the 2000-hour Switchboard+Fisher data. This model consists of 10 encoder layers, and is trained for only 50 epochs. Our overall results on the Hub5'00 and other evaluation sets are summarized in Table TABREF14. The results in Fig. FIGREF12 and Table TABREF14 show that adding more training data greatly improves the system, by around 30% relative in some cases. For comparison with others, the 2000-hour system reaches 8.7% and 7.4% WER on rt02 and rt04. We observe that the regularization techniques, which are extremely important on the 300h setup, are still beneficial but have a significantly smaller effect.",
""
],
[
"For comparison with results in the literature we refer to the Switchboard-300 results in BIBREF3, BIBREF7, BIBREF51, BIBREF52 and the Switchboard-2000 results in BIBREF50, BIBREF51, BIBREF53, BIBREF54, BIBREF55, BIBREF56. Our 300-hour model not only outperforms the previous best attention based encoder-decoder model BIBREF3 by a large margin, it also surpasses the best hybrid systems with multiple LMs BIBREF7. Our result on Switchboard-2000 is also better than any single system results reported to date, and reaches the performance of the best system combinations.",
""
],
[
"We presented an attention based encoder-decoder setup which achieves state-of-the-art performance on Switchboard-300. A rather simple model built from LSTM layers and a decoder with a single-headed attention mechanism outperforms the standard hybrid approach. This is particularly remarkable given that in our model neither a pronunciation lexicon nor a speech model with explicit hidden state representations is needed. We also demonstrated that excellent results are possible with smaller models and with practically search-free, greedy decoding. The best results were achieved with a speaker independent model in a single decoding pass, using a minimalistic search algorithm, and without any attention mechanism in the language model. Thus, we believe that further improvements are still possible if we apply a more complicated sequence-level training criterion and speaker adaptation.",
""
]
],
"section_name": [
"Introduction",
"Methods to improve seq2seq models",
"Experimental setup",
"Experimental results",
"Experimental results ::: Effect of data preparation",
"Experimental results ::: Ablation study",
"Experimental results ::: Optimizing the language model",
"Experimental results ::: Effect of beam size and number of parameters",
"Experimental results ::: Experiments on Switchboard-2000",
"Comparison with the literature",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"1fea4fe9abca58bfbaf8fa9d73e27e286350f040"
],
"answer": [
{
"evidence": [
"This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32.",
"As a contrast to our best results on Switchboard-300, we also train a seq2seq model on the 2000-hour Switchboard+Fisher data. This model consists of 10 encoder layers, and is trained for only 50 epochs. Our overall results on the Hub5'00 and other evaluation sets are summarized in Table TABREF14. The results in Fig. FIGREF12 and Table TABREF14 show that adding more training data greatly improves the system, by around 30% relative in some cases. For comparison with others, the 2000-hour system reaches 8.7% and 7.4% WER on rt02 and rt04. We observe that the regularization techniques, which are extremely important on the 300h setup, are still beneficial but have a significantly smaller effect."
],
"extractive_spans": [],
"free_form_answer": "Switchboard-2000 contains 1700 more hours of speech data.",
"highlighted_evidence": [
"This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task.",
"As a contrast to our best results on Switchboard-300, we also train a seq2seq model on the 2000-hour Switchboard+Fisher data. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"a9f4cd475cd9986d42837be38f4f6876d4a02175"
],
"answer": [
{
"evidence": [
"This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32."
],
"extractive_spans": [
"300-hour English conversational speech"
],
"free_form_answer": "",
"highlighted_evidence": [
"This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How much bigger is Switchboard-2000 than Switchboard-300 database?",
"How big is Switchboard-300 database?"
],
"question_id": [
"94bee0c58976b58b4fef9e0adf6856fe917232e5",
"7efbe48e84894971d7cd307faf5f6dae9d38da31"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: (a) Building block of the encoder; (b) attention based decoder network used in the experiments.",
"Table 1: Effect of data preparation steps on WER [%] measured on Hub5’00. The second row corresponds to the Kaldi s5c recipe.",
"Table 2: Ablation study on the final training recipe.",
"Table 3: Optimizing dropout (dropo.), DropConnect (dropc.), layer and bottleneck (bn) size for LSTM LM, optionally modeling across utterances (x-utt.)",
"Table 5: Detailed results with the best performing systems.",
"Table 4: Reducing the model size"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table5-1.png",
"4-Table4-1.png"
]
} | [
"How much bigger is Switchboard-2000 than Switchboard-300 database?"
] | [
[
"2001.07263-Experimental setup-0",
"2001.07263-Experimental results ::: Experiments on Switchboard-2000-0"
]
] | [
"Switchboard-2000 contains 1700 more hours of speech data."
] | 316 |
1905.11037 | Harry Potter and the Action Prediction Challenge from Natural Language | We explore the challenge of action prediction from textual descriptions of scenes, a testbed to approximate whether text inference can be used to predict upcoming actions. As a case of study, we consider the world of the Harry Potter fantasy novels and inferring what spell will be cast next given a fragment of a story. Spells act as keywords that abstract actions (e.g. 'Alohomora' to open a door) and denote a response to the environment. This idea is used to automatically build HPAC, a corpus containing 82 836 samples and 85 actions. We then evaluate different baselines. Among the tested models, an LSTM-based approach obtains the best performance for frequent actions and large scene descriptions, but approaches such as logistic regression behave well on infrequent actions. | {
"paragraphs": [
[
"Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and challenges. For instance, in cloze-form tasks BIBREF7 , BIBREF8 , the goal is to predict the missing word given a short context. weston2015towards presented baBI, a set of proxy tasks for reading comprenhension. In the SQuAD corpus BIBREF9 , the aim is to answer questions given a Wikipedia passage. 2017arXiv171207040K introduce NarrativeQA, where answering the questions requires to process entire stories. In a related line, 2017arXiv171011601F use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: ‘who committed the crime?’.",
"In an alternative line of work, script induction BIBREF10 has been also a useful approach to evaluate inference and semantic capabilities of nlp systems. Here, a model processes a document to infer new sequences that reflect events that are statistically probable (e.g. go to a restaurant, be seated, check the menu, ...). For example, chambers2008unsupervised introduce narrative event chains, a representation of structured knowledge of a set of events occurring around a protagonist. They then propose a method to learn statistical scripts, and also introduce two different evaluation strategies. With a related aim, Pichotta2014Statistical propose a multi-event representation of statistical scripts to be able to consider multiple entities. These same authors BIBREF11 have also studied the abilities of recurrent neural networks for learning scripts, generating upcoming events given a raw sequence of tokens, using bleu BIBREF12 for evaluation.",
"This paper explores instead a new task: action prediction from natural language descriptions of scenes. The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next."
],
[
"To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur. Data should come from different users, to approximate a real natural language task. Also, it needs to be annotated, determining that a piece of text ends up triggering an action. These tasks are however time consuming, as they require annotators to read vast amounts of large texts. In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them BIBREF7 , BIBREF13 ."
],
[
"We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction. The domain has a set of desirable properties to evaluate reading comprehension systems, which we now review.",
"Harry Potter novels define a variety of spells. These are keywords cast by witches and wizards to achieve purposes, such as turning on a light (‘Lumos’), unlocking a door (‘Alohomora’) or killing (‘Avada Kedavra’). They abstract complex and non-ambiguous actions. Their use also makes it possible to build an automatic and self-annotated corpus for action prediction. The moment a spell occurs in a text represents a response to the environment, and hence, it can be used to label the preceding text fragment as a scene description that ends up triggering that action. Table 1 illustrates it with some examples from the original books.",
"This makes it possible to consider texts from the magic world of Harry Potter as the domain for the action prediction corpus, and the spells as the set of eligible actions. Determining the length of the preceding context, namely snippet, that has to be considered as the scene description is however not trivial. This paper considers experiments (§ \"Experiments\" ) using snippets with the 32, 64, 96 and 128 previous tokens to an action. We provide the needed scripts to rebuild the corpus using arbitrary lengths."
],
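The snippet-labelling idea described above can be sketched in a few lines. This is a minimal illustration rather than the authors' released scripts: the spell set, the whitespace tokenization and the snippet length below are placeholder assumptions, and the real corpus merges multi-word spells into single tokens beforehand.

```python
# Minimal sketch of building (snippet, action) samples from tokenized fan-fiction text.
# SPELLS is a small placeholder set; the real corpus uses the full spell inventory.
SPELLS = {"lumos", "alohomora", "expelliarmus"}

def extract_samples(tokens, snippet_len=128):
    """Return (snippet, spell) pairs: the snippet_len tokens preceding each spell occurrence."""
    samples = []
    for i, tok in enumerate(tokens):
        if tok.lower() in SPELLS:
            snippet = tokens[max(0, i - snippet_len):i]
            if snippet:                      # skip occurrences with no preceding context
                samples.append((snippet, tok.lower()))
    return samples

tokens = "harry pointed his wand at the lock and whispered alohomora".split()
print(extract_samples(tokens, snippet_len=32))
```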
[
"The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model. However, the amount of available fan fiction for this saga allows to create a large corpus. For hpac, we used fan fiction (and only fan fiction texts) from https://www.fanfiction.net/book/Harry-Potter/ and a version of the crawler by milli2016beyond. We collected Harry Potter stories written in English and marked with the status ‘completed’. From these we extracted a total of 82 836 spell occurrences, that we used to obtain the scene descriptions. Table 2 details the statistics of the corpus (see also Appendix \"Corpus distribution\" ). Note that similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset.",
"We tokenized the samples with BIBREF14 and merged the occurrences of multi-word spells into a single token."
],
[
"This work addresses the task as a classification problem, and in particular as a sequence to label classification problem. For this reason, we rely on standard models used for this type of task: multinomial logistic regression, a multi-layered perceptron, convolutional neural networks and long short-term memory networks. We outline the essentials of each of these models, but will treat them as black boxes. In a related line, kaushik2018much discuss the need of providing rigorous baselines that help better understand the improvement coming from future and complex models, and also the need of not demanding architectural novelty when introducing new datasets.",
"Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.",
"The source code for the models can be found in the GitHub repository mentioned above."
],
[
"The input sentence $w_{1:n}$ is encoded as a one-hot vector, $\\mathbf {v}$ (total occurrence weighting scheme).",
"Let mlr $_\\theta (\\mathbf {v})$ be an abstraction of a multinomial logistic regression parametrized by $\\theta $ , the output for an input $\\mathbf {v}$ is computed as the $\\operatornamewithlimits{arg\\,max}_{a \\in A}$ $P(y=a|\\mathbf {v})$ , where $P(y=a|\\mathbf {v})$ is a $softmax$ function, i.e, $P(y=a|\\mathbf {v}) = \\frac{e^{W_{a} \\cdot \\mathbf {v}}}{\\sum _{a^{\\prime }}^{A} e^{W_{a^{\\prime }} \\cdot \\mathbf {v}}}$ .",
"We use one hidden layer with a rectifier activation function ( $relu(x)$ = $max(0,x)$ ). The output is computed as mlp $_\\theta (\\mathbf {v})$ = $softmax(W_2 \\cdot relu(W \\cdot \\mathbf {v} + \\mathbf {b}) + \\mathbf {b_2})$ ."
],
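A compact way to realize the two vector-space baselines above is sketched with scikit-learn below. The library choice, the toy data and the hidden-layer size are illustrative assumptions and do not reproduce the paper's exact features or hyperparameters.

```python
# Hedged sketch of the MLR and MLP baselines over count features of a snippet.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

snippets = ["he pointed his wand at the locked door", "a curse flew toward him"]
actions = ["alohomora", "protego"]             # toy labels standing in for the 85 actions

vec = CountVectorizer()                        # total-occurrence weighting of the snippet
X = vec.fit_transform(snippets)

mlr = LogisticRegression(max_iter=1000).fit(X, actions)          # softmax-style classifier
mlp = MLPClassifier(hidden_layer_sizes=(128,), activation="relu",
                    max_iter=500).fit(X, actions)                # one relu hidden layer

print(mlr.predict(vec.transform(["she tapped the locked door with her wand"])))
```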
[
"The input sequence is represented as a sequence of word embeddings, $\\mathbf {w}_{1:n}$ , where $\\mathbf {w}_i$ is a concatenation of an internal embedding learned during the training process for the word $w_i$ , and a pre-trained embedding extracted from GloVe BIBREF15 , that is further fine-tuned.",
" BIBREF5 : The output for an element $\\mathbf {w}_i$ also depends on the output of $\\mathbf {w}_{i-1}$ . The lstm $_\\theta (\\mathbf {w}_{1:n})$ takes as input a sequence of word embeddings and produces a sequence of hidden outputs, $\\mathbf {h}_{1:n}$ ( $\\mathbf {h}_{i}$ size set to 128). The last output of the lstm $_\\theta $ , $\\mathbf {h}_n$ , is fed to a mlp $_\\theta $ .",
" BIBREF16 , BIBREF17 . It captures local properties over continuous slices of text by applying a convolution layer made of different filters. We use a wide convolution, with a window slice size of length 3 and 250 different filters. The convolutional layer uses a $\\mathit {relu}$ as the activation function. The output is fed to a max pooling layer, whose output vector is passed again as input to a mlp $_\\theta $ ."
],
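The LSTM baseline can be sketched in PyTorch roughly as follows. The 128-dimensional hidden state and the final MLP follow the description above, while the vocabulary size, the embedding dimension and the omission of the concatenated fine-tuned GloVe vectors are simplifying assumptions.

```python
# Minimal PyTorch sketch of the LSTM baseline: embed tokens, run an LSTM,
# and feed the last hidden state to an MLP over the action vocabulary.
import torch
import torch.nn as nn

class LSTMActionPredictor(nn.Module):
    def __init__(self, vocab_size, n_actions, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # h: (batch, seq_len, hidden)
        return self.out(h[:, -1])              # logits over actions from the last step

model = LSTMActionPredictor(vocab_size=5000, n_actions=85)
logits = model(torch.randint(0, 5000, (2, 32)))   # two snippets of 32 token ids
print(logits.shape)                                # torch.Size([2, 85])
```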
[
"We explored action prediction from written stories. We first introduced a corpus set in the world of Harry Potter's literature. Spells in these novels act as keywords that abstract actions. This idea was used to label a collection of fan fiction. We then evaluated standard nlp approaches, from logistic regression to sequential models such as lstms. The latter performed better in general, although vanilla models achieved a higher performance for actions that occurred a few times in the training set. An analysis over the output of the lstm approach also revealed difficulties to discriminate among semantically related actions.",
"The challenge here proposed corresponded to a fictional domain. A future line of work we are interested in is to test whether the knowledge learned with this dataset could be transferred to real-word actions (i.e. real-domain setups), or if such transfer is not possible and a model needs to be trained from scratch."
],
[
"This work has received support from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01), and from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150)."
],
[
"Table 6 summarizes the label distribution across the training, development and test sets of the hpac corpus."
]
],
"section_name": [
"Introduction",
"HPAC: The Harry Potter's Action prediction Corpus",
"Domain motivation",
"Data crawling",
"Models",
"Machine learning models",
"Sequential models",
"Conclusion",
"Acknowlegments",
"Corpus distribution"
]
} | {
"answers": [
{
"annotation_id": [
"622d720ec8d7a8da26e700b9bb84a0cbe2c97629"
],
"answer": [
{
"evidence": [
"Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty."
],
"extractive_spans": [],
"free_form_answer": "1. there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.\n2. Macro F1 = 14.6 (MLR, length 96 snippet)\nWeighted F1 = 31.1 (LSTM, length 128 snippet)",
"highlighted_evidence": [
"Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"6632734db0c9a2c15798fba066d7b522a263cb08"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"207cc167334925e5da9b2f786803c57c89eb87f5"
],
"answer": [
{
"evidence": [
"This paper explores instead a new task: action prediction from natural language descriptions of scenes. The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"e753711dcd04520b734f48a1bcdb43fb94ab3acf"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Why do they think this task is hard? What is the baseline performance?",
"Isn't simple word association enough to predict the next spell?",
"Do they literally just treat this as \"predict the next spell that appears in the text\"?",
"How well does a simple bag-of-words baseline do?"
],
"question_id": [
"b9025c39838ccc2a79c545bec4a676f7cc4600eb",
"be6971827707afcd13af3085d0a775a0bd61c5dd",
"19608e727b527562b750949e41e763908566b58e",
"0428e06f0550e1063a64d181210795053a8e6436"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 2: Corpus statistics: s is the length of the snippet.",
"Table 4: Averaged recall at k over 5 runs.",
"Table 3: Macro and weighted F-scores over 5 runs.",
"Table 5: Performance on frequent (those that occur above the average) and infrequent actions.",
"Table 6: Label distribution for the HPAC corpus"
],
"file": [
"3-Table2-1.png",
"4-Table4-1.png",
"4-Table3-1.png",
"4-Table5-1.png",
"7-Table6-1.png"
]
} | [
"Why do they think this task is hard? What is the baseline performance?"
] | [
[
"1905.11037-Models-1"
]
] | [
"1. there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.\n2. Macro F1 = 14.6 (MLR, length 96 snippet)\nWeighted F1 = 31.1 (LSTM, length 128 snippet)"
] | 319 |
1710.10609 | Finding Dominant User Utterances And System Responses in Conversations | There are several dialog frameworks which allow manual specification of intents and rule based dialog flow. The rule based framework provides good control to dialog designers at the expense of being more time consuming and laborious. The job of a dialog designer can be reduced if we could identify pairs of user intents and corresponding responses automatically from prior conversations between users and agents. In this paper we propose an approach to find these frequent user utterances (which serve as examples for intents) and corresponding agent responses. We propose a novel SimCluster algorithm that extends the standard K-means algorithm to simultaneously cluster user utterances and agent utterances by taking their adjacency information into account. The method also aligns these clusters to provide pairs of intents and response groups. We compare our results with those produced by using simple K-means clustering on a real dataset and observe up to 10% absolute improvement in F1-scores. Through our experiments on a synthetic dataset, we show that our algorithm gains more advantage over the K-means algorithm when the data has large variance. | {
"paragraphs": [
[
"There are several existing works that focus on modelling conversation using prior human to human conversational data BIBREF0 , BIBREF1 , BIBREF2 . BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in an end to end manner BIBREF4 , BIBREF5 . Memory networks have been used by Bordes et al Bor16 to model goal based dialog conversations. More recently, deep reinforcement learning models have been used for generating interactive and coherent dialogs BIBREF6 and negotiation dialogs BIBREF7 .",
"Industry on the other hand has focused on building frameworks that allow manual specification of dialog models such as api.ai, Watson Conversational Services, and Microsoft Bot framework. These frameworks provide ways to specify intents, and a dialog flow. The user utterances are mapped to intents that are passed to a dialog flow manager. The dialog manager generates a response and updates the dialog state. See Figure FIGREF4 for an example of some intents and a dialog flow in a technical support domain. The dialog flow shows that when a user expresses an intent of # laptop_heat, then the system should respond with an utterance “Could you let me know the serial number of your machine ”. The designer needs to specify intents (for example # laptop_heat, # email_not_opening) and also provide corresponding system responses in the dialog flow. This way of specifying a dialog model using intents and corresponding system responses manually is more popular in industry than a data driven approach as it makes dialog model easy to interpret and debug as well as provides a better control to a dialog designer. However, this is very time consuming and laborious and thus involves huge costs.",
"One approach to reduce the task of a dialog designer is to provide her with frequent user intents and possible corresponding system responses in a given domain. This can be done by analysing prior human to human conversations in the domain. Figure FIGREF5 (a) provides some example conversations in the technical support domain between users and agents.",
"In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here each cluster would correspond to a new intent and each utterance in the cluster would correspond to an example for the intent. Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses. As an example, consider agent utterances A.2 in box A and A.2 in box C in Figure FIGREF5 (a). The utterances “Which operating system do you use?\" and “What OS is installed in your machine\" have no syntactic similarity and therefore may not be grouped together. However the fact that these utterances are adjacent to the similar user utterances “I am unable to start notes email client\" and “Unable to start my email client\" provides some evidence that the agent utterances might be similar. Similarly the user utterances “My system keeps getting rebooted\" and “Machine is booting time and again\" ( box B and D in Figure FIGREF5 (a))- that are syntactically not similar - could be grouped together since the adjacent agent utterances, “Is your machine heating up?\" and “Is the machine heating?\" are similar.",
"Joint clustering of user utterances and agent utterances allow us to align the user utterance clusters with agent utterance clusters. Figure FIGREF5 (b) shows some examples of user utterance clusters and agent utterance clusters along with their alignments. Note that the user utterance clusters can be used by a dialog designer to specify intents, the agent utterance clusters can be used to create system responses and their alignment can be used to create part of the dialog flow.",
"We propose two ways to take adjacency information into account. Firstly we propose a method called SimCluster for jointly or simultaneously clustering user utterances and agent utterances. SimCluster extends the K-means clustering method by incorporating additional penalty terms in the objective function that try to align the clusters together (described in Section SECREF3 ). The algorithm creates initial user utterance clusters as well as agent utterance clusters and then use bi-partite matching to get the best alignment across these clusters. Minimizing the objective function pushes the cluster centroids to move towards the centroids of the aligned clusters. The process implicitly ensures that the similarity of adjacent agent utterances affect the grouping of user utterances and conversely similarity of adjacent user utterances affect the grouping of agent utterances. In our second approach we use the information about neighbouring utterances for creating the vector representation of an utterance. For this we train a sequence to sequence model BIBREF8 to create the vectors (described in Section SECREF5 ).",
"Our experiments described in section SECREF5 show that we achieve upto 10% absolute improvement in F1 scores over standard K-means using SimCluster. Also we observe that clustering of customer utterances gains significantly by using the adjacency information of agent utterances whereas the gain in clustering quality of agent utterances is moderate. This is because the agent utterances typically follow similar syntactic constructs whereas customer utterances are more varied. Considering the agent utterances into account while clustering users utterances is thus helpful. The organization of the rest of the paper is as follows. In Section SECREF2 we describe the related work. In Section SECREF3 we describe our problem formulation for clustering and the associated algorithm. Finally in sections SECREF4 and SECREF5 we discuss our experiments on synthetic and real datasets respectively."
],
[
"The notion of adjacency pairs was introduced by Sacks et al SSE74 to formalize the structure of a dialog. Adjacency pairs have been used to analyze the semantics of the dialog in computational linguistics community BIBREF9 . Clustering has been used for different tasks related to conversation. BIBREF10 considers the task of discovering dialog acts by clustering the raw utterances. We aim to obtain the frequent adjacency pairs through clustering.",
"There have been several works regarding extensions of clustering to different scenarios such as:-"
],
[
"In this section we describe our approach SimCluster that performs clustering in the two domains simultaneously and ensures that the generated clusters can be aligned with each other. We will describe the model in section SECREF9 and the algorithm in Section SECREF11 ."
],
[
"We consider a problem setting where we are given a collection of pairs of consecutive utterances, with vector representations INLINEFORM0 where INLINEFORM1 s are in speaker 1's domain and INLINEFORM2 s are in speaker 2's domain. We need to simultaneously cluster the utterances in their respective domains to minimize the variations within each domain and also ensure that the clusters for both domains are close together.",
"We denote the clusters for speaker 1's domain by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 .",
"We denote the clusters for second speaker by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . The usual energy function has the terms for distance of points from their corresponding cluster centroids. To be able to ensure that the clusters in each domain are similar, we also consider an alignment between the centroids of the two domains. Since the semantic representations in the two domains are not comparable we consider a notion of induced centroids.",
"We define the induced centroids INLINEFORM0 as the arithmetic means of the points INLINEFORM1 s such that INLINEFORM2 's have the same cluster assigned to them. Similarly, we define INLINEFORM3 as the arithmetic means of INLINEFORM4 s such that INLINEFORM5 s have the same cluster assigned to them. More formally, we define these induced centroids as:- INLINEFORM6 ",
"and INLINEFORM0 ",
"The alignment between these clusters given by the function INLINEFORM0 , which is a bijective mapping from the cluster indices in speaker 1's domain to those in speaker 2's domain. Though there can be several choices for this alignment function, we consider this alignment to be a matching which maximizes the sum of number of common indices in the aligned clusters. More formally we define INLINEFORM1 ",
"Then the matching INLINEFORM0 is defined to be the bijective function which maximizes INLINEFORM1 . We consider a term in the cost function corresponding to the sum of distances between the original centroids and the matched induced centroids. Our overall cost function is now given by:- INLINEFORM2 ",
"We explain the above definition via an example. Consider the clusters shown in Figure FIGREF10 . Here the INLINEFORM0 would match INLINEFORM1 to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 and INLINEFORM5 to INLINEFORM6 , giving a match score of 6. Since INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are present in the cluster INLINEFORM10 , INLINEFORM11 is given by INLINEFORM12 . Similarly INLINEFORM13 ",
" In a similar manner, INLINEFORM0 s can also be defined. Now the alignment terms are given by:- INLINEFORM1 ",
" "
],
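The exact energy function in this subsection is elided by the INLINEFORM placeholders. One plausible way to write an objective with the described structure (a K-means term in each domain plus λ-weighted alignment terms between original centroids and matched induced centroids) is the sketch below; it is a reconstruction for orientation only and may differ in detail from the paper's formula.

```latex
J = \sum_{i} \lVert q_i - \mu_{c_i} \rVert^2
  + \sum_{i} \lVert a_i - \nu_{c'_i} \rVert^2
  + \lambda \Big( \sum_{j} \lVert \mu_j - \tilde{\mu}_{\sigma(j)} \rVert^2
  + \sum_{j} \lVert \nu_{\sigma(j)} - \tilde{\nu}_{j} \rVert^2 \Big)
```

Here $q_i, a_i$ denote the paired utterance vectors, $\mu_j$ and $\nu_{j'}$ the centroids in the two domains, $\tilde{\mu}, \tilde{\nu}$ the induced centroids, and $\sigma$ the cluster matching.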
[
"[] SimCluster [1] SimClusterInput: INLINEFORM0 ,k (No. of cluster) Output: A cluster assignment INLINEFORM1 for INLINEFORM2 s and a cluster assignment INLINEFORM3 for INLINEFORM4 s Initialize a set of centroids INLINEFORM5 , and INLINEFORM6 Perform simple clustering for a few iterations For each i, compute INLINEFORM7 as the index j among 1 to k which minimizes INLINEFORM8 . Similarly , compute INLINEFORM9 as the index j' among 1 to k which minimizes INLINEFORM10 . Update the centroids, INLINEFORM11 and INLINEFORM12 as:- INLINEFORM13 ",
"and INLINEFORM0 ",
" Perform a Hungarian matching between the cluster indices in the two domains with weights",
"N(j,j') on edges from index j to index j'. convergence To minimize the above energy term we adopt an approach similar to Lloyd's clustering algorithm Llo82 . We assume that we are given a set of initial seeds for the cluster centroids INLINEFORM0 and INLINEFORM1 . We repeat the following steps iteratively:-",
"Minimize the energy with respect to cluster assignment keeping centroids unchanged. As in standard K-means algorithm, this is achieved by updating the cluster assignment, INLINEFORM0 for each index i to be the cluster index j which minimizes INLINEFORM1 . Correspondingly for INLINEFORM2 , we pick the cluster index j' which minimizes INLINEFORM3 .",
" Minimize the energy with respect to the centroids keeping cluster assignment unchanged. To achieve this step we need to minimize the energy function with respect to the centroids INLINEFORM0 and INLINEFORM1 . This is achieved by setting INLINEFORM2 for each j and INLINEFORM3 for each j.",
"Setting INLINEFORM0 , we obtain INLINEFORM1 ",
"or equivalently INLINEFORM0 ",
" Similarly, setting INLINEFORM0 , we obtain INLINEFORM1 ",
"Finally we update the matching between the clusters. To do so, we need to find a bipartite matching match on the cluster indices so as to maximize INLINEFORM0 . We use Hungarian algorithm BIBREF13 to perform the same i.e. we define a bipartite graph with vertices consisting of cluster indices in the two domains. There is an edge from vertex representing cluster indices j (in domain 1) and j' in domain 2, with weight N(j,j'). We find a maximum weight bipartite matching in this graph.",
"Similar to Lloyd's algorithm, each step of the above algorithm decreases the cost function. This ensures that the algorithm achieves a local minima of the cost function if it converges. See Algorithm SECREF11 for a formal description of the approach. The centroid update step of the above algorithm also has an intuitive explanation i.e. we are slightly moving away the centroid towards the matched induced centroid. This is consistent with our goal of aligning the clusters together in the two domains."
],
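A simplified numerical sketch of the alternating procedure is given below. The assignment and Hungarian-matching steps follow the description above; the centroid update uses a plain convex combination of the cluster mean and the matched induced centroid as an illustration rather than the paper's exact closed-form update, and numpy/scipy are assumed tools.

```python
# SimCluster-style sketch: alternate (1) nearest-centroid assignment in each domain,
# (2) Hungarian matching of cluster indices by pair co-occurrence, and (3) centroid
# updates pulled toward the matched induced centroid (illustrative weighting).
import numpy as np
from scipy.optimize import linear_sum_assignment

def simcluster(Q, A, k, lam=0.5, iters=30, seed=0):
    """Q, A: (n, d1) and (n, d2) arrays of paired utterance vectors."""
    rng = np.random.default_rng(seed)
    mu = Q[rng.choice(len(Q), size=k, replace=False)].astype(float)   # domain-1 centroids
    nu = A[rng.choice(len(A), size=k, replace=False)].astype(float)   # domain-2 centroids
    c1 = c2 = match = None
    for _ in range(iters):
        c1 = np.argmin(((Q[:, None, :] - mu) ** 2).sum(-1), axis=1)   # assignment step
        c2 = np.argmin(((A[:, None, :] - nu) ** 2).sum(-1), axis=1)
        N = np.zeros((k, k))
        np.add.at(N, (c1, c2), 1)                 # N[j, j'] = number of co-assigned pairs
        _, match = linear_sum_assignment(-N)      # maximum-weight bipartite matching
        for j in range(k):
            j2 = int(match[j])
            own1 = Q[c1 == j].mean(0) if (c1 == j).any() else mu[j]
            own2 = A[c2 == j2].mean(0) if (c2 == j2).any() else nu[j2]
            ind1 = Q[c2 == j2].mean(0) if (c2 == j2).any() else own1  # induced centroid, domain 1
            ind2 = A[c1 == j].mean(0) if (c1 == j).any() else own2    # induced centroid, domain 2
            mu[j] = (1 - lam) * own1 + lam * ind1   # pull toward the matched induced centroid
            nu[j2] = (1 - lam) * own2 + lam * ind2
    return c1, c2, match
```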
[
"The algorithm above maintains a mapping between the clusters in each speaker's domain. This mapping serves to give us the alignment between the clusters required to provide a corresponding response for a given user intent."
],
[
"We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data",
"Pick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2 ",
"iter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 gaussians with centroids on a INLINEFORM8 grid).",
"We compared the results with simple K-means clustering with k set to 9. For each of these, the initialization of means was done using INLINEFORM0 sampling approach BIBREF14 ."
],
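The generative process above can be simulated roughly as follows. The grid spacing, the offset between the two domains and the default variance are arbitrary illustrative choices, since those specifics are not fixed in the text.

```python
# Sketch of the synthetic generative process: k = 9 paired 2-D Gaussians whose means sit
# on a 3x3 grid in each domain, with one shared class label per (q, a) pair.
import numpy as np

def generate(num_samples=10000, var=0.5, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.array([(x, y) for x in range(3) for y in range(3)], dtype=float)
    means_q = 3.0 * grid                      # domain-1 means (spacing is an assumption)
    means_a = 3.0 * grid + 100.0              # domain-2 means (offset is an assumption)
    cov = var * np.eye(2)                     # variance per dimension (varied in the paper)
    labels = rng.integers(0, 9, size=num_samples)
    Q = np.stack([rng.multivariate_normal(means_q[c], cov) for c in labels])
    A = np.stack([rng.multivariate_normal(means_a[c], cov) for c in labels])
    return Q, A, labels

Q, A, y = generate()
print(Q.shape, A.shape)    # (10000, 2) (10000, 2)
```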
[
"To evaluate the clusters we computed the following metrics",
"ARI (Adjusted Rand Index): Standard Rand Index is a metric used to check the clustering quality against a given standard set of clusters by comparing the pairwise clustering decisions. It is defined as INLINEFORM0 , where a is the number of true positive pairs, b is the number of true negative pairs, c is the number of false positive pairs and d is the number of false negative pairs. Adjusted rand index corrects the standard rand index for chance and is defined as INLINEFORM1 BIBREF15 .",
"We compute ARI score for both the source clusters as well as the target clusters.",
"F1 scores: We also report F1 scores for the pairwise clustering decisions. In the above notation we considered the pair-precision as INLINEFORM0 and recall as INLINEFORM1 . The F1 measure is the Harmonic mean given as INLINEFORM2 .",
"We used the gaussian index from which an utterance pair was generated as the ground truth label, which served to provide ground truth clusters for computation of the above evaluation metrics. Table TABREF15 shows a comparison of the results on SimCluster versus K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204.",
"We also performed experiments to see how the performance of SimCluster is affected by the variance in the cluster (controlled by the generative process in Algorithm SECREF11 ). Intuitively we expect SimCluster to obtain an advantage over simple K-means when variance is larger. This is because at larger variance, the data points are more likely to be generated away from the centroid due to which they might be clustered incorrectly with the points from neighbouring cluster. However if the corresponding point from the other domain is generated closer to the centroid, it might help in clustering the given data point correctly. We performed these experiments with points generated from Algorithm SECREF11 at differet values of variance. We generated the points with centroids located on a grid of size INLINEFORM0 in each domain. The value of k was set to 9. The experiment was repeated for each value of variance between 0.1 to 1.0 in the intervals of 0.1. Figures FIGREF22 and FIGREF23 show the percentage improvement on ARI score and F1 score respectively achieved by SimCluster (over K-means) versus variance."
],
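The ARI and pairwise F1 measures used in this evaluation can be computed as in the sketch below; scikit-learn's adjusted_rand_score is an assumed tool for the chance-corrected Rand index, and pairwise_f1 follows the standard pair-counting definitions referenced above.

```python
# Pairwise F1 and Adjusted Rand Index of a predicted clustering against gold labels.
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def pairwise_f1(gold, pred):
    tp = fp = fn = 0
    for i, j in combinations(range(len(gold)), 2):
        same_gold, same_pred = gold[i] == gold[j], pred[i] == pred[j]
        tp += same_gold and same_pred          # pair grouped together in both clusterings
        fp += same_pred and not same_gold
        fn += same_gold and not same_pred
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = [0, 0, 1, 1, 2, 2]
pred = [1, 1, 0, 0, 0, 2]
print(round(pairwise_f1(gold, pred), 3), round(adjusted_rand_score(gold, pred), 3))
```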
[
"We have experimented on a dataset containing Twitter conversations between customers and Amazon help. The dataset consisted of 92130 conversations between customers and amazon help. We considered the conversations with exactly two speakers Amazon Help and a customer. Consecutive utterances by the same speaker were concatenated and considered as a single utterance. From these we extracted adjacency pairs of the form of a customer utterance followed by an agent (Amazon Help) utterance. We then selected the utterance pairs from 8 different categories, like late delivery of item, refund, unable to sign into the account, replacement of item, claim of warranty, tracking delivery information etc. A total of 1944 utterance pairs were selected.",
"To create the vector representation we had used two distinct approaches:-",
"Paragraph to vector approach (Doc2Vec) by Le and Mikolov LM14. Here we trained the vectors using distributed memory algorithm and trained for 40 iterations. A window size of 4 was used.",
"We also trained the vectors using sequence to sequence approach BIBREF8 , on the Twitter dataset where we considered the task of predicting the reply of Amazon Help for customer's query and vice versa.",
"The encoded vector from the input sequence forms the corresponding vector representation. For the task of generating the agent's response for customer utterance the encoding from the input sequence (in the trained model) forms the vector representation for the customer utterance. Similarly for the task of generating the previous customer utterance from the agent's response, the intermediate encoding forms the vector representation for the agent utterance. We used an LSTM based 3-layered sequence to sequence model with attention for this task.",
"We ran the K-means clustering algorithm for 5 iterations followed by our SimCluster algorithm for 30 iterations to form clusters in both the (customer and agent) domains. The hyper parameter( INLINEFORM0 ) is chosen based on a validation set. We varied the value of INLINEFORM1 from 0.5 to 1.0 at intervals of 0.025. The initialization of centroids was performed using INLINEFORM2 sampling approach BIBREF14 ."
],
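The Doc2Vec (distributed-memory) representation described above can be obtained with gensim roughly as follows; gensim is an assumed tool, the vector size and min_count are illustrative, and only the window size of 4 and the 40 training epochs follow the stated setup.

```python
# Rough sketch of distributed-memory Doc2Vec utterance vectors with gensim.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

utterances = ["where is my package", "i want a refund for this order",
              "unable to sign into my account"]
docs = [TaggedDocument(words=u.split(), tags=[i]) for i, u in enumerate(utterances)]

model = Doc2Vec(documents=docs, dm=1, vector_size=100, window=4,
                epochs=40, min_count=1)
vec = model.infer_vector("cannot log into my account".split())
print(vec.shape)    # (100,)
```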
[
"For the clusters so obtained we have computed F1 and ARI measures as before and compared with the K-means approach. We used the partitioning formed by the 8 categories (from which the utterance pairs were selected) as the ground truth clustering.",
"Table TABREF20 summarizes the results. We observe that for K-means algorithm, the vectors generated from sequence to sequence model perform better than the vectors generated using paragraph to vector for both the domains. This is expected as the vectors generated from sequence to sequence model encode some adjacency information as well. We further observe that the SimCluster approach performs better than the K-means approach for both the vector representations. It improves the F1-scores for Doc2Vec representation from 0.787 and 0.783 to 0.88 and 0.887 in the two domains. Also the F1-scores on Seq2Seq based representation improve from 0.83 and 0.9 to 0.86 and 0.916 using SimCluster. However the gains are much more in case of Doc2Vec representations than Seq2Seq representations since Doc2Vec did not have any information from the other domain where as some amount of this information is already captured by Seq2Seq representation. Moreover it is the clustering of customer utterances which is likely to see an improvement. This is because agent utterances tends to follow a generic pattern while customer utterances tend to be more varied. Considering agent utterances while generating clusters in the user domain thus tends to be more helpful than the other way round.",
"Table TABREF25 shows qualitative results on the same dataset. Column 1 and 2 consists of clusters of utterances in customer domain and agent domain respectively. The utterances with usual font are representative utterances from clusters obtained through K-means clustering. The utterances in bold face indicate the similar utterances which were incorrectly classified in different clusters using K-means but were correctly classified together with the utterances by SimCluster algorithm."
],
[
"One of the first steps to automate the construction of conversational systems could be to identify the frequent user utterances and their corresponding system responses. In this paper we proposed an approach to compute these groups of utterances by clustering the utterances in both the domains using our novel SimCluster algorithm which seeks to simultaneously cluster the utterances and align the utterances in two domains. Through our experiments on synthetically generated datset we have shown that SimCluster has more advantage over K-means on datasets with larger variance. Our technique improves upon the ARI and F1 scores on a real dataset containing Twitter conversations."
],
[
"We thank Dr. David Nahamoo (CTO, Speech Technology and Fellow IBM Research ) for his valuable guidance and feedback. We also acknowledge the anonymous reviewers of IJCNLP 2017 for their comments."
]
],
"section_name": [
"Introduction",
"Related Work",
"The Proposed Approach",
" Model",
"SimCluster Algorithm",
"Alignment",
"Experiments on Synthetic Dataset",
"Evaluation and Results",
"Description and preprocessing of dataset",
"Results",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"b5256d39f74e32a23745916ba19ada536993c6ea"
],
"answer": [
{
"evidence": [
"In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here each cluster would correspond to a new intent and each utterance in the cluster would correspond to an example for the intent. Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses. As an example, consider agent utterances A.2 in box A and A.2 in box C in Figure FIGREF5 (a). The utterances “Which operating system do you use?\" and “What OS is installed in your machine\" have no syntactic similarity and therefore may not be grouped together. However the fact that these utterances are adjacent to the similar user utterances “I am unable to start notes email client\" and “Unable to start my email client\" provides some evidence that the agent utterances might be similar. Similarly the user utterances “My system keeps getting rebooted\" and “Machine is booting time and again\" ( box B and D in Figure FIGREF5 (a))- that are syntactically not similar - could be grouped together since the adjacent agent utterances, “Is your machine heating up?\" and “Is the machine heating?\" are similar."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b48d4c99485ff7fd118f070ca2588e2139066291"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"20bc3e295844fab5380da0a33210552423bf60cf"
],
"answer": [
{
"evidence": [
"We used the gaussian index from which an utterance pair was generated as the ground truth label, which served to provide ground truth clusters for computation of the above evaluation metrics. Table TABREF15 shows a comparison of the results on SimCluster versus K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF15 shows a comparison of the results on SimCluster versus K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"d8e2514bb4d80f73a30335bc26d577d76c1e30a0"
],
"answer": [
{
"evidence": [
"We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data",
"Pick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2",
"iter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 gaussians with centroids on a INLINEFORM8 grid)."
],
"extractive_spans": [],
"free_form_answer": "using generative process",
"highlighted_evidence": [
"We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data\n\nPick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2\n\niter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 gaussians with centroids on a INLINEFORM8 grid)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they study frequent user responses to help automate modelling of those?",
"How do they divide text into utterances?",
"Do they use the same distance metric for both the SimCluster and K-means algorithm?",
"How do they generate the synthetic dataset?"
],
"question_id": [
"3f7a7e81908a763e5ca720f90570c5f224ac64f6",
"28e7711f94e093137eb8828f0b1eff1b05e4fa38",
"49b38189b8336ce41d0f0b4c5c9459722736e15b",
"40c2bab4a6bf3c0628079fcf19e8b52f27f51d98"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Some intents and dialog flow",
"Figure 2: Some sample conversations and the obtained clusters",
"Figure 3: Sample clusters with matching",
"Table 1: Performance of SimCluster versus K-means clustering on synthetic dataset",
"Figure 4: Improvement in ARI figures achieved by SimCluster versus variance",
"Figure 5: Variation of Improvement in F1 score figures achieved by SimCluster versus variance",
"Table 2: Performance of SimCluster versus K-means clustering on both Doc2Vec as well as seq2seq based vectors",
"Table 3: Sample clusters in user and agent domains. Utterances in bold are those which were not in the given cluster using K-means, but could be correctly classified with the cluster using SimCluster"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"8-Table2-1.png",
"8-Table3-1.png"
]
} | [
"How do they generate the synthetic dataset?"
] | [
[
"1710.10609-Experiments on Synthetic Dataset-2",
"1710.10609-Experiments on Synthetic Dataset-0"
]
] | [
"using generative process"
] | 320 |
1906.03538 | Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims | One key consequence of the information revolution is a significant increase and a contamination of our information supply. The practice of fact checking won't suffice to eliminate the biases in text data we observe, as the degree of factuality alone does not determine whether biases exist in the spectrum of opinions visible to us. To better understand controversial issues, one needs to view them from a diverse yet comprehensive set of perspectives. For example, there are many ways to respond to a claim such as "animals should have lawful rights", and these responses form a spectrum of perspectives, each with a stance relative to this claim and, ideally, with evidence supporting it. Inherently, this is a natural language understanding task, and we propose to address it as such. Specifically, we propose the task of substantiated perspective discovery where, given a claim, a system is expected to discover a diverse set of well-corroborated perspectives that take a stance with respect to the claim. Each perspective should be substantiated by evidence paragraphs which summarize pertinent results and facts. We construct PERSPECTRUM, a dataset of claims, perspectives and evidence, making use of online debate websites to create the initial data collection, and augmenting it using search engines in order to expand and diversify our dataset. We use crowd-sourcing to filter out noise and ensure high-quality data. Our dataset contains 1k claims, accompanied by pools of 10k and 8k perspective sentences and evidence paragraphs, respectively. We provide a thorough analysis of the dataset to highlight key underlying language understanding challenges, and show that human baselines across multiple subtasks far outperform machine baselines built upon state-of-the-art NLP techniques. This poses a challenge and opportunity for the NLP community to address. | {
"paragraphs": [
[
"Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspectives presented in them or whether they are supported by evidence.",
"In this paper, we explore an approach to mitigating this selection bias BIBREF0 when studying (disputed) claims. Consider the claim shown in Figure FIGREF1 : “animals should have lawful rights.” One might compare the biological similarities/differences between humans and other animals to support/oppose the claim. Alternatively, one can base an argument on morality and rationality of animals, or lack thereof. Each of these arguments, which we refer to as perspectives throughout the paper, is an opinion, possibly conditional, in support of a given claim or against it. A perspective thus constitutes a particular attitude towards a given claim.",
"Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims. In this work, we propose and study a setting that would facilitate discovering diverse perspectives and their supporting evidence with respect to a given claim. Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges. For example, for the claim in Figure FIGREF1 , multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is “animals have no interest or rationality”, a perspective that should be identified as taking an opposing stance with respect to the claim. Each perspective should also be well-supported by evidence found in a pool of potential pieces of evidence. While it might be impractical to provide an exhaustive spectrum of ideas with respect to a claim, presenting a small but diverse set of perspectives could be an important step towards addressing the selection bias problem. Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources. We are not attempting to do that. We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness BIBREF1 , BIBREF2 and possibly others. Inherently, our objective requires understanding the relations between perspectives and claims, the nuances in the meaning of various perspectives in the context of claims, and relations between perspectives and evidence. This, we argue, can be done with a diverse enough, but not exhaustive, dataset. And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here.",
"To facilitate the research towards developing solutions to such challenging issues, we propose [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m, a dataset of claims, perspectives and evidence paragraphs. For a given claim and pools of perspectives and evidence paragraphs, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs.",
"Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise.",
"The contributions of this paper are as follows:"
],
[
"In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use to following notation. Let INLINEFORM0 indicate a target claim of interest (for example, the claims INLINEFORM1 and INLINEFORM2 in Figure FIGREF6 ). Each claim INLINEFORM3 is addressed by a collection of perspectives INLINEFORM4 that are grouped into clusters of equivalent perspectives. Additionally, each perspective INLINEFORM5 is supported, relative to INLINEFORM6 , by at least one evidence paragraph INLINEFORM7 , denoted INLINEFORM8 .",
"Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks:",
"Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures BIBREF3 in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims.",
"Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences BIBREF4 that directly address the points raised in the disputed claim. For example, while the perspectives in Figure FIGREF6 are topically related to the claims, INLINEFORM0 do not directly address the focus of claim INLINEFORM1 (i.e., “use of animals” in “entertainment”).",
"Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives. This requires the ability to discover equivalent perspectives INLINEFORM0 , with respect to a claim INLINEFORM1 : INLINEFORM2 . For instance, INLINEFORM3 and INLINEFORM4 are equivalent in the context of INLINEFORM5 ; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the paraphrasing task BIBREF5 .",
"Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) BIBREF6 .",
"Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment BIBREF7 except that here the entailment decisions depend on the choice of claims.",
""
],
[
"In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilots studies.",
"We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators.",
"For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks.",
"Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section SECREF56 ).",
"In the steps outlined below, we filter out a subset of the data with low rater–rater agreement INLINEFORM0 (see Appendix SECREF47 ). In certain steps, we use an information retrieval (IR) system to generate the best candidates for the task at hand.",
"We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data.",
"For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim. For a fixed pair of claim and perspective, we ask the crowd-workers to label the perspective with one of the five categories of support, oppose, mildly-support, mildly-oppose, or not a valid perspective. The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from those that express stronger positions.",
"Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT. To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of INLINEFORM0 . To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose, respectively.",
"To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations.",
"To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50.",
"Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of $1. Overall, this process gives us INLINEFORM0 paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps.",
"In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying “claim+perspective”. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained.",
"In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system.",
"The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not. Performing these annotations exhaustively for any perspective-evidence pair is not possible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from step 2a. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph supports a given perspective or not. Each HIT contains a 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid $1 and annotated by at least 4 independent annotators.",
"In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%. This indicates the high quality of the crowdsourced data."
],
[
"We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 .",
"To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each."
],
[
"We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels. We follow the common definitions used in prior work BIBREF37 , BIBREF38 . The result of this annotation is depicted in Figure FIGREF24 . As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference."
],
[
"In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20 ), we make sure to have claims with the same topic fall into the same split.",
"For simplicity, we define a notation which we will extensively use for the rest of this paper. The clusters of equivalent perspectives are denoted as INLINEFORM0 , given a representative member INLINEFORM1 . Let INLINEFORM2 denote the collection of relevant perspectives to a claim INLINEFORM3 , which is the union of all the equivalent perspectives participating in the claim: INLINEFORM4 . Let INLINEFORM5 denote the set of evidence documents lending support to a perspective INLINEFORM6 . Additionally, denote the two pools of perspectives and evidence with INLINEFORM7 and INLINEFORM8 , respectively."
],
[
"We make use of the following systems in our evaluation:",
"(Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index.",
"(Contextual representations). A recent state-of-the-art contextualized representation BIBREF40 . This system has been shown to be effective on a broad range of natural language understanding tasks.",
"Human performance provides us with an estimate of the best achievable results on datasets. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4."
],
[
"We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives INLINEFORM0 and evidences INLINEFORM1 .",
"A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let INLINEFORM0 be the set of output perspectives. Define the precision and recall as INLINEFORM1 and INLINEFORM2 respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set.",
"Given a claim, a system is expected to label every perspective in INLINEFORM0 with one of two labels support or oppose. We use the well-established definitions of precision-recall for this binary classification task.",
"A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives INLINEFORM0 , a system predicts whether the two are in the same cluster or not. The ground-truth is whether there is a cluster which contains both of the perspectives or not: INLINEFORM1 . We use this pairwise definition for all the pairs in INLINEFORM2 , for any claim INLINEFORM3 in the test set.",
"Given a perspective INLINEFORM0 , we expect a system to return all the evidence INLINEFORM1 from the pool of evidence INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 be the predicted and gold evidence for a perspective INLINEFORM5 . Define macro-precision and macro-recall as INLINEFORM6 and INLINEFORM7 , respectively. The metrics are averaged across all the perspectives INLINEFORM8 participating in the test set.",
"The goal is to get estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . While this gives an estimate on the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of the errors throughout the pipeline). We note that the task of INLINEFORM3 (perspective equivalence) is indirectly being measured within INLINEFORM4 . Furthermore, since we do not report an IR performance on INLINEFORM5 , we use the “always supp” baseline instead to estimate an overall performance for IR."
],
[
"Table TABREF40 shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing INLINEFORM0 . Given each claim, we query the top INLINEFORM1 perspectives, ranked according to their retrieval scores. We tune INLINEFORM2 on our development set and report the results on the test section according to the tuned parameter. We use IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields INLINEFORM3 90% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). In order to train BERT on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see IR top- INLINEFORM4 and select a minimal set of perspectives (i.e., no two equivalent perspectives).",
"We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {support, oppose}. The candidate inputs are generated on the collection of perspectives INLINEFORM0 relevant to a claim INLINEFORM1 . To have an understanding of a lower bound for the metric, we measure the quality of an always-support baseline. We measure the performance of BERT on this task as well, which is about 20% below human performance. This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section SECREF5 ). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task.",
"We create instances in the form of INLINEFORM0 where INLINEFORM1 . The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of INLINEFORM2 over the IR baseline. Meanwhile, this system is behind human performance by a margin of INLINEFORM3 .",
"We evaluate the systems on the extraction of items from the pool of evidences INLINEFORM0 , given a claim-perspective pair. To measure the performance of the IR system working with the index containing INLINEFORM1 we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields a INLINEFORM2 85% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). We train BERT system to map each (gold) claim-perspective pair to its corresponding evidence paragraph(s). Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences. For each claim-perspective pair, we use all 3-sentences windows of gold evidence paragraphs as positive examples, and rest of the IR candidates as negative examples. In the run-time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence as positive (i.e. it supports a given perspective).",
"Overall, the performances on this task are lower, which could probably be expected, considering the length of the evidence paragraphs. Similar to the previous scenarios, the BERT solver has a significant gain over a trivial baseline, while standing behind human with a significant margin."
],
[
"As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our life. In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that the access to more diverse information can address over-personalization too BIBREF41 .",
"The dataset presented here is not intended to be exhaustive, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspective and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs BIBREF42 . To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter contents based on the intention of their creators (e.g. offensive content).",
"A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased and an argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments BIBREF43 . In a similar vein, since our main focus is the study of the relations between claims, perspectives, and evidence, we leave out important issues such as their degree of factuality BIBREF8 or trustworthiness BIBREF44 , BIBREF1 as separate aspects of problem.",
"We hope that some of these challenges and limitations will be addressed in future work."
],
[
"The importance of this work is three-fold; we define the problem of substantiated perspective discovery and characterize language understanding tasks necessary to address this problem. We combine online resources, web data and crowdsourcing and create a high-quality dataset, in order to drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset would bring more attention to this important problem and would speed up the progress in this direction.",
"There are two aspects that we defer to future work. First, the systems designed here assumed that the input are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works."
],
[
"The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government."
],
[
"We provide brief statistics on the sources of different content in our dataset in Table TABREF46 . In particular, this table shows:",
"the size of the data collected from online debate websites (step 1).",
"the size of the data filtered out (step 2a).",
"the size of the perspectives added by paraphrases (step 2b).",
"the size of the perspective candidates added by web (step 2c)."
],
[
"We use the following definition formula in calculation of our measure of agreement. For a fixed subject (problem instance), let INLINEFORM0 represent the number of raters who assigned the given subject to the INLINEFORM1 -th category. The measure of agreement is defined as INLINEFORM2 ",
"where for INLINEFORM0 . Intuitively, this function measure concentration of values the vector INLINEFORM1 . Take the edge cases:",
"Values concentrated: INLINEFORM0 (in other words INLINEFORM1 ) INLINEFORM2 .",
"Least concentration (uniformly distribution): INLINEFORM0 .",
"This definition is used in calculation of more extensive agreement measures (e.g, Fleiss' kappa BIBREF49 ). There multiple ways of interpreting this formula:",
"It indicates how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs.",
"One can interpret this measure by a simple combinatorial notions. Suppose we have sets INLINEFORM0 which are pairwise disjunct and for each INLINEFORM1 let INLINEFORM2 . We choose randomly two elements from INLINEFORM3 . Then the probability that they are from the same set is the expressed by INLINEFORM4 .",
"We can write INLINEFORM0 in terms of INLINEFORM1 which is the conventional Chi-Square statistic for testing if the vector of INLINEFORM2 values comes from the all-categories-equally-likely flat multinomial model."
]
],
"section_name": [
"Introduction",
"Design Principles and Challenges",
"Dataset construction",
"Statistics on the dataset",
"Required skills",
"Empirical Analysis",
"Systems",
"Evaluation metrics.",
"Results",
"Discussion",
"Conclusion",
"Acknowledgments",
"Statistics",
"Measure of agreement"
]
} | {
"answers": [
{
"annotation_id": [
"ab33b2755926e5837deb5730e0f745a8112ccebc"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: A summary of PERSPECTRUM statistics",
"We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 ."
],
"extractive_spans": [],
"free_form_answer": "Average claim length is 8.9 tokens.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: A summary of PERSPECTRUM statistics",
"The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ca50def99936d136fbf6a777c036e56c7902e00f"
],
"answer": [
{
"evidence": [
"We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data."
],
"extractive_spans": [
"idebate.com",
"debatewise.org",
"procon.org"
],
"free_form_answer": "",
"highlighted_evidence": [
"We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"93e3ed9d654782c639a0b5cb487cfc53cc748ffe"
],
"answer": [
{
"evidence": [
"We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators."
],
"extractive_spans": [
"Amazon Mechanical Turk (AMT)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"21435ff9d4f022d246a5a715e6f970e54fb8fa57"
],
"answer": [
{
"evidence": [
"(Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index."
],
"extractive_spans": [
"Information Retrieval"
],
"free_form_answer": "",
"highlighted_evidence": [
"(Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f99cfc195a1f5ae017eef26f900531e9c1db96c5"
],
"answer": [
{
"evidence": [
"There are two aspects that we defer to future work. First, the systems designed here assumed that the input are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works."
],
"extractive_spans": [
"one needs to develop mechanisms to recognize valid argumentative structures",
"we ignore trustworthiness and credibility issues"
],
"free_form_answer": "",
"highlighted_evidence": [
"There are two aspects that we defer to future work. First, the systems designed here assumed that the input are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"466d202f3d3f64fea9e86f5bb3955023c6f2e851"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 3: Distribution of claim topics.",
"To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each."
],
"extractive_spans": [],
"free_form_answer": "Ethics, Gender, Human rights, Sports, Freedom of Speech, Society, Religion, Philosophy, Health, Culture, World, Politics, Environment, Education, Digital Freedom, Economy, Science and Law",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: Distribution of claim topics.",
"To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"What is the average length of the claims?",
"What debate websites did they look at?",
"What crowdsourcing platform did they use?",
"Which machine baselines are used?",
"What challenges are highlighted?",
"What debate topics are included in the dataset?"
],
"question_id": [
"281cd4e78b27a62713ec43249df5000812522a89",
"fb96c0cd777bb2961117feca19c6d41bfd8cfd42",
"534f69c8c90467d5aa4e38d7c25c53dbc94f4b24",
"090f2b941b9c5b6b7c34ae18c2cc97e9650f1f0b",
"5e032de729ce9fc727b547e3064be04d30009324",
"01dc6893fc2f49b732449dfe1907505e747440b0"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Given a claim, a hypothetical system is expected to discover various perspectives that are substantiated with evidence and their stance with respect to the claim.",
"Figure 2: Depiction of a few claims, their perspectives and evidences from PERSPECTRUM. The supporting",
"Table 1: Comparison of PERSPECTRUM to a few notable datasets in the field.",
"Table 2: A summary of PERSPECTRUM statistics",
"Figure 3: Distribution of claim topics.",
"Figure 4: Visualization of the major topics and sample claims in each category.",
"Figure 5: The set of reasoning abilities required to solve the stance classification task.",
"Table 3: Quality of different baselines on different subtasks (Section 5). All the numbers are in percentage. Top machine baselines are in bold.",
"Figure 6: Candidates retrieved from IR baselines vs Precision, Recall, F1, for T1 and T4 respectively.",
"Table 4: The dataset statistics (See section 4.1).",
"Figure 7: Histogram of popular noun-phrases in our dataset. The y-axis shows count in logarithmic scale.",
"Figure 8: Graph visualization of three related example claims (colored in red) in our dataset with their perspectives. Each edge indicates a supporting/opposing relation between a perspective and a claim.",
"Figure 9: Interfaces shown to the human annotators. Top: the interface for verification of perspectives (step 2a). Middle: the interface for annotation of evidences (step 3a). Bottom: the interface for generation of perspective paraphrases (step 2b).",
"Figure 10: Annotation interface used for topic of claims (Section 4.2)"
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"8-Table3-1.png",
"13-Figure6-1.png",
"13-Table4-1.png",
"14-Figure7-1.png",
"14-Figure8-1.png",
"15-Figure9-1.png",
"16-Figure10-1.png"
]
} | [
"What is the average length of the claims?",
"What debate topics are included in the dataset?"
] | [
[
"1906.03538-5-Table2-1.png",
"1906.03538-Statistics on the dataset-0"
],
[
"1906.03538-Statistics on the dataset-1",
"1906.03538-6-Figure3-1.png"
]
] | [
"Average claim length is 8.9 tokens.",
"Ethics, Gender, Human rights, Sports, Freedom of Speech, Society, Religion, Philosophy, Health, Culture, World, Politics, Environment, Education, Digital Freedom, Economy, Science and Law"
] | 322 |
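A minimal sketch of the agreement measure described in the appendix of the record above: for one annotated subject, counts[j] is the number of raters who assigned it to category j, and the returned value is the fraction of agreeing rater-rater pairs over all possible pairs (the per-subject term also used in Fleiss' kappa). The function name and the example count vectors are illustrative assumptions; only the formula and its two edge cases come from the text.

```python
def per_subject_agreement(counts):
    """Fraction of agreeing rater-rater pairs for one annotated subject.

    counts[j] = number of raters who assigned the subject to category j.
    """
    n = sum(counts)
    if n < 2:
        raise ValueError("need at least two raters")
    # agreeing rater-rater pairs / all possible rater-rater pairs
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Edge cases from the description: full concentration vs. uniform spread.
print(per_subject_agreement([3, 0, 0]))  # 1.0  (all raters agree)
print(per_subject_agreement([1, 1, 1]))  # 0.0  (least concentration)
```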
1803.09230 | Pay More Attention - Neural Architectures for Question-Answering | Machine comprehension is a representative task of natural language understanding. Typically, we are given context paragraph and the objective is to answer a question that depends on the context. Such a problem requires to model the complex interactions between the context paragraph and the question. Lately, attention mechanisms have been found to be quite successful at these tasks and in particular, attention mechanisms with attention flow from both context-to-question and question-to-context have been proven to be quite useful. In this paper, we study two state-of-the-art attention mechanisms called Bi-Directional Attention Flow (BiDAF) and Dynamic Co-Attention Network (DCN) and propose a hybrid scheme combining these two architectures that gives better overall performance. Moreover, we also suggest a new simpler attention mechanism that we call Double Cross Attention (DCA) that provides better results compared to both BiDAF and Co-Attention mechanisms while providing similar performance as the hybrid scheme. The objective of our paper is to focus particularly on the attention layer and to suggest improvements on that. Our experimental evaluations show that both our proposed models achieve superior results on the Stanford Question Answering Dataset (SQuAD) compared to BiDAF and DCN attention mechanisms. | {
"paragraphs": [
[
"Enabling machines to understand natural language is one of the key challenges to achieve artificially intelligent systems. Asking machines questions and getting a meaningful answer adds value to us since it automatizes knowledge acquisition efforts drastically. Apple's Siri and Amazon's Echo are two such examples of mass market products capable of machine comprehension that has led to a paradigm shift on how consumers' interact with machines.",
"Over the last decade, research in the field of Natural Language Processing (NLP) has massively benefited from neural architectures. Those approaches have outperformed former state-of-the-art non-neural machine learning model families while needing far less human intervention since they don't require any manual feature engineering. A subset of NLP research focuses on building systems that are able to answer questions about a given document. To jointly expand the current best practice, the Stanford Question Answering Dataset (SQuAD) was setup as a basis for a global competition between different research groups BIBREF0 . SQuAD was published in 2016 and includes 100,000+ context-question-triplets on 500+ articles, significantly larger than previous reading comprehension datasets BIBREF1 . The context paragraphs were obtained from more then 500 Wikipedia articles and the answers were sourced with Amazon Mechanical Turk. Recently, researchers were able to make machines outperform humans (as of Jan 2018) BIBREF1 . Answers in this dataset are taken from the document itself and are not dynamically generated from scratch. Instead of generating text that provides a suitable answer, the objective is to find the boundaries in which the answer is contained in the document. The aim is to achieve close to human performance in generating correct answers from a context paragraph given any new unseen questions.",
"To solve this problem of question answering, neural attention mechanisms have recently gained significant popularity by focusing on the most relevant area within a context paragraph, useful to answer the question BIBREF2 , BIBREF3 . Attention mechanisms have proven to be an important extension to achieve better results in NLP problems BIBREF4 . While earlier attention mechanisms for this task were usually uni-directional, obtaining a fixed size vector for each context word summarizing the question words, bi-directional attention flow applies an attention scheme in both directions (context-to-question as well as question-to-context). In this paper, we study two state-of-the-art neural architectures with an attention flow going in both directions called Bi-Directional Attention Flow (BiDAF) BIBREF5 and Dynamic Co-Attention network (DCN) BIBREF6 that were once themselves leading architectures in the SQuAD challenge. We would also like to propose yet another hybrid neural architecture that shows competitive results by bringing together these two models. More specifically, we combined the attention layer of both BiDAF and Co-Attention models. In addition to this, we propose another simpler model family called Double Cross Attention (DCA) which in itself performs better than both BiDAF and Co-Attention while giving similar performance as hybrid model. The objective of this paper is to do a comparative study of the performance of attention layer and not to optimize the performance of the overall system."
],
[
"We started our development by re-implementing the BiDAF and DCN models. We figured that these models individually enhanced the baseline performance significantly, so the hope was that a combination would eventually lead to superior results. Thereby we created our “hybrid\" model, which we will subsequently explain shortly. In the following subsections, we describe each layer of our model in more detail."
],
[
"The word embedding layer maps each word in a context and question to a vector of fixed size using pre-trained GloVe embeddings BIBREF7 . First, we encode each word in the question and context with the pre-trained Glove embedding as given in the baseline code. Then we concatenate to the word encodings an optional Character-level Embedding with CNNs since it helps to deal with out-of-vocabulary words BIBREF5 , BIBREF8 . The joint concatenated encoding of words and characters is subsequently fed into the context and question encoding layer."
],
[
"Once we have a context and question embeddings, we use a Bidirectional GRU to translate these context and question embeddings into encodings. Whereas a simple LSTM/GRU cell encodes sequence data such as a sentences only from left-to-right, a bi-directional approach also parses a sentence from the end to the start. Both representations of a sequence are then usually concatenated and are assumed to encode the sequence structure more expressively ultimately leading to higher model performance."
],
[
"The attention layer is the modeling layer that eventually involves modeling the complex interactions between the context and question words. Next, we describe several different attention mechanisms that we implemented in our system.",
"We implemented a complete BiDAF layer as suggested in the project handout and in the original paper BIBREF5 . Bi-directional attention flow approaches the machine comprehension challenge slightly differently. Instead of using an attention layer for transforming context inputs to fixed-size vectors, the BiDAF model computes the attention from both question-to-context as well as context-to-question and combines them effectively. The basic idea is essentially to obtain a similarity matrix to capture relations between context and question words and use this matrix to obtain context-to-question as well as question-to-context attention vectors. Finally, these attention vectors are concatenated to the context encodings in a specific way to obtain the output of the Bi-directional attention flow layer. In the original BiDAF paper, an additional Bidirectional-RNN is used to again encode these concatenated vectors. However, it didn't give any improvement in our setup, hence we chose to omit it in our final implementation.",
"Dynamic Co-Attention Network layer (DCN), similar to BiDAF involves a two-way attention between the context and the question but unlike BiDAF, DCN involves a second-level attention computation over the previously computed attentions BIBREF6 . The dynamic co-attention network (DCN) is an end-to-end neural network architecture. The authors claim that the ability of attending to context inputs strongly depends on the query (question). The intuition behind that is also reflected by a human's ability to better answer a question on an input paragraph, when the question is known before reading the context itself, because then one can attend specifically to relevant information in the context. For details, please check the project handout, the original paper and our implementation code. In the original paper and the project handout, there was also a concept of sentinel vectors that was introduced but in our tests, it again didn't seem to provide any significant advantage, so we again chose to omit this as well in our final implementation.",
"This is model that we propose and it builds heavily on aspects of the BiDAF BIBREF5 as well as the DCN models BIBREF6 . Since the attention outputs from both the BiDAF and DCN seem to have their merits, our idea was to combine them by concatenating both attentions to the original context states. The intuition was that the neural network should be able to train in order to use and pick them both effectively. Experimental results that we describe later, also verify our claim. Please check the code for exact implementation details",
"In this section, we propose another simple idea called Double Cross Attention (DCA) which seem to provide better results compared to both BiDAF and Co-Attention while providing similar performance as concatenated hybrid model discussed in previous section. The motivation behind this approach is that first we pay attention to each context and question and then we attend those attentions with respect to each other in a slightly similar way as DCN. The intuition is that if iteratively read/attend both context and question, it should help us to search for answers easily. The DCA mechanism is explained graphically in Figure. 1 and the formal description of the layer is as follows.",
"Assume we have context hidden states $\\mathbf {c}_1, \\mathbf {c}_2...,\\mathbf {c}_N\\in \\mathbb {R}^{2h}$ and question hidden states $\\mathbf {q}_1, \\mathbf {q}_2...,\\mathbf {q}_M\\in \\mathbb {R}^{2h}$ obtained after passing context and question embeddings through a bi-directional GRU. First, we compute a cross-attention matrix $\\mathbf {S}\\in \\mathbb {R}^{N\\times M}$ , which contains a similarity score $S_{ij}$ for each pair of context and question hidden states $(\\mathbf {c}_i,\\mathbf {q}_j)$ . We chose $S_{ij}=\\mathbf {c}_i^T\\mathbf {q}_j$ , since it is a parameter free approach to calculate attention but one can also construct this function with a trainable weight parameter (which can be shared in the subsequent step).",
"First we obtain Context-to-Question (C2Q) attention vectors $\\mathbf {a}_i$ as follows: ",
"$$\\alpha _i = \\text{softmax} \\mathbf {S_{(i:)}}\\in \\mathbb {R}^M,\n\\mathbf {a}_i = \\sum _{j=1}^{M}\\alpha _i^j\\mathbf {q}_j \\in \\mathbb {R}^{2h}$$ (Eq. 9) ",
"Next, we also obtain Question-to-Context (Q2C) attention vectors $\\mathbf {b}_j$ as follows: ",
"$$\\beta _j = \\text{softmax} \\mathbf {S_{(:j)}}\\in \\mathbb {R}^N,\n\\mathbf {b}_j = \\sum _{i=1}^{N}\\beta _j^i\\mathbf {c}_i \\in \\mathbb {R}^{2h}$$ (Eq. 10) ",
"Then we compute a second-level cross attention matrix $\\mathbf {R}\\in \\mathbb {R}^{N\\times M}$ , which contains a similarity score $R_{ij}$ for each pair of context and question attention states $(\\mathbf {a}_i, \\mathbf {b}_j)$ . We again choose a simple dot product attention $R_{ij}=\\mathbf {a}_i^T\\mathbf {b}_j$ . Additionally, we obtain Context Attention-to-Question Attention(CA2QA) cross attention vectors $\\mathbf {d}_i$ as follows: ",
"$$\\gamma _i = \\text{softmax} \\mathbf {R_{(i:)}}\\in \\mathbb {R}^M,\n\\mathbf {d}_i = \\sum _{1}^{M}\\gamma _i^j\\mathbf {b}_j \\in \\mathbb {R}^{2h}$$ (Eq. 11) ",
"Finally, we concatenate $\\mathbf {c}_i$ , $\\mathbf {a}_i$ and $\\mathbf {d}_i$ as a new state $[\\mathbf {c}_i; \\mathbf {a}_i; \\mathbf {d}_i]$ and pass it through a biLSTM layer to obtain double query attended encoded context states as follows. ",
"$$\\lbrace \\mathbf {u}_1,.... \\mathbf {u}_N\\rbrace = \\text{biLSTM} (\\lbrace [\\mathbf {c}_1; \\mathbf {a}_1; \\mathbf {d}_1],.... [\\mathbf {c}_N; \\mathbf {a}_N; \\mathbf {d}_N]\\rbrace )$$ (Eq. 12) ",
"Finally all attention layer outputs are concatenated and fed into a Softmax layer that computes the probability distributions for the start and end token independently, as it is done in the baseline implementation."
],
[
"Before we started the enhancements of the baseline model, we studied the SQuAD data set. Figure. 2 shows the distribution of the answer, question and context lengths as well as the relative position of the answer span inside a context. Furthermore, we counted the different question types. We found that most answers have a length less than 5 words. Additionally, a question usually consists of 5-20 words. Moreover, we noticed that on average a context is of length 120 (visualization excluded due to lack of space). Furthermore, answers for a question tend to be represented by spans of context words that are at the beginning of a context. Finally, we can see that “what\" questions build the majority of questions, almost the same amount as all other question types combined."
],
[
"In this section, we report the results of our experiments. To ensure the generality of our model, we used Dropout technique for regularizing neural networks. We start our experiments with default hyperparameters: embedding size of 100, batch size 100, hidden size 200, learning rate of 0.001 and a dropout rate of 0.15. For character level encoding, default character embedding size is 20, kernel size is 5 and number of filters are 100. For each architecture, we report the evaluation metrics F1 and EM (Exact match) computed on the dev set.",
"The effect of character embedding on the BiDAF model is reported in Table 1 . We can notice that character embedding boosts up the performance by roughly 2% for both EM and F1 score. This is expected since character embedding can help deal with non-dictionary words by giving them a unique embedding. Next, we report the results of the model performances for baseline, BiDAF, Co-Attention, Hybrid and DCA attention mechanisms in Table 1 . Notice that none of these architectures were optimized for EM/F1 scores but we are more interested in difference between these mechanisms for a fixed set of hyperparameters. Hybrid and DCA have a slight edge over plain BiDAF and Co-Attention module as per the results. Co-Attention with char embedding was giving us worse results so we put the best numbers we got for Co-Attention there. We would like to point out that the BiDAF model here doesn't include BiLSTM layer as present in original paper because the BiLSTM didn't give any advantage except for slowing down the training. Selected tensorboard visualizations are also shown in Figure 3 . Visualizations demonstrate that both hybrid and DCA models perform better than vanilla Co-Attention and BiDAF attention mechanisms and reduce the losses faster and increase the dev F1/EM scores faster as well."
],
[
"We made a brief attempt to do a bit of hyperparameter tuning on our proposed DCA model and we report the results in Table 3 . Ideally, hyperparameter tuning for neural network architectures should be done using bayesian hyperparameter optimization but due to the lack of time we tried to do a random search on a small set of hyperparameters that we guessed could be more suitable. While, we didn't find any significantly good set of parameters, we noticed that reducing the hidden size has a minor effect on improving the performance. This is probably because it reduces the system complexity which makes the model easier to train."
],
[
"In Table 4 , we briefly provide error analysis on a small sample of results for hybrid and DCA models and try to explain the model behavior."
],
[
"In this paper, we studied and implemented two well known attention mechanisms namely, BiDAF and Co-Attention. We also introduced a simple combination of these two schemes called Hybrid attention mechanism that outperforms both BiDAF and Co-Attention. In addition to this, we propose our own attention mechanism called Double Cross Attention that gives similar results as the Hybrid model. The objective of the paper was primarily to study and compare the two aforementioned popular attention schemes on their own and not to chase the leaderboard scores. In particular, we isolated the attention layer and suggested our own improvements to it. The comparative results between different schemes are obtained for same set of hyperparameters.",
"To improve the F1/EM scores of the overall system, a number of enhancement techniques could be used. For e.g. while we simply concatenated character and word embeddings, more advanced techniques to effectively combine them have been suggested in the literature BIBREF9 . Also. a number of other attention mechanisms have been suggested which need to be investigated as well BIBREF10 , BIBREF11 . Another possible improvement is to properly condition the end position on the start position of the answer span. An LSTM based solution was used in the original BiDAF paper. Exponential moving average of weights and ensembling are additional common methods to further fine-tune and improve the results. Hierarchical Maxout Network as mentioned in the co-attention paper could be a replacement to our simple Softmax output layer to improve the performance even further. There are also a few possible directions where DCA model can further be improved/extended. We can continue recursively calculating the cross attention weights and combine them in some more intuitive or non-linear way. While, we didn't optimize for the number of parameters, it is possible to reduce the overall number of trainable parameters by appropriately sharing weights between layers when possible.",
"All of the above mentioned suggestions, we see as enhancement opportunities (some we partially already tried to implement but could not finally manage to include in final running model). As a final project for the cs224n course, we found the task challenging but we were extremely satisfied with our own personal learning curve. We are sure that with even more time, we could significantly improve our model from the baseline enhancement we achieved so far. All in all, we believe that the experience of this project, will be of utmost value for our future professional work."
],
[
"First of all, we would like to thank the course instructor Richard Socher for making the class highly informative and a great learning experience. We would also like to thank the TAs for prompt feedback and insightful discussions. Lastly, we would like to thank the fellow students who regularly helped each other on course forum regarding any questions.."
]
],
"section_name": [
"Introduction",
"Model",
"Word and Character Embedding Layer",
"Context and Question Encoding Layer",
"Attention Layer",
"Experiments",
"Results",
"Hyperparameter Tuning",
"Error Analysis",
"Conclusions and Future work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"21caa6c0b7e0fb8d6dcb8cf48fc6829fbf2201e7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Effect of Character Embedding"
],
"extractive_spans": [],
"free_form_answer": "In terms of F1 score, the Hybrid approach improved by 23.47% and 1.39% on BiDAF and DCN respectively. The DCA approach improved by 23.2% and 1.12% on BiDAF and DCN respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Effect of Character Embedding"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"By how much, the proposed method improves BiDAF and DCN on SQuAD dataset?"
],
"question_id": [
"9776156fc93daa36f4613df591e2b49827d25ad2"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"BiDAF"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Double Cross Attention Model",
"Figure 2: Exploratory Data Analysis",
"Table 1: Effect of Character Embedding",
"Figure 3: Tensorboard Visualizations",
"Table 4: Hyperparameter Tuning for DCA Model"
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"6-Figure3-1.png",
"6-Table4-1.png"
]
} | [
"By how much, the proposed method improves BiDAF and DCN on SQuAD dataset?"
] | [
[
"1803.09230-5-Table1-1.png"
]
] | [
"In terms of F1 score, the Hybrid approach improved by 23.47% and 1.39% on BiDAF and DCN respectively. The DCA approach improved by 23.2% and 1.12% on BiDAF and DCN respectively."
] | 323 |
1709.05404 | Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue | The use of irony and sarcasm in social media allows us to study them at scale for the first time. However, their diversity has made it difficult to construct a high-quality corpus of sarcasm in dialogue. Here, we describe the process of creating a large-scale, highly-diverse corpus of online debate forums dialogue, and our novel methods for operationalizing classes of sarcasm in the form of rhetorical questions and hyperbole. We show that we can use lexico-syntactic cues to reliably retrieve sarcastic utterances with high accuracy. To demonstrate the properties and quality of our corpus, we conduct supervised learning experiments with simple features, and show that we achieve both higher precision and F than previous work on sarcasm in debate forums dialogue. We apply a weakly-supervised linguistic pattern learner and qualitatively analyze the linguistic differences in each class. | {
"paragraphs": [
[
"Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-syntactic cues defining diverse classes of sarcasm BIBREF0 .",
"Theoretical models posit that a single semantic basis underlies sarcasm's diversity of form, namely \"a contrast\" between expected and experienced events, giving rise to a contrast between what is said and a literal description of the actual situation BIBREF1 , BIBREF2 . This semantic characterization has not been straightforward to operationalize computationally for sarcasm in dialogue. Riloffetal13 operationalize this notion for sarcasm in tweets, achieving good results. Joshietal15 develop several incongruity features to capture it, but although they improve performance on tweets, their features do not yield improvements for dialogue.",
"Previous work on the Internet Argument Corpus (IAC) 1.0 dataset aimed to develop a high-precision classifier for sarcasm in order to bootstrap a much larger corpus BIBREF3 , but was only able to obtain a precision of just 0.62, with a best F of 0.57, not high enough for bootstrapping BIBREF4 , BIBREF5 . Justoetal14 experimented with the same corpus, using supervised learning, and achieved a best precision of 0.66 and a best F of 0.70. Joshietal15's explicit congruity features achieve precision around 0.70 and best F of 0.64 on a subset of IAC 1.0.",
"We decided that we need a larger and more diverse corpus of sarcasm in dialogue. It is difficult to efficiently gather sarcastic data, because only about 12% of the utterances in written online debate forums dialogue are sarcastic BIBREF6 , and it is difficult to achieve high reliability for sarcasm annotation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Thus, our contributions are:"
],
[
"There has been relatively little theoretical work on sarcasm in dialogue that has had access to a large corpus of naturally occurring examples. Gibbs00 analyzes a corpus of 62 conversations between friends and argues that a robust theory of verbal irony must account for the large diversity in form. He defines several subtypes, including rhetorical questions and hyperbole:",
"Other categories of irony defined by Gibbs00 include understatements, jocularity, and sarcasm (which he defines as a critical/mocking form of irony). Other work has also tackled jocularity and humor, using different approaches for data aggregation, including filtering by Twitter hashtags, or analyzing laugh-tracks from recordings BIBREF11 , BIBREF12 .",
"Previous work has not, however, attempted to operationalize these subtypes in any concrete way. Here we describe our methods for creating a corpus for generic sarcasm (Gen) (Sec. SECREF11 ), rhetorical questions (RQ), and hyperbole (Hyp) (Sec. SECREF15 ) using data from the Internet Argument Corpus (IAC 2.0). Table TABREF9 provides examples of sarcastic and not-sarcastic posts from the corpus we create. Table TABREF10 summarizes the final composition of our sarcasm corpus."
],
[
"We first replicated the pattern-extraction experiments of LukinWalker13 on their dataset using AutoSlog-TS BIBREF13 , a weakly-supervised pattern learner that extracts lexico-syntactic patterns associated with the input data. We set up the learner to extract patterns for both sarcastic and not-sarcastic utterances. Our first discovery is that we can classify not-sarcastic posts with very high precision, ranging between 80-90%.",
"Because our main goal is to build a larger, more diverse corpus of sarcasm, we use the high-precision not-sarcastic patterns extracted by AutoSlog-TS to create a \"not-sarcastic\" filter. We did this by randomly selecting a new set of 30K posts (restricting to posts with between 10 and 150 words) from IAC 2.0 BIBREF14 , and applying the high-precision not-sarcastic patterns from AutoSlog-TS to filter out any posts that contain at least one not-sarcastic cue. We end up filtering out two-thirds of the pool, only keeping posts that did not contain any of our high-precision not-sarcastic cues. We acknowledge that this may also filter out sarcastic posts, but we expect it to increase the ratio of sarcastic posts in the remaining pool.",
"We put out the remaining 11,040 posts on Mechanical Turk. As in LukinWalker13, we present the posts in \"quote-response\" pairs, where the response post to be annotated is presented in the context of its “dialogic parent”, another post earlier in the thread, or a quote from another post earlier in the thread BIBREF15 . In the task instructions, annotators are presented with a definition of sarcasm, followed by one example of a quote-response pair that clearly contains sarcasm, and one pair that clearly does not. Each task consists of 20 quote-response pairs that follow the instructions. Figure FIGREF13 shows the instructions and layout of a single quote-response pair presented to annotators. As in LukinWalker13 and Walkeretal12d, annotators are asked a binary question: Is any part of the response to this quote sarcastic?.",
"To help filter out unreliable annotators, we create a qualifier consisting of a set of 20 manually-selected quote-response pairs (10 that should receive a sarcastic label and 10 that should receive a not-sarcastic label). A Turker must pass the qualifier with a score above 70% to participate in our sarcasm annotations tasks.",
"Our baseline ratio of sarcasm in online debate forums dialogue is the estimated 12% sarcastic posts in the IAC, which was found previously by Walker et al. by gathering annotations for sarcasm, agreement, emotional language, attacks, and nastiness from a subset of around 20K posts from the IAC across various topics BIBREF6 . Similarly, in his study of recorded conversation among friends, Gibbs cites 8% sarcastic utterances among all conversational turns BIBREF0 .",
"We choose a conservative threshold: a post is only added to the sarcastic set if at least 6 out of 9 annotators labeled it sarcastic. Of the 11,040 posts we put out for annotation, we thus obtain 2,220 new posts, giving us a ratio of about 20% sarcasm – significantly higher than our baseline of 12%. We choose this conservative threshold to ensure the quality of our annotations, and we leave aside posts that 5 out of 9 annotators label as sarcastic for future work – noting that we can get even higher ratios of sarcasm by including them (up to 31%). The percentage agreement between each annotator and the majority vote is 80%.",
"We then expand this set, using only 3 highly-reliable Turkers (based on our first round of annotations), giving them an exclusive sarcasm qualification to do additional HITs. We gain an additional 1,040 posts for each class when using majority agreement (at least 2 out of 3 sarcasm labels) for the additional set (to add to the 2,220 original posts). The average percent agreement with the majority vote is 89% for these three annotators. We supplement our sarcastic data with 2,360 not-sarcastic posts from the original data by BIBREF3 that follow our 150-word length restriction, and complete the set with 900 posts that were filtered out by our not-sarcastic filter – resulting in a total of 3,260 posts per class (6,520 total posts).",
"Rows 1 and 2 of Table TABREF9 show examples of posts that are labeled sarcastic in our final generic sarcasm set. Using our filtering method, we are able to reduce the number of posts annotated from our original 30K to around 11K, achieving a percentage of 20% sarcastic posts, even though we choose to use a conservative threshold of at least 6 out of 9 sarcasm labels. Since the number of posts being annotated is only a third of the original set size, this method reduces annotation effort, time, and cost, and helps us shift the distribution of sarcasm to more efficiently expand our dataset than would otherwise be possible."
],
[
"The goal of collecting additional corpora for rhetorical questions and hyperbole is to increase the diversity of the corpus, and to allow us to explore the semantic differences between sarcastic and not-sarcastic utterances when particular lexico-syntactic cues are held constant. We hypothesize that identifying surface-level cues that are instantiated in both sarcastic and not sarcastic posts will force learning models to find deeper semantic cues to distinguish between the classes.",
"Using a combination of findings in the theoretical literature, and observations of sarcasm patterns in our generic set, we developed a regex pattern matcher that runs against the 400K unannotated posts in the IAC 2.0 database and retrieves matching posts, only pulling posts that have parent posts and a maximum of 150 words. Table TABREF16 only shows a small subset of the “more successful” regex patterns we defined for each class.",
"Cue annotation experiments. After running a large number of retrieval experiments with our regex pattern matcher, we select batches of the resulting posts that mix different cue classes to put out for annotation, in such a way as to not allow the annotators to determine what regex cues were used. We then successively put out various batches for annotation by 5 of our highly-qualified annotators, in order to determine what percentage of posts with these cues are sarcastic.",
"Table TABREF16 summarizes the results for a sample set of cues, showing the number of posts found containing the cue, the subset that we put out for annotation, and the percentage of posts labeled sarcastic in the annotation experiments. For example, for the hyperbolic cue \"wow\", 977 utterances with the cue were found, 153 were annotated, and 44% of those were found to be sarcastic (i.e. 56% were found to be not-sarcastic). Posts with the cue \"oh wait\" had the highest sarcasm ratio, at 87%. It is the distinction between the sarcastic and not-sarcastic instances that we are specifically interested in. We describe the corpus collection process for each subclass below.",
"It is important to note that using particular cues (regex) to retrieve sarcastic posts does not result in posts whose only cue is the regex pattern. We demonstrate this quantitatively in Sec. SECREF4 . Sarcasm is characterized by multiple lexical and morphosyntactic cues: these include the use of intensifiers, elongated words, quotations, false politeness, negative evaluations, emoticons, and tag questions inter alia. Table TABREF17 shows how sarcastic utterances often contain combinations of multiple indicators, each playing a role in the overall sarcastic tone of the post.",
"Rhetorical Questions. There is no previous work on distinguishing sarcastic from non-sarcastic uses of rhetorical questions (RQs). RQs are syntactically formulated as a question, but function as an indirect assertion BIBREF16 . The polarity of the question implies an assertion of the opposite polarity, e.g. Can you read? implies You can't read. RQs are prevalent in persuasive discourse, and are frequently used ironically BIBREF17 , BIBREF18 , BIBREF0 . Previous work focuses on their formal semantic properties BIBREF19 , or distinguishing RQs from standard questions BIBREF20 .",
"We hypothesized that we could find RQs in abundance by searching for questions in the middle of a post, that are followed by a statement, using the assumption that questions followed by a statement are unlikely to be standard information-seeking questions. We test this assumption by randomly extracting 100 potential RQs as per our definition and putting them out on Mechanical Turk to 3 annotators, asking them whether or not the questions (displayed with their following statement) were rhetorical. According to majority vote, 75% of the posts were rhetorical.",
"We thus use this \"middle of post\" heuristic to obviate the need to gather manual annotations for RQs, and developed regex patterns to find RQs that were more likely to be sarcastic. A sample of the patterns, number of matches in the corpus, the numbers we had annotated, and the percent that are sarcastic after annotation are summarized in Table TABREF16 .",
"We extract 357 posts following the intermediate question-answer pairs heuristic from our generic (Gen) corpus. We then supplement these with posts containing RQ cues from our cue-annotation experiments: posts that received 3 out of 5 sarcastic labels in the experiments were considered sarcastic, and posts that received 2 or fewer sarcastic labels were considered not-sarcastic. Our final rhetorical questions corpus consists of 851 posts per class (1,702 total posts). Table TABREF18 shows some examples of rhetorical questions and self-answering from our corpus.",
"",
"Hyperbole. Hyperbole (Hyp) has been studied as an independent form of figurative language, that can coincide with ironic intent BIBREF21 , BIBREF22 , and previous computational work on sarcasm typically includes features to capture hyperbole BIBREF23 . KreuzRoberts95 describe a standard frame for hyperbole in English where an adverb modifies an extreme, positive adjective, e.g. \"That was absolutely amazing!\" or \"That was simply the most incredible dining experience in my entire life.\"",
"ColstonObrien00b provide a theoretical framework that explains why hyperbole is so strongly associated with sarcasm. Hyperbole exaggerates the literal situation, introducing a discrepancy between the \"truth\" and what is said, as a matter of degree. A key observation is that this is a type of contrast BIBREF24 , BIBREF1 . In their framework:",
"An event or situation evokes a scale;",
"",
"An event can be placed on that scale;",
"",
"The utterance about the event contrasts with actual scale placement.",
"Fig. FIGREF22 illustrates that the scales that can be evoked range from negative to positive, undesirable to desirable, unexpected to expected and certain to uncertain. Hyperbole moves the strength of an assertion further up or down the scale from the literal meaning, the degree of movement corresponds to the degree of contrast. Depending on what they modify, adverbial intensifiers like totally, absolutely, incredibly shift the strength of the assertion to extreme negative or positive.",
"Table TABREF23 shows examples of hyperbole from our corpus, showcasing the effect that intensifiers have in terms of strengthening the emotional evaluation of the response. To construct a balanced corpus of sarcastic and not-sarcastic utterances with hyperbole, we developed a number of patterns based on the literature and our observations of the generic corpus. The patterns, number matches on the whole corpus, the numbers we had annotated and the percent that are sarcastic after annotation are summarized in Table TABREF16 . Again, we extract a small subset of examples from our Gen corpus (30 per class), and supplement them with posts that contain our hyperbole cues (considering them sarcastic if they received at least 3/5 sarcastic labels, not-sarcastic otherwise). The final hyperbole dataset consists of 582 posts per class (1,164 posts in total).",
"To recap, Table TABREF10 summarizes the total number of posts for each subset of our final corpus."
],
[
"Our primary goal is not to optimize classification results, but to explore how results vary across different subcorpora and corpus properties. We also aim to demonstrate that the quality of our corpus makes it more straightforward to achieve high classification performance. We apply both supervised learning using SVM (from Scikit-Learn BIBREF25 ) and weakly-supervised linguistic pattern learning using AutoSlog-TS BIBREF13 . These reveal different aspects of the corpus.",
"",
"Supervised Learning. We restrict our supervised experiments to a default linear SVM learner with Stochastic Gradient Descent (SGD) training and L2 regularization, available in the SciKit-Learn toolkit BIBREF25 . We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. We expect Word2Vec to be able to capture semantic generalizations that n-grams do not BIBREF26 , BIBREF27 . The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or \"!!!\"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 .",
"Table TABREF25 summarizes the results of our supervised learning experiments on our datasets using 10-fold cross validation. The data is balanced evenly between the sarcastic and not-sarcastic classes, and the best F-Measures for each class are shown in bold. The default W2V model, (trained on Google News), gives the best overall F-measure of 0.74 on the Gen corpus for the sarcastic class, while n-grams give the best not-sarcastic F-measure of 0.73. Both of these results are higher F than previously reported for classifying sarcasm in dialogue, and we might expect that feature engineering could yield even greater performance.",
"On the RQ corpus, n-grams provide the best F-measure for sarcastic at 0.70 and not-sarcastic at 0.71. Although W2V performs well, the n-gram model includes features involving repeated punctuation and emoticons, which the W2V model excludes. Punctuation and emoticons are often used as distinctive feature of sarcasm (i.e. \"Oh, really?!?!\", [emoticon-rolleyes]).",
"For the Hyp corpus, the best F-measure for both the sarcastic and not-sarcastic classes again comes from n-grams, with F-measures of 0.65 and 0.68 respectively. It is interesting to note that the overall results of the Hyp data are lower than those for Gen and RQs, likely due to the smaller size of the Hyp dataset.",
"To examine the effect of dataset size, we compare F-measure (using the same 10-fold cross-validation setup) for each dataset while holding the number of posts per class constant. Figure FIGREF26 shows the performance of each of the Gen, RQ, and Hyp datasets at intervals of 100 posts per class (up to the maximum size of 582 posts per class for Hyp, and 851 posts per class for RQ). From the graph, we can see that as a general trend, the datasets benefit from larger dataset sizes. Interestingly, the results for the RQ dataset are very comparable to those of Gen. The Gen dataset eventually gets the highest sarcastic F-measure (0.74) at its full dataset size of 3,260 posts per class.",
"",
"Weakly-Supervised Learning. AutoSlog-TS is a weakly supervised pattern learner that only requires training documents labeled broadly as sarcastic or not-sarcastic. AutoSlog-TS uses a set of syntactic templates to define different types of linguistic expressions. The left-hand side of Table TABREF28 lists each pattern template and the right-hand side illustrates a specific lexico-syntactic pattern (in bold) that represents an instantiation of each general pattern template for learning sarcastic patterns in our data. In addition to these 17 templates, we added patterns to AutoSlog for adjective-noun, adverb-adjective and adjective-adjective, because these patterns are frequent in hyperbolic sarcastic utterances.",
"The examples in Table TABREF28 show that Colston's notion of contrast shows up in many learned patterns, and that the source of the contrast is highly variable. For example, Row 1 implies a contrast with a set of people who are not your mother. Row 5 contrasts what you were asked with what you've (just) done. Row 10 contrasts chapter 12 and chapter 13 BIBREF30 . Row 11 contrasts what I am allowed vs. what you have to do.",
"AutoSlog-TS computes statistics on the strength of association of each pattern with each class, i.e. P(sarcastic INLINEFORM0 INLINEFORM1 ) and P(not-sarcastic INLINEFORM2 INLINEFORM3 ), along with the pattern's overall frequency. We define two tuning parameters for each class: INLINEFORM4 , the frequency with which a pattern occurs, INLINEFORM5 , the probability with which a pattern is associated with the given class. We do a grid-search, testing the performance of our patterns thresholds from INLINEFORM6 = {2-6} in intervals of 1, INLINEFORM7 ={0.60-0.85} in intervals of 0.05. Once we extract the subset of patterns passing our thresholds, we search for these patterns in the posts in our development set, classifying a post as a given class if it contains INLINEFORM8 ={1, 2, 3} of the thresholded patterns. For more detail, see BIBREF13 , BIBREF31 .",
"An advantage of AutoSlog-TS is that it supports systematic exploration of recall and precision tradeoffs, by selecting pattern sets using different parameters. The parameters have to be tuned on a training set, so we divide each dataset into 80% training and 20% test. Figure FIGREF30 shows the precision (x-axis) vs. recall (y-axis) tradeoffs on the test set, when optimizing our three parameters for precision. Interestingly, the subcorpora for RQ and Hyp can get higher precision than is possible for Gen. When precision is fixed at 0.75, the recall for RQ is 0.07 and the recall for Hyp is 0.08. This recall is low, but given that each retrieved post provides multiple cues, and that datasets on the web are huge, these P values make it possible to bootstrap these two classes in future."
],
[
"Here we aim to provide a linguistic characterization of the differences between the sarcastic and the not-sarcastic classes. We use the AutoSlog-TS pattern learner to generate patterns automatically, and the Stanford dependency parser to examine relationships between arguments BIBREF13 , BIBREF32 . Table TABREF31 shows the number of sarcastic patterns we extract with AutoSlog-TS, with a frequency of at least 2 and a probability of at least 0.75 for each corpus. We learn many novel lexico-syntactic cue patterns that are not the regex that we search for. We discuss specific novel learned patterns for each class below.",
"Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 show examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 .",
"Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs.",
"Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract.",
"",
"Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm.",
"We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge."
],
[
"We have developed a large scale, highly diverse corpus of sarcasm using a combination of linguistic analysis and crowd-sourced annotation. We use filtering methods to skew the distribution of sarcasm in posts to be annotated to 20-31%, much higher than the estimated 12% distribution of sarcasm in online debate forums. We note that when using Mechanical Turk for sarcasm annotation, it is possible that the level of agreement signals how lexically-signaled the sarcasm is, so we settle on a conservative threshold (at least 6 out of 9 annotators agreeing that a post is sarcastic) to ensure the quality of our annotations.",
"We operationalize lexico-syntactic cues prevalent in sarcasm, finding cues that are highly indicative of sarcasm, with ratios up to 87%. Our final corpus consists of data representing generic sarcasm, rhetorical questions, and hyperbole.",
"We conduct supervised learning experiments to highlight the quality of our corpus, achieving a best F of 0.74 using very simple feature sets. We use weakly-supervised learning to show that we can also achieve high precision (albeit with a low recall) for our rhetorical questions and hyperbole datasets; much higher than the best precision that is possible for the Generic dataset. These high precision values may be used for bootstrapping these two classes in the future.",
"We also present qualitative analysis of the different characteristics of rhetorical questions and hyperbole in sarcastic acts, and of the distinctions between sarcastic/not-sarcastic cues in generic sarcasm data. Our analysis shows that the forms of sarcasm and its underlying semantic contrast in dialogue are highly diverse.",
"In future work, we will focus on feature engineering to improve results on the task of sarcasm classification for both our generic data and subclasses. We will also begin to explore evaluation on real-world data distributions, where the ratio of sarcastic/not-sarcastic posts is inherently unbalanced. As we continue our analysis of the generic and fine-grained categories of sarcasm, we aim to better characterize and model the great diversity of sarcasm in dialogue."
],
[
"This work was funded by NSF CISE RI 1302668, under the Robust Intelligence Program."
]
],
"section_name": [
"Introduction",
"Creating a Diverse Sarcasm Corpus",
"Generic Dataset (Gen)",
"Rhetorical Questions and Hyperbole",
"Learning Experiments",
"Linguistic Analysis",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"21cace078e31fa2cc1349f5fd5edcd08a17822ef"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"fd9f6e56a5ff9e20a16b531d7393edc6b2ed4948"
],
"answer": [
{
"evidence": [
"Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 show examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 .",
"Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs.",
"Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract.",
"Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm.",
"We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge."
],
"extractive_spans": [],
"free_form_answer": "Each class has different patterns in adjectives, adverbs and verbs for sarcastic and non-sarcastic classes",
"highlighted_evidence": [
"Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. ",
"We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method.",
"Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs.\n\nMany of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset.",
"Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. ",
"We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a264ed7bf25632eba2e52b9ace3f2f84f8b4d636"
],
"answer": [
{
"evidence": [
"Supervised Learning. We restrict our supervised experiments to a default linear SVM learner with Stochastic Gradient Descent (SGD) training and L2 regularization, available in the SciKit-Learn toolkit BIBREF25 . We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. We expect Word2Vec to be able to capture semantic generalizations that n-grams do not BIBREF26 , BIBREF27 . The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or \"!!!\"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 ."
],
"extractive_spans": [
"unigrams, bigrams, and trigrams, including sequences of punctuation",
"Word2Vec word embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. ",
"The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or \"!!!\"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"d558fcba8599d53068faee47a44a85092096ae5a"
],
"answer": [
{
"evidence": [
"Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 show examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 .",
"Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract.",
"Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm.",
"We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge."
],
"extractive_spans": [
"adjective and adverb patterns",
"verb, subject, and object arguments",
"verbal patterns"
],
"free_form_answer": "",
"highlighted_evidence": [
"We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. ",
"Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. ",
"One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. ",
"We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they report results only on English datasets?",
"What are the linguistic differences between each class?",
"What simple features are used?",
"What lexico-syntactic cues are used to retrieve sarcastic utterances?"
],
"question_id": [
"03a911049b6d7df2b6391ed5bc129a3b65133bcd",
"f5e571207d9f4701b4d01199ef7d0bfcfa2c0316",
"c5ac07528cf99d353413c9d9ea61a1a699dd783e",
"6608f171b3e0dcdcd51b3e0c697d6e5003ab5f02"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"irony",
"irony",
"irony",
"irony"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Examples of different types of SARCASTIC (S) and NOT-SARCASTIC (NS) Posts",
"Table 2: Total number of posts in each subcorpus (each with a 50% split of SARCASTIC and NOTSARCASTIC posts)",
"Figure 1: Mechanical Turk Task Layout",
"Table 3: Annotation Counts for a Subset of Cues",
"Table 4: Utterances with Multiple Sarcastic Cues",
"Figure 2: Hyperbole shifts the strength of what is said from literal to extreme negative or positive (Colston and O’Brien, 2000)",
"Table 5: Examples of Rhetorical Questions and Self-Answering",
"Table 6: Examples of Hyperbole and the Effects of Intensifiers",
"Table 7: Supervised Learning Results for Generic (Gen: 3,260 posts per class), Rhetorical Questions (RQ: 851 posts per class) and Hyperbole (Hyp: 582 posts per class)",
"Figure 3: Plot of Dataset size (x-axis) vs Sarc. FMeasure (y-axis) for the three subcorpora, with ngram features",
"Table 8: AutoSlog-TS Templates and Example Instantiations",
"Table 9: Examples of Characteristic Patterns for Gen using AutoSlog-TS Templates",
"Figure 4: Plot of Precision (x-axis) vs Recall (yaxis) for three subcorpora with AutoSlog-TS parameters, aimed at optimizing precision",
"Table 10: Total number of patterns passing threshold of Freq ≥ 2, Prob ≥ 0.75",
"Table 13: Verb Patterns in Hyperbole",
"Table 11: Attacks on Mental Ability in RQs",
"Table 12: Adverb Adjective Cues in Hyperbole"
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Figure2-1.png",
"5-Table5-1.png",
"5-Table6-1.png",
"6-Table7-1.png",
"6-Figure3-1.png",
"7-Table8-1.png",
"8-Table9-1.png",
"8-Figure4-1.png",
"8-Table10-1.png",
"9-Table13-1.png",
"9-Table11-1.png",
"9-Table12-1.png"
]
} | [
"What are the linguistic differences between each class?"
] | [
[
"1709.05404-Linguistic Analysis-5",
"1709.05404-Linguistic Analysis-6",
"1709.05404-Linguistic Analysis-2",
"1709.05404-Linguistic Analysis-3",
"1709.05404-Linguistic Analysis-1"
]
] | [
"Each class has different patterns in adjectives, adverbs and verbs for sarcastic and non-sarcastic classes"
] | 324 |
2003.05377 | Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network | Organizing songs, albums, and artists into groups with shared similarity can be done with the help of genre labels. In this paper, we present a novel approach for automatically classifying musical genre in Brazilian music using only the song lyrics. This kind of classification remains a challenge in the field of Natural Language Processing. We construct a dataset of 138,368 Brazilian song lyrics distributed across 14 genres. We apply SVM, Random Forest and a Bidirectional Long Short-Term Memory (BLSTM) network combined with different word embedding techniques to address this classification task. Our experiments show that the BLSTM method outperforms the other models with an average F1-score of $0.48$. Some genres like "gospel", "funk-carioca" and "sertanejo", which obtained F1-scores of 0.89, 0.70 and 0.69, respectively, can be considered the most distinct and easiest to classify in the context of Brazilian musical genres. | {
"paragraphs": [
[
"Music is part of the day-to-day life of a huge number of people, and many works try to understand the best way to classify, recommend, and identify similarities between songs. Among the tasks that involve music classification, genre classification has been studied widely in recent years BIBREF0 since musical genres are the main top-level descriptors used by music dealers and librarians to organize their music collections BIBREF1.",
"Automatic music genre classification based only on the lyrics is considered a challenging task in the field of Natural Language Processing (NLP). Music genres remain a poorly defined concept, and boundaries between genres still remain fuzzy, which makes the automatic classification problem a nontrivial task BIBREF1.",
"Traditional approaches in text classification have applied algorithms such as Support Vector Machine (SVM) and Naïve Bayes, combined with handcraft features (POS and chunk tags) and word count-based representations, like bag-of-words. More recently, the usage of Deep Learning methods such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) has produced great results in text classification tasks.",
"Some works like BIBREF2, BIBREF3 BIBREF4 focus on classification of mood or sentiment of music based on its lyrics or audio content. Other works, like BIBREF1, and BIBREF5, on the other hand, try to automatically classify the music genre; and the work BIBREF6 tries to classify, besides the music genre, the best and the worst songs, and determine the approximate publication time of a song.",
"In this work, we collected a set of about 130 thousand Brazilian songs distributed in 14 genres. We use a Bidirectional Long Short-Term Memory (BLSTM) network to make a lyrics-based music genre classification. We did not apply an elaborate set of handcraft textual features, instead, we represent the lyrics songs with a pre-trained word embeddings model, obtaining an F1 average score of $0.48$. Our experiments and results show some real aspects that exist among the Brazilian music genres and also show the usefulness of the dataset we have built for future works.",
"This paper is organized as follows. In the next section, we cite and comment on some related works. Section SECREF3 describes our experiments from data collection to the proposed model, presenting some important concepts. Our experimental results are presented in Section SECREF4, and Section SECREF5 presents our concluding remarks and future work."
],
[
"Several works have been carried out to add textual information to genre and mood classification. Fell and Sporleder BIBREF6 used several handcraft features, such as vocabulary, style, semantics, orientation towards the world, and song structure to obtain performance gains on three different classification tasks: detecting genre, distinguishing the best and the worst songs, and determining the approximate publication time of a song. The experiments in genre classification focused on eight genres: Blues, Rap, Metal, Folk, R&B, Reggae, Country, and Religious. Only lyrics in English were included and they used an SVM with the default settings for the classification.",
"Ying et al. BIBREF0 used Part-of-Speech (POS) features extracted from lyrics and combined them with three different machine learning techniques – k-Nearest-Neighbor, Naïve Bayes, and Support Vector Machines – to classify a collection of 600 English songs by the genre and mood.",
"Zaanen and Kanters BIBREF7 used the term frequency and inverse document frequency statistical metrics as features to solve music mood classification, obtaining an accuracy of more than 70%.",
"In recent years, deep learning techniques have also been applied to music genre classification. This kind of approach typically does not rely on handcraft features or external data. In BIBREF5, the authors used a hierarchical attention network to perform the task in a large dataset of nearly half a million song lyrics, obtaining an accuracy of more than 45%. Some papers such as BIBREF8 used word embedding techniques to represent words from the lyrics and then classify them by the genre using a 3-layer Deep Learning model."
],
[
"In this chapter we present all the major steps we have taken, from obtaining the dataset to the proposed approach to address the automatic music genre classification problem."
],
[
"In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. The crawler was implemented using Scrapy, an open-source and collaborative Python library to extract data from websites.",
"From the Vagalume's music web page, we collect the song title and lyrics, and the artist name. The genre was collected from the page of styles, which lists all the musical genres and, for each one, all the artists. We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8. Figure FIGREF6 presents an example of the Vagalume's music Web page with the song “Como é grande o meu amor por você”, of the Brazilian singer Roberto Carlos. Green boxes indicate information about music that can be extracted directly from the web page. From this information, the language in which the lyrics are available can be obtained by looking at the icon indicating the flag of Brazil preceded by the “Original” word.",
"After extracting data, we obtained a set of $138,368$ songs distributed across 14 genres. Table TABREF8 presents the number of songs and artists by genre. In order to use the data to learn how to automatically classify genre, we split the dataset into tree partitions: training ($96,857$ samples), validation ($27,673$ samples), and test ($13,838$ samples). The total dataset and splits are available for download."
],
[
"Word embeddings is a technique to represent words as real vectors, so that these vectors maintain some semantic aspects of the real words. Basically, vectors are computed by calculating probabilities of the context of words, with the intuition that semantically similar words have similar contexts, and must therefore have similar vectors.",
"Word2Vec, by Mikolov et al. BIBREF9, is one of the first and most widely used algorithms to make word embeddings. It has two architectures to compute word vectors: Continuous Bag-Of-Words (CBOW) and Skip-gram. CBOW gets a context as input and predicts the current word, while Skip-gram gets the current word as input and predicts its context.",
"In this work, we use the Python Word2Vec implementation provided by the Gensim library. The Portuguese pre-trained word embeddings created by BIBREF10 and available for download was used to represent words as vectors. We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, skip-gram architectured models."
],
[
"Long Short-Term Memory (LSTM) is a specification of Recurrent Neural Network (RNN) that was proposed by Hochreiter and Schmidhuber BIBREF11. This kind of network is widely used to solve classification of sequential data and is designed to capture time dynamics through graph cycles. Figure FIGREF14 presents an LSTM unity, which receives an input from the previous unit, processes it, and passes it to the next unit.",
"The following equations are used to update $C_t$ and $h_t$ values.",
"where $W_f$, $W_i$, $W_C$, $W_o$ are the weight matrices for $h_{t-1}$ input; $U_f$, $U_i$, $U_C$, $U_o$ are the weight matrices for $x_t$ input; and $b_f$, $b_i$, $b_C$, $b_o$ are the bias vectors.",
"Basically, a Bidirectional LSTM network consists of using two LSTM networks: a forward LSTM and a backward LSTM. The intuition behind it is that, in some types of problems, past and future information captured by forward and backward LSTM layers are useful to predict the current data."
],
[
"Our proposed approach consists of three main steps. Firstly, we concatenate the title of the song with its lyrics, put all words in lower case and then we clean up the text by removing line breaks, multiple spaces, and some punctuation (,!.?). Secondly, we represent the text as a vector provided by a pre-trained word embeddings model. For classical learning algorithms like SVM and Random Forest, we generate, for each song, a vectorial representation by calculating the average of the vectors of each word in the song lyrics that can be can be expressed by the equation below:",
"where $L$ is the song lyrics, $w$ is a word in $L$, and $n$ is the number of words in $L$. If a word does not have a vector representation in the word embeddings model, it is not considered in the equation. For the BLSTM algorithm, the representation was made in the format of a matrix, as shown in Figure FIGREF16, where each line is a vector representation of a word in the lyrics. In the third step, we use as features the generated representation for the genre classification tasks using SVM, Random Forests, and BLSTM."
],
[
"In this section, we describe our experiments. We used the Linear SVM and Random Forest Scikit-learn implementations and Keras on top of TensorFlow for the BLSTM implementation. In this study, we did not focus on finding the best combination of parameters for the algorithms, so that for SVM we used the default parameters, and for Random Forest we used a number of 100 trees. Our BLSTM model was trained using 4 epochs, with Adam optimizer, and 256 as the size of the hidden layer.",
"As we can see in Table TABREF20, our BLSTM approach outperforms the other models with an F1-score average of $0.48$. In addition, we can note that the use of Wang2Vec pre-trained word embeddings made it possible to obtain better F1-score results in BLSTM, which is not necessarily noticed in other cases, since for SVM and Random Forest, Glove and FastText, respectively, were the techniques that obtained better F1-scores.",
"Table TABREF21 shows the BLSTM classification results for each genre. We can see that the genres gospel, funk-carioca and sertanejo have a greater distinction in relation to the other genres, since they were better classified by the model. In particular, funk-carioca obtained a good classification result although it did not have a large number of collected song lyrics.",
"In gospel song lyrics, we can identify some typical words, such as “Deus” (God) , “Senhor” (Lord), and “Jesus” (Jesus); in funk-carioca, songs have the words “bonde” (tram), “chão” (floor) and “baile” (dance ball), all used as slang; in sertanejo, some of the most common words are “amor” (love), “coração” (heart) and “saudade” (longing). The occurrence of these typical words could contribute to the higher performance of F1-scores in these genres.",
"The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model. The pop genre, by contrast, has a small distribution between the number of songs and the number of artists, and could not be well classified by our model. This may indicate that our model was unable to identify a pattern due to the low number of songs per artist, or that the song lyrics of this genre cover several subjects that are confused with other genres.",
"Figure FIGREF22 shows the confusion matrix of the results produced by our BLSTM model. We can notice that many instances of class forró are often confused with class sertanejo. Indeed, these two genres are very close. Both Forró and sertanejo have as theme the cultural and daily aspects of the Northeast region of Brazil. Instances of class infantil are often confused with class gospel: in infantil we have music for children for both entertainment and education. In some of the songs, songwriters try to address religious education, which could explain the confusion between those genres. The MPB (Brazilian Popular Music) genre was the most confused of all, which may indicate that song lyrics of this genre cover a wide range of subjects that intersect with other genres."
],
[
"In this work we constructed a dataset of $138,368$ Brazilian song lyrics distributed in 14 genres. We applied SVM, Random Forest, and a Bidirectional Long Short-Term Memory (BLSTM) network combined with different word embeddings techniques to address the automatic genre classification task based only on the song lyrics. We compared the results between the different combinations of classifiers and word embedding techniques, concluding that our BLSTM combined with the Wang2Vec pre-trained model obtained the best F1-score classification result. Beside the dataset construction and the comparison of tools, this work also evidences the lack of an absolute superiority between the different techniques of word embeddings, since their use and efficiency in this specific task showed to be very closely related to the classification technique.",
"As future work, it is possible to explore the dataset to identify genre or artist similarities, generating visualizations that may or may not confirm aspects pre-conceived by the consumers of Brazilian music. It is also possible to perform classification tasks by artists of a specific genre."
]
],
"section_name": [
"Introduction",
"Related Works",
"Methods",
"Methods ::: Data Acquisition",
"Methods ::: Word Embeddings",
"Methods ::: Bidirectional Long Short-Term Memory",
"Methods ::: Proposed Approach",
"Experimental Results",
"Conclusion and Future Works"
]
} | {
"answers": [
{
"annotation_id": [
"fd4dc678ccb665a4b219e9090c219f7e563ebd51"
],
"answer": [
{
"evidence": [
"In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. The crawler was implemented using Scrapy, an open-source and collaborative Python library to extract data from websites."
],
"extractive_spans": [
"Vagalume website"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c20f4809fe05f26cd92436dab755631ffb097a28"
],
"answer": [
{
"evidence": [
"The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model. The pop genre, by contrast, has a small distribution between the number of songs and the number of artists, and could not be well classified by our model. This may indicate that our model was unable to identify a pattern due to the low number of songs per artist, or that the song lyrics of this genre cover several subjects that are confused with other genres."
],
"extractive_spans": [
" bossa-nova and jovem-guarda genres"
],
"free_form_answer": "",
"highlighted_evidence": [
"The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"65e0d3c5681c82bc5b22025668e306e046484997"
],
"answer": [
{
"evidence": [
"In this work, we use the Python Word2Vec implementation provided by the Gensim library. The Portuguese pre-trained word embeddings created by BIBREF10 and available for download was used to represent words as vectors. We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, skip-gram architectured models."
],
"extractive_spans": [
"Word2Vec, Wang2Vec, and FastText"
],
"free_form_answer": "",
"highlighted_evidence": [
"We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, skip-gram architectured models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"21d0b14446eb78cb9f244a435809ee085d92dd6f"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: The number of songs and artists by genre",
"From the Vagalume's music web page, we collect the song title and lyrics, and the artist name. The genre was collected from the page of styles, which lists all the musical genres and, for each one, all the artists. We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8. Figure FIGREF6 presents an example of the Vagalume's music Web page with the song “Como é grande o meu amor por você”, of the Brazilian singer Roberto Carlos. Green boxes indicate information about music that can be extracted directly from the web page. From this information, the language in which the lyrics are available can be obtained by looking at the icon indicating the flag of Brazil preceded by the “Original” word."
],
"extractive_spans": [],
"free_form_answer": "Gospel, Sertanejo, MPB, Forró, Pagode, Rock, Samba, Pop, Axé, Funk-carioca, Infantil, Velha-guarda, Bossa-nova and Jovem-guarda",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The number of songs and artists by genre",
"We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what is the source of the song lyrics?",
"what genre was the most difficult to classify?",
"what word embedding techniques did they experiment with?",
"what genres do they songs fall under?"
],
"question_id": [
"52b113e66fd691ae18b9bb8a8d17e1ee7054bb81",
"163a21c0701d5cda15be2d0eb4981a686e54a842",
"36b5f0f62ee9be1ab50d1bb6170e98328d45997d",
"6b91fe29175be8cd8f22abf27fb3460e43b9889a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An example of a Vagalume’s song web page",
"Table 1: The number of songs and artists by genre",
"Figure 2: The Long Short-Term Memory unit.",
"Figure 3: Our BLSTM model architecture",
"Table 2: Classification results for each classifier and word embeddings model combination",
"Table 3: Detailed result of BLSTM",
"Figure 4: Normalized confusion matrix"
],
"file": [
"3-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Figure4-1.png"
]
} | [
"what genres do they songs fall under?"
] | [
[
"2003.05377-Methods ::: Data Acquisition-1",
"2003.05377-3-Table1-1.png"
]
] | [
"Gospel, Sertanejo, MPB, Forró, Pagode, Rock, Samba, Pop, Axé, Funk-carioca, Infantil, Velha-guarda, Bossa-nova and Jovem-guarda"
] | 325 |
2001.05467 | AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses | Many sequence-to-sequence dialogue models tend to generate safe, uninformative responses. There have been various useful efforts on trying to eliminate them. However, these approaches either improve decoding algorithms during inference, rely on hand-crafted features, or employ complex models. In our work, we build dialogue models that are dynamically aware of what utterances or tokens are dull without any feature-engineering. Specifically, we start with a simple yet effective automatic metric, AvgOut, which calculates the average output probability distribution of all time steps on the decoder side during training. This metric directly estimates which tokens are more likely to be generated, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). We then leverage this novel metric to propose three models that promote diversity without losing relevance. The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch; the second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level; the third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal. Moreover, we experiment with a hybrid model by combining the loss terms of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on both diversity and relevance by a large margin, and are comparable to or better than competitive baselines (also verified via human evaluation). Moreover, our approaches are orthogonal to the base model, making them applicable as an add-on to other emerging better dialogue models in the future. | {
"paragraphs": [
[
"Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be \"correct\" (coherent and relevant), but also needs to be diverse and informative. However, seq2seq has been reported by many previous works to have low corpus-level diversity BIBREF2, BIBREF3, BIBREF0, BIBREF4, as it tends to generate safe, terse, and uninformative responses, such as \"I don't know.\". These responses unnecessarily make a dialogue system much less interactive than it should be.",
"To increase the diversity of dialogue responses, the first step is to faithfully evaluate how diverse a response is. There are metrics used by previous work that are correlated to diversity, but not strongly, such as ratio of distinct tokens BIBREF2 and response length BIBREF5. However, a response can be long but extremely boring in meaning, such as \"I am sure that I don't know about it.\", or short but interesting (i.e., contains a lot of information), such as \"Dad was mean.\". Only investigating discrete token output by the model is also not ideal, because these tokens are only a single realization of the model's output probability distribution at each time step, which unavoidably loses valuable information indicated by the whole distribution. BIBREF6 (BIBREF6) manually collect a shortlist of dull responses, and during training discourage the model from producing such utterances. However, an important drawback of hand-crafted rules is that the set of dull tokens or utterances is static, while in fact it usually evolves during training: when the current dull tokens are eliminated, another set of them might reveal themselves.",
"In our work, we begin with a simple yet effective approach to measure how diverse a response is. This metric, which we name \"Average Output Probability Distribution\", or AvgOut, draws information directly from the training-in-session model itself. We calculate it by keeping track of the exponential average of all output probability distributions on the decoder side during training. This metric dynamically measures which tokens the model is biased toward without any hand-crafted rules, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). In addition, since AvgOut is a one-dimensional categorical distribution rather than a dimensionless numerical value like entropy, it naturally carries and conveys more information about model diversity.",
"We then propose three models that leverage our novel metric to promote diversity in dialogue generation. The first MinAvgOut model minimizes the dot product of current batch AvgOut and the exponential average AvgOut across batches, which encourages low-frequency tokens to be generated. The second LFT model uses a labeled transduction method and scales a \"diversity label\" by the diversity score of the ground-truth target sequence during training, while during testing can generate responses of different levels of diversity by tweaking the intended diversity score. The third RL model leverages reinforcement learning, where our novel metric is applied to discrete tokens and serve as a reward signal. In addition, since MinAvgOut regularizes directly on the continuous distribution while RL calculates its reward based on discrete sampled tokens, we simply add up the loss terms of the two models, creating an even stronger hybrid model.",
"We first employ diverse automatic metrics, including Distinct-1 and -2 from previous work BIBREF2 and our novel metric Diveristy-iAUC (which calculates one minus the sum of normalized frequencies of the most frequent tokens produced by the model), plus activity/entity F1s, to evaluate the diversity and relevance of the generated responses. We then conduct human evaluations to verify that these models not only outperform their base model LSTM by a large margin, but are also comparable to or better than an advanced decoding algorithm MMI BIBREF2 and a very competitive model VHRED BIBREF7 on the Ubuntu dataset."
],
[
"By only keeping a static shortlist of boring responses or tokens, one basically assumes that we humans should decide which tokens are dull. However, we argue that we should instead look from the model's perspective to identify dull tokens, because even if the model outputs a word that we consider rare, including it in too many responses is still considered a dull behavior. Motivated by this thought experiment, we propose a novel metric, Average Output Probability Distribution (AvgOut), that dynamically keeps track of which tokens the model is biased toward. To calculate this, during training, we average out all the output probability distributions for each time step of the decoder for the whole mini-batch. The resulting vector $D^{\\prime }$ will reflect each token's probability of being generated from the model's perspective. Note that we do not use discrete ground-truth tokens to evaluate the model's bias, because there is a fine distinction between the two: a statistics of frequency on ground-truth tokens is an evaluation of the corpus's bias, while AvgOut is an evaluation of what bias the model has learned because by generating dull responses more frequently than the training corpus has, it is the model itself that we should adjust. Also note that the reason we take the average is that a single output distribution will largely depend on the context and the previous target tokens (which are fed as inputs to the decoder during training), but on average the distribution should be a faithful evaluation on which words are more likely to be generated from model's perspective.",
"To avoid batches that have AvgOut significantly different from those of other batches, which would lead the model astray, we keep the exponential average of this metric across batches to make it less biased toward any specific batch. Let it be $D$. After training on a mini-batch and obtain $D^{\\prime }$, we update $D$ like the following:",
"where $\\gamma $ is $0.01$ in our experiments.",
"Another consideration of AvgOut is that theoretically we can have two choices. The first is to use the output distributions when we are teacher-forcing (i.e., only feeding ground-truth tokens); the other is to let the model use its own predictions during greedy/beam-search decoding or sampling. We reason that the former is a much better estimation of the model's bias, because the latter will result in a cascading enlargement of the model bias due to the auto-regressive nature of LSTM-RNN models (i.e., the tokens fed to the decoder are themselves also polluted by the model's bias). Our early experimental results also agreed with the above reasoning.",
"Although we try to come up with the most faithful evaluation of how diverse a response is, our approach certainly has its drawbacks too. For example, using very frequent words but less frequent combinations of them may result in a good response which will be penalized by our metric. A natural solution to this is to also use bigram and trigram diversities and take a linear combination of them, which on a high-level is similar to BLEU BIBREF8. However, considering even bigram distribution takes up $O(|V|^2)$ space and calculation time, hence we did not try it due to limited resources. However, as will be presented in Section SECREF5, regularizing unigram distributions can already greatly help on higher-gram diversities, while also improving relevance."
],
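Editorial sketch (not part of the paper's data or released code): the AvgOut bookkeeping described in the section above can be implemented as a per-batch average of the decoder's softmax outputs followed by an exponential moving average across batches. Array shapes, the masking scheme, and the direction of the gamma mixing are assumptions; only gamma = 0.01 comes from the text.

```python
import numpy as np

def batch_avgout(decoder_logits, target_mask):
    """Average the decoder's softmax distributions over all unmasked time steps
    of a mini-batch, yielding D' of shape (vocab_size,)."""
    # decoder_logits: (batch, time, vocab); target_mask: (batch, time), 1 for real tokens
    logits = decoder_logits - decoder_logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)                            # softmax per step
    mask = target_mask[..., None]
    return (probs * mask).sum(axis=(0, 1)) / mask.sum()                   # masked average -> D'

def update_avgout(D, D_prime, gamma=0.01):
    """Exponential moving average across batches; whether gamma weights the new
    batch or the running average is an assumption (gamma = 0.01 is from the text)."""
    return (1.0 - gamma) * D + gamma * D_prime

# toy usage
vocab = 8
D = np.full(vocab, 1.0 / vocab)                # start from a uniform distribution
logits = np.random.randn(2, 5, vocab)          # (batch=2, time=5, vocab)
mask = np.ones((2, 5))
D = update_avgout(D, batch_avgout(logits, mask))
print(D.sum())                                 # still a probability vector (sums to 1)
```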
[
"AvgOut can play at least three roles. First, it can be used to directly supervise output distribution during training; second, it can be used as a prior in labeled sequence transduction methods to control diversity of the generated response; and third, it can be used as a reward signal for Reinforcement Learning to encourage diverse sampled responses. In this section, we begin with a base vanilla seq2seq model, and next present our three models to diversify responses based on AvgOut.",
"Our base model LSTM is identical to that proposed by BIBREF1 (BIBREF1), which consists of a single-layer bi-directional LSTM-RNN BIBREF9 encoder and a single-layer LSTM-RNN decoder with additive attention."
],
[
"Our MinAvgOut model (Figure FIGREF3) directly integrates AvgOut into the loss function by summarizing it into a single numerical value named Continuous-AvgOut. We do this by taking the dot-product of $D$ and $D^{\\prime }$ (Figure FIGREF6). The intuition behind this simple calculation is that $D$ can also be viewed as a set of weights which add up to $1.0$, since it is a probability vector. By taking the dot product, we are actually calculating a weighted average of each probability in $D^{\\prime }$. To evaluate how diverse the model currently is, the duller tokens should obviously carry higher weights since they contribute more to the \"dullness\" of the whole utterance. Assuming that $D$ is a column vector, the continuous diversity score is $B_c$, and the resulting extra loss term is $L_B$, the total loss $L$ is given by:",
"where $\\alpha $ is a coefficient to balance the regularization loss with the maximum likelihood loss (a.k.a. teacher forcing loss) $L_{ML}$. This is important because the regularization term continues to discourage the model from generating the ground-truth token, which we need to balance by ML loss to reduce the impact (otherwise the model will be led astray). Note that since $D$ is a moving average which does not depend on the model parameters of the current mini-batch, only $D^{\\prime }$ will result in gradient flow during back-propagation, which is what we intend."
],
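Editorial sketch: the Continuous-AvgOut penalty above takes the dot product of the running distribution D and the current batch's D' as a dullness score; how it enters the total loss is not reproduced in this dump, so the additive form with weight alpha below is an assumption (the value alpha = 100.0 appears later in the training details).

```python
import numpy as np

def continuous_avgout_penalty(D, D_prime):
    """Dullness score: weighted average of the current batch's token probabilities,
    weighted by the running average D, so dull tokens carry larger weights."""
    return float(np.dot(D, D_prime))

def minavgout_total_loss(ml_loss, D, D_prime, alpha=100.0):
    """Assumed total loss = teacher-forcing loss + alpha * dullness penalty.
    D is treated as a constant, mirroring the text (no gradient flows through it)."""
    return ml_loss + alpha * continuous_avgout_penalty(D, D_prime)

# toy usage: a peaked D' (dull batch) is penalized more than a flat one
D = np.array([0.6, 0.2, 0.1, 0.1])             # running average biased toward token 0
dull = np.array([0.9, 0.05, 0.03, 0.02])       # batch that keeps generating token 0
flat = np.full(4, 0.25)
print(continuous_avgout_penalty(D, dull), continuous_avgout_penalty(D, flat))
```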
[
"We also borrow the continuous version of the Label-Fine-Tuning (LFT) model from BIBREF10 (BIBREF10), which is an extension of the discrete labeled sequence transduction methods BIBREF11. The LFT model leverages a continuous label to serve as a prior for generating the target sequence. This label corresponds to an embedding just like a normal token does, but can be scaled by a continuous value. This model is applicable to our case because the diversity score of a response can also be viewed as a style, ranging from $0.0$ to $1.0$. Specifically, we add to the vocabulary a diversity label and scale its embedding vector with the intended diversity score of the target sequence. During training, this score is obtained by evaluating the diversity of the ground-truth target sequence (see Figure FIGREF8); during test time, we instead feed the model a diversity label scaled by a score of our choice (i.e., when we want the model to generate a more diverse response, we scale the label's embedding by a higher score, while to generate a duller response, we scale the embedding by a lower one)."
],
[
"We also explore a model (see Figure FIGREF11) which regularizes on the discrete token level, because merely monitoring output probability distribution may ignore certain bad styles such as repetition (e.g. \"I don't don't know.\"). We use Discrete-AvgOut to calculate the continuous diversity score of a discrete sequence. Let $\\lbrace G_1, G_2, ..., G_{N_G}\\rbrace $ be a sequence of $N_G$ tokens sampled by the model during training. Then from $D$, we extract the probabilities $\\lbrace P_1, P_2, ..., P_{N_G}\\rbrace $ corresponding to each generated token. The diversity score $B_{d}$ on these discrete tokens will be:",
"where $N_{unique}$ is the number of unique tokens in the sampled sequence (see Figure FIGREF12). Note that this division explicitly discourages the model from outputting repeated tokens, because when that happens, the nominator will stay the same, while the denominator will decrease, resulting in a lower diversity score. Also note that MinAvgOut can be complementary to RL since calculating diversity scores based on discrete tokens unavoidably loses valuable information from the output distribution before argmax is taken. In Section SECREF5, we show with both automatic and human evaluations that this combination indeed achieves the best results among our models. Following BIBREF12 (BIBREF12), our loss function consists of two terms. The first term is the Maximum Likelihood loss ($L_{\\textsc {ml}}$); the other is the Reinforcement Learning loss ($L_{\\textsc {rl}}$). The total loss $L$ is then:",
"where $\\beta $ is a hyperparameter indicating how much weight we want to assign to the rl part of the loss, $x$ is the source sequence, $\\lbrace y_t^*\\rbrace $ are the ground truth tokens and $\\lbrace y_t^s\\rbrace $ are the sampled tokens. We use a policy gradient method BIBREF13 to calculate the RL loss. Specifically, we sample a response for each context $x$, and assign to it a reward $R$, which is equal to $B_d$ because we want to encourage the model to be diverse. We also use a baseline $R_b$ that helps reduce variance during training BIBREF14. In our case this baseline is again the exponential average of all $B_d$ in previous mini-batches."
],
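Editorial sketch of Discrete-AvgOut: following the worked example in the Figure 5 caption (Bd = 1.0 - (0.05 + 0.2 + 0.1 + 0.01)/4), the score sums the sampled tokens' probabilities under D and divides by the number of unique tokens; how repeated tokens are counted in the numerator is my reading, not an equation quoted here.

```python
import numpy as np

def discrete_avgout_diversity(sampled_ids, D):
    """B_d = 1 - (sum of each sampled token's probability under D) / N_unique,
    matching the worked example B_d = 1.0 - (0.05 + 0.2 + 0.1 + 0.01) / 4."""
    probs = D[np.asarray(sampled_ids)]          # look up P_i for every sampled token
    n_unique = len(set(sampled_ids))            # repeated tokens shrink the denominator
    return 1.0 - probs.sum() / n_unique

# toy usage: repetition lowers the score, which in the RL model lowers the reward
D = np.array([0.05, 0.2, 0.1, 0.01, 0.64])      # running AvgOut distribution
print(discrete_avgout_diversity([0, 1, 2, 3], D))   # 1 - 0.36/4 = 0.91
print(discrete_avgout_diversity([1, 1, 1, 1], D))   # 1 - 0.80/1 = 0.20 (dull, repetitive)
```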
[
"We use the task-oriented Ubuntu Dialogue dataset BIBREF15, because it not only has F1 metrics to evaluate the relevance of responses, but the dialogues in them are also open-ended to allow enough space for diversity. We also chose this dataset because previous work, e.g., HRED BIBREF3 and VHRED BIBREF7 both used Ubuntu to showcase their diversity-promotion models. Due to the popularity of this dataset, we were able to reproduce almost all models on this same dataset and have a meaningful comparison on their effectiveness of eliminating dullness. As future work, we plan to apply our models to other datasets where diversity is desired."
],
[
"To measure the relevance of the model responses, we follow BIBREF16 (BIBREF16) and evaluate on F1's for both activities (technical verbs, e.g., \"upload\", \"install\") and entities (technical nouns, e.g., \"root\", \"internet\"). The F1's are computed by mapping the ground-truth and model responses to their corresponding activity-entity representations BIBREF16, who considered F1 to be \"particularly suited for the goal-oriented Ubuntu Dialogue Corpus\". We did not evaluate on BLEU score BIBREF8 because BIBREF17 showed that BLEU does not correlate well with dialogue quality. BIBREF18 (BIBREF18) also made similar observations on BLEU. To evaluate diversity, we employ two evaluation metrics from previous work, namely Distinct-1 and Distinct-2 BIBREF2. These are the ratios between the number of unique tokens and all tokens for unigrams and bigrams, respectively. In addition, we propose a novel diversity graph and its corresponding metric, which we name Diversity-32 and Diversity-AUC, respectively. We gather statistics of sentence, unigram, bigram and trigram, and sort their normalized frequencies from highest to lowest. Observing that all four graphs follow long-tail distributions, we only keep the highest 32 frequencies and plot them. We then calculate one minus the Area under Curve (Diversity-AUC) for each graph, which draws a high-level picture of how diverse a model is."
],
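Editorial sketch of Diversity-iAUC: per the introduction, it is one minus the sum of the normalized frequencies of the most frequent items (the top 32 in the paper); the toy example below uses a smaller cutoff only so that the two tiny corpora differ.

```python
from collections import Counter

def diversity_iauc(tokens, top_k=32):
    """Diversity-iAUC: one minus the summed normalized frequencies of the top_k most
    frequent items (unigrams here; the same routine applies to sentences, bigrams,
    or trigrams by changing what `tokens` contains)."""
    counts = Counter(tokens)
    total = sum(counts.values())
    top = [c / total for _, c in counts.most_common(top_k)]
    return 1.0 - sum(top)

# toy usage: a response set dominated by "i do n't know" scores lower
dull = "i do n't know . i do n't know . i do n't know .".split()
varied = "sure , try sudo apt-get install nvidia drivers then reboot the machine".split()
print(diversity_iauc(dull, top_k=3), diversity_iauc(varied, top_k=3))   # 0.4 vs 0.75
```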
[
"Although we proposed the effective AvgOut metric, we did find that the model sometimes still cheats to gain higher automatic diversity score. For example, as can be seen in the selected output examples (Section SECREF5), the model tends to generate words with typo since these are rarer tokens as compared to their correct counterparts. This is unavoidable for noisy datasets like Ubuntu. Thus, without human evaluation, we can never be sure if our models are good or they only look good because our metrics are exploited.",
"We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To our best knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. The utterances were randomly shuffled to anonymize model identity. We only allowed annotators located in the US-located with at least an approval rate of $98\\%$ and $10,000$ approved HITs. We collected 100 annotations in total after rejecting those completed by people who assign exactly the same score to all model responses. Since we evaluated 7 models, we collected 700 annotations in total, which came from a diverse pool of annotators."
],
[
"For each of the three models, the hidden size of the encoder is 256, while the decoder hidden size is 512. For MinAvgOut, the coefficient of the regularization loss term $\\alpha $ is $100.0$; For LFT, during inference we feed a score of $0.015$ since it achieves a good balance between response coherence and diversity. For RL, the coefficient of the RL term $\\beta $ is $100.0$. For the hybrid model MinAvgOut + RL, $\\alpha $ and $\\beta $ share a coefficient of $50.0$."
],
[
"We employ several complementary metrics to capture different aspects of the model. The F1 results are shown in Table TABREF24. Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. One might expect that minimizing AVGOUT causes the models to move further away from the ground-truth tokens, so that it will hurt relevance. However, our F1 results show that as the responses become more diverse, they are more likely to include information more related and specific to the input contexts, which actually makes the model gain on both diversity and relevance. This will be further confirmed by the output examples in Table TABREF29.",
"We also present Diversity-32 graphs (Figure FIGREF16) and report Diversity-AUC as well as Distinct-1 and -2 for each model (Table TABREF25). We can see that all our models have significantly better sentence-level diversity than VHRED, let alone LSTM. For unigram diversity, they are also better than LSTM, though hard to distinguish from VHRED. Both bigram and trigram graphs reveal that all models are more diverse than LSTM, except that RL shows lower diversity than the other models, which agree with our F1 results. Note that since our models are only trained based on unigram output distributions, the bigram and trigram diversities are still far away from that of the ground-truth, which points to future direction. That said, the table does show that encouraging unigram diversity can already have positive influence on higher grams as well. Also note that the hybrid model (last row) does not achieve the best result in terms of diversity. We hypothesize that this is because RL, which is usually harder to optimize than ML losses, faces exacerbated issues when combined with a strong MinAvgOut loss, which tries to pull the model output distribution away from the token distribution in the training corpus.",
"Neither Distinct-1 nor -2 correlates well with our observation and evaluation of diversity and relevance. We reason that this is because these metrics only capture how many distinct tokens are used rather than each token's frequency, which is easier to be exploited because whether each token is used unnecessarily often (a strong sign of dullness) is not reflected in this measure."
],
[
"As mentioned in experimental setup, we conducted human evaluations on our models for both Plausibility and Content Richness, as well as calculating their average (to show overall score) and their difference (to show balance between the two criteria) (Table TABREF26). We can see from the table that all our models are statistically significantly better than the baseline models on both Plausibility and Content Richness, except that RL is slightly weaker on Content Richness, which agrees with the trend in automatic evaluations. Although MinAvgOut+RL model only ranks the second on average score (statistically equivalent to MinAvgOut) in human evaluation, it achieves a good balance, and it also ranks the second in automatic diversity and the first in F1 values. We thus consider it to be our best model."
],
[
"We present two selected examples of generated responses from the investigated models (Table TABREF29). We can see that all our models learn to attend well to the context, generating coherent and informative responses."
],
[
"Multiple metrics and approaches have been proposed to measure dialogue diversity. Some focus more on how similar the responses are to the ground-truth sequences, such as Word Error Rate BIBREF3 and BLEU BIBREF20, while the others explicitly have diversity in mind when being created, such as Distinct-1 and -2 BIBREF2. The key difference between AvgOut and the previous work is that first, our metric is dynamic with no feature-engineering; second, ours is versatile enough to be applied to both continuous distributions and discrete sequences, while theirs are only for discrete tokens; third, ours can be used for both sentence-level and corpus-level evaluation, while theirs are only meaningful as corpus-level metrics because they measure the extent of repetition across responses rather than for a single response."
],
[
"Researchers have different opinions on why dull responses are generated, which lead to various solutions. They can be roughly divided into four categories. The first category considers using conditional likelihood as a decoding objective the culprit BIBREF5, BIBREF2, BIBREF21, BIBREF22. They thus focus on improving the decoding algorithm during training. The second category traces the cause of the low-diversity problem back to the lack of model variability. They then adopt Variational Autoencoders and rely on sampling from a latent random variable as an additional prior to the decoder BIBREF7, BIBREF23, BIBREF24. The third category thinks that the issue is a lack of universal background knowledge and common sense beyond the input context. They consequently aim to integrate prior knowledge into the generation process BIBREF25, BIBREF26, BIBREF27, BIBREF28. The fourth category believes that the underlying model itself needs improvement. Some use hierarchical LSTM-RNN to encourage the model to capture high-level context BIBREF3; some use more advanced attention mechanism such as multi-head attention BIBREF29; and some use either more complicated architectures or models prone to degeneracies, such as Generative Adversarial Networks BIBREF30, Deep Reinforcement Learning BIBREF6 and Mixture Models BIBREF31. Our RL model has the same architecture as the Reinforcement Learning model, except with different rewards. BIBREF32 (BIBREF32) consider the reason for dull responses as the model's over-confidence. They then propose to add to the loss function a regularization term to maximize the entropy of the output probability distribution. Interestingly, they only proposed this simple approach rather than actually implementing it. Our MinAvgOut approach is related to their idea. Our approach is also related to posterior regularization BIBREF33, BIBREF34, BIBREF35, but our work is neural-based."
],
[
"We proposed a novel measure AvgOut to dynamically evaluate how diverse a model or a response is based on the models' own parameters, which themselves evolve during training. We then leveraged this effective measure to train three models, plus a hybrid model, to eliminate dull responses for dialogue generation tasks. In addition, we designed novel automatic metrics to evaluate the trained models on diversity, in addition to the ones from previous work. Both automatic and human evaluations consolidated that our models are able to generate more diverse and relevant responses, even when compared with state-of-the-art approaches. For future work, we plan to apply these models to different generative tasks where diversity is desired."
],
[
"We thank the reviewers for their helpful comments. This work was supported by NSF-CAREER Award #1846185, ONR #N00014-18-1-2871, and awards from Google, Facebook, Salesforce (views are not of the funding agency)."
]
],
"section_name": [
"Introduction",
"AvgOut as an Effective Diversity Metric",
"Three Models to Leverage AvgOut",
"Three Models to Leverage AvgOut ::: Regularization by Minimizing Continuous-AvgOut",
"Three Models to Leverage AvgOut ::: Label-Fine-Tuning Model",
"Three Models to Leverage AvgOut ::: Reward-Based Reinforcement Learning",
"Experimental Setup ::: Dataset and Task",
"Experimental Setup ::: Automatic Evaluation",
"Experimental Setup ::: Human Evaluation",
"Experimental Setup ::: Training Details",
"Results and Analysis ::: Automatic Evaluation Results",
"Results and Analysis ::: Human Evaluation Results",
"Results and Analysis ::: Selected Output Examples",
"Related Work ::: Measurements of Response Diversity",
"Related Work ::: Diversity-Promoting Dialogue Models",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"58eb36e018db5f54effe1a5c0708afa5e6517db0"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI."
],
"extractive_spans": [],
"free_form_answer": "LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"eba9739ba030a175b54715e76f3371844b42baaf"
],
"answer": [
{
"evidence": [
"We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To our best knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. The utterances were randomly shuffled to anonymize model identity. We only allowed annotators located in the US-located with at least an approval rate of $98\\%$ and $10,000$ approved HITs. We collected 100 annotations in total after rejecting those completed by people who assign exactly the same score to all model responses. Since we evaluated 7 models, we collected 700 annotations in total, which came from a diverse pool of annotators."
],
"extractive_spans": [],
"free_form_answer": "Through Amazon MTurk annotators to determine plausibility and content richness of the response",
"highlighted_evidence": [
"We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To our best knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"221e0c560bc7afb1fd71b1fec7242065cd0c957e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI."
],
"extractive_spans": [],
"free_form_answer": "on diversity 6.87 and on relevance 4.6 points higher",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"db65791ce84319eb969dcc077c281e857cd35b3b"
],
"answer": [
{
"evidence": [
"We employ several complementary metrics to capture different aspects of the model. The F1 results are shown in Table TABREF24. Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. One might expect that minimizing AVGOUT causes the models to move further away from the ground-truth tokens, so that it will hurt relevance. However, our F1 results show that as the responses become more diverse, they are more likely to include information more related and specific to the input contexts, which actually makes the model gain on both diversity and relevance. This will be further confirmed by the output examples in Table TABREF29."
],
"extractive_spans": [],
"free_form_answer": "the hybrid model MinAvgOut + RL",
"highlighted_evidence": [
"Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"To what other competitive baselines is this approach compared?",
"How is human evaluation performed, what was the criteria?",
"How much better were results of the proposed models than base LSTM-RNN model?",
"Which one of the four proposed models performed best?"
],
"question_id": [
"4b8a0e99bf3f2f6c80c57c0e474c47a5ee842b2c",
"a09633584df1e4b9577876f35e38b37fdd83fa63",
"5e9732ff8595b31f81740082333b241d0a5f7c9a",
"58edc6ed7d6966715022179ab63137c782105eaf"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: MinAvgOut model: use the dot product of average output distribution of the exponential average and the current batch to evaluate how diverse the current batch is.",
"Figure 2: An example of AVGOUT applied to a single token, which readily generalizes to multiple tokens within a response. We calculate diversity score of a continuous distribution through dot product. We sum up the values in the last graph. Note that although “turf ” (the fourth word from the right in all three sub-figures) has higher probability in the current batch D′, it still contributes less than the word “is” to the overall diversity measure when taking the dot product, due to its low probability in the exponential average distribution D (i.e., lower weights). All probabilities are for illustration purpose and do not correspond to distributions from our models.",
"Figure 3: LFT model: the diversity label is scaled by the diversity score of the ground-truth target during training.",
"Figure 4: RL model: the diversity score of the sampled response is fed back to the model as reward signal.",
"Figure 5: Diversity score calculation for a discrete sequence: Bd = 1.0− (0.05 + 0.2 + 0.1 + 0.01)/4.",
"Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI.",
"Figure 6: Diversity-32 graphs of all models. Curves with lower AUC correspond to more diverse models.",
"Table 2: Automatic Evaluation results for the baselines and our proposed models (“iAUC” means “inverted AUC”, or “1 - AUC”; “attn” means “with attention”; “s”, “1”, “2” and “3” correspond to “sentence-level”, “unigram”, “bigram” and “trigram”, respectively; iAUC-avg is the average of all the other AUC columns). Best results are boldfaced. We do not calculate p-value because it does not apply to corpus-level metrics.",
"Table 3: Human Evaluation results for all the models we produce on Plausibility, Richness, average of the two, and scaled difference (difference between them divided by their average). Best Results are boldfaced. Note that for the last column, lower is better since we want balance between Plausibility and Content Richness. All results are pair-wise statistically significantly different with p < 0.05, except between MINAVGOUT and RL on Plausibility, and between MINAVGOUT and MINAVGOUT+RL on Average.",
"Table 4: Selected output examples from all models. Context-X and -Y are given as model inputs during inference."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png",
"5-Table1-1.png",
"5-Figure6-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png"
]
} | [
"To what other competitive baselines is this approach compared?",
"How is human evaluation performed, what was the criteria?",
"How much better were results of the proposed models than base LSTM-RNN model?",
"Which one of the four proposed models performed best?"
] | [
[
"2001.05467-5-Table1-1.png"
],
[
"2001.05467-Experimental Setup ::: Human Evaluation-1"
],
[
"2001.05467-5-Table1-1.png"
],
[
"2001.05467-Results and Analysis ::: Automatic Evaluation Results-0"
]
] | [
"LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL",
"Through Amazon MTurk annotators to determine plausibility and content richness of the response",
"on diversity 6.87 and on relevance 4.6 points higher",
"the hybrid model MinAvgOut + RL"
] | 327 |
1909.09484 | Generative Dialog Policy for Task-oriented Dialog Systems | There is an increasing demand for task-oriented dialogue systems which can assist users in various activities such as booking tickets and restaurant reservations. In order to complete dialogues effectively, dialogue policy plays a key role in task-oriented dialogue systems. As far as we know, the existing task-oriented dialogue systems obtain the dialogue policy through classification, which can assign either a dialogue act and its corresponding parameters or multiple dialogue acts without their corresponding parameters for a dialogue action. In fact, a good dialogue policy should construct multiple dialogue acts and their corresponding parameters at the same time. However, it's hard for existing classification-based methods to achieve this goal. Thus, to address the issue above, we propose a novel generative dialogue policy learning method. Specifically, the proposed method uses attention mechanism to find relevant segments of given dialogue context and input utterance and then constructs the dialogue policy by a seq2seq way for task-oriented dialogue systems. Extensive experiments on two benchmark datasets show that the proposed model significantly outperforms the state-of-the-art baselines. In addition, we have publicly released our codes. | {
"paragraphs": [
[
"Task-oriented dialogue system is an important tool to build personal virtual assistants, which can help users to complete most of the daily tasks by interacting with devices via natural language. It's attracting increasing attention of researchers, and lots of works have been proposed in this area BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7.",
"The existing task-oriented dialogue systems usually consist of four components: (1) natural language understanding (NLU), it tries to identify the intent of a user; (2) dialogue state tracker (DST), it keeps the track of user goals and constraints in every turn; (3) dialogue policy maker (DP), it aims to generate the next available dialogue action; and (4) natural language generator (NLG), it generates a natural language response based on the dialogue action. Among the four components, dialogue policy maker plays a key role in order to complete dialogues effectively, because it decides the next dialogue action to be executed.",
"As far as we know, the dialogue policy makers in most existing task-oriented dialogue systems just use the classifiers of the predefined acts to obtain dialogue policy BIBREF0, BIBREF2, BIBREF4, BIBREF8, BIBREF9. The classification-based dialogue policy learning methods can assign either only a dialogue act and its corresponding parameters BIBREF10, BIBREF2, BIBREF0 or multiple dialogue acts without their corresponding parameters for a dialogue action BIBREF11. However, all these existing methods cannot obtain multiple dialogue acts and their corresponding parameters for a dialogue action at the same time.",
"Intuitively, it will be more reasonable to construct multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. For example, it can be shown that there are 49.4% of turns in the DSTC2 dataset and 61.5% of turns in the Maluuba dataset have multiple dialogue acts and their corresponding parameters as the dialogue action. If multiple dialogue acts and their corresponding parameters can be obtained at the same time, the final response of task-oriented dialogue systems will become more accurate and effective. For example, as shown in Figure FIGREF3, a user wants to get the name of a cheap french restaurant. The correct dialogue policy should generate three acts in current dialogue turn: offer(name=name_slot), inform(food=french) and inform(food=cheap). Thus, the user's real thought may be: “name_slot is a cheap french restaurant\". If losing the act offer, the system may generate a response like “There are some french restaurants\", which will be far from the user's goal.",
"To address this challenge, we propose a Generative Dialogue Policy model (GDP) by casting the dialogue policy learning problem as a sequence optimization problem. The proposed model generates a series of acts and their corresponding parameters by the learned dialogue policy. Specifically, our proposed model uses a recurrent neural network (RNN) as action decoder to construct dialogue policy maker instead of traditional classifiers. Attention mechanism is used to help the decoder decode dialogue acts and their corresponding parameters, and then the template-based natural language generator uses the results of the dialogue policy maker to choose an appropriate sentence template as the final response to the user.",
"Extensive experiments conducted on two benchmark datasets verify the effectiveness of our proposed method. Our contributions in this work are three-fold.",
"The existing methods cannot construct multiple dialogue acts and their corresponding parameters at the same time. In this paper, We propose a novel generative dialogue policy model to solve the problem.",
"The extensive experiments demonstrate that the proposed model significantly outperforms the state-of-the-art baselines on two benchmarks.",
"We publicly release the source code."
],
[
"Usually, the existing task-oriented dialogue systems use a pipeline of four separate modules: natural language understanding, dialogue belief tracker, dialogue policy and natural language generator. Among these four modules, dialogue policy maker plays a key role in task-oriented dialogue systems, which generates the next dialogue action.",
"As far as we know, nearly all the existing approaches obtain the dialogue policy by using the classifiers of all predefined dialogue acts BIBREF12, BIBREF13. There are usually two kinds of dialogue policy learning methods. One constructs a dialogue act and its corresponding parameters for a dialogue action. For example, BIBREF0 constructs a simple classifier for all the predefined dialogue acts. BIBREF2 build a complex classifier for some predefined dialogue acts, addtionally BIBREF2 adds two acts for each parameter: one to inform its value and the other to request it. The other obtains the dialogue policy by using multi-label classification to consider multiple dialogue acts without their parameters. BIBREF11 performs multi-label multi-class classification for dialogue policy learning and then the multiple acts can be decided based on a threshold. Based on these classifiers, the reinforcement learning can be used to further update the dialogue policy of task-oriented dialogue systems BIBREF3, BIBREF14, BIBREF9.",
"In the real scene, an correct dialogue action usually consists of multiple dialogue acts and their corresponding parameters. However, it is very hard for existing classification-based dialogue policy maker to achieve this goal. Thus, in this paper we propose a novel generative dialogue policy maker to address this issue by casting the dialogue policy learning problem as a sequence optimization problem."
],
[
"Seq2Seq model was first introduced by BIBREF15 for statistical machine translation. It uses two recurrent neural networks (RNN) to solve the sequence-to-sequence mapping problem. One called encoder encodes the user utterance into a dense vector representing its semantics, the other called decoder decodes this vector to the target sentence. Now Seq2Seq framework has already been used in task-oriented dialog systems such as BIBREF4 and BIBREF1, and shows the challenging performance. In the Seq2Seq model, given the user utterance $Q=(x_1, x_2, ..., x_n)$, the encoder squeezes it into a context vector $C$ and then used by decoder to generate the response $R=(y_1, y_2, ..., y_m)$ word by word by maximizing the generation probability of $R$ conditioned on $Q$. The objective function of Seq2Seq can be written as:",
"In particular, the encoder RNN produces the context vector $C$ by doing calculation below:",
"The $h_t$ is the hidden state of the encoder RNN at time step $t$ and $f$ is the non-linear transformation which can be a long-short term memory unit LSTM BIBREF16 or a gated recurrent unit GRU BIBREF15. In this paper, we implement $f$ by using GRU.",
"The decoder RNN generates each word in reply conditioned on the context vector $C$. The probability distribution of candidate words at every time step $t$ is calculated as:",
"The $s_t$ is the hidden state of decoder RNN at time step $t$ and $y_{t-1}$ is the generated word in the reply at time $t-1$ calculated by softmax operations."
],
[
"Attention mechanisms BIBREF17 have been proved to improved effectively the generation quality for the Seq2Seq framework. In Seq2Seq with attention, each $y_i$ corresponds to a context vector $C_i$ which is calculated dynamically. It is a weighted average of all hidden states of the encoder RNN. Formally, $C_i$ is defined as $C_i=\\sum _{j=1}^{n} \\alpha _{ij}h_j$, where $\\alpha _{ij}$ is given by:",
"where $s_{i-1}$ is the last hidden state of the decoder, the $\\eta $ is often implemented as a multi-layer-perceptron (MLP) with tanh as the activation function."
],
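Editorial sketch of the additive attention just described: scores come from an MLP with tanh over the previous decoder state and each encoder state, weights from a softmax, and the context vector from the weighted sum. The projection size and weight names are illustrative, not from the paper.

```python
import numpy as np

def additive_attention(s_prev, H, Wq, Wk, v):
    """Additive (Bahdanau-style) attention: e_j = v^T tanh(Wq s_{i-1} + Wk h_j),
    alpha = softmax(e), context C_i = sum_j alpha_j h_j.
    Shapes: s_prev (d,), H (n, d) encoder states, Wq and Wk (a, d), v (a,)."""
    e = v @ np.tanh((Wq @ s_prev)[:, None] + Wk @ H.T)   # (n,) unnormalized scores
    e = e - e.max()                                      # numerical stability
    alpha = np.exp(e) / np.exp(e).sum()                  # attention weights over encoder steps
    return alpha @ H, alpha                              # context vector and weights

# toy usage with random parameters
rng = np.random.default_rng(0)
d, a, n = 4, 3, 6
s_prev, H = rng.normal(size=d), rng.normal(size=(n, d))
Wq, Wk, v = rng.normal(size=(a, d)), rng.normal(size=(a, d)), rng.normal(size=a)
context, alpha = additive_attention(s_prev, H, Wq, Wk, v)
print(alpha.sum(), context.shape)                        # ~1.0 over 6 weights, (4,)
```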
[
"Figure FIGREF13 shows the overall system architecture of the proposed GDP model. Our model contains five main components: (1) utterance encoder; (2) dialogue belief tracker; (3) dialogue policy maker; (4) knowledge base; (5) template-based natural language generator. Next, we will describe each component of our proposed GDP model in detail."
],
[
"Given the user utterance $U_t$ at turn $t$ and the dialogue context $C_{t-1}$ which contains the result of the dialogue belief tracker at turn $t-1$, the task-oriented dialog system needs to generate user's intents $C_t$ by dialogue belief tracker and then uses this information to get the knowledge base query result $k_t \\in \\mathbb {R}^k$. Then the model needs to generate the next dialogue action $A_t$ based on $k_t$, $U_t$ and $C_t$. The natural language generator provides the template-based response $R_t$ as the final reply by using $A_t$. The $U_t$ and $C_t$ are the sequences, $k_t$ is a one-hot vector representing the number of the query results. For baselines, in this paper, the $A_t$ is the classification result of the next dialogue action, but in our proposed model it's a sequence which contains multiple acts and their corresponding parameters."
],
[
"A bidirectional GRU is used to encode the user utterance $U_t$, the last turn response $R_{t-1}$ made by the system and the dialogue context $C_{t-1}$ into a continuous representation. The vector is generated by concatenating the last forward and backward GRU states. $U_t = (w_1, w_2, ..., w_{T_m})$ is the user utterance at turn $t$. $C_{t-1}=(c_1, c_2, ..., c_{T_n})$ is the dialogue context made by dialogue belief tracker at $t-1$ turn. $R_{t-1}$ is the response made by our task-oriented dialogue system at last turn. Then the words of $[C_{t-1}, R_{t-1}, U_t]$ are firstly mapped into an embedding space and further serve as the inputs of each step to the bidirectional GRU. Let $n$ denotes the number of words in the sequence $[C_{t-1}, R_{t-1}, U_t]$. The $\\overrightarrow{h_{t^{\\prime }}^u}$ and $\\overleftarrow{h_{t^{\\prime }}^u}$ represent the forward and backward GRU state outputs at time step $t^{\\prime }$. The encoder output of timestep $i$ denote as $\\overline{h_i^u}$.",
"where $e([C_{t-1}, R_{t-1}, U_t])$ is the embedding of the input sequence, $d_h$ is the hidden size of the GRU. $H_u$ contains the encoder hidden state of each timestep, which will be used by attention mechanism in dialogue policy maker."
],
[
"Dialogue state tracker maintains the state of a conversation and collects the user's goals during the dialogue. Recent work successfully represents this component as discriminative classifiers. BIBREF5 verified that the generation is a better way to model the dialogue state tracker.",
"Specifically, we use a GRU as the generator to decode the $C_t$ of current turn. In order to capture user intent information accurately, the basic attention mechanism is calculated when the decoder decodes the $C_t$ at each step, which is the same as the Eq. (DISPLAY_FORM12).",
"where $m$ is the length of $C_t$, $e(y_i)$ is the embedding of the token, $d_h$ is the hidden size of the GRU and the hidden state at $i$ timestep of the RNN in dialogue state tracker denote as $h_i^d$. The decoded token at step $i$ denotes as $y_i^d$."
],
[
"Knowledge base is a database that stores information about the related task. For example, in the restaurant reservation, a knowledge base stores the information of all the restaurants, such as location and price. After dialogue belief tracker, the $C_t$ will be used as the constraints to search the results in knowledge base. Then the one-hot vector $k_t$ will be produced when the system gets the number of the results.",
"The search result $k_t$ has a great influence on dialogue policy. For example, if the result has multiple matches, the system should request more constraints of the user. In practice, let $k_t$ be an one-hot vector of 20 dimensions to represent the number of query results. Then $k_t$ will be used as the cue for dialogue policy maker."
],
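Editorial sketch: encoding the knowledge-base result count as the 20-dimensional one-hot cue k_t. How counts larger than the last bucket are handled is not stated in the text, so clipping to the final slot is an assumption.

```python
import numpy as np

def kb_count_to_onehot(num_results, dims=20):
    """Encode the number of knowledge-base matches as a one-hot vector k_t of size
    `dims`; counts at or beyond the last bucket share the final slot (assumption)."""
    k_t = np.zeros(dims)
    k_t[min(num_results, dims - 1)] = 1.0
    return k_t

# toy usage: 0 matches, 3 matches, and "many" matches map to different cues
print(kb_count_to_onehot(0).argmax(), kb_count_to_onehot(3).argmax(), kb_count_to_onehot(57).argmax())
```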
[
"In task-oriented dialogue systems, supervised classification is a straightforward solution for dialogue policy modeling. However, we observe that classification cannot hold enough information for dialogue policy modeling. The generative approach is another way to model the dialogue policy maker for task-oriented dialogue systems, which generates the next dialogue acts and their corresponding parameters based on the dialogue context word by word. Thus the generative approach converts the dialogue policy learning problem into a sequence optimization problem.",
"The dialogue policy maker generates the next dialogue action $A_t$ based on $k_t$ and $[H_u, H_d]$. Our proposed model uses the GRU as the action decoder to decode the acts and their parameters for the response. Particularly, at step $i$, for decoding $y_i^p$ of $A_t$, the decoder GRU takes the embedding of $y_{i-1}^p$ to generate a hidden vector $h_i^p$. Basic attention mechanism is calculated.",
"where $e$ is the embedding of the token, $c_u$ is the context vector of the input utterance and $c_d$ is the context vector of the dialogue state tracker. $h_i^p$ is the hidden state of the GRU in dialogue policy maker at $i$ timestep.",
"where $y_i^p$ is the token decoded at $i$ timestep. And the final results of dialogue policy maker denote as $A_t$, and the $k$ is the length of it. In our proposed model, the dialogue policy maker can be viewed as a decoder of the seq2seq model conditioned on $[C_{t-1},R_{t-1},U_t]$ and $k_t$."
],
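Editorial sketch: the key difference from classification-based policy makers is that A_t is decoded token by token (acts interleaved with their parameters). The toy act vocabulary and the stand-in step function below are invented for illustration; the real decoder is a GRU with attention over [H_u, H_d] conditioned on k_t.

```python
import numpy as np

ACT_VOCAB = ["offer", "inform", "request", "name_slot", "food", "french", "cheap", "<eos>"]

def greedy_decode_policy(step_fn, max_len=10):
    """Generate the dialogue action A_t as a token sequence rather than a class label.
    `step_fn` stands in for the GRU decoder with attention: it maps the previous
    token id and hidden state to (logits over ACT_VOCAB, new hidden state)."""
    tokens, prev, h = [], ACT_VOCAB.index("<eos>"), np.zeros(len(ACT_VOCAB))
    for _ in range(max_len):
        logits, h = step_fn(prev, h)
        prev = int(np.argmax(logits))          # greedy choice of the next act/parameter token
        if ACT_VOCAB[prev] == "<eos>":
            break
        tokens.append(ACT_VOCAB[prev])
    return tokens

# toy stand-in decoder that emits a fixed act sequence, for illustration only
script = iter(["offer", "name_slot", "inform", "food", "french", "<eos>"])
def toy_step(prev_id, h):
    logits = np.zeros(len(ACT_VOCAB))
    logits[ACT_VOCAB.index(next(script))] = 1.0
    return logits, h

print(greedy_decode_policy(toy_step))   # ['offer', 'name_slot', 'inform', 'food', 'french']
```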
[
"After getting the dialogue action $A_t$ by the learned dialogue policy maker, the task-oriented dialogue system needs to generate an appropriate response $R_t$ for users. We construct the natural language generator by using template sentences. For each dataset, we extract all the system responses, then we manually modify responses to construct the sentence templates for task-oriented dialogue systems. In our proposed model, the sequence of the acts and parameters $A_t$ will be used for searching appropriate template. However, the classification-based baselines use the categories of acts and their corresponding parameters to search the corresponding template."
],
[
"In supervised learning, because our proposed model is built in a seq2seq way, the standard cross entropy is adopted as our objective function to train dialogue belief tracker and dialogue policy maker.",
"After supervised learning, the dialogue policy can be further updated by using reinforcement learning. In the context of reinforcement learning, the decoder of dialogue policy maker can be viewed as a policy network, denoted as $\\pi _{\\theta }(y_j)$ for decoding $y_j$, $\\theta $ is the parameters of the decoder. Accordingly, the hidden state created by GRU is the corresponding state, and the choice of the current token $y_j$ is an action.",
"Reward function is also very important for reinforcement learning when decoding every token. To encourage our policy maker to generate correct acts and their corresponding parameters, we set the reward function as follows: once the dialogue acts and their parameters are decoded correctly, the reward is 2; otherwise, the reward is -5; only the label of the dialogue act is decoded correctly but parameters is wrong, the reward is 1; $\\lambda $ is a decay parameter. More details are shown in Sec SECREF41. In our proposed model, rewards can only be obtained at the end of decoding $A_t$. In order to get the rewards at each decoding step, we sample some results $A_t$ after choosing $y_j$, and the reward of $y_j$ is set as the average of all the sampled results' rewards.",
"In order to ensure that the model's performance is stable during the fine-tuning phase of reinforcement learning, we freeze the parameters of user utterance and dialogue belief tracker, only the parameters of the dialogue policy maker will be optimized by reinforcement learning. Policy gradient algorithm REINFORCE BIBREF18 is used for pretrained dialogue policy maker:",
"where the $m$ is the length of the decoded action. The objective function $J$ can be optimized by gradient descent."
],
[
"We evaluate the performance of the proposed model in three aspects: (1) the accuracy of the dialogue state tracker, it aims to show the impact of the dialogue state tracker on the dialogue policy maker; (2) the accuracy of dialogue policy maker, it aims to explain the performance of different methods of constructing dialogue policy; (3) the quality of the final response, it aims to explain the impact of the dialogue policy on the final dialogue response. The evaluation metrics are listed as follows:",
"BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1.",
"APRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth.",
"BLEU BIBREF19: The metric evaluates the quality of the final response generated by natural language generator. The metric is usually used to measure the performance of the task-oriented dialogue system.",
"We also choose the following metrics to evaluate the efficiency of training the model:",
"$\\mathbf {Time_{full}}$: The time for training the whole model, which is important for industry settings.",
"$\\mathbf {Time_{DP}}$: The time for training the dialogue policy maker in a task-oriented dialogue system."
],
[
"We adopt the DSTC2 BIBREF20 dataset and Maluuba BIBREF21 dataset to evaluate our proposed model. Both of them are the benchmark datasets for building the task-oriented dialog systems. Specifically, the DSTC2 is a human-machine dataset in the single domain of restaurant searching. The Maluuba is a very complex human-human dataset in travel booking domain which contains more slots and values than DSTC2. Detailed slot information in each dataset is shown in Table TABREF34."
],
[
"For comparison, we choose two state-of-the-art baselines and their variants.",
"E2ECM BIBREF11: In dialogue policy maker, it adopts a classic classification for skeletal sentence template. In our implement, we construct multiple binary classifications for each act to search the sentence template according to the work proposed by BIBREF11.",
"CDM BIBREF10: This approach designs a group of classifications (two multi-class classifications and some binary classifications) to model the dialogue policy.",
"E2ECM+RL: It fine tunes the classification parameters of the dialogue policy by REINFORCE BIBREF18.",
"CDM+RL: It fine tunes the classification of the act and corresponding parameters by REINFORCE BIBREF18.",
"In order to verify the performance of the dialogue policy maker, the utterance encoder and dialogue belief tracker of our proposed model and baselines is the same, only dialogue policy maker is different."
],
[
"For all models, the hidden size of dialogue belief tracker and utterance encoder is 350, and the embedding size $d_{emb}$ is set to 300. For our proposed model, the hidden size of decoder in dialogue policy maker is 150. The vocabulary size $|V|$ is 540 for DSTC2 and 4712 for Maluuba. And the size of $k_t$ is set to 20. An Adam optimizer BIBREF22 is used for training our models and baselines, with a learning rate of 0.001 for supervised training and 0.0001 for reinforcement learning. In reinforcement learning, the decay parameter $\\lambda $ is set to 0.8. The weight decay is set to 0.001. And early stopping is performed on developing set."
],
[
"The experimental results of the proposed model and baselines will be analyzed from the following aspects.",
"BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker. All the models perform very well in BPRA on DSTC2 dataset. On Maluuba dataset, the BPRA decreases because of the complex domains. We can notice that BPRA of CDM is slightly worse than other models on Maluuba dataset, the reason is that the CDM's dialogue policy maker contains lots of classifications and has the bigger loss than other models because of complex domains, which affects the training of the dialogue belief tracker.",
"APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets. It can be noted that we do not compare with the E2ECM baseline in APRA. E2ECM only uses a simple classifier to recognize the label of the acts and ignores the parameters information. In our experiment, APRA of E2ECM is slightly better than our method. Considering the lack of parameters of the acts, it's unfair for our GDP method. Furthermore, the CDM baseline considers the parameters of the act. But GDP is far better than CDM in supervised learning and reinforcement learning.",
"BLEU Results: GDP significantly outperforms the baselines on BLEU. As mentioned above, E2ECM is actually slightly better than GDP in APRA. But in fact, we can find that the language quality of the response generated by GDP is still better than E2ECM, which proves that lack of enough parameters information makes it difficult to find the appropriate sentence template in NLG. It can be found that the BLEU of all models is very poor on Maluuba dataset. The reason is that Maluuba is a human-human task-oriented dialogue dataset, the utterances are very flexible, the natural language generator for all methods is difficult to generate an accurate utterance based on the context. And DSTC2 is a human-machine dialog dataset. The response is very regular so the effectiveness of NLG will be better than that of Maluuba. But from the results, the GDP is still better than the baselines on Maluuba dataset, which also verifies that our proposed method is more accurate in modeling dialogue policy on complex domains than the classification-based methods.",
"Time and Model Size: In order to obtain more accurate and complete dialogue policy for task-oriented dialogue systems, the proposed model has more parameters on the dialogue policy maker than baselines. As shown in Figure FIGREF44, E2ECM has the minimal dialogue policy parameters because of the simple classification. It needs minimum training time, but the performance of E2ECM is bad. The number of parameters in the CDM model is slightly larger than E2ECM. However, because both of them are classification methods, they all lose some important information about dialogue policy. Therefore, we can see from the experimental results that the quality of CDM's dialogue policy is as bad as E2ECM. The number of dialogue policy maker's parameters in GDP model is much larger than baselines. Although the proposed model need more time to be optimized by supervised learning and reinforcement learning, the performance is much better than all baselines."
],
[
"Table TABREF43 illustrates an example of our proposed model and baselines on DSTC2 dataset. In this example, a user's goal is to find a cheap restaurant in the east part of the town. In the current turn, the user wants to get the address of the restaurant.",
"E2ECM chooses the inform and offer acts accurately, but the lack of the inform's parameters makes the final output deviate from the user's goal. CDM generates the parameters of offer successfully, but the lack of the information of inform also leads to a bad result. By contrast, the proposed model GDP can generate all the acts and their corresponding parameters as the dialogue action. Interestingly, the final result of GDP is exactly the same as the ground truth, which verifies that the proposed model is better than the state-of-the-art baselines."
],
[
"In this paper, we propose a novel model named GDP. Our proposed model treats the dialogue policy modeling as the generative task instead of the discriminative task which can hold more information for dialogue policy modeling. We evaluate the GDP on two benchmark task-oriented dialogue datasets. Extensive experiments show that GDP outperforms the existing classification-based methods on both action accuracy and BLEU."
]
],
"section_name": [
"Introduction",
"Related Work",
"Technical Background ::: Encoder-Decoder Seq2Seq Models",
"Technical Background ::: Attention Mechanism",
"Generative Dialogue Policy",
"Generative Dialogue Policy ::: Notations and Task Formulation",
"Generative Dialogue Policy ::: Utterance Encoder",
"Generative Dialogue Policy ::: Dialogue State Tracker",
"Generative Dialogue Policy ::: Knowledge Base",
"Generative Dialogue Policy ::: Dialogue Policy Maker",
"Generative Dialogue Policy ::: Nature Language Generator",
"Generative Dialogue Policy ::: Training",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Baselines",
"Experiments ::: Parameters settings",
"Experiments ::: Experimental Results",
"Experiments ::: Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"22373b0432d3562414ad265ca283a3aa073e45c1"
],
"answer": [
{
"evidence": [
"BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1.",
"APRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth.",
"BLEU BIBREF19: The metric evaluates the quality of the final response generated by natural language generator. The metric is usually used to measure the performance of the task-oriented dialogue system."
],
"extractive_spans": [
"BPRA",
"APRA",
"BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1.\n\nAPRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth.\n\nBLEU BIBREF19: The metric evaluates the quality of the final response generated by natural language generator. The metric is usually used to measure the performance of the task-oriented dialogue system."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"615d3929516cba04f35fd6ea19e5b8858a41318a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: The performance of baselines and proposed model on DSTC2 and Maluuba dataset. T imefull is the time spent on training the whole model, T imeDP is the time spent on training the dialogue policy maker.",
"BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker. All the models perform very well in BPRA on DSTC2 dataset. On Maluuba dataset, the BPRA decreases because of the complex domains. We can notice that BPRA of CDM is slightly worse than other models on Maluuba dataset, the reason is that the CDM's dialogue policy maker contains lots of classifications and has the bigger loss than other models because of complex domains, which affects the training of the dialogue belief tracker.",
"APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets. It can be noted that we do not compare with the E2ECM baseline in APRA. E2ECM only uses a simple classifier to recognize the label of the acts and ignores the parameters information. In our experiment, APRA of E2ECM is slightly better than our method. Considering the lack of parameters of the acts, it's unfair for our GDP method. Furthermore, the CDM baseline considers the parameters of the act. But GDP is far better than CDM in supervised learning and reinforcement learning.",
"BLEU Results: GDP significantly outperforms the baselines on BLEU. As mentioned above, E2ECM is actually slightly better than GDP in APRA. But in fact, we can find that the language quality of the response generated by GDP is still better than E2ECM, which proves that lack of enough parameters information makes it difficult to find the appropriate sentence template in NLG. It can be found that the BLEU of all models is very poor on Maluuba dataset. The reason is that Maluuba is a human-human task-oriented dialogue dataset, the utterances are very flexible, the natural language generator for all methods is difficult to generate an accurate utterance based on the context. And DSTC2 is a human-machine dialog dataset. The response is very regular so the effectiveness of NLG will be better than that of Maluuba. But from the results, the GDP is still better than the baselines on Maluuba dataset, which also verifies that our proposed method is more accurate in modeling dialogue policy on complex domains than the classification-based methods."
],
"extractive_spans": [],
"free_form_answer": "most of the models have similar performance on BPRA: DSTC2 (+0.0015), Maluuba (+0.0729)\nGDP achieves the best performance in APRA: DSTC2 (+0.2893), Maluuba (+0.2896)\nGDP significantly outperforms the baselines on BLEU: DSTC2 (+0.0791), Maluuba (+0.0492)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: The performance of baselines and proposed model on DSTC2 and Maluuba dataset. T imefull is the time spent on training the whole model, T imeDP is the time spent on training the dialogue policy maker.",
"BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker.",
"APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets.",
"Results: GDP significantly outperforms the baselines on BLEU."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f03d68c71fa5f28eb6448fc02b51b7328196349c"
],
"answer": [
{
"evidence": [
"E2ECM BIBREF11: In dialogue policy maker, it adopts a classic classification for skeletal sentence template. In our implement, we construct multiple binary classifications for each act to search the sentence template according to the work proposed by BIBREF11.",
"CDM BIBREF10: This approach designs a group of classifications (two multi-class classifications and some binary classifications) to model the dialogue policy."
],
"extractive_spans": [
"E2ECM",
"CDM"
],
"free_form_answer": "",
"highlighted_evidence": [
"E2ECM BIBREF11: In dialogue policy maker, it adopts a classic classification for skeletal sentence template. In our implement, we construct multiple binary classifications for each act to search the sentence template according to the work proposed by BIBREF11.\n\nCDM BIBREF10: This approach designs a group of classifications (two multi-class classifications and some binary classifications) to model the dialogue policy."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"449ac2920bdabea5c37f9b81912b305e54b16ad6"
],
"answer": [
{
"evidence": [
"We adopt the DSTC2 BIBREF20 dataset and Maluuba BIBREF21 dataset to evaluate our proposed model. Both of them are the benchmark datasets for building the task-oriented dialog systems. Specifically, the DSTC2 is a human-machine dataset in the single domain of restaurant searching. The Maluuba is a very complex human-human dataset in travel booking domain which contains more slots and values than DSTC2. Detailed slot information in each dataset is shown in Table TABREF34."
],
"extractive_spans": [
"DSTC2",
"Maluuba"
],
"free_form_answer": "",
"highlighted_evidence": [
"We adopt the DSTC2 BIBREF20 dataset and Maluuba BIBREF21 dataset to evaluate our proposed model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What metrics are used to measure performance of models?",
"How much is proposed model better than baselines in performed experiments?",
"What are state-of-the-art baselines?",
"What two benchmark datasets are used?"
],
"question_id": [
"b366706e2fff6dd8edc89cc0c6b9d5b0790f43aa",
"c165ea43256d7ee1b1fb6f5c0c8af5f7b585e60d",
"e72a672f8008bbc52b93d8037a5fedf8956136af",
"57586358dd01633aa2ebeef892e96a549b1d1930"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The examples in DSTC2 dataset, our proposed model can hold more information about dialogue policy than the classification models mentioned above. “MA, w/o P” is the model that chooses multiple acts without corresponding parameters during dialogue police modeling, “w/o MA, P” is the model that chooses only one act and its parameters.",
"Figure 2: GDP overview. The utterance encoder encodes the user utterance, the dialogue context and the last reply of the systems into the dense vector. As for dialogue belief tracker, we use the approach of Lei et al. (2018) to generate dialogue context. Then this information will be used to search the knowledge base. Based on the user’s intents and query results, dialogue policy maker generates the next dialogue action by using our RNN-based proposed method.",
"Table 1: The details of DSTC2 and Maluuba dataset. The Maluuba dataset is more complex than DSTC2, and has some continuous value space such as time and price which is hard to solve for classification model.",
"Table 2: The performance of baselines and proposed model on DSTC2 and Maluuba dataset. T imefull is the time spent on training the whole model, T imeDP is the time spent on training the dialogue policy maker.",
"Table 3: Case Study on DSTC2 dataset. The first column is the Dialogue Context of this case, it contains three parts: (1) Inf is the user’s intent captured by dialogue state tracker; (2) sys is the system response at last turn; (3) user is the user utterance in this turn. The second column to the fifth column has two rows, above is the action made by the learned dialogue policy maker below is the final response made by template-based generator.",
"Figure 3: The number of the parameters. GDP has the bigger model size and more dialogue policy parameters because of the RNN-based dialogue policy maker."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Figure3-1.png"
]
} | [
"How much is proposed model better than baselines in performed experiments?"
] | [
[
"1909.09484-Experiments ::: Experimental Results-3",
"1909.09484-Experiments ::: Experimental Results-1",
"1909.09484-7-Table2-1.png",
"1909.09484-Experiments ::: Experimental Results-2"
]
] | [
"most of the models have similar performance on BPRA: DSTC2 (+0.0015), Maluuba (+0.0729)\nGDP achieves the best performance in APRA: DSTC2 (+0.2893), Maluuba (+0.2896)\nGDP significantly outperforms the baselines on BLEU: DSTC2 (+0.0791), Maluuba (+0.0492)"
] | 328 |
1909.02776 | Features in Extractive Supervised Single-document Summarization: Case of Persian News | Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive approaches, the system does not generate sentences. Instead, it learns how to score sentences within the text by using some textual features and subsequently selects those with the highest rank. Therefore, the core objective is ranking, which depends highly on the document. This dependency has gone unnoticed in many state-of-the-art solutions. In this work, the features of the document are integrated into the vector of every sentence. In this way, the system becomes informed about the context, which increases the precision of the learned model and consequently produces comprehensive and concise summaries. | {
"paragraphs": [
[
"From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5.",
"Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity.",
"One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10.",
"As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost.",
"We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately.",
"The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper."
],
[
"Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18.",
"Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8.",
"Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8.",
"A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms.",
"The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28.",
"Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33.",
"However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section.",
"All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document.",
"Our work contributes to this line of research and includes document features in the learning and ranking processes."
],
[
"As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation.",
"Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections."
],
[
"The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next."
],
[
"Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail."
],
[
"Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\\frac{5}{5}$ for the first sentence, $\\frac{4}{5}$ for the second, and so on to $\\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\\frac{1}{sentence\\ number}$. With such a definition, we may have several sentences, for example, with position=$\\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6).",
"Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6).",
"The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts.",
"The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document.",
"Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature."
],
[
"Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is",
"in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware.",
"Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is:",
"in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa.",
"TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware.",
"POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. The formal definition of the new document-aware features are as follows:"
],
[
"In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5):",
"Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important.",
"Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered.",
"Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included.",
"An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section."
],
[
"Every feature vector needs a target value from which the system should learn how to rank sentences. The value of target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment."
],
[
"Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values.",
"In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. That is to leave aside a portion of not included sentences and not to feed them to learner model.",
"Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5)."
],
[
"Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22)."
],
[
"Initially, sentence features need to be extracted. Again, normalization, sentence tokenization, word tokenization, and stop words removal are preliminary steps. The same features used in the learning phase should be calculated."
],
[
"In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence."
],
[
"By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document.",
"Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable."
],
[
"In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems.",
"Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting.",
"The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40.",
"ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison."
],
[
"Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25).",
"A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research."
],
[
"We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences."
],
[
"All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead."
],
[
"In assigning the target to a sentence, as mentioned in section (SECREF16), the goal is to assign a number between 0 and 1, with higher values as an indicator that the sentence is present in the majority of golden summaries. Because exact matching between sentences is not possible, to resolve the question of presence in a single golden summary such as $g$, we calculated the cosine similarity of the desired sentence with each sentence: $s_j\\in g$ . Then the maximum value of these similarities is selected as an indicator of presence. This indicator is then calculated for other golden summaries and their average is assigned to the sentence as the target.",
"in which G is set of summaries written for the document containing s. This is an additional explicit evidence that target (and subsequently, ranking) is related to the document."
],
[
"A vast collection of scikit-learn tools were used for the learning phase. K-fold cross-validation is applied with k=4 and split size of 0.25. Three different regression methods were applied, including Linear Regression, Decision Tree Regression, and Epsilon-Support Vector Regression(SVR). Overall results were the same with minor differences. Thus only the SVR result is reported. Various values for parameters were examined but the best results were achieved by epsilon=0.01, kernel=rbf, and default values for other parameters. With the aim of evaluating summary qualities, the fitted regressor of each run was used to rank documents sentences in the test set. To compare with each standard summary, a summary with the same count of sentences was produced, and compared by ROUGE. Averaging these ROUGE scores over each document and then over the dataset, the overall quality of summaries produced by the model can be obtained.",
"The same process was repeated with a random regressor that needed no training, and which simply assigns a random number between zero and one to any given sample. Apart from measuring the performance of this regressor on the test set, the quality of summaries produced is evaluated and reported as a baseline. The juxtaposition of this baseline and our measured results will demonstrate how effective our feature set was and how intelligent our whole system worked."
],
[
"In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the r2 score is increased. This means that using document-aware features leads to a more accurate learned model, proving our hypothesis about the relationship between document features and target ranks.",
"ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in the figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2, confirm that document-aware features perform better than unaware features.",
"These results are also interpretable from viewpoint of entropy-based decision tree methods. In learning phase, impurity of features within the whole dataset will be measured, and features having higher information gain will take place in upper levels of tree. But in summarization phase, within which decisions have to be made within a single document, impurity of those features may be low, causing less effective decisions and precision's. By incorporating document features, we help model to use different features (thus different trees) for different documents.",
"Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer."
],
[
"This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent educational examples. The rank of sentences is dependent on each other within a document. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. We also suggested using features that take into account the properties of document. We named this kind of features as document-aware. Conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, both in model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if available. Another clue for study is measuring degree of entropy difference between dataset and single documents, in a standard dataset.",
"Our source code is hosted on GitHub and is published for later reference, further experiments and reproducing results. A web interface and a Telegram bot is also implemented as demo."
]
],
"section_name": [
"Introduction",
"Related works",
"Incorporating Document Features",
"Incorporating Document Features ::: Learning Phase",
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction",
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features",
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features",
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features",
"Incorporating Document Features ::: Learning Phase ::: Target Assignment",
"Incorporating Document Features ::: Learning Phase ::: Training Model",
"Incorporating Document Features ::: Summarization Phase",
"Incorporating Document Features ::: Summarization Phase ::: Feature Extraction",
"Incorporating Document Features ::: Summarization Phase ::: Sentence Ranking",
"Incorporating Document Features ::: Summarization Phase ::: Sentence Selection",
"Incorporating Document Features ::: Evaluation Measures",
"Experiments",
"Experiments ::: Dataset",
"Experiments ::: Extracting Features and Scaling",
"Experiments ::: Target Assignment",
"Experiments ::: Training Model",
"Results and Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"b474cdce67d7756ba614277e31d748158d546c14"
],
"answer": [
{
"evidence": [
"We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences."
],
"extractive_spans": [
"the Pasokh dataset BIBREF42 "
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"661b1531b3ac10e04a83e6b648561724e1e3a388"
],
"answer": [
{
"evidence": [
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features",
"Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\\frac{5}{5}$ for the first sentence, $\\frac{4}{5}$ for the second, and so on to $\\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\\frac{1}{sentence\\ number}$. With such a definition, we may have several sentences, for example, with position=$\\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6).",
"Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6).",
"The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts.",
"The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document.",
"Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature.",
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features",
"Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is",
"in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware.",
"Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is:",
"in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa.",
"TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware.",
"POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. The formal definition of the new document-aware features are as follows:",
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features",
"In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5):",
"Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important.",
"Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered.",
"Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included."
],
"extractive_spans": [
"Ordinal position",
"Length of sentence",
"The Ratio of Nouns",
"The Ratio of Numerical entities",
"Cue Words",
"Cosine position",
"Relative Length",
"TF-ISF",
"POS features",
"Document sentences",
"Document words",
"Topical category",
"Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features\nOrdinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\\frac{5}{5}$ for the first sentence, $\\frac{4}{5}$ for the second, and so on to $\\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\\frac{1}{sentence\\ number}$. With such a definition, we may have several sentences, for example, with position=$\\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6).\n\nLength of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6).\n\nThe Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts.\n\nThe Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. 
However, it does not count numbers and digits in other sentences of the document.\n\nCue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature.\n\nIncorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features\nCosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is\n\nin which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware.\n\nRelative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is:\n\nin which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa.\n\nTF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware.\n\nPOS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. The formal definition of the new document-aware features are as follows:\n\nIncorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features\nIn order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5):\n\nDocument sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. 
In such a case even lower values of other features should be considered important.\n\nDocument words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered.\n\nTopical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"22d71db40c9cf33fea77b82572356b4a4ddbd915"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 3: ROUGE Quality of produced summaries in term of precision."
],
"extractive_spans": [],
"free_form_answer": "ROUGE-1 increases by 0.05, ROUGE-2 by 0.06 and ROUGE-L by 0.09",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: ROUGE Quality of produced summaries in term of precision."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6593f6ff717a9155ef4cd1d20094c83328de4eb9"
],
"answer": [
{
"evidence": [
"Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What dataset is used for this task?",
"What features of the document are integrated into vectors of every sentence?",
"By how much is precission increased?",
"Is new approach tested against state of the art?"
],
"question_id": [
"49ea25af6f75e2e96318bad5ecf784ce84e4f76b",
"aecd09a817c38cf7606e2888d0df7f14e5a74b95",
"81064bbd0a0d72a82d8677c32fb71b06501830a0",
"7d841b98bcee29aaa9852ef7ceea1213d703deba"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An excerpt of whole feature set. SC and SP under Topical category stand for Science and Sport, respectively.",
"Table 1: Quality of the regression model’s predictions on the test set.",
"Figure 2: ROUGE Quality of produced summaries in terms of f-measure.",
"Figure 3: ROUGE Quality of produced summaries in term of precision.",
"Figure 4: ROUGE Quality of produced summaries in term of recall."
],
"file": [
"6-Figure1-1.png",
"8-Table1-1.png",
"9-Figure2-1.png",
"9-Figure3-1.png",
"9-Figure4-1.png"
]
} | [
"By how much is precission increased?"
] | [
[
"1909.02776-9-Figure3-1.png"
]
] | [
"ROUGE-1 increases by 0.05, ROUGE-2 by 0.06 and ROUGE-L by 0.09"
] | 332 |
1911.00133 | Dreaddit: A Reddit Dataset for Stress Analysis in Social Media | Stress is a nigh-universal human experience, particularly in the online world. While stress can be a motivator, too much stress is associated with many negative health outcomes, making its identification useful across a range of domains. However, existing computational research typically only studies stress in domains such as speech, or in short genres such as Twitter. We present Dreaddit, a new text corpus of lengthy multi-domain social media data for the identification of stress. Our dataset consists of 190K posts from five different categories of Reddit communities; we additionally label 3.5K total segments taken from 3K posts using Amazon Mechanical Turk. We present preliminary supervised learning methods for identifying stress, both neural and traditional, and analyze the complexity and diversity of the data and characteristics of each category. | {
"paragraphs": [
[
"In our online world, social media users tweet, post, and message an incredible number of times each day, and the interconnected, information-heavy nature of our lives makes stress more prominent and easily observable than ever before. With many platforms such as Twitter, Reddit, and Facebook, the scientific community has access to a massive amount of data to study the daily worries and stresses of people across the world.",
"Stress is a nearly universal phenomenon, and we have some evidence of its prevalence and recent increase. For example, the American Psychological Association (APA) has performed annual studies assessing stress in the United States since 2007 which demonstrate widespread experiences of chronic stress. Stress is a subjective experience whose effects and even definition can vary from person to person; as a baseline, the APA defines stress as a reaction to extant and future demands and pressures, which can be positive in moderation. Health and psychology researchers have extensively studied the connection between too much stress and physical and mental health BIBREF0, BIBREF1.",
"In this work, we present a corpus of social media text for detecting the presence of stress. We hope this corpus will facilitate the development of models for this problem, which has diverse applications in areas such as diagnosing physical and mental illness, gauging public mood and worries in politics and economics, and tracking the effects of disasters. Our contributions are as follows:",
"Dreaddit, a dataset of lengthy social media posts in five categories, each including stressful and non-stressful text and different ways of expressing stress, with a subset of the data annotated by human annotators;",
"Supervised models, both discrete and neural, for predicting stress, providing benchmarks to stimulate further work in the area; and",
"Analysis of the content of our dataset and the performance of our models, which provides insight into the problem of stress detection.",
"In the remainder of this paper, we will review relevant work, describe our dataset and its annotation, provide some analysis of the data and stress detection problem, present and discuss results of some supervised models on our dataset, and finally conclude with our summary and future work."
],
[
"Because of the subjective nature of stress, relevant research tends to focus on physical signals, such as cortisol levels in saliva BIBREF2, electroencephalogram (EEG) readings BIBREF3, or speech data BIBREF4. This work captures important aspects of the human reaction to stress, but has the disadvantage that hardware or physical presence is required. However, because of the aforementioned proliferation of stress on social media, we believe that stress can be observed and studied purely from text.",
"Other threads of research have also made this observation and generally use microblog data (e.g., Twitter). The most similar work to ours includes BIBREF5, who use Long Short-Term Memory Networks (LSTMs) to detect stress in speech and Twitter data; BIBREF6, who examine the Facebook and Twitter posts of users who score highly on a diagnostic stress questionnaire; and BIBREF7, who detect stress on microblogging websites using a Convolutional Neural Network (CNN) and factor graph model with a suite of discrete features. Our work is unique in that it uses data from Reddit, which is both typically longer and not typically as conducive to distant labeling as microblogs (which are labeled in the above work with hashtags or pattern matching, such as “I feel stressed”). The length of our posts will ultimately enable research into the causes of stress and will allow us to identify more implicit indicators. We also limit ourselves to text data and metadata (e.g., posting time, number of replies), whereas BIBREF5 also train on speech data and BIBREF7 include information from photos, neither of which is always available. Finally, we label individual parts of longer posts for acute stress using human annotators, while BIBREF6 label users themselves for chronic stress with the users' voluntary answers to a psychological questionnaire.",
"Researchers have used Reddit data to examine a variety of mental health conditions such as depression BIBREF8 and other clinical diagnoses such as general anxiety BIBREF9, but to our knowledge, our corpus is the first to focus on stress as a general experience, not only a clinical concept."
],
[
"Reddit is a social media website where users post in topic-specific communities called subreddits, and other users comment and vote on these posts. The lengthy nature of these posts makes Reddit an ideal source of data for studying the nuances of phenomena like stress. To collect expressions of stress, we select categories of subreddits where members are likely to discuss stressful topics:",
"Interpersonal conflict: abuse and social domains. Posters in the abuse subreddits are largely survivors of an abusive relationship or situation sharing stories and support, while posters in the social subreddit post about any difficulty in a relationship (often but not exclusively romantic) and seek advice for how to handle the situation.",
"Mental illness: anxiety and Post-Traumatic Stress Disorder (PTSD) domains. Posters in these subreddits seek advice about coping with mental illness and its symptoms, share support and successes, seek diagnoses, and so on.",
"Financial need: financial domain. Posters in the financial subreddits generally seek financial or material help from other posters.",
"We include ten subreddits in the five domains of abuse, social, anxiety, PTSD, and financial, as detailed in tab:data-spread, and our analysis focuses on the domain level. Using the PRAW API, we scrape all available posts on these subreddits between January 1, 2017 and November 19, 2018; in total, 187,444 posts. As we will describe in sec:annotation, we assign binary stress labels to 3,553 segments of these posts to form a supervised and semi-supervised training set. An example segment is shown in fig:stress-example. Highlighted phrases are indicators that the writer is stressed: the writer mentions common physical symptoms (nausea), explicitly names fear and dread, and uses language indicating helplessness and help-seeking behavior.",
"The average length of a post in our dataset is 420 tokens, much longer than most microblog data (e.g., Twitter's character limit as of this writing is 280 characters). While we label segments that are about 100 tokens long, we still have much additional data from the author on which to draw. We feel this is important because, while our goal in this paper is to predict stress, having longer posts will ultimately allow more detailed study of the causes and effects of stress.",
"In tab:data-examples, we provide examples of labeled segments from the various domains in our dataset. The samples are fairly typical; the dataset contains mostly first-person narrative accounts of personal experiences and requests for assistance or advice. Our data displays a range of topics, language, and agreement levels among annotators, and we provide only a few examples. Lengthier examples are available in the appendix."
],
[
"We annotate a subset of the data using Amazon Mechanical Turk in order to begin exploring the characteristics of stress. We partition the posts into contiguous five-sentence chunks for labeling; we wish to annotate segments of the posts because we are ultimately interested in what parts of the post depict stress, but we find through manual inspection that some amount of context is important. Our posts, however, are quite long, and it would be difficult for annotators to read and annotate entire posts. This type of data will allow us in the future not only to classify the presence of stress, but also to locate its expressions in the text, even if they are diffused throughout the post.",
"We set up an annotation task in which English-speaking Mechanical Turk Workers are asked to label five randomly selected text segments (of five sentences each) after taking a qualification test; Workers are allowed to select “Stress”, “Not Stress”, or “Can't Tell” for each segment. In our instructions, we define stress as follows: “The Oxford English Dictionary defines stress as `a state of mental or emotional strain or tension resulting from adverse or demanding circumstances'. This means that stress results from someone being uncertain that they can handle some threatening situation. We are interested in cases where that someone also feels negatively about it (sometimes we can find an event stressful, but also find it exciting and positive, like a first date or an interview).”. We specifically ask Workers to decide whether the author is expressing both stress and a negative attitude about it, not whether the situation itself seems stressful. Our full instructions are available in the appendix.",
"We submit 4,000 segments, sampled equally from each domain and uniformly within domains, to Mechanical Turk to be annotated by at least five Workers each and include in each batch one of 50 “check questions” which have been previously verified by two in-house annotators. After removing annotations which failed the check questions, and data points for which at least half of the annotators selected “Can't Tell”, we are left with 3,553 labeled data points from 2,929 different posts. We take the annotators' majority vote as the label for each segment and record the percentage of annotators who agreed. The resulting dataset is nearly balanced, with 52.3% of the data (1,857 instances) labeled stressful.",
"Our agreement on all labeled data is $\\kappa =0.47$, using Fleiss's Kappa BIBREF10, considered “moderate agreement” by BIBREF11. We observe that annotators achieved perfect agreement on 39% of the data, and for another 32% the majority was 3/5 or less. This suggests that our data displays significant variation in how stress is expressed, which we explore in the next section."
],
[
"While all our data has the same genre and personal narrative style, we find distinctions among domains with which classification systems must contend in order to perform well, and distinctions between stressful and non-stressful data which may be useful when developing such systems. Posters in each subreddit express stress, but we expect that their different functions and stressors lead to differences in how they do so in each subreddit, domain, and broad category.",
"By domain. We examine the vocabulary patterns of each domain on our training data only, not including unlabeled data so that we may extend our analysis to the label level. First, we use the word categories from the Linguistic Inquiry and Word Count (LIWC) BIBREF12, a lexicon-based tool that gives scores for psychologically relevant categories such as sadness or cognitive processes, as a proxy for topic prevalence and expression variety. We calculate both the percentage of tokens per domain which are included in a specific LIWC word list, and the percentage of words in a specific LIWC word list that appear in each domain (“coverage” of the domain).",
"Results of the analysis are highlighted in tab:domain-liwc. We first note that variety of expression depends on domain and topic; for example, the variety in the expression of negative emotions is particularly low in the financial domain (with 1.54% of words being negative emotion (“negemo”) words and only 31% of “negemo” words used). We also see clear topic shifts among domains: the interpersonal domains contain roughly 1.5 times as many social words, proportionally, as the others; and domains are stratified by their coverage of the anxiety word list (with the most in the mental illness domains and the least in the financial domain).",
"We also examine the overall lexical diversity of each domain by calculating Yule's I measure BIBREF13. fig:domain-yule shows the lexical diversity of our data, both for all words in the vocabulary and for only words in LIWC's “negemo” word list. Yule's I measure reflects the repetitiveness of the data (as opposed to the broader coverage measured by our LIWC analysis). We notice exceptionally low lexical diversity for the mental illness domains, which we believe is due to the structured, clinical language surrounding mental illnesses. For example, posters in these domains discuss topics such as symptoms, medical care, and diagnoses (fig:stress-example, tab:data-examples). When we restrict our analysis to negative emotion words, this pattern persists only for anxiety; the PTSD domain has comparatively little lexical variety, but what it does have contributes to its variety of expression for negative emotions.",
"By label. We perform similar analyses on data labeled stressful or non-stressful by a majority of annotators. We confirm some common results in the mental health literature, including that stressful data uses more first-person pronouns (perhaps reflecting increased self-focus) and that non-stressful data uses more social words (perhaps reflecting a better social support network).",
"Additionally, we calculate measures of syntactic complexity, including the percentage of words that are conjunctions, average number of tokens per labeled segment, average number of clauses per sentence, Flesch-Kincaid Grade Level BIBREF14, and Automated Readability Index BIBREF15. These scores are comparable for all splits of our data; however, as shown in tab:label-complexity, we do see non-significant but persistent differences between stressful and non-stressful data, with stressful data being generally longer and more complex but also rated simpler by readability indices. These findings are intriguing and can be explored in future work.",
"By agreement. Finally, we examine the differences among annotator agreement levels. We find an inverse relationship between the lexical variety and the proportion of annotators who agree, as shown in fig:agreement-diversity. While the amount of data and lexical variety seem to be related, Yule's I measure controls for length, so we believe that this trend reflects a difference in the type of data that encourages high or low agreement."
],
[
"In order to train supervised models, we group the labeled segments by post and randomly select 10% of the posts ($\\approx $ 10% of the labeled segments) to form a test set. This ensures that while there is a reasonable distribution of labels and domains in the train and test set, the two do not explicitly share any of the same content. This results in a total of 2,838 train data points (51.6% labeled stressful) and 715 test data points (52.4% labeled stressful). Because our data is relatively small, we train our traditional supervised models with 10-fold cross-validation; for our neural models, we break off a further random 10% of the training data for validation and average the predictions of 10 randomly-initialized trained models.",
"In addition to the words of the posts (both as bag-of-n-grams and distributed word embeddings), we include features in three categories:",
"Lexical features. Average, maximum, and minimum scores for pleasantness, activation, and imagery from the Dictionary of Affect in Language (DAL) BIBREF16; the full suite of 93 LIWC features; and sentiment calculated using the Pattern sentiment library BIBREF17.",
"Syntactic features. Part-of-speech unigrams and bigrams, the Flesch-Kincaid Grade Level, and the Automated Readability Index.",
"Social media features. The UTC timestamp of the post; the ratio of upvotes to downvotes on the post, where an upvote roughly corresponds to a reaction of “like” and a downvote to “dislike” (upvote ratio); the net score of the post (karma) (calculated by Reddit, $n_\\text{upvotes} - n_\\text{downvotes}$); and the total number of comments in the entire thread under the post."
],
[
"We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We tune the parameters for these models using grid search and 10-fold cross-validation, and obtain results for different combinations of input and features.",
"For input representation, we experiment with bag-of-n-grams (for $n \\in \\lbrace 1..3\\rbrace $), Google News pre-trained Word2Vec embeddings (300-dimensional) BIBREF18, Word2Vec embeddings trained on our large unlabeled corpus (300-dimensional, to match), and BERT embeddings trained on our unlabeled corpus (768-dimensional, the top-level [CLS] embedding) BIBREF19. We experiment with subsets of the above features, including separating the features by category (lexical, syntactic, social) and by magnitude of the Pearson correlation coefficient ($r$) with the training labels. Finally, we stratify the training data by annotator agreement, including separate experiments on only data for which all annotators agreed, data for which at least 4/5 annotators agreed, and so on.",
"We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have. We experiment with training embeddings with random initialization as well as initializing with our domain-specific Word2Vec embeddings, and we also concatenate the best feature set from our non-neural experiments onto the representations after the recurrent and convolutional/pooling layers respectively.",
"Finally, we apply BERT directly to our task, fine-tuning the pretrained BERT-base on our classification task for three epochs (as performed in BIBREF19 when applying BERT to any task). Our parameter settings for our various models are available in the appendix."
],
[
"We present our results in tab:supervised-results. Our best model is a logistic regression classifier with Word2Vec embeddings trained on our unlabeled corpus, high-correlation features ($\\ge $ 0.4 absolute Pearson's $r$), and high-agreement data (at least 4/5 annotators agreed); this model achieves an F-score of 79.8 on our test set, a significant improvement over the majority baseline, the n-gram baseline, and the pre-trained embedding model, (all by the approximate randomization test, $p < 0.01$). The high-correlation features used by this model are LIWC's clout, tone, and “I” pronoun features, and we investigate the use of these features in the other model types. Particularly, we apply different architectures (GRNN and CNN) and different input representations (pretrained Word2Vec, domain-specific BERT).",
"We find that our logistic regression classifier described above achieves comparable performance to BERT-base (approximate randomization test, $p > 0.5$) with the added benefits of increased interpretability and less intensive training. Additionally, domain-specific word embeddings trained on our unlabeled corpus (Word2Vec, BERT) significantly outperform n-grams or pretrained embeddings, as expected, signaling the importance of domain knowledge in this problem.",
"We note that our basic deep learning models do not perform as well as our traditional supervised models or BERT, although they consistently, significantly outperform the majority baseline. We believe this is due to a serious lack of data; our labeled dataset is orders of magnitude smaller than neural models typically require to perform well. We expect that neural models can make good use of our large unlabeled dataset, which we plan to explore in future work. We believe that the superior performance of the pretrained BERT-base model (which uses no additional features) on our dataset supports this hypothesis as well.",
"In tab:data-and-feat-comparison, we examine the impact of different feature sets and levels of annotator agreement on our logistic regressor with domain-specific Word2Vec embeddings and find consistent patterns supporting this model. First, we see a tradeoff between data size and data quality, where lower-agreement data (which can be seen as lower-quality) results in worse performance, but the larger 80% agreement data consistently outperforms the smaller perfect agreement data. Additionally, LIWC features consistently perform well while syntactic features consistently do not, and we see a trend towards the quality of features over their quantity; those with the highest Pearson correlation with the train set (which all happen to be LIWC features) outperform sets with lower correlations, which in turn outperform the set of all features. This suggests that stress detection is a highly lexical problem, and in particular, resources developed with psychological applications in mind, like LIWC, are very helpful.",
"Finally, we perform an error analysis of the two best-performing models. Although the dataset is nearly balanced, both BERT-base and our best logistic regression model greatly overclassify stress, as shown in tab:confusion-matrices, and they broadly overlap but do differ in their predictions (disagreeing with one another on approximately 100 instances).",
"We note that the examples misclassified by both models are often, though not always, ones with low annotator agreement (with the average percent agreement for misclassified examples being 0.55 for BERT and 0.61 for logistic regression). Both models seem to have trouble with less explicit expressions of stress, framing negative experiences in a positive or retrospective way, and stories where another person aside from the poster is the focus; these types of errors are difficult to capture with the features we used (primarily lexical), and further work should be aware of them. We include some examples of these errors in tab:error-analysis-paper, and further illustrative examples are available in the appendix."
],
[
"In this paper, we present a new dataset, Dreaddit, for stress classification in social media, and find the current baseline at 80% F-score on the binary stress classification problem. We believe this dataset has the potential to spur development of sophisticated, interpretable models of psychological stress. Analysis of our data and our models shows that stress detection is a highly lexical problem benefitting from domain knowledge, but we note there is still room for improvement, especially in incorporating the framing and intentions of the writer. We intend for our future work to use this dataset to contextualize stress and offer explanations using the content features of the text. Additional interesting problems applicable to this dataset include the development of effective distant labeling schemes, which is a significant first step to developing a quantitative model of stress."
],
[
"We would like to thank Fei-Tzin Lee, Christopher Hidey, Diana Abagyan, and our anonymous reviewers for their insightful comments during the writing of this paper. This research was funded in part by a Presidential Fellowship from the Fu Foundation School of Engineering and Applied Science at Columbia University."
],
[
"We include several full posts (with identifying information removed and whitespace collapsed) in fig:data-appendix-1,fig:data-appendix-2,fig:data-appendix-3,fig:data-appendix-4. Posts are otherwise reproduced exactly as obtained (with spelling errors, etc.). The selected examples are deliberately of a reasonable but fairly typical length for readability and space concerns; recall that our average post length is 420 tokens, longer for interpersonal subreddits and shorter for other subreddits."
],
[
"We provide our annotation instructions in full in fig:annotation. Mechanical Turk Workers were given these instructions and examples followed by five text segments (one of which was one of our 50 check questions) and allowed to select “Stress”, “Not Stress', or “Can't Tell” for each. Workers were given one hour to complete the HIT and paid $0.12 for each HIT where they correctly answered the check question, with a limit of 30 total submissions per Worker."
],
[
"We tune our traditional supervised models' parameters using grid search, all as implemented in Python's scikit-learn library BIBREF25. Our best model uses unbalanced class weights, L2 penalty, and a constant term C=10, with other parameters at their default values. All cross-validation runs were initialized with the same random seed for comparability and reproducibility.",
"We train each of our neural models with the Adam optimizer BIBREF24 for up to ten epochs with early stopping measured on the validation set. We apply a dropout rate of 0.5 during training in the recurrent layers and after the convolutional layers. We set our hidden sizes (i.e., the output of the recurrent and pooling layers) as well as our batch size to 128, and tune our learning rate to $5\\cdot 10^{-4}$; we set these parameters relatively small to try to work with our small data. We also experiment with scheduling the learning rate on plateau of the validation loss, and with pre-training the models on a much larger sentiment dataset, the Stanford Sentiment Treebank BIBREF26, to help combat the problem of small data, but this does not improve the performance of our neural networks."
],
[
"As a supplement to our error analysis discussion in sec:results, we provide additional examples of test data points which one or both of our best models (BERT-base or our best logistic regressor with embeddings trained on our unlabeled corpus and high-correlation discrete features) failed to classify correctly in tab:error-analysis-appendix."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset ::: Reddit Data",
"Dataset ::: Data Annotation",
"Data Analysis",
"Methods",
"Methods ::: Supervised Models",
"Results and Discussion",
"Conclusion and Future Work",
"Acknowledgements",
"Data Samples",
"Full Annotation Guidelines",
"Parameter Settings",
"Error Analysis Examples"
]
} | {
"answers": [
{
"annotation_id": [
"fabde6151d3a3807a6927286d467f749e8e11c41"
],
"answer": [
{
"evidence": [
"We submit 4,000 segments, sampled equally from each domain and uniformly within domains, to Mechanical Turk to be annotated by at least five Workers each and include in each batch one of 50 “check questions” which have been previously verified by two in-house annotators. After removing annotations which failed the check questions, and data points for which at least half of the annotators selected “Can't Tell”, we are left with 3,553 labeled data points from 2,929 different posts. We take the annotators' majority vote as the label for each segment and record the percentage of annotators who agreed. The resulting dataset is nearly balanced, with 52.3% of the data (1,857 instances) labeled stressful."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The resulting dataset is nearly balanced, with 52.3% of the data (1,857 instances) labeled stressful."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"847728fce61fc25519f1c2346b763efcd401f1df"
],
"answer": [
{
"evidence": [
"We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We tune the parameters for these models using grid search and 10-fold cross-validation, and obtain results for different combinations of input and features.",
"We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have. We experiment with training embeddings with random initialization as well as initializing with our domain-specific Word2Vec embeddings, and we also concatenate the best feature set from our non-neural experiments onto the representations after the recurrent and convolutional/pooling layers respectively."
],
"extractive_spans": [
"Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees",
"a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. ",
"We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"22ee8eb516e612fda08ddc0efbc031e2b2e88b0d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: An example of stress being expressed in social media from our dataset, from a post in r/anxiety (reproduced exactly as found). Some possible expressions of stress are highlighted."
],
"extractive_spans": [],
"free_form_answer": "binary label of stress or not stress",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: An example of stress being expressed in social media from our dataset, from a post in r/anxiety (reproduced exactly as found). Some possible expressions of stress are highlighted."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"28ebe4991c83d18342d17f3701012f06928b6353"
],
"answer": [
{
"evidence": [
"We include ten subreddits in the five domains of abuse, social, anxiety, PTSD, and financial, as detailed in tab:data-spread, and our analysis focuses on the domain level. Using the PRAW API, we scrape all available posts on these subreddits between January 1, 2017 and November 19, 2018; in total, 187,444 posts. As we will describe in sec:annotation, we assign binary stress labels to 3,553 segments of these posts to form a supervised and semi-supervised training set. An example segment is shown in fig:stress-example. Highlighted phrases are indicators that the writer is stressed: the writer mentions common physical symptoms (nausea), explicitly names fear and dread, and uses language indicating helplessness and help-seeking behavior."
],
"extractive_spans": [
"abuse, social, anxiety, PTSD, and financial"
],
"free_form_answer": "",
"highlighted_evidence": [
"We include ten subreddits in the five domains of abuse, social, anxiety, PTSD, and financial, as detailed in tab:data-spread, and our analysis focuses on the domain level. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Is the dataset balanced across categories?",
"What supervised methods are used?",
"What labels are in the dataset?",
"What categories does the dataset come from?"
],
"question_id": [
"4e8233826f9e04f5763b307988298e73f841af74",
"adae0c32a69928929101d0ba37d36c0a45298ad6",
"d0f831c97d345a5b8149a9d51bf321f844518434",
"1ccfd288f746c35006f5847297ab52020729f523"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"social media",
"social media",
"social media",
"social media"
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An example of stress being expressed in social media from our dataset, from a post in r/anxiety (reproduced exactly as found). Some possible expressions of stress are highlighted.",
"Table 1: Data Statistics. We include ten total subreddits from five domains in our dataset. Because some subreddits are more or less popular, the amount of data in each domain varies. We endeavor to label a comparable amount of data from each domain for training and testing.",
"Table 2: Data Examples. Examples from our dataset with their domains, assigned labels, and number of annotators who agreed on the majority label (reproduced exactly as found, except that a link to the GoFundMe has been removed in the last example). Annotators labeled these five-sentence segments of larger posts.",
"Table 3: LIWC Analysis by Domain. Results from our analysis using LIWC word lists. Each term in quotations refers to a specific word list curated by LIWC; percentage refers to the percent of words in the domain that are included in that word list, and coverage refers to the percent of words in that word list which appear in the domain.",
"Figure 2: Lexical Diversity by Domain. Yule’s I measure (on the y-axes) is plotted against domain size (on the x-axes) and each domain is plotted as a point on two graphics. a) measures the lexical diversity of all words in the vocabulary, while b) deletes all words that were not included in LIWC’s negative emotion word list.",
"Table 4: LIWC Analysis by Label. Results from our analysis using LIWC word lists, with the same definitions as in Table 3. First-person pronouns (“1st-Person”) use the LIWC “I” word list.",
"Table 5: Complexity by Label. Measures of syntactic complexity for stressful and non-stressful data.",
"Figure 3: Lexical Diversity by Agreement. Yule’s I measure (on the y-axis) is plotted against domain size (on the x-axis) for each level of annotator agreement. Perfect means all annotators agreed; High, 4/5 or more; Medium, 3/5 or more; and Low, everything else.",
"Table 6: Supervised Results. Precision (P), recall (R), and F1-score (F) for our supervised models. Our best model achieves 79.80 F1-score on our test set, comparable to the state-of-the-art pretrained BERT-base model. In this table, “features” always refers to our best-performing feature set (≥ 0.4 absolute Pearson’s r). Models marked with a * show a significant improvement over the majority baseline (approximate randomization test, p < 0.01).",
"Table 7: Feature Sets and Data Sets. The results of our best classifier trained on different subsets of features and data. Features are grouped by type and by magnitude of their Pearson correlation with the train labels (no features had an absolute correlation greater than 0.5); data is separated by the proportion of annotators who agreed. Our best score (corresponding to our best non-neural model) is shown in bold.",
"Table 8: Confusion Matrices. Confusion matrices of our best models and the gold labels. 0 represents data labeled not stressed while 1 represents data labeled stressed.",
"Table 9: Error Analysis Examples. Examples of test samples our models failed to classify correctly.“BERT” refers to the state-of-the-art BERT-base model, while “LogReg” is our best logistic regressor described in section 6.",
"Figure 8: Our full annotation instructions.",
"Table 10: Additional Error Analysis Examples. Supplementary examples for our error analysis.“BERT” refers to the state-of-the-art BERT-base model, while “LogReg” is our best logistic regressor described in section 6."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Figure2-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"6-Figure3-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"9-Table8-1.png",
"9-Table9-1.png",
"14-Figure8-1.png",
"15-Table10-1.png"
]
} | [
"What labels are in the dataset?"
] | [
[
"1911.00133-2-Figure1-1.png"
]
] | [
"binary label of stress or not stress"
] | 333 |
1709.05413 | "How May I Help You?": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts | Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understand trends in customer and agent behavior for the purpose of automating customer service interactions. In this work, we develop a novel taxonomy of fine-grained"dialogue acts"frequently observed in customer service, showcasing acts that are more suited to the domain than the more generic existing taxonomies. Using a sequential SVM-HMM model, we model conversation flow, predicting the dialogue act of a given turn in real-time. We characterize differences between customer and agent behavior in Twitter customer service conversations, and investigate the effect of testing our system on different customer service industries. Finally, we use a data-driven approach to predict important conversation outcomes: customer satisfaction, customer frustration, and overall problem resolution. We show that the type and location of certain dialogue acts in a conversation have a significant effect on the probability of desirable and undesirable outcomes, and present actionable rules based on our findings. The patterns and rules we derive can be used as guidelines for outcome-driven automated customer service platforms. | {
"paragraphs": [
[
"The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful information to customers, agents must first understand the requirements of the conversation, and offer customers the appropriate feedback. While this may be feasible at the level of a single conversation for a human agent, automatic analysis of conversations is essential for data-driven approaches towards the design of automated customer support agents and systems.",
"Analyzing the dialogic structure of a conversation in terms of the \"dialogue acts\" used, such as statements or questions, can give important meta-information about conversation flow and content, and can be used as a first step to developing automated agents. Traditional dialogue act taxonomies used to label turns in a conversation are very generic, in order to allow for broad coverage of the majority of dialogue acts possible in a conversation BIBREF0 , BIBREF1 , BIBREF2 . However, for the purpose of understanding and analyzing customer service conversations, generic taxonomies fall short. Table TABREF1 shows a sample customer service conversation between a human agent and customer on Twitter, where the customer and agent take alternating \"turns\" to discuss the problem. As shown from the dialogue acts used at each turn, simply knowing that a turn is a Statement or Request, as is possible with generic taxonomies, is not enough information to allow for automated handling or response to a problem. We need more fine-grained dialogue acts, such as Informative Statement, Complaint, or Request for Information to capture the speaker's intent, and act accordingly. Likewise, turns often include multiple overlapping dialogue acts, such that a multi-label approach to classification is often more informative than a single-label approach.",
"Dialogue act prediction can be used to guide automatic response generation, and to develop diagnostic tools for the fine-tuning of automatic agents. For example, in Table TABREF1 , the customer's first turn (Turn 1) is categorized as a Complaint, Negative Expressive Statement, and Sarcasm, and the agent's response (Turn 2) is tagged as a Request for Information, Yes-No Question, and Apology. Prediction of these dialogue acts in a real-time setting can be leveraged to generate appropriate automated agent responses to similar situations.",
"Additionally, important patterns can emerge from analysis of the fine-grained acts in a dialogue in a post-prediction setting. For example, if an agent does not follow-up with certain actions in response to a customer's question dialogue act, this could be found to be a violation of a best practice pattern. By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. \"Continuing to request information late in a conversation often leads to customer dissatisfaction.\" This can then be codified into a best practice pattern rules for automated systems, such as \"A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation.\"",
"In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems.",
"We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 .",
"We begin with a discussion of related work, followed by an overview of our methodology. Next, we describe our conversation modeling framework, and explain our outcome analysis experiments, to show how we derive useful patterns for designing automated customer service agents. Finally, we present conclusions and directions for future work."
],
[
"Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources.",
"Previous work has explored speech act modeling in different domains (as a predecessor to dialogue act modeling). Zhang et al. present work on recognition of speech acts on Twitter, following up with a study on scalable speech act recognition given the difficulty of obtaining labeled training data BIBREF9 . They use a simple taxonomy of four main speech acts (Statement, Question, Suggestion, Comment, and a Miscellaneous category). More recently, Vosoughi et al. develop BIBREF10 a speech act classifier for Twitter, using a modification of the taxonomy defined by Searle in 1975, including six acts they observe to commonly occur on Twitter: Assertion, Recommendation Expression, Question, Request, again plus a Miscellaneous category. They describe good features for speech act classification and the application of such a system to detect stories on social media BIBREF11 . In this work, we are interested in the dialogic characteristics of Twitter conversations, rather than speech acts in stand-alone tweets.",
"Different dialogue act taxonomies have been developed to characterize conversational acts. Core and Allen present the Dialogue Act Marking in Several Layers (DAMSL), a standard for discourse annotation that was developed in 1997 BIBREF0 . The taxonomy contains a total of 220 tags, divided into four main categories: communicative status, information level, forward-looking function, and backward-looking function. Jurafsky, Shriberg, and Biasca develop a less fine-grained taxonomy of 42 tags based on DAMSL BIBREF1 . Stolcke et al. employ a similar set for general conversation BIBREF2 , citing that \"content- and task-related distinctions will always play an important role in effective DA [Dialogue Act] labeling.\" Many researchers have tackled the task of developing different speech and dialogue act taxonomies and coding schemes BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . For the purposes of our own research, we require a set of dialogue acts that is more closely representative of customer service domain interactions - thus we expand upon previously defined taxonomies and develop a more fine-grained set.",
"Modeling general conversation on Twitter has also been a topic of interest in previous work. Honeycutt and Herring study conversation and collaboration on Twitter using individual tweets containing \"@\" mentions BIBREF16 . Ritter et al. explore unsupervised modeling of Twitter conversations, using clustering methods on a corpus of 1.3 million Twitter conversations to define a model of transitional flow between in a general Twitter dialogue BIBREF17 . While these approaches are relevant to understanding the nature of interactions on Twitter, we find that the customer service domain presents its own interesting characteristics that are worth exploring further.",
"The most related previous work has explored speech and dialogue act modeling in customer service, however, no previous work has focused on Twitter as a data source. In 2005, Ivanovic uses an abridged set of 12 course-grained dialogue acts (detailed in the Taxonomy section) to describe interactions between customers and agents in instant messaging chats BIBREF18 , BIBREF19 , leading to a proposal on response suggestion using the proposed dialogue acts BIBREF20 . Follow-up work using the taxonomy selected by Ivanovic comes from Kim et al., where they focus on classifying dialogue acts in both one-on-one and multi-party live instant messaging chats BIBREF21 , BIBREF22 . These works are similar to ours in the nature of the problem addressed, but we use a much more fine-grained taxonomy to define the interactions possible in the customer service domain, and focus on Twitter conversations, which are unique in their brevity and the nature of the public interactions.",
"The most similar work to our own is that of Herzig et al. on classifying emotions in customer support dialogues on Twitter BIBREF23 . They explore how agent responses should be tailored to the detected emotional response in customers, in order to improve the quality of service agents can provide. Rather than focusing on emotional response, we seek to model the dialogic structure and intents of the speakers using dialogue acts, with emotion included as features in our model, to characterize the emotional intent within each act."
],
[
"The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 ."
],
[
"As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to inquire for help regarding various hypothetical situations (i.e. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%).",
"For the purposes of our own research, focused on customer service on Twitter, we found that the course-grained nature of the taxonomy presented a natural shortcoming in terms of what information could be learned by performing classification at this level. We observe that while having a smaller set of dialogue acts may be helpful for achieving good agreement between annotators (Ivanovic cites kappas of 0.87 between the three expert annotators using this tag set on his data BIBREF18 ), it is unable to offer deeper semantic insight into the specific intent behind each act for many of the categories. For example, the Statement act, which comprises the largest percentage (36% of turns), is an extremely broad category that fails to provide useful information from an analytical perspective. Likewise, the Request category also does not specify any intent behind the act, and leaves much room for improvement.",
"For this reason, and motivated by previous work seeking to develop dialogue act taxonomies appropriate for different domains BIBREF19 , BIBREF21 , we convert the list of dialogue acts presented by the literature into a hierarchical taxonomy, shown in Figure FIGREF6 .",
"We first organize the taxonomy into six high-level dialogue acts: Greeting, Statement, Request, Question, Answer, and Social Act. Then, we update the taxonomy using two main steps: restructuring and adding additional fine-grained acts.",
"We base our changes upon the taxonomy used by Ivanovic and Kim et al. in their work on instant messaging chat dialogues BIBREF19 , BIBREF21 , but also on general dialogue acts observed in the customer service domain, including complaints and suggestions. Our taxonomy does not make any specific restrictions on which party in the dialogue may perform each act, but we do observe that some acts are far more frequent (and sometimes non-existent) in usage, depending on whether the customer or agent is the speaker (for example, the Statement Complaint category never shows up in Agent turns).",
"In order to account for gaps in available act selections for annotators, we include an Other act in the broadest categories. While our taxonomy fills in many gaps from previous work in our domain, we do not claim to have handled coverage of all possible acts in this domain. Our taxonomy allows us to more closely specify the intent and motivation behind each turn, and ultimately how to address different situations."
],
[
"Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations.",
"For our data collection phase, we begin with conversations from the Twitter customer service pages of four different companies, from the electronics, telecommunications, and insurance industries. We perform several forms of pre-processing to the conversations. We filter out conversations if they contain more than one customer or agent speaker, do not have alternating customer/agent speaking turns (single turn per speaker), have less than 5 or more than 10 turns, have less than 70 words in total, and if any turn in the conversation ends in an ellipses followed by a link (indicating that the turn has been cut off due to length, and spans another tweet). Additionally, we remove any references to the company names (substituting with \"Agent\"), any references to customer usernames (substituting with \"Customer\"), and replacing and links or image references with INLINEFORM0 link INLINEFORM1 and INLINEFORM2 img INLINEFORM3 tokens.",
"Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell:",
"We ask 5 Turkers to annotate each conversation HIT, and pay $0.20 per HIT. We find the list of \"majority dialogue acts\" for each tweet by finding any acts that have received majority-vote labels (at least 3 out of 5 judgements).",
"It is important to note at this point that we make an important choice as to how we will handle dialogue act tagging for each turn. We note that each turn may contain more than one dialogue act vital to carry its full meaning. Thus, we choose not to carry out a specific segmentation task on our tweets, contrary to previous work BIBREF24 , BIBREF25 , opting to characterize each tweet as a single unit composed of different, often overlapping, dialogue acts. Table TABREF16 shows examples of tweets that receive majority vote on more than one label, where the act boundaries are overlapping and not necessarily distinguishable.",
"It is clear that the lines differentiating these acts are not very well defined, and that segmentation would not necessarily aid in clearly separating out each intent. For these reasons, and due to the overall brevity of tweets in general, we choose to avoid the overhead of requiring annotators to provide segment boundaries, and instead ask for all appropriate dialogue acts."
],
[
"Figure FIGREF17 shows the distribution of the number of times each dialogue act in our taxonomy is selected a majority act by the annotators (recall that each turn is annotated by 5 annotators). From the distribution, we see that the largest class is Statement Info which is part of the majority vote list for 2,152 of the 5,327 total turns, followed by Request Info, which appears in 1,088 of the total turns. Although Statement Informative comprises the largest set of majority labels in the data (as did Statement in Ivanovic's distribution), we do observe that other fine-grained categories of Statement occur in the most frequent labels as well, including Statement Complaint, Statement Expressive Negative, and Statement Suggestion – giving more useful information as to what form of statement is most frequently occurring. We find that 147 tweets receive no majority label (i.e. no single act received 3 or more votes out of 5). At the tail of the distribution, we see less frequent acts, such as Statement Sarcasm, Social Act Downplayer, Statement Promise, Greeting Closing, and Request Other. It is also interesting to note that both opening and closing greetings occur infrequently in the data – which is understandable given the nature of Twitter conversation, where formal greeting is not generally required.",
"Table TABREF19 shows a more detailed summary of the distribution of our top 12 dialogue acts according to the annotation experiments, as presented by Ivanovic BIBREF18 . Since each turn has an overlapping set of labels, the column % of Turns (5,327) represents what fraction of the total 5,327 turns contain that dialogue act label (these values do not sum to 1, since there is overlap). To give a better sense of the percentage appearance of each dialogue act class in terms of the total number of annotated labels given, we also present column % of Annotations (10,343) (these values are percentages). We measure agreement in our annotations using a few different techniques. Since each item in our annotation experiments allows for multiple labels, we first design an agreement measure that accounts for how frequently each annotator selects the acts that agree with the majority-selected labels for the turns they annotated. To calculate this for each annotator, we find the number of majority-selected acts for each conversation they annotated (call this MAJ), and the number of subset those acts that they selected (call this SUBS), and find the ratio (SUBS/MAJ). We use this ratio to systematically fine-tune our set of annotators by running our annotation in four batches, restricting our pool of annotators to those that have above a 0.60 ratio of agreement with the majority from the previous batch, as a sort of quality assurance test. We also measure Fleiss' Kappa BIBREF26 agreement between annotators in two ways: first by normalizing our annotation results into binary-valued items indicating annotators' votes for each label contain within each turn. We find an average Fleiss- INLINEFORM0 for the full dataset, including all turn-and-label items, representing moderate agreement on the 24-label problem.",
"We also calculate the Fleiss- INLINEFORM0 values for each label, and use the categories defined by Landis and Koch to bin our speech acts based on agreement BIBREF27 . As shown in Table TABREF18 , we find that the per-label agreement varies from \"almost perfect\" agreement of INLINEFORM1 for lexically defined categories such as Apology and Thanks, with only slight agreement of INLINEFORM2 for less clearly-defined categories, such as Statement (Other), Answer Response Acknowledgement and Request (Other). For the conversation-level questions, we calculate the agreement across the \"Agree\" label for all annotators, finding an average Fleiss- INLINEFORM3 , with question-level results of INLINEFORM4 for customer satisfaction, INLINEFORM5 for problem resolution, and INLINEFORM6 for customer frustration. These results suggest room for improvement for further development of the taxonomy, to address problem areas for annotators and remedy areas of lower agreement."
],
[
"We test our hypothesis that tweet turns are often characterized by more than one distinct dialogue act label by measuring the percentage overlap between frequent pairs of labels. Of the 5,327 turns annotated, across the 800 conversations, we find that 3,593 of those turns (67.4%) contained more than one majority-act label. Table TABREF22 shows the distribution percentage of the most frequent pairs.",
"For example, we observe that answering with informative statements is the most frequent pair, followed by complaints coupled with negative sentiment or informative statements. We also observe that requests are usually formed as questions, but also co-occur frequently with apologies. This experiment validates our intuition that the majority of turns do contain more than a single label, and motivates our use of a multi-label classification method for characterizing each turn in the conversation modeling experiments we present in the next section."
],
[
"In this section, we describe the setup and results of our conversational modeling experiments on the data we collected using our fine-grained taxonomy of customer service dialogue acts. We begin with an overview of the features and classes used, followed by our experimental setup and results for each experiment performed."
],
[
"The following list describes the set of features used for our dialogue act classification tasks:",
"Word/Punctuation: binary bag-of-word unigrams, binary existence of a question mark, binary existence of an exclamation mark in a turn",
"Temporal: response time of a turn (time in seconds elapsed between the posting time of the previous turn and that of the current turn)",
"Second-Person Reference: existence of an explicit second-person reference in the turn (you, your, you're)",
"Emotion: count of words in each of the 8 emotion classes from the NRC emotion lexicon BIBREF28 (anger, anticipation, disgust, fear, joy, negative, positive, sadness, surprise, and trust)",
"Dialogue: lexical indicators in the turn: opening greetings (hi, hello, greetings, etc), closing greetings (bye, goodbye), yes-no questions (turns with questions starting with do, did, can, could, etc), wh- questions (turns with questions starting with who, what, where, etc), thanking (thank*), apology (sorry, apolog*), yes-answer, and no-answer"
],
[
"Table TABREF30 shows the division of classes we use for each of our experiments. We select our classes using the distribution of annotations we observe in our data collection phase (see Table TABREF19 ), selecting the top 12 classes as candidates.",
"While iteratively selecting the most frequently-occurring classes helps to ensure that classes with the most data are represented in our experiments, it also introduces the problem of including classes that are very well-defined lexically, and may not require learning for classification, such as Social Act Apology and Social Act Thanking in the first 10-Class set. For this reason, we call this set 10-Class (Easy), and also experiment using a 10-Class (Hard) set, where we add in the next two less-defined and more semantically rich labels, such as Statement Offer and Question Open. When using each set of classes, a turn is either classified as one of the classes in the set, or it is classified as \"other\" (i.e. any of the other classes). We discuss our experiments in more detail and comment on performance differences in the experiment section."
],
[
"Following previous work on conversation modeling BIBREF23 , we use a sequential SVM-HMM (using the INLINEFORM0 toolkit BIBREF29 ) for our conversation modeling experiments. We hypothesize that a sequential model is most suited to our dialogic data, and that we will be able to concisely capture conversational attributes such as the order in which dialogue acts often occur (i.e. some Answer act after Question a question act, or Apology acts after Complaints).",
"We note that with default settings for a sequence of length INLINEFORM0 , an SVM-HMM model will be able to refine its answers for any turn INLINEFORM1 as information becomes available for turns INLINEFORM2 . However, we opt to design our classifier under a real-time setting, where turn-by-turn classification is required without future knowledge or adaptation of prediction at any given stage. In our setup, turns are predicted in a real-time setting to fairly model conversation available to an intelligent agent in a conversational system. At any point, a turn INLINEFORM3 is predicted using information from turns INLINEFORM4 , and where a prediction is not changed when new information is available.",
"We test our hypothesis by comparing our real-time sequential SVM-HMM model to non-sequential baselines from the NLTK BIBREF30 and Scikit-Learn BIBREF31 toolkits. We use our selected feature set (described above) to be generic enough to apply to both our sequential and non-sequential models, in order to allow us to fairly compare performance. We shuffle and divide our data into 70% for training and development (560 conversations, using 10-fold cross-validation for parameter tuning), and hold out 30% of the data (240 conversations) for test.",
"Motivated by the prevalent overlap of dialogue acts, we conduct our learning experiments using a multi-label setup. For each of the sets of classes, we conduct binary classification task for each label: for each INLINEFORM0 -class classification task, a turn is labeled as either belonging to the current label, or not (i.e. \"other\"). In this setup, each turn is assigned a binary value for each label (i.e. for the 6-class experiment, each turn receives a value of 0/1 for each indicating whether the classifier predicts it to be relevant to the each of the 6 labels). Thus, for each INLINEFORM1 -class experiment, we end up with INLINEFORM2 binary labels, for example, whether the turn is a Statement Informative or Other, Request Information or Other, etc. We aggregate the INLINEFORM3 binary predictions for each turn, then compare the resultant prediction matrix for all turns to our majority-vote ground-truth labels, where at least 3 out of 5 annotators have selected a label to be true for a given turn. The difficulty of the task increases as the number of classes INLINEFORM4 increases, as there are more classifications done for each turn (i.e., for the 6-class problem, there are 6 classification tasks per turn, while for the 8-class problem, there are 8, etc). Due to the inherent imbalance of label-distribution in the data (shown in Figure FIGREF17 ), we use weighted F-macro to calculate our final scores for each feature set (which finds the average of the metrics for each label, weighted by the number of true instances for that label) BIBREF31 .",
"Our first experiment sets out to compare the use of a non-sequential classification algorithm versus a sequential model for dialogue act classification on our dataset. We experiment with the default Naive Bayes (NB) and Linear SVC algorithms from Scikit-Learn BIBREF31 , comparing with our sequential SVM-HMM model. We test each classifier on each of our four class sets, reporting weighted F-macro for each experiment. Figure FIGREF33 shows the results of the experiments.",
"From this experiment, we observe that our sequential SVM-HMM outperforms each non-sequential baseline, for each of the four class sets. We select the sequential SVM-HMM model for our preferred model for subsequent experiments. We observe that while performance may be expected to drop as the number of classes increases, we instead get a spike in performance for the 10-Class (Easy) setting. This increase occurs due to the addition of the lexically well-defined classes of Statement Apology and Statement Thanks, which are much simpler for our model to predict. Their addition results in a performance boost, comparable to that of the simpler 6-Class problem. When we remove the two well-defined classes and add in the next two broader dialogue act classes of Statement Offer and Question Open (as defined by the 10-Class (Hard) set), we observe a drop in performance, and an overall result comparable to our 8-Class problem. This result is still strong, since the number of classes has increased, but the overall performance does not drop.",
"We also observe that while NB and LinearSVC have the same performance trend for the smaller number of classes, Linear SVC rapidly improves in performance as the number of classes increases, following the same trend as SVM-HMM. The smallest margin of difference between SVM-HMM and Linear SVC also occurs at the 10-Class (Easy) setting, where the addition of highly-lexical classes makes for a more differentiable set of turns.",
"Our next experiment tests the differences in performance when training and testing our real-time sequential SVM-HMM model using only a single type of speaker's turns (i.e. only Customer or only Agent turns). Figure FIGREF35 shows the relative performance of using only speaker-specific turns, versus our standard results using all turns.",
"We observe that using Customer-only turns gives us lower prediction performance than using both speakers' turns, but that Agent-only turns actually gives us higher performance. Since agents are put through training on how to interact with customers (often using templates), agent behavior is significantly more predictable than customer behavior, and it is easier to predict agent turns even without utilizing any customer turn information (which is more varied, and thus more difficult to predict).",
"We again observe a boost in performance at out 10-Class (Easy) set, due to the inclusion of lexically well-defined classes. Notably, we achieve best performance for the 10-Class (Easy) set using only agent turns, where the use of the Apology and Thanks classes are both prevalent and predictable.",
"In our final experiment, we explore the changes in performance we get by splitting the training and test data based on company domain. We compare this performance with our standard setup for SVM-HMM from our baseline experiments (Figure FIGREF33 ), where our train-test data splitting is company-independent (i.e. all conversations are randomized, and no information is used to differentiate different companies or domains). To recap, our data consists of conversations from four companies from three different industrial domains (one from the telecommunication domain, two from the electronics domain, and one from the insurance domain). We create four different versions of our 6-class real-time sequential SVM-HMM, where we train on the data from three of the companies, and test on the remaining company. We present our findings in Table TABREF37 .",
"From the table, we see that our real-time model achieves best prediction results when we use one of the electronics companies in the test fold, even though the number of training samples is smallest in these cases. On the other hand, when we assign insurance company in the test fold, our model's prediction performance is comparatively low. Upon further investigation, we find that customer-agent conversations in the telecommunication and electronics domains are more similar than those in the insurance domain. Our findings show that our model is robust to different domains as our test set size increases, and that our more generic, company-independent experiment gives us better performance than any domain-specific experiments."
],
[
"Given our observation that Agent turns are more predictable, and that we achieve best performance in a company-independent setting, we question whether the training that agents receive is actually reliable in terms of resulting in overall \"satisfied customers\", regardless of company domain. Ultimately, our goal is to discover whether we can use the insight we derive from our predicted dialogue acts to better inform conversational systems aimed at offering customer support. Our next set of experiments aims to show the utility of our real-time dialogue act classification as a method for summarizing semantic intent in a conversation into rules that can be used to guide automated systems."
],
[
"We conduct three supervised classification experiments to better understand full conversation outcome, using the default Linear SVC classifier in Scikit-Learn BIBREF31 (which gave us our best baseline for the dialogue classification task). Each classification experiments centers around one of three problem outcomes: customer satisfaction, problem resolution, and customer frustration. For each outcome, we remove any conversation that did not receive majority consensus for a label, or received majority vote of \"can't tell\". Our final conversation sets consist of 216 satisfied and 500 unsatisfied customer conversations, 271 resolved and 425 unresolved problem conversations, and 534 frustrated and 229 not frustrated customer conversations. We retain the inherent imbalance in the data to match the natural distribution observed. The clear excess of consensus of responses that indicate negative outcomes further motivates us to understand what sorts of dialogic patterns results in such outcomes.",
"We run the experiment for each conversation outcome using 10-fold cross-validation, under each of our four class settings: 6-Class, 8-Class, 10-Class (Easy), and 10-Class (Hard). The first feature set we use is Best_Features (from the original dialogue act classification experiments), which we run as a baseline.",
"Our second feature set is our Dialogue_Acts predictions for each turn – we choose the most probable dialogue act prediction for each turn using our dialogue act classification framework to avoid sparsity. In this way, for each class size INLINEFORM0 , each conversation is converted into a vector of INLINEFORM1 (up to 10) features that describe the most strongly associated dialogue act from the dialogue act classification experiments for each turn, and the corresponding turn number. For example, a conversation feature vector may look as follows: INLINEFORM2 ",
"Thus, our classifier can then learn patterns based on these features (for example, that specific acts appearing at the end of a conversation are strong indicators of customer satisfaction) that allow us to derive rules about successful/unsuccessful interactions.",
"Figure FIGREF38 shows the results of our binary classification experiments for each outcome. For each experiment, the Best_Features set is constant over each class size, while the Dialogue_Act features are affected by class size (since the predicted act for each turn will change based on the set of acts available for that class size). Our first observation is that we achieve high performance on the binary classification task, reaching F-measures of 0.70, 0.65, and 0.83 for the satisfaction, resolution, and frustration outcomes, respectively. Also, we observe that the performance of our predicted dialogue act features is comparable to that of the much larger set of best features for each label (almost identical in the case of frustration).",
"In more detail, we note interesting differences comparing the performance of the small set of dialogue act features that \"summarize\" the large, sparse set of best features for each label, as a form of data-driven feature selection. For satisfaction, we see that the best feature set outperforms the dialogue acts for each class set except for 10-Class (Easy), where the dialogue acts are more effective. The existence of the very lexically well-defined Social Act Thanking and Social Act Apology classes makes the dialogue acts ideal for summarization. In the case of problem resolution, we see that the performance of the dialogue acts approaches that of the best feature set as the number of classes increases, showing that the dialogue features are able to express the full intent of the turns well, even at more difficult class settings. Finally, for the frustration experiment, we observe negligible different between the best features and dialogue act features, and very high classification results overall."
],
[
"While these experiments highlight how we can use dialogue act predictions as a means to greatly reduce feature sparsity and predict conversation outcome, our main aim is to gain good insight from the use of the dialogue acts to inform and automate customer service interactions. We conduct deeper analysis by taking a closer look at the most informative dialogue act features in each experiment.",
"Table TABREF44 shows the most informative features and weights for each of our three conversation outcomes. To help guide our analysis, we divide the features into positions based on where they occur in the conversation: start (turns 1-3), middle (turns 4-6), and end (turns 7-10). Desirable outcomes (customers that are satisfied/not frustrated and resolved problems) are shown at the top rows of the table, and undesirable outcomes (unsatisfied/frustrated customers and unresolved problems) are shown at the bottom rows.",
"Our analysis helps zone in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer yields more satisfied customers, and more resolved problems (with ratios of above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early-on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has as similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Likewise, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers, with ratios of at least 4:1.",
"By using the feature weights we derive from using our predicted dialogue acts in our outcome classification experiments, we can thus derive data-driven patterns that offer useful insight into good/bad practices. Our goal is to then use these rules as guidelines, serving as a basis for automated response planning in the customer service domain. For example, these rules can be used to recommend certain dialogue act responses given the position in a conversation, and based previous turns. This information, derived from correlation with conversation outcomes, gives a valuable addition to conversational flow for intelligent agents, and is more useful than canned responses."
],
[
"In this paper, we explore how we can analyze dialogic trends in customer service conversations on Twitter to offer insight into good/bad practices with respect to conversation outcomes. We design a novel taxonomy of fine-grained dialogue acts, tailored for the customer service domain, and gather annotations for 800 Twitter conversations. We show that dialogue acts are often semantically overlapping, and conduct multi-label supervised learning experiments to predict multiple appropriate dialogue act labels for each turn in real-time, under varying class sizes. We show that our sequential SVM-HMM model outperforms all non-sequential baselines, and plan to continue our exploration of other sequential models including Conditional Random Fields (CRF) BIBREF32 and Long Short-Term Memory (LSTM) BIBREF33 , as well as of dialogue modeling using different Markov Decision Process (MDP) BIBREF34 models such as the Partially-Observed MDP (POMDP) BIBREF35 .",
"We establish that agents are more predictable than customers in terms of the dialogue acts they utilize, and set out to understand whether the conversation strategies agents employ are well-correlated with desirable conversation outcomes. We conduct binary classification experiments to analyze how our predicted dialogue acts can be used to classify conversations as ending in customer satisfaction, customer frustration, and problem resolution. We observe interesting correlations between the dialogue acts agents use and the outcomes, offering insights into good/bad practices that are more useful for creating context-aware automated customer service systems than generating canned response templates.",
"Future directions for this work revolve around the integration of the insights derived in the design of automated customer service systems. To this end, we aim to improve the taxonomy and annotation design by consulting domain-experts and using annotator feedback and agreement information, derive more powerful features for dialogue act prediction, and automate ranking and selection of best-practice rules based on domain requirements for automated customer service system design."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Taxonomy Definition",
"Data Collection",
"Annotation Results",
"Motivation for Multi-Label Classification",
"Conversation Modeling",
"Features",
"Classes",
"Experiments",
"Conversation Outcome Analysis",
"Classifying Problem Outcomes",
"Actionable Rules for Automated Customer Support",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"e02902f52907ab54c212407401fe155cd9708319"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Example Twitter Customer Service Conversation"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Example Twitter Customer Service Conversation"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"234d43540abc5c6095cc13c62097aabf98a50983"
],
"answer": [
{
"evidence": [
"Additionally, important patterns can emerge from analysis of the fine-grained acts in a dialogue in a post-prediction setting. For example, if an agent does not follow-up with certain actions in response to a customer's question dialogue act, this could be found to be a violation of a best practice pattern. By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. \"Continuing to request information late in a conversation often leads to customer dissatisfaction.\" This can then be codified into a best practice pattern rules for automated systems, such as \"A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation.\"",
"Our analysis helps zone in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer yields more satisfied customers, and more resolved problems (with ratios of above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early-on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has as similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Likewise, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers, with ratios of at least 4:1."
],
"extractive_spans": [
"A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation",
" offering extra help at the end of a conversation, or thanking the customer yields more satisfied customers, and more resolved problems ",
"asking yes-no questions early-on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has as similarly strong association with unsatisfied customers",
"Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated",
"requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers"
],
"free_form_answer": "",
"highlighted_evidence": [
"By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. \"Continuing to request information late in a conversation often leads to customer dissatisfaction.\" This can then be codified into a best practice pattern rules for automated systems, such as \"A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation.\"",
"Our analysis helps zone in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer yields more satisfied customers, and more resolved problems (with ratios of above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early-on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has as similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Likewise, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers, with ratios of at least 4:1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"88938f27642d3dccae3767e59cf4a52b35476706"
],
"answer": [
{
"evidence": [
"Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell:"
],
"extractive_spans": [],
"free_form_answer": "By annotators on Amazon Mechanical Turk.",
"highlighted_evidence": [
"We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell:"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"98085b2c62d977e4e5e87c792856a6629cc5ab9d"
],
"answer": [
{
"evidence": [
"We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 ."
],
"extractive_spans": [
" four different companies in the telecommunication, electronics, and insurance industries"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"c801e60b5f7eda767550576173a5ca6859c405c7"
],
"answer": [
{
"evidence": [
"In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems."
],
"extractive_spans": [
"overlapping dialogue acts"
],
"free_form_answer": "",
"highlighted_evidence": [
". We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English datasets?",
"Which patterns and rules are derived?",
"How are customer satisfaction, customer frustration and overall problem resolution data collected?",
"Which Twitter customer service industries are investigated?",
"Which dialogue acts are more suited to the twitter domain?"
],
"question_id": [
"b8cee4782e05afaeb9647efdb8858554490feba5",
"915cf3d481164217290d7b1eb9d48ed3e249196d",
"d6e8b32048ff83c052e978ff3b8f1cb097377786",
"e26e7e9bcd7e2cea561af596c59b98e823653a4b",
"b24767fe7e6620369063e646fd3048dc645a8348"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Example Twitter Customer Service Conversation",
"Figure 1: Methodology Pipeline",
"Figure 2: Proposed Fine-Grained Dialogue Act Taxonomy for Customer Service",
"Table 3: Dialogue Act Agreement in Fleiss-κ Bins (from Landis and Koch, 1977)",
"Figure 3: Distribution of Annotated Dialogue Act Labels",
"Table 4: Detailed Distribution of Top 12 Fine-Grained Dialogue Acts Derived From Annotations",
"Table 5: Distribution of the 10 Most Frequent Dialogue Act Pairs for Turns with More Than 1 Label (3,593)",
"Table 6: Dialogue Acts Used in Each Set of Experiments",
"Figure 4: Plot of Non-Sequential Baselines vs. Sequential SVM-HMM Model",
"Figure 5: Plot of Both Speaker Turns vs. Only Customer/Agent Turns for Sequential SVM-HMM",
"Table 7: Company-Wise vs Company-Independent Evaluation for 6-Class Sequential SVM-HMM",
"Figure 6: Plot of Dialogue Act Features vs. Best Feature Sets for Satisfaction, Resolution, and Frustration Outcomes",
"Table 8: Most Informative Dialogue Act Features and Derivative Actionable Insights, by Conversation Outcome"
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table3-1.png",
"5-Figure3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png",
"9-Table7-1.png",
"10-Figure6-1.png",
"11-Table8-1.png"
]
} | [
"How are customer satisfaction, customer frustration and overall problem resolution data collected?"
] | [
[
"1709.05413-Data Collection-2"
]
] | [
"By annotators on Amazon Mechanical Turk."
] | 335 |
1704.00253 | Building a Neural Machine Translation System Using Only Synthetic Parallel Data | Recent works have shown that synthetic parallel data automatically generated by translation models can be effective for various neural machine translation (NMT) issues. In this study, we build NMT systems using only synthetic parallel data. As an efficient alternative to real parallel data, we also present a new type of synthetic parallel corpus. The proposed pseudo parallel data are distinct from previous works in that ground truth and synthetic examples are mixed on both sides of sentence pairs. Experiments on Czech-German and French-German translations demonstrate the efficacy of the proposed pseudo parallel corpus, which shows not only enhanced results for bidirectional translation tasks but also substantial improvement with the aid of a ground truth real parallel corpus. | {
"paragraphs": [
[
"Given the data-driven nature of neural machine translation (NMT), the limited source-to-target bilingual sentence pairs have been one of the major obstacles in building competitive NMT systems. Recently, pseudo parallel data, which refer to the synthetic bilingual sentence pairs automatically generated by existing translation models, have reported promising results with regard to the data scarcity in NMT. Many studies have found that the pseudo parallel data combined with the real bilingual parallel corpus significantly enhance the quality of NMT models BIBREF0 , BIBREF1 , BIBREF2 . In addition, synthesized parallel data have played vital roles in many NMT problems such as domain adaptation BIBREF0 , zero-resource NMT BIBREF3 , and the rare word problem BIBREF4 .",
"Inspired by their efficacy, we attempt to train NMT models using only synthetic parallel data. To the best of our knowledge, building NMT systems with only pseudo parallel data has yet to be studied. Through our research, we explore the availability of synthetic parallel data as an effective alternative to the real-world parallel corpus. The active usage of synthetic data in NMT particularly has its significance in low-resource environments where the ground truth parallel corpora are very limited or not established. Even in recent approaches such as zero-shot NMT BIBREF5 and pivot-based NMT BIBREF6 , where direct source-to-target bilingual data are not required, the direct parallel corpus brings substantial improvements in translation quality where the pseudo parallel data can also be employed.",
"Previously suggested synthetic data, however, have several drawbacks to be a reliable alternative to the real parallel corpus. As illustrated in Figure 1 , existing pseudo parallel corpora can be classified into two groups: source-originated and target-originated. The common property between them is that ground truth examples exist only on a single side (source or target) of pseudo sentence pairs, while the other side is composed of synthetic sentences only. The bias of synthetic examples in sentence pairs, however, may lead to the imbalance of the quality of learned NMT models when the given pseudo parallel corpus is exploited in bidirectional translation tasks (e.g., French $\\rightarrow $ German and German $\\rightarrow $ French). In addition, the reliability of the synthetic parallel data is heavily influenced by a single translation model where the synthetic examples originate. Low-quality synthetic sentences generated by the translation model would prevent NMT models from learning solid parameters.",
"To overcome these shortcomings, we propose a novel synthetic parallel corpus called PSEUDOmix. In contrast to previous works, PSEUDOmix includes both synthetic and real sentences on either side of sentence pairs. In practice, it can be readily built by mixing source- and target-originated pseudo parallel corpora for a given translation task. Experiments on several language pairs demonstrate that the proposed PSEUDOmix shows useful properties that make it a reliable candidate for real-world parallel data. In detail, we make the following contributions:"
],
[
"Given a source sentence $x = (x_1, \\ldots , x_m)$ and its corresponding target sentence $y= (y_1, \\ldots , y_n)$ , the NMT aims to model the conditional probability $p(y|x)$ with a single large neural network. To parameterize the conditional distribution, recent studies on NMT employ the encoder-decoder architecture BIBREF7 , BIBREF8 , BIBREF9 . Thereafter, the attention mechanism BIBREF10 , BIBREF11 has been introduced and successfully addressed the quality degradation of NMT when dealing with long input sentences BIBREF12 .",
"In this study, we use the attentional NMT architecture proposed by Bahdanau et al. bahdanau2014neural. In their work, the encoder, which is a bidirectional recurrent neural network, reads the source sentence and generates a sequence of source representations $\\bf {h} =(\\bf {h_1}, \\ldots , \\bf {h_m}) $ . The decoder, which is another recurrent neural network, produces the target sentence one symbol at a time. The log conditional probability thus can be decomposed as follows: ",
"$$\\log p(y|x) = \\sum _{t=1}^{n} \\log p(y_t|y_{<t}, x)$$ (Eq. 3) ",
"where $y_{<t}$ = ( $y_1, \\ldots , y_{t-1}$ ). As described in Equation (2), the conditional distribution of $p(y_t|y_{<t}, x)$ is modeled as a function of the previously predicted output $y_{t-1}$ , the hidden state of the decoder $s_t$ , and the context vector $c_t$ . ",
"$$p(y_t|y_{<t}, x) \\propto \\exp \\lbrace g(y_{t-1}, s_t, c_t)\\rbrace $$ (Eq. 4) ",
"The context vector $c_t$ is used to determine the relevant part of the source sentence to predict $y_t$ . It is computed as the weighted sum of source representations $\\bf {h_1}, \\ldots , \\bf {h_m}$ . Each weight $\\alpha _{ti}$ for $\\bf {h_i}$ implies the probability of the target symbol $y_t$ being aligned to the source symbol $x_i$ : ",
"$$c_t = \\sum _{i=1}^{m} \\alpha _{ti} \\bf {h_i}$$ (Eq. 5) ",
"Given a sentence-aligned parallel corpus of size $N$ , the entire parameter $\\theta $ of the NMT model is jointly trained to maximize the conditional probabilities of all sentence pairs ${ \\lbrace (x^n, y^n)\\rbrace }_{ n=1 }^{ N }$ : ",
"$$\\theta ^* = \\underset{\\theta }{\\arg \\!\\max } \\sum _{n=1}^{N} \\log p(y^{n}|x^{n})$$ (Eq. 6) ",
"where $\\theta ^*$ is the optimal parameter."
],
[
"In statistical machine translation (SMT), synthetic bilingual data have been primarily proposed as a means to exploit monolingual corpora. By applying a self-training scheme, the pseudo parallel data were obtained by automatically translating the source-side monolingual corpora BIBREF13 , BIBREF14 . In a similar but reverse way, the target-side monolingual corpora were also employed to build the synthetic parallel data BIBREF15 , BIBREF16 . The primary goal of these works was to adapt trained SMT models to other domains using relatively abundant in-domain monolingual data.",
"Inspired by the successful application in SMT, there have been efforts to exploit synthetic parallel data in improving NMT systems. Source-side BIBREF1 , target-side BIBREF0 and both sides BIBREF2 of the monolingual data have been used to build synthetic parallel corpora. In their work, the pseudo parallel data combined with a real training corpus significantly enhanced the translation quality of NMT. In Sennrich et al., sennrich2015improving, domain adaptation of NMT was achieved by fine-tuning trained NMT models using a synthetic parallel corpus. Firat et al. firat2016zero attempted to build NMT systems without any direct source-to-target parallel corpus. In their work, the pseudo parallel corpus was employed in fine-tuning the target-specific attention mechanism of trained multi-way multilingual NMT BIBREF17 models, which enabled zero-resource NMT between the source and target languages. Lastly, synthetic sentence pairs have been utilized to enrich the training examples having rare or unknown translation lexicons BIBREF4 ."
],
[
"As described in the previous section, synthetic parallel data have been widely used to boost the performance of NMT. In this work, we further extend their application by training NMT with only synthetic data. In certain language pairs or domains where the source-to-target real parallel corpora are very rare or even unprepared, the model trained with synthetic parallel data can function as an effective baseline model. Once the additional ground truth parallel corpus is established, the trained model can be improved by retraining or fine-tuning using the real parallel data."
],
[
"For a given translation task, we classify the existing pseudo parallel data into the following groups:",
"Source-originated: The source sentences are from a real corpus, and the associated target sentences are synthetic. The corpus can be formed by automatically translating a source-side monolingual corpus into the target language BIBREF4 , BIBREF1 . It can also be built from source-pivot bilingual data by introducing a pivot language. In this case, a pivot-to-target translation model is employed to translate the pivot language corpus into the target language. The generated target sentences paired with the original source sentences form a pseudo parallel corpus.",
"Target-originated: The target sentences are from a real corpus, and the associated source sentences are synthetic. The corpus can be formed by back-translating a target-side monolingual corpus into the source language BIBREF0 . Similar to the source-originated case, it can be built from a pivot-target bilingual corpus using a pivot-to-source translation model BIBREF3 .",
"The process of building each synthetic parallel corpus is illustrated in Figure 1 . As shown in Figure 1 , the previous studies on pseudo parallel data share a common property: synthetic and ground truth sentences are biased on a single side of sentence pairs. In such a case where the synthetic parallel data are the only or major resource used to train NMT, this may severely limit the availability of the given pseudo parallel corpus. For instance, as will be demonstrated in our experiments, synthetic data showing relatively high quality in one translation task (e.g., French $\\rightarrow $ German) can produce poor results in the translation task of the reverse direction (German $\\rightarrow $ French).",
"Another drawback of employing synthetic parallel data in training NMT is that the capacity of the synthetic parallel corpus is inherently influenced by the mother translation model from which the synthetic sentences originate. Depending on the quality of the mother model, ill-formed or inaccurate synthetic examples could be generated, which would negatively affect the reliability of the resultant synthetic parallel data. In the previous study, Zhang and Zong zhang2016exploiting bypassed this issue by freezing the decoder parameters while training with the minibatches of pseudo bilingual pairs made from a source language monolingual corpus. This scheme, however, cannot be applied to our scenario as the decoder network will remain untrained during the entire training process."
],
[
"To overcome the limitations of the previously suggested pseudo parallel data, we propose a new type of synthetic parallel corpus called PSEUDOmix. Our approach is quite straightforward: For a given translation task, we first build both source-originated and target-originated pseudo parallel data. PSEUDOmix can then be readily built by mixing them together. The overall process of building PSEUDOmix for the French $\\rightarrow $ German translation task is illustrated in Figure 1 .",
"By mixing source- and target-originated pseudo parallel data, the resultant corpus includes both real and synthetic examples on either side of sentence pairs, which is the most evident feature of PSEUDOmix. Through the mixing approach, we attempt to lower the overall discrepancy in the quality of the source and target examples of synthetic sentence pairs, thus enhancing the reliability as a parallel resource. In the following section, we evaluate the actual benefits of the mixed composition in the synthetic parallel data."
],
[
"In this section, we analyze the effects of the mixed composition in the synthetic parallel data. Mixing pseudo parallel corpora derived from different sources, however, inevitably brings diversity, which affects the capacity of the resulting corpus. We isolate this factor by building both source- and target-originated synthetic corpora from the identical source-to-target real parallel corpus. Our experiments are performed on French (Fr) $\\leftrightarrow $ German (De) translation tasks. Throughout the remaining paper, we use the notation * to denote the synthetic part of the pseudo sentence pairs."
],
[
"By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\\rightarrow $ De and train another NMT model for En $\\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together.",
"We use the parallel corpora from the shared translation task of WMT'15 and WMT'16 BIBREF27 . Using the same pivot-based technique as the previous task, Cs-De* and Fr-De* corpora are built from the WMT'15 Cs-En and Fr-En parallel data respectively. For Cs*-De and Fr*-De, WMT'16 En-De parallel data are employed. We again use pre-trained NMT models for En $\\rightarrow $ Cs, En $\\rightarrow $ De, and En $\\rightarrow $ Fr to generate synthetic sentences. A beam of size 1 is used for fast decoding.",
"For the Real Fine-tuning scenario, we use real parallel corpora from the Europarl and News Commentary11 dataset. These direct parallel corpora are obtained from OPUS BIBREF28 . The size of each set of ground truth and synthetic parallel data is presented in Table 5 . Given that the training corpus for widely studied language pairs amounts to several million lines, the Cs-De language pair (0.6M) reasonably represents a low-resource situation. On the other hand, the Fr-De language pair (1.8M) is considered to be relatively resource-rich in our experiments. The details of the preprocessing are identical to those in the previous case."
],
[
"Each training corpus is tokenized using the tokenization script in Moses BIBREF20 . We represent every sentence as a sequence of subword units learned from byte-pair encoding BIBREF21 . We remove empty lines and all the sentences of length over 50 subword units. For a fair comparison, all cleaned synthetic parallel data have equal sizes. The summary of the final parallel corpora is presented in Table 1 ."
],
[
"All networks have 1024 hidden units and 500 dimensional embeddings. The vocabulary size is limited to 30K for each language. Each model is trained for 10 epochs using stochastic gradient descent with Adam BIBREF22 . The Minibatch size is 80, and the training set is reshuffled between every epoch. The norm of the gradient is clipped not to exceed 1.0 BIBREF23 . The learning rate is $2 \\cdot 10^{-4}$ in every case.",
"We use the newstest 2012 set for a development set and the newstest 2011 and newstest 2013 sets as test sets. At test time, beam search is used to approximately find the most likely translation. We use a beam of size 12 and normalize probabilities by the length of the candidate sentences. The evaluation metric is case-sensitive tokenized BLEU BIBREF24 computed with the multi-bleu.perl script from Moses. For each case, we present average BLEU evaluated on three different models trained from scratch.",
"We use the same experimental settings that we used for the previous case except for the Real Fine-tuning scenario. In the fine-tuning step, we use the learning rate of $2 \\cdot 10^{-5}$ , which produced better results. Embeddings are fixed throughout the fine-tuning steps. For evaluation, we use the same development and test sets used in the previous task."
],
[
"Before we choose the pivot language-based method for data synthesis, we conduct a preliminary experiment analyzing both pivot-based and direct back-translation. The model used for direct back-translation was trained with the ground truth Europarl Fr-De data made from the multi-parallel corpus presented in Table 2 . On the newstest 2012/2013 sets, the synthetic corpus generated using the pivot approach showed higher BLEU (19.11 / 20.45) than the back-translation counterpart (18.23 / 19.81) when used in training a De $\\rightarrow $ Fr NMT model. Although the back-translation method has been effective in many studies BIBREF0 , BIBREF25 , its availability becomes restricted in low-resource cases which is our major concern. This is due to the poor quality of the back-translation model built from the limited source-to-target parallel corpus. Instead, one can utilize abundant pivot-to-target parallel corpora by using a rich-resource language as the pivot language. This consequently improves the reliability of the quality of baseline translation models used for generating synthetic corpora.",
"From Table 2 , we find that the bias of the synthetic examples in pseudo parallel corpora brings imbalanced quality in the bidirectional translation tasks. Given that the source- and target-originated classification of a specific synthetic corpus is reversed depending on the direction of the translation, the overall results imply that the target-originated corpus for each translation task outperforms the source-originated data. The preference of target-originated synthetic data over the source-originated counterparts was formerly investigated in SMT by Lambert et al., lambert2011investigations. In NMT, it can be explained by the degradation in quality in the source-originated data owing to the erroneous target language model formed by the synthetic target sentences. In contrast, we observe that PSEUDOmix not only produces balanced results for both Fr $\\rightarrow $ De and De $\\rightarrow $ Fr translation tasks but also shows the best or competitive translation quality for each task.",
"We note that mixing two different synthetic corpora leads to improved BLEU not their intermediate value. To investigate the cause of the improvement in PSEUDOmix, we build additional target-originated synthetic corpora for each Fr $\\leftrightarrow $ De translation with a beam of size 3. As shown in Table 3 , for the De $\\rightarrow $ Fr task, the new target-originated corpus (c) shows higher BLEU than the source-originated corpus (b) by itself. The improvement in BLEU, however, occurs only when mixing the source- and target-originated synthetic parallel data (b+d) compared to mixing two target-originated synthetic corpora (c+d). The same phenomenon is observed in the Fr $\\rightarrow $ De case as well. The results suggest that real and synthetic sentences mixed on either side of sentence pairs enhance the capability of a synthetic parallel corpus. We conjecture that ground truth examples in both encoder and decoder networks not only compensate for the erroneous language model learned from synthetic sentences but also reinforces patterns of use latent in the pseudo sentences.",
"We also evaluate the effects of the proposed mixing strategy in phrase-based statistical machine translation BIBREF26 . We use Moses BIBREF20 and its baseline configuration for training. A 5-gram Kneser-Ney model is used as the language model. Table 4 shows the translation results of the phrase-based statistical machine translation (PBSMT) systems. In all experiments, NMT shows higher BLEU (2.44-3.38) compared to the PBSMT setting. We speculate that the deep architecture of NMT provides noise robustness in the synthetic examples. It is also notable that the proposed PSEUDOmix outperforms other synthetic corpora in PBSMT. The results clearly show that the benefit of the mixed composition in synthetic sentence pairs is beyond a specific machine translation framework.",
"Table 6 shows the results of the Pseudo Only scenario on Cs $\\leftrightarrow $ De and Fr $\\leftrightarrow $ De tasks. For the baseline comparison, we also present the translation quality of the NMT models trained with the ground truth Europarl+NC11 parallel corpora (a). In Cs $\\leftrightarrow $ De, the Pseudo Only scenario shows outperforming results compared to the real parallel corpus by up to 3.86-4.43 BLEU on the newstest 2013 set. Even for the Fr $\\leftrightarrow $ De case, where the size of the real parallel corpus is relatively large, the best BLEU of the pseudo parallel corpora is higher than that of the real parallel corpus by 1.3 (Fr $\\rightarrow $ De) and 0.49 (De $\\rightarrow $ Fr). We list the results on the newstest 2011 and newstest 2012 in the appendix. From the results, we conclude that large-scale synthetic parallel data can perform as an effective alternative to the real parallel corpora, particularly in low-resource language pairs.",
"As shown in Table 6 , the model learned from the Cs*-De corpus outperforms the model trained with the Cs-De* corpus in every case. This result is slightly different from the previous case, where the target-originated synthetic corpus for each translation task reports better results than the source-originated data. This arises from the diversity in the source of each pseudo parallel corpus, which vary in their suitability for the given test set. Table 6 also shows that mixing the Cs*-De corpus with the Cs-De* corpus of worse quality brings improvements in the resulting PSEUDOmix, showing the highest BLEU for bidirectional Cs $\\leftrightarrow $ De translation tasks. In addition, PSEUDOmix again shows much more balanced performance in Fr $\\leftrightarrow $ De translations compared to other synthetic parallel corpora.",
"While the mixing strategy compensates for most of the gap between the Fr-De* and the Fr*-De (3.01 $\\rightarrow $ 0.17) in the De $\\rightarrow $ Fr case, the resulting PSEUDOmix still shows lower BLEU than the target-originated Fr-De* corpus. We thus enhance the quality of the synthetic examples of the source-originated Fr*-De data by further training its mother translation model (En $\\rightarrow $ Fr). As illustrated in Figure 2 , with the target-originated Fr-De* corpus being fixed, the quality of the models trained with the source-originated Fr*-De data and PSEUDOmix increases in proportion to the quality of the mother model for the Fr*-De corpus. Eventually, PSEUDOmix shows the highest BLEU, outperforming both Fr*-De and Fr-De* data. The results indicate that the benefit of the proposed mixing approach becomes much more evident when the quality gap between the source- and target-originated synthetic data is within a certain range.",
"As presented in Table 6 , we observe that fine-tuning using ground truth parallel data brings substantial improvements in the translation qualities of all NMT models. Among all fine-tuned models, PSEUDOmix shows the best performance in all experiments. This is particularly encouraging for the case of De $\\rightarrow $ Fr, where PSEUDOmix reported lower BLEU than the Fr-De* data before it was fine-tuned. Even in the case where PSEUDOmix shows comparable results with other synthetic corpora in the Pseudo Only scenario, it shows higher improvements in the translation quality when fine-tuned with the real parallel data. These results clearly demonstrate the strengths of the proposed PSEUDOmix, which indicate both competitive translation quality by itself and relatively higher potential improvement as a result of the refinement using ground truth parallel corpora.",
"In Table 6 (b), we also present the performance of NMT models learned from the ground truth Europarl+NC11 data merged with the target-originated synthetic parallel corpus for each task. This is identical in spirit to the method in Sennrich et al. sennrich2015improving which employs back-translation for data synthesis. Instead of direct back-translation, we used pivot-based back-translation, as we verified the strength of the pivot-based data synthesis in low-resource environments. Although the ground truth data is only used for the refinement, the Real Fine-tuning scheme applied to PSEUDOmix shows better translation quality compared to the models trained with the merged corpus (b). Even the results of the Real Fine-tuning on the target-originated corpus provide comparable results to the training with the merged corpus from scratch. The overall results support the efficacy of the proposed two-step methods in practical application: the Pseudo Only method to introduce useful prior on the NMT parameters and the Real Fine-tuning scheme to reorganize the pre-trained NMT parameters using in-domain parallel data."
],
[
"The experiments shown in the previous section verify the potential of PSEUDOmix as an efficient alternative to the real parallel data. The condition in the previous case, however, is somewhat artificial, as we deliberately match the sources of all pseudo parallel corpora. In this section, we move on to more practical and large-scale applications of synthetic parallel data. Experiments are conducted on Czech (Cs) $\\leftrightarrow $ German (De) and French (Fr) $\\leftrightarrow $ German (De) translation tasks."
],
[
"We analyze the efficacy of the proposed mixing approach in the following application scenarios:",
"Pseudo Only: This setting trains NMT models using only synthetic parallel data without any ground truth parallel corpus.",
"Real Fine-tuning: Once the training of an NMT model is completed in the Pseudo Only manner, the model is fine-tuned using only a ground truth parallel corpus.",
"The suggested scenarios reflect low-resource situations in building NMT systems. In the Real Fine-tuning, we fine-tune the best model of the Pseudo Only scenario evaluated on the development set."
],
[
"In this work, we have constructed NMT systems using only synthetic parallel data. For this purpose, we suggest a novel pseudo parallel corpus called PSEUDOmix where synthetic and ground truth real examples are mixed on either side of sentence pairs. Experiments show that the proposed PSEUDOmix not only shows enhanced results for bidirectional translation but also reports substantial improvement when fine-tuned with ground truth parallel data. Our work has significance in that it provides a thorough investigation on the use of synthetic parallel corpora in low-resource NMT environment. Without any adjustment, the proposed method can also be extended to other learning areas where parallel samples are employed. For future work, we plan to explore robust data sampling methods, which would maximize the quality of the mixed synthetic parallel data."
]
],
"section_name": [
"Introduction",
"Neural Machine Translation",
"Related Work",
"Motivation",
"Limits of the Previous Approaches",
"Proposed Mixing Approach",
"Experiments: Effects of Mixing Real and Synthetic Sentences",
"Data Preparation",
"Data Preprocessing",
"Training and Evaluation",
"Results and Analysis",
"Experiments: Large-scale Application",
"Application Scenarios",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c77fc6eed80a8a59203bd22cb5a34730869fc963"
],
"answer": [
{
"evidence": [
"While the mixing strategy compensates for most of the gap between the Fr-De* and the Fr*-De (3.01 $\\rightarrow $ 0.17) in the De $\\rightarrow $ Fr case, the resulting PSEUDOmix still shows lower BLEU than the target-originated Fr-De* corpus. We thus enhance the quality of the synthetic examples of the source-originated Fr*-De data by further training its mother translation model (En $\\rightarrow $ Fr). As illustrated in Figure 2 , with the target-originated Fr-De* corpus being fixed, the quality of the models trained with the source-originated Fr*-De data and PSEUDOmix increases in proportion to the quality of the mother model for the Fr*-De corpus. Eventually, PSEUDOmix shows the highest BLEU, outperforming both Fr*-De and Fr-De* data. The results indicate that the benefit of the proposed mixing approach becomes much more evident when the quality gap between the source- and target-originated synthetic data is within a certain range.",
"As presented in Table 6 , we observe that fine-tuning using ground truth parallel data brings substantial improvements in the translation qualities of all NMT models. Among all fine-tuned models, PSEUDOmix shows the best performance in all experiments. This is particularly encouraging for the case of De $\\rightarrow $ Fr, where PSEUDOmix reported lower BLEU than the Fr-De* data before it was fine-tuned. Even in the case where PSEUDOmix shows comparable results with other synthetic corpora in the Pseudo Only scenario, it shows higher improvements in the translation quality when fine-tuned with the real parallel data. These results clearly demonstrate the strengths of the proposed PSEUDOmix, which indicate both competitive translation quality by itself and relatively higher potential improvement as a result of the refinement using ground truth parallel corpora."
],
"extractive_spans": [],
"free_form_answer": "one",
"highlighted_evidence": [
"While the mixing strategy compensates for most of the gap between the Fr-De* and the Fr*-De (3.01 $\\rightarrow $ 0.17) in the De $\\rightarrow $ Fr case, the resulting PSEUDOmix still shows lower BLEU than the target-originated Fr-De* corpus. We thus enhance the quality of the synthetic examples of the source-originated Fr*-De data by further training its mother translation model (En $\\rightarrow $ Fr). As illustrated in Figure 2 , with the target-originated Fr-De* corpus being fixed, the quality of the models trained with the source-originated Fr*-De data and PSEUDOmix increases in proportion to the quality of the mother model for the Fr*-De corpus. Eventually, PSEUDOmix shows the highest BLEU, outperforming both Fr*-De and Fr-De* data. ",
"As presented in Table 6 , we observe that fine-tuning using ground truth parallel data brings substantial improvements in the translation qualities of all NMT models. Among all fine-tuned models, PSEUDOmix shows the best performance in all experiments. This is particularly encouraging for the case of De $\\rightarrow $ Fr, where PSEUDOmix reported lower BLEU than the Fr-De* data before it was fine-tuned. Even in the case where PSEUDOmix shows comparable results with other synthetic corpora in the Pseudo Only scenario, it shows higher improvements in the translation quality when fine-tuned with the real parallel data. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"2357c22bc121b659cb6aeebd0e58914fd8817702"
],
"answer": [
{
"evidence": [
"By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\\rightarrow $ De and train another NMT model for En $\\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together."
],
"extractive_spans": [
"By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section."
],
"free_form_answer": "",
"highlighted_evidence": [
"By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"a733983c97cee11ee62ac44715f8a8528142e110"
],
"answer": [
{
"evidence": [
"By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\\rightarrow $ De and train another NMT model for En $\\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\\rightarrow $ De and train another NMT model for En $\\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 ."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How many improvements on the French-German translation benchmark?",
"How do they align the synthetic data?",
"Where do they collect the synthetic data?"
],
"question_id": [
"0a7ac8eccbc286e0ab55bc5949f3f8d2ea2d1a60",
"e84e80067b3343d136fd75300691c8b3d3efbdac",
"45bd22f2cfb62a5f79ec3c771c8324b963567cc0"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"data efficient",
"data efficient",
"data efficient"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The process of building each pseudo parallel corpus group for Czech→ German translation. * indicates the synthetic sentences generated by translation models. PSEUDOsrc and PSEUDOtgt can be made from Czech or German monolingual corpora or from parallel corpora including English, which is the pivot language.",
"Table 1: Statistics of the training parallel corpora for Cs→De and Fr→De. Note that each of PSEUDOsrc and PSEUDOtgt in one translation task (e.g., Cs→De) corresponds to PSEUDOtgt and PSEUDOsrc in the translation task of the reverse direction (De→Cs) respectively.",
"Table 2: Translation results (BLEU score) for Cs↔ De and Fr↔ De. For pseudo parallel corpora, the score on the first row within each cell is for pseudo only scenario while the score on the second row is the result of pseudo-real fine-tuning. The values in parentheses are improvements in BLEU by fine-tuning using the real parallel corpus. The highest score for each development and test set is bold-faced.",
"Figure 2: Translation results for De→Fr with respect to the performance of the mother model for PSEUDOsrc evaluated on the newstest2013 set.",
"Table 3: Translation results (BLEU score) for Cs↔ De and Fr↔ De in the pseudo-real merge setting. The scores on the second row of PSEUDOsrc and PSEUDOmix for De→Fr indicate translation results when the beam size of the mother model for PSEUDOsrc is increased from 1 to 5 with PSEUDOtgt remaining unchanged. The highest BLEU score for each development and test set is bold-faced."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure2-1.png",
"8-Table3-1.png"
]
} | [
"How many improvements on the French-German translation benchmark?"
] | [
[
"1704.00253-Results and Analysis-6",
"1704.00253-Results and Analysis-7"
]
] | [
"one"
] | 336 |
1909.11833 | SIM: A Slot-Independent Neural Model for Dialogue State Tracking | Dialogue state tracking is an important component in task-oriented dialogue systems to identify users' goals and requests as a dialogue proceeds. However, as most previous models are dependent on dialogue slots, the model complexity soars when the number of slots increases. In this paper, we put forward a slot-independent neural model (SIM) to track dialogue states while keeping the model complexity invariant to the number of dialogue slots. The model utilizes attention mechanisms between user utterance and system actions. SIM achieves state-of-the-art results on WoZ and DSTC2 tasks, with only 20% of the model size of previous models. | {
"paragraphs": [
[
"With the rapid development in deep learning, there is a recent boom of task-oriented dialogue systems in terms of both algorithms and datasets. The goal of task-oriented dialogue is to fulfill a user's requests such as booking hotels via communication in natural language. Due to the complexity and ambiguity of human language, previous systems have included semantic decoding BIBREF0 to project natural language input into pre-defined dialogue states. These states are typically represented by slots and values: slots indicate the category of information and values specify the content of information. For instance, the user utterance “can you help me find the address of any hotel in the south side of the city” can be decoded as $inform(area, south)$ and $request(address)$, meaning that the user has specified the value south for slot area and requested another slot address.",
"Numerous methods have been put forward to decode a user's utterance into slot values. Some use hand-crafted features and domain-specific delexicalization methods to achieve strong performance BIBREF1, BIBREF2. BIBREF0 employs CNN and pretrained embeddings to further improve the state tracking accuracy. BIBREF3 extends this work by using two additional statistical update mechanisms. BIBREF4 uses human teaching and feedback to boost the state tracking performance. BIBREF5 utilizes both global and local attention mechanism in the proposed GLAD model which obtains state-of-the-art results on WoZ and DSTC2 datasets. However, most of these methods require slot-specific neural structures for accurate prediction. For example, BIBREF5 defines a parametrized local attention matrix for each slot. Slot-specific mechanisms become unwieldy when the dialogue task involves many topics and slots, as is typical in a complex conversational setting like product troubleshooting. Furthermore, due to the sparsity of labels, there may not be enough data to thoroughly train each slot-specific network structure. BIBREF6, BIBREF7 both propose to remove the model's dependency on dialogue slots but there's no modification to the representation part, which could be crucial to textual understanding as we will show later.",
"To solve this problem, we need a state tracking model independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and utterance instead of slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in dialogue tasks go up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross and self-attention mechanisms, make our model achieve even better performance than slot-specific models. For instance, on Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previously best model GLAD, with only 22% of the number of parameters. On DSTC2 dataset, SIM achieves comparable performance with previous best models with only 19% of the model size."
],
[
"As outlined in BIBREF9, the dialogue state tracking task is formulated as follows: at each turn of dialogue, the user's utterance is semantically decoded into a set of slot-value pairs. There are two types of slots. Goal slots indicate the category, e.g. area, food, and the values specify the constraint given by users for the category, e.g. South, Mediterranean. Request slots refer to requests, and the value is the category that the user demands, e.g. phone, area. Each user's turn is thus decoded into turn goals and turn requests. Furthermore, to summarize the user's goals so far, the union of all previous turn goals up to the current turn is defined as joint goals.",
"Similarly, the dialogue system's reply from the previous round is labeled with a set of slot-value pairs denoted as system actions. The dialogue state tracking task requires models to predict turn goal and turn request given user's utterance and system actions from previous turns.",
"Formally, the ontology of dialogue, $O$, consists of all possible slots $S$ and the set of values for each slot, $V(s), \\forall s \\in S$. Specifically, req is the name for request slot and its values include all the requestable category information. The dialogue state tracking task is that, given the user's utterance in the $i$-th turn, $U$, and system actions from the $(i-1)$-th turn, $A=\\lbrace (s_1, v_1), ..., (s_q, v_q)\\rbrace $, where $s_j \\in S, v_j \\in V(s_j)$, the model should predict:",
"Turn goals: $\\lbrace (s_1, v_1), ..., (s_b, v_b)\\rbrace $, where $s_j \\in S, v_j \\in V(s_j)$,",
"Turn requests: $\\lbrace (req, v_1), ..., (req, v_t)\\rbrace $, where $v_j \\in V(req)$.",
"The joint goals at turn $i$ are then computed by taking the union of all the predicted turn goals from turn 1 to turn $i$.",
"Usually this prediction task is cast as a binary classification problem: for each slot-value pair $(s, v)$, determine whether it should be included in the predicted turn goals/requests. Namely, the model is to learn a mapping function $f(U, A, (s, v))\\rightarrow \\lbrace 0,1\\rbrace $."
],
[
"To predict whether a slot-value pair should be included in the turn goals/requests, previous models BIBREF0, BIBREF5 usually define network components for each slot $s\\in S$. This can be cumbersome when the ontology is large, and it suffers from the insufficient data problem: the labelled data for a single slot may not suffice to effectively train the parameters for the slot-specific neural networks structure.",
"Therefore, we propose that in the classification process, the model needs to rely on the semantic similarity between the user's utterance and slot-value pair, with system action information. In other words, the model should have only a single global neural structure independent of slots. We heretofore refer to this model as Slot-Independent Model (SIM) for dialogue state tracking."
],
[
"Suppose the user's utterance in the $i$-th turn contains $m$ words, $U=(w_1, w_2, ..., w_m)$. For each word $w_i$, we use GloVe word embedding $e_i$, character-CNN embedding $c_i$, Part-Of-Speech (POS) embedding $\\operatorname{POS}_i$, Named-Entity-Recognition (NER) embedding $\\operatorname{NER}_i$ and exact match feature $\\operatorname{EM}_i$. The POS and NER tags are extracted by spaCy and then mapped into a fixed-length vector. The exact matching feature has two bits, indicating whether a word and its lemma can be found in the slot-value pair representation, respectively. This is the first step to establish a semantic relationship between user utterance and slots. To summarize, we represent the user utterance as $X^U=\\lbrace {u}_1, {u}_2, ..., {u}_m\\rbrace \\in \\mathbb {R}^{m\\times d_u}, {u}_i=[e_i; c_i; \\operatorname{POS}_i; \\operatorname{NER}_i; \\operatorname{EM}_i]$.",
"For each slot-value pair $(s, v)$ either in system action or in the ontology, we get its text representation by concatenating the contents of slot and value. We use GloVe to embed each word in the text. Therefore, each slot-value pair in system actions is represented as $X^A\\in \\mathbb {R}^{a\\times d}$ and each slot-value pair in ontology is represented as $X^O\\in \\mathbb {R}^{o\\times d}$, where $a$ and $o$ is the number of words in the corresponding text."
],
[
"To incorporate contextual information, we employ a bi-directional RNN layer on the input representation. For instance, for user utterance,",
"We apply variational dropout BIBREF10 for RNN inputs, i.e. the dropout mask is shared over different timesteps.",
"After RNN, we use linear self-attention to get a single summarization vector for user utterance, using weight vector $w\\in \\mathbb {R}^{d_{rnn}}$ and bias scalar $b$:",
"For each slot-value pair in the system actions and ontology, we conduct RNN and linear self-attention summarization in a similar way. As the slot-value pair input is not a sentence, we only keep the summarization vector $s^A \\in \\mathbb {R}^{d_{rnn}}$ and $s^O \\in \\mathbb {R}^{d_{rnn}}$ for each slot-value pair in system actions and ontology respectively."
],
[
"To determine whether the current user utterance refers to a slot-value pair $(s, v)$ in the ontology, the model employs inter-attention between user utterance, system action and ontology. Similar to the framework in BIBREF5, we employ two sources of interactions.",
"The first is the semantic similarity between the user utterance, represented by embedding $R^U$ and each slot-value pair from ontology $(s, v)$, represented by embedding $s^O$. We linearly combine vectors in $R^U$ via the normalized inner product with $s^O$, which is then employed to compute the similarity score $y_1$:",
"The second source involves the system actions. The reason is that if the system requested certain information in the previous round, it is very likely that the user will give answer in this round, and the answer may refer to the question, e.g. “yes” or “no” to the question. Thus, we first attend to system actions from user utterance and then combine with the ontology to get similarity score. Suppose there are $L$ slot-values pairs in the system actions from previous round, represented by $s_1^A, ..., s_L^A$:",
"The final similarity score between the user utterance and a slot-value pair $(s, v)$ from the ontology is a linear combination of $y_1$ and $y_2$ and normalized using sigmoid function.",
"where $\\beta $ is a learned coefficient. The loss function is the sum of binary cross entropy over all slot-value pairs in the ontology:",
"where $y_{(s, v)}\\in \\lbrace 0, 1\\rbrace $ is the ground truth. We illustrate the model structure of SIM in fig:model."
],
[
"We evaluated our model on Wizard of Oz (WoZ) BIBREF8 and the second Dialogue System Technology Challenges BIBREF11. Both tasks are for restaurant reservation and have slot-value pairs of both goal and request types. WoZ has 4 kinds of slots (area, food, price range, request) and 94 values in total. DSTC2 has an additional slot name and 220 values in total. WoZ has 800 dialogues in the training and development set and 400 dialogues in the test set, while DSTC2 dataset consists of 2118 dialogues in the training and development set, and 1117 dialogues in the test set."
],
[
"We use accuracy on the joint goal and turn request as the evaluation metrics. Both are sets of slot-value pairs, so the predicted set must exactly match the answer to be judged as correct. For joint goals, if a later turn generates a slot-value pair where the slot has been specified in previous rounds, we replace the value with the latest content."
],
[
"We fix GloVe BIBREF12 as the word embedding matrix. The models are trained using ADAM optimizer BIBREF13 with an initial learning rate of 1e-3. The dimension of POS and NER embeddings are 12 and 8, respectively. In character-CNN, each character is embedded into a vector of length 50. The CNN window size is 3 and hidden size is 50. We apply a dropout rate of 0.1 for the input to each module. The hidden size of RNN is 125.",
"During training, we pick the best model with highest joint goal score on development set and report the result on the test set.",
"For DSTC2, we adhere to the standard procedure to use the N-best list from the noisy ASR results for testing. The ASR results are very noisy. We experimented with several strategies and ended up using only the top result from the N-best list. The training and validation on DSTC2 are based on noise-free user utterance. The WoZ task does not have ASR results available, so we directly use noise-free user utterance."
],
[
"We compare our model SIM with a number of baseline systems: delexicalization model BIBREF8, BIBREF1, the neural belief tracker model (NBT) BIBREF0, global-locally self-attentive model GLAD BIBREF5, large-scale belief tracking model LSBT BIBREF7 and scalable multi-domain dialogue state tracking model SMDST BIBREF6.",
"Table TABREF17 shows that, on WoZ dataset, SIM achieves a new state-of-the-art joint goal accuracy of 89.5%, a significant improvement of 1.4% over GLAD, and turn request accuracy of 97.3%, 0.2% above GLAD. On DSTC2 dataset, where noisy ASR results are used as user utterance during test, SIM obtains comparable results with GLAD. Furthermore, the better representation in SIM makes it significantly outperform previous slot-independent models LSBT and SMDST.",
"Furthermore, as SIM has no slot-specific neural network structures, its model size is much smaller than previous models. Table TABREF20 shows that, on WoZ and DSTC2 datasets, SIM model has the same number of parameters, which is only 23% and 19% of that in GLAD model.",
"Ablation Study. We conduct an ablation study of SIM on WoZ dataset. As shown in Table TABREF21, the additional utterance word features, including character, POS, NER and exact matching embeddings, can boost the performance by 2.4% in joint goal accuracy. These features include POS, NER and exact match features. This indicates that for the dialogue state tracking task, syntactic information and text matching are very useful. Character-CNN captures sub-word level information and is effective in understanding spelling errors, hence it helps with 1.2% in joint goal accuracy. Variational dropout is also beneficial, contributing 0.9% to the joint goal accuracy, which shows the importance of uniform masking during dropout."
],
[
"In this paper, we propose a slot-independent neural model, SIM, to tackle the dialogue state tracking problem. Via incorporating better feature representations, SIM can effectively reduce the model complexity while still achieving superior or comparable results on various datasets, compared with previous models.",
"For future work, we plan to design general slot-free dialogue state tracking models which can be adapted to different domains during inference time, given domain-specific ontology information. This will make the model more agile in real applications."
],
[
"We thank the anonymous reviewers for the insightful comments. We thank William Hinthorn for proof-reading our paper."
]
],
"section_name": [
"Introduction",
"Problem Formulation",
"Slot-Independent Model",
"Slot-Independent Model ::: Input Representation",
"Slot-Independent Model ::: Contextual Representation",
"Slot-Independent Model ::: Inter-Attention",
"Experiment ::: Dataset",
"Experiment ::: Metrics",
"Experiment ::: Training Details",
"Experiment ::: Baseline models and result",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"e4b2da7061b31ea9244a9a97fef20a4092b208a8"
],
"answer": [
{
"evidence": [
"To solve this problem, we need a state tracking model independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and utterance instead of slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in dialogue tasks go up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross and self-attention mechanisms, make our model achieve even better performance than slot-specific models. For instance, on Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previously best model GLAD, with only 22% of the number of parameters. On DSTC2 dataset, SIM achieves comparable performance with previous best models with only 19% of the model size."
],
"extractive_spans": [],
"free_form_answer": "They exclude slot-specific parameters and incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN).",
"highlighted_evidence": [
"Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"9a6caab00205504df4273240cd1ff5f0c77fe378"
],
"answer": [
{
"evidence": [
"To solve this problem, we need a state tracking model independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and utterance instead of slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in dialogue tasks go up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross and self-attention mechanisms, make our model achieve even better performance than slot-specific models. For instance, on Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previously best model GLAD, with only 22% of the number of parameters. On DSTC2 dataset, SIM achieves comparable performance with previous best models with only 19% of the model size."
],
"extractive_spans": [
"convolutional neural networks (CNN)"
],
"free_form_answer": "",
"highlighted_evidence": [
" To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2391525de7129ce53eaf2ecf5d4676091257701f"
],
"answer": [
{
"evidence": [
"Furthermore, as SIM has no slot-specific neural network structures, its model size is much smaller than previous models. Table TABREF20 shows that, on WoZ and DSTC2 datasets, SIM model has the same number of parameters, which is only 23% and 19% of that in GLAD model."
],
"extractive_spans": [],
"free_form_answer": "By the number of parameters.",
"highlighted_evidence": [
"Table TABREF20 shows that, on WoZ and DSTC2 datasets, SIM model has the same number of parameters, which is only 23% and 19% of that in GLAD model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they prevent the model complexity increasing with the increased number of slots?",
"What network architecture do they use for SIM?",
"How do they measure model size?"
],
"question_id": [
"4dad15fee1fe01c3eadce8f0914781ca0a6e3f23",
"892c346617a3391c7dafc9da1b65e5ea3890294d",
"36feaac9d9dee5ae09aaebc2019b014e57f61fbf"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: SIM model structure.",
"Table 1: Joint goal and turn request accuracies on WoZ and DSTC2 restaurant reservation datasets.",
"Table 2: Model size comparison between SIM and GLAD (Zhong et al., 2018) on WoZ and DSTC2.",
"Table 3: Ablation study of SIM on WoZ. We pick the model with highest joint goal score on development set and report its performance on test set."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png"
]
} | [
"How do they prevent the model complexity increasing with the increased number of slots?",
"How do they measure model size?"
] | [
[
"1909.11833-Introduction-2"
],
[
"1909.11833-Experiment ::: Baseline models and result-2"
]
] | [
"They exclude slot-specific parameters and incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN).",
"By the number of parameters."
] | 338 |
1804.00079 | Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning | A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations. | {
"paragraphs": [
[
"Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.",
"Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.",
"Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.",
"Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.",
"To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.",
"The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset."
],
[
"The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.",
"Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance\" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.",
"Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.",
"We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .",
" BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal\" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 ."
],
[
"Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs INLINEFORM0 . The input INLINEFORM1 and output INLINEFORM2 are sequences INLINEFORM3 and INLINEFORM4 . The encoder produces a fixed length vector representation INLINEFORM5 of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: INLINEFORM6 ",
" BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation INLINEFORM0 is typically the last hidden state of the encoder RNN.",
" BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0 ",
"The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder."
],
[
" BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence."
],
[
"Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.",
"We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations."
],
[
"Multi-task training with different data sources for each task stills poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?",
" BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.",
"We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.",
"Model details can be found in section SECREF7 in the Appendix.",
"0ptcenterline ",
"",
"A set of INLINEFORM0 tasks with a common source language, a shared encoder INLINEFORM1 across all tasks and a set of INLINEFORM2 task specific decoders INLINEFORM3 . Let INLINEFORM4 denote each model's parameters, INLINEFORM5 a probability vector ( INLINEFORM6 ) denoting the probability of sampling a task such that INLINEFORM7 , datasets for each task INLINEFORM8 and a loss function INLINEFORM9 .",
"",
" INLINEFORM0 has not converged [1] Sample task INLINEFORM1 . Sample input, output pairs INLINEFORM2 . Input representation INLINEFORM3 . Prediction INLINEFORM4 INLINEFORM5 Adam INLINEFORM6 .",
"",
" "
],
[
"In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings."
],
[
"We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .",
"The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.",
"https://github.com/kudkudak/word-embeddings-benchmarks/wiki"
],
[
"Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representation. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.",
"It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity our sentence encoder with more hidden units (+L) as well as an additional layer (+2L) also lead to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MPRC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E) consistent with observations made by BIBREF48 .",
"In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.",
"Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .",
"In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.",
"In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).",
"We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together."
],
[
"We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.",
"In future work, we would like understand and interpret the inductive biases that our model learns and observe how it changes with the addition of different tasks beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model."
],
[
"The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding."
],
[
"We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).",
"We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance."
],
[
"In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.",
"We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings."
],
[
"This section describes the specifics our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.",
"+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.",
"+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.",
"+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.",
"+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.",
"+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.",
"+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.",
"In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models."
],
[
" BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description."
],
[
"We evaluate on text classification benchmarks - sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy."
],
[
"We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC) corpus. This is a binary classification problem to identify if two sentences are paraphrases of each other. The evaluation metric is classification accuracy and F1."
],
[
"To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E) which is a binary classification problem. The evaluation metric for SICK-R is Pearson correlation and classification accuracy for SICK-E."
],
[
"In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the similarity textual similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criteria is Pearson correlation."
],
[
"Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110 layer ResNet. The evaluation criterion is Recall@K and the median K across 5 different splits of the data."
],
[
"In addition to the above tasks which were considered by BIBREF9 , we also evaluate on the recently published Quora duplicate question dataset since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45 . Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4 layer MLP with 1024 hidden units, with a dropout rate of 0.5 after every hidden layer. The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples between 1,000 and 25,000 using the same splits as BIBREF47 ."
],
[
"In an attempt to understand what information is encoded in by sentence representations, we consider six different classification tasks where the objective is to predict sentence characteristics such as length, word content and word order BIBREF17 or syntactic properties such as active/passive, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14 .",
"The sentence characteristic tasks are setup in the same way as described in BIBREF17 . The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes a concatenation of a sentence representation INLINEFORM0 and a word representation INLINEFORM1 to determine if the word is contained in the sentence. The order task is an extension of the content task where a concatenation of the sentence representation and word representations of two words in sentence is used to determine if the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset for these experiments that were not used to train our multi-task representations.",
"The syntactic properties tasks are setup in the same way as described in BIBREF14 .The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) is a 20-way classification problem with 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits."
]
],
"section_name": [
"Introduction",
"Related Work",
"Sequence-to-Sequence Learning",
"Multi-task Sequence-to-sequence Learning",
"Training Objectives & Evaluation",
"Multi-task training setup",
"Evaluation Strategies, Experimental Results & Discussion",
"Evaluation Strategy",
"Experimental Results & Discussion",
"Conclusion & Future Work",
"Acknowledgements",
"Model Training",
"Vocabulary Expansion & Representation Pooling",
"Multi-task model details",
"Description of evaluation tasks",
"Text Classification",
"Paraphrase Identification",
"Entailment and Semantic Relatedness",
"Semantic Textual Similarity",
"Image-caption retrieval",
"Quora Duplicate Question Classification",
"Sentence Characteristics & Syntax"
]
} | {
"answers": [
{
"annotation_id": [
"99b17a6492c1b08da462afad4d9bf5de5a3e224d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b69f7b5bd8aae40dc2b7f470e2f41648cd950ce6"
],
"answer": [
{
"evidence": [
"We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 ."
],
"extractive_spans": [
"standard benchmarks BIBREF36 , BIBREF37",
"to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters",
"transfer learning evaluation in an artificially constructed low-resource setting"
],
"free_form_answer": "",
"highlighted_evidence": [
"We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a89a123048d818e602e0a5f2ab6b95ba2a5f4492"
],
"answer": [
{
"evidence": [
"We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.",
"Multi-task training setup"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Skip-thought vectors-Natural Language Inference paragraphs) The encoder for the current sentence and the decoders for the previous (STP) and next sentence (STN) are typically parameterized as separate RNNs\n- RNN",
"highlighted_evidence": [
"Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.\n\nMulti-task training setup"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"23b131daf4bfc1127fcf7738dd40a46f9764dad8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9ba67ab3023ba046a86de0fdb6a12c49298fbfb3"
],
"answer": [
{
"evidence": [
"The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset."
],
"extractive_spans": [
"multi-lingual NMT",
"natural language inference",
"constituency parsing",
"skip-thought vectors"
],
"free_form_answer": "",
"highlighted_evidence": [
" To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"974f2a165624370e2a6974f1e3e6100e8ec4ed14"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: An approximate number of sentence pairs for each task."
],
"extractive_spans": [],
"free_form_answer": "- En-Fr (WMT14)\n- En-De (WMT15)\n- Skipthought (BookCorpus)\n- AllNLI (SNLI + MultiNLI)\n- Parsing (PTB + 1-billion word)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: An approximate number of sentence pairs for each task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which language(s) do they work with?",
"How do they evaluate their sentence representations?",
"Which model architecture do they for sentence encoding?",
"How many tokens can sentences in their model at most contain?",
"Which training objectives do they combine?",
"Which data sources do they use?"
],
"question_id": [
"63a77d2640df8315bf0bc3925fdd7e27132b1244",
"50be9e6203c40ed3db48ed37103f967ef0ea946c",
"36a9230fadf997d3b0c5fc8af8d89bd48bf04f12",
"496304f63006205ee63da376e02ef1b3010c4aa1",
"00e9f088291fcf27956f32a791f87e4a1e311e41",
"e2f269997f5a01949733c2ec8169f126dabd7571"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: An approximate number of sentence pairs for each task.",
"Figure 1: T-SNE visualizations of our sentence representations on 3 different datasets. SUBJ (left), TREC (middle), DBpedia (right). Dataset details are presented in the Appendix.",
"Table 2: Evaluation of sentence representations on a set of 10 tasks using a linear model trained using each model’s representations. The FastSent and NMT En-Fr models are described in Hill et al. (2016), CNN-LSTM in Gan et al. (2016), Skipthought in Kiros et al. (2015), Word embedding average in Arora et al. (2016), DiscSent from Jernite et al. (2017), Byte mLSTM from Radford et al. (2017), Infersent in Conneau et al. (2017), Neural Semantic Encoder from Munkhdalai & Yu (2017), BLSTM-2DCNN from Zhou et al. (2016). STN, Fr, De, NLI, L, 2L, STP & Par stand for skip-thought next, French translation, German translation, natural language inference, large model, 2-layer large model, skip-thought previous and parsing respectively. ∆ indicates the average improvement over Infersent (AllNLI) across all 10 tasks. For MRPC and STSB we consider only the F1 score and Spearman correlations respectively and we also multiply the SICK-R scores by 100 to have all differences in the same scale. Bold numbers indicate the best performing transfer model on a given task. Underlines are used for each task to indicate both our best performing model as well as the best performing transfer model that isn’t ours.",
"Table 3: Evaluation of word embeddings. All results were computed using Faruqui & Dyer (2014) with the exception of the Skipgram, NMT, Charagram and Attract-Repel embeddings. Skipgram and NMT results were obtained from Jastrzebski et al. (2017)5. Charagram and Attract-Repel results were taken from Wieting et al. (2016) and Mrkšić et al. (2017) respectively. We also report QVEC benchmarks (Tsvetkov et al., 2015)",
"Table 4: Supervised & low-resource classification accuracies on the Quora duplicate question dataset. Accuracies are reported corresponding to the number of training examples used. The first 6 rows are taken from Wang et al. (2017), the next 4 are from Tomar et al. (2017), the next 5 from Shen et al. (2017) and The last 4 rows are our experiments using Infersent (Conneau et al., 2017) and our models.",
"Table 5: Evaluation of sentence representations by probing for certain sentence characteristics and syntactic properties. Sentence length, word content & word order from Adi et al. (2016) and sentence active/passive, tense and top level syntactic sequence (TSS) from Shi et al. (2016). Numbers reported are the accuracy with which the models were able to predict certain characteristics.",
"Table 6: COCO Retrieval with ResNet-101 features",
"Table 7: Evaluation of sentence representations on the semantic textual similarity benchmarks. Numbers reported are Pearson Correlations x100. Skipthought, GloVe average, GloVe TF-IDF, GloVe + WR (U) and all supervised numbers were taken from Arora et al. (2016) and Wieting et al. (2015) and Charagram-phrase numbers were taken from Wieting et al. (2016). Other numbers were obtained from the evaluation suite provided by Conneau et al. (2017)",
"Table 8: A query sentence and its nearest neighbors sorted by decreasing cosine similarity using our model. Sentences and nearest neighbors were chosen from a random subset of 500,000 sentences from the BookCorpus"
],
"file": [
"5-Table1-1.png",
"5-Figure1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"15-Table6-1.png",
"15-Table7-1.png",
"16-Table8-1.png"
]
} | [
"Which model architecture do they for sentence encoding?",
"Which data sources do they use?"
] | [
[
"1804.00079-Training Objectives & Evaluation-1"
],
[
"1804.00079-5-Table1-1.png"
]
] | [
"Answer with content missing: (Skip-thought vectors-Natural Language Inference paragraphs) The encoder for the current sentence and the decoders for the previous (STP) and next sentence (STN) are typically parameterized as separate RNNs\n- RNN",
"- En-Fr (WMT14)\n- En-De (WMT15)\n- Skipthought (BookCorpus)\n- AllNLI (SNLI + MultiNLI)\n- Parsing (PTB + 1-billion word)"
] | 340 |
1805.09959 | A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter | Background: Social media has the capacity to afford the healthcare industry with valuable feedback from patients who reveal and express their medical decision-making process, as well as self-reported quality of life indicators both during and post treatment. In prior work, [Crannell et. al.], we have studied an active cancer patient population on Twitter and compiled a set of tweets describing their experience with this disease. We refer to these online public testimonies as"Invisible Patient Reported Outcomes"(iPROs), because they carry relevant indicators, yet are difficult to capture by conventional means of self-report. Methods: Our present study aims to identify tweets related to the patient experience as an additional informative tool for monitoring public health. Using Twitter's public streaming API, we compiled over 5.3 million"breast cancer"related tweets spanning September 2016 until mid December 2017. We combined supervised machine learning methods with natural language processing to sift tweets relevant to breast cancer patient experiences. We analyzed a sample of 845 breast cancer patient and survivor accounts, responsible for over 48,000 posts. We investigated tweet content with a hedonometric sentiment analysis to quantitatively extract emotionally charged topics. Results: We found that positive experiences were shared regarding patient treatment, raising support, and spreading awareness. Further discussions related to healthcare were prevalent and largely negative focusing on fear of political legislation that could result in loss of coverage. Conclusions: Social media can provide a positive outlet for patients to discuss their needs and concerns regarding their healthcare coverage and treatment needs. Capturing iPROs from online communication can help inform healthcare professionals and lead to more connected and personalized treatment regimens. | {
"paragraphs": [
[
"Twitter has shown potential for monitoring public health trends, BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , disease surveillance, BIBREF6 , and providing a rich online forum for cancer patients, BIBREF7 . Social media has been validated as an effective educational and support tool for breast cancer patients, BIBREF8 , as well as for generating awareness, BIBREF9 . Successful supportive organizations use social media sites for patient interaction, public education, and donor outreach, BIBREF10 . The advantages, limitations, and future potential of using social media in healthcare has been thoroughly reviewed, BIBREF11 . Our study aims to investigate tweets mentioning “breast” and “cancer\" to analyze patient populations and selectively obtain content relevant to patient treatment experiences.",
"Our previous study, BIBREF0 , collected tweets mentioning “cancer” over several months to investigate the potential for monitoring self-reported patient treatment experiences. Non-relevant tweets (e.g. astrological and horoscope references) were removed and the study identified a sample of 660 tweets from patients who were describing their condition. These self-reported diagnostic indicators allowed for a sentiment analysis of tweets authored by patients. However, this process was tedious, since the samples were hand verified and sifted through multiple keyword searches. Here, we aim to automate this process with machine learning context classifiers in order to build larger sets of patient self-reported outcomes in order to quantify the patent experience.",
"Patients with breast cancer represent a majority of people affected by and living with cancer. As such, it becomes increasingly important to learn from their experiences and understand their journey from their own perspective. The collection and analysis of invisible patient reported outcomes (iPROs) offers a unique opportunity to better understand the patient perspective of care and identify gaps meeting particular patient care needs."
],
[
" Twitter provides a free streaming Application Programming Interface (API), BIBREF12 , for researchers and developers to mine samples of public tweets. Language processing and data mining, BIBREF13 , was conducted using the Python programming language. The free public API allows targeted keyword mining of up to 1% of Twitter's full volume at any given time, referred to as the `Spritzer Feed'.",
" We collected tweets from two distinct Spritzer endpoints from September 15th, 2016 through December 9th, 2017. The primary feed for the analysis collected INLINEFORM0 million tweets containing the keywords `breast' AND `cancer'. See Figure FIGREF2 for detailed Twitter frequency statistics along with the user activity distribution. Our secondary feed searched just for the keyword `cancer' which served as a comparison ( INLINEFORM1 million tweets, see Appendix 1), and helped us collect additional tweets relevant to cancer from patients. The numeric account ID provided in tweets helps to distinguish high frequency tweeting entities.",
"Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.",
"It is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollute relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint).",
"The user tweet distribution in Figure FIGREF2 , shows the number of users as a function of the number of their tweets we collected. With an average frequency of INLINEFORM0 tweets per user, this is a relatively healthy activity distribution. High frequency tweeting accounts are present in the tail, with a single account producing over 12,000 tweets —an automated account served as a support tool called `ClearScan' for patients in recovery. Approximately 98% of the 2.4 million users shared less than 10 posts, which accounted for 70% of all sampled tweets.",
"The Twitter API also provided the number of tweets withheld from our sample, due to rate limiting. Using these overflow statistics, we estimated the sampled proportion of tweets mentioning these keywords. These targeted feeds were able to collect a large sample of all tweets mentioning these terms; approximately 96% of tweets mentioning “breast,cancer” and 65.2% of all tweets mentioning `cancer' while active. More information regarding the types of Twitter endpoints and calculating the sampling proportion of collected tweets is described in Appendix II.",
"Our goal was to analyze content authored only by patients. To help ensure this outcome we removed posts containing a URL for classification, BIBREF19 . Twitter allows users to spread content from other users via `retweets'. We also removed these posts prior to classification to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation. We also only analyzed tweets for which Twitter had identified `en' for the language English.",
""
],
[
" We evaluated tweet sentiments with hedonometrics, BIBREF21 , BIBREF22 , using LabMT, a labeled set of 10,000 frequently occurring words rated on a `happiness' scale by individuals contracted through Amazon Mechanical Turk, a crowd-sourced survey tool. These happiness scores helped quantify the average emotional rating of text by totaling the scores from applicable words and normalizing by their total frequency. Hence, the average happiness score, INLINEFORM0 , of a corpus with INLINEFORM1 words in common with LabMT was computed with the weighted arithmetic mean of each word's frequency, INLINEFORM2 , and associated happiness score, INLINEFORM3 : DISPLAYFORM0 ",
"The average happiness of each word was rated on a 9 point scale ranging from extremely negative (e.g., `emergency' 3.06, `hate' 2.34, `die' 1.74) to positive (e.g., `laughter' 8.50, `love' 8.42, `healthy' 8.02). Neutral `stop words' ( INLINEFORM0 , e.g., `of','the', etc.) were removed to enhance the emotional signal of each set of tweets. These high frequency, low sentiment words can dampen a signal, so their removal can help identify hidden trends. One application is to plot INLINEFORM1 as a function of time. The happiness time-series can provide insight driving emotional content in text. In particular, peak and dips (i.e., large deviations from the average) can help identify interesting themes that may be overlooked in the frequency distribution. Calculated scores can give us comparative insight into the context between sets of tweets.",
"“Word shift graphs” introduced in, BIBREF21 , compare the terms contributing to shifts in a computed word happiness from two term frequency distributions. This tool is useful in isolating emotional themes from large sets of text and has been previously validated in monitoring public opinion, BIBREF23 as well as for geographical sentiment comparative analyses, BIBREF24 . See Appendix III for a general description of word shift graphs and how to interpret them."
],
[
"We began by building a validated training set of tweets for our sentence classifier. We compiled the patient tweets verified by, BIBREF0 , to train a logistic regression content relevance classifier using a similar framework as, BIBREF16 . To test the classifier, we compiled over 5 million tweets mentioning the word cancer from a 10% `Gardenhose' random sample of Twitter spanning January through December 2015. See Appendix 1 for a statistical overview of this corpus.",
"We tested a maximum entropy logistic regression classifier using a similar scheme as, BIBREF16 . NLP classifiers operate by converting sentences to word vectors for identifying key characteristics — the vocabulary of the classifier. Within the vocabulary, weights were assigned to each word based upon a frequency statistic. We used the term frequency crossed with the inverse document frequency (tf-idf), as described in , BIBREF16 . The tf-idf weights helped distinguish each term's relative weight across the entire corpus, instead of relying on raw frequency. This statistic dampens highly frequent non-relevant words (e.g. `of', `the', etc.) and enhances relatively rare yet informative terms (e.g. survivor, diagnosed, fighting). This method is commonly implemented in information retrieval for text mining, BIBREF25 . The logistic regression context classifier then performs a binary classification of the tweets we collected from 2015. See Appendix IV for an expanded description of the sentence classification methodology.",
"We validated the logistic model's performance by manually verifying 1,000 tweets that were classified as `relevant'. We uncovered three categories of immediate interest including: tweets authored by patients regarding their condition (21.6%), tweets from friends/family with a direct connection to a patient (21.9%), and survivors in remission (8.8%). We also found users posting diagnostic related inquiries (7.6%) about possible symptoms that could be linked to breast cancer, or were interested in receiving preventative check-ups. The rest (40.2%) were related to `cancer', but not to patients and include public service updates as well as non-patient authored content (e.g., support groups). We note that the classifier was trained on very limited validated data (N=660), which certainly impacted the results. We used this validated annotated set of tweets to train a more sophisticated classifier to uncover self-diagnostic tweets from users describing their personal breast cancer experiences as current patients or survivors.",
"We implemented the Convolutional Neural Network (CNN) with Google's Tensorflow interface, BIBREF26 . We adapted our framework from, BIBREF18 , but instead trained the CNN on these 1000 labeled cancer related tweets. The trained CNN was applied to predict patient self-diagnostic tweets from our breast cancer dataset. The CNN outputs a binary value: positive for a predicted tweet relevant to patients or survivors and negative for these other described categories (patient connected, unrelated, diagnostic inquiry). The Tensorflow CNN interface reported a INLINEFORM0 accuracy when evaluating this set of labels with our trained model. These labels were used to predict self-reported diagnostic tweets relevant to breast cancer patients."
],
[
" A set of 845 breast cancer patient self-diagnostic Twitter profiles was compiled by implementing our logistic model followed by prediction with the trained CNN on 9 months of tweets. The logistic model sifted 4,836 relevant tweets of which 1,331 were predicted to be self-diagnostic by the CNN. Two independent groups annotated the 1,331 tweets to identify patients and evaluate the classifier's results. The raters, showing high inter-rater reliability, individually evaluated each tweet as self-diagnostic of a breast cancer patient or survivor. The rater's independent annotations had a 96% agreement.",
" The classifier correctly identified 1,140 tweets (85.6%) from 845 profiles. A total of 48,113 tweets from these accounts were compiled from both the `cancer' (69%) and `breast' `cancer' (31%) feeds. We provided tweet frequency statistics in Figure FIGREF7 . This is an indicator that this population of breast cancer patients and survivors are actively tweeting about topics related to `cancer' including their experiences and complications.",
"Next, we applied hedonometrics to compare the patient posts with all collected breast cancer tweets. We found that the surveyed patient tweets were less positive than breast cancer reference tweets. In Figure FIGREF8 , the time series plots computed average word happiness at monthly and daily resolutions. The daily happiness scores (small markers) have a high fluctuation, especially within the smaller patient sample (average 100 tweets/day) compared to the reference distribution (average 10,000 tweets/day). The monthly calculations (larger markers) highlight the negative shift in average word happiness between the patients and reference tweets. Large fluctuations in computed word happiness correspond to noteworthy events, including breast cancer awareness month in October, cancer awareness month in February, as well as political debate regarding healthcare beginning in March May and July 2017.",
"In Figure FIGREF9 word shift graphs display the top 50 words responsible for the shift in computed word happiness between distributions. On the left, tweets from patients were compared to all collected breast cancer tweets. Patient tweets, INLINEFORM0 , were less positive ( INLINEFORM1 v. INLINEFORM2 ) than the reference distribution, INLINEFORM3 . There were relatively less positive words `mom', `raise', `awareness', `women', `daughter', `pink', and `life' as well as an increase in the negative words `no(t)', `patients, `dying', `killing', `surgery' `sick', `sucks', and `bill'. Breast cancer awareness month, occurring in October, tends to be a high frequency period with generally more positive and supportive tweets from the general public which may account for some of the negative shift. Notably, there was a relative increase of the positive words `me', `thank', `you' ,'love', and `like' which may indicate that many tweet contexts were from the patient's perspective regarding positive experiences. Many tweets regarding treatment were enthusiastic, supportive, and proactive. Other posts were descriptive: over 165 sampled patient tweets mentioned personal chemo therapy experiences and details regarding their treatment schedule, and side effects.",
" Numerous patients and survivors in our sample had identified their condition in reference to the American healthcare regulation debate. Many sampled views of the proposed legislation were very negative, since repealing the Affordable Care Act without replacement could leave many uninsured. Other tweets mentioned worries regarding insurance premiums and costs for patients and survivors' continued screening. In particular the pre-existing condition mandate was a chief concern of patients/survivors future coverage. This was echoed by 55 of the sampled patients with the hashtag #iamapreexistingcondition (See Table TABREF10 ).",
"Hashtags (#) are terms that categorize topics within posts. In Table TABREF10 , the most frequently occurring hashtags from both the sampled patients (right) and full breast cancer corpus (left). Each entry contains the tweet frequency, number of distinct profiles, and the relative happiness score ( INLINEFORM0 ) for comparisons. Political terms were prevalent in both distributions describing the Affordable Care Act (#aca, #obamacare, #saveaca, #pretectourcare) and the newly introduced American Healthcare Act (#ahca, #trumpcare). A visual representation of these hashtags are displayed using a word-cloud in the Appendix (Figure A4).",
"Tweets referencing the AHCA were markedly more negative than those referencing the ACA. This shift was investigated in Figure FIGREF9 with a word shift graph. We compared American Healthcare Act Tweets, INLINEFORM0 , to posts mentioning the Affordable Care Act, INLINEFORM1 . AHCA were relatively more negative ( INLINEFORM2 v. INLINEFORM3 ) due to an increase of negatively charged words `scared', `lose', `tax', `zombie', `defects', `cut', `depression', `killing', and `worse' . These were references to the bill leaving many patients/survivors without insurance and jeopardizing future treatment options. `Zombie' referenced the bill's potential return for subsequent votes."
],
[
"We have demonstrated the potential of using sentence classification to isolate content authored by breast cancer patients and survivors. Our novel, multi-step sifting algorithm helped us differentiate topics relevant to patients and compare their sentiments to the global online discussion. The hedonometric comparison of frequent hashtags helped identify prominent topics how their sentiments differed. This shows the ambient happiness scores of terms and topics can provide useful information regarding comparative emotionally charged content. This process can be applied to disciplines across health care and beyond.",
"Throughout 2017, Healthcare was identified as a pressing issue causing anguish and fear among the breast cancer community; especially among patients and survivors. During this time frame, US legislation was proposed by Congress that could roll back regulations ensuring coverage for individuals with pre-existing conditions. Many individuals identifying as current breast cancer patients/survivors expressed concerns over future treatment and potential loss of their healthcare coverage. Twitter could provide a useful political outlet for patient populations to connect with legislators and sway political decisions.",
"March 2017 was a relatively negative month due to discussions over American healthcare reform. The American Congress held a vote to repeal the Affordable Care Act (ACA, also referred to as `Obamacare'), which could potentially leave many Americans without healthcare insurance, BIBREF27 . There was an overwhelming sense of apprehension within the `breast cancer' tweet sample. Many patients/survivors in our diagnostic tweet sample identified their condition and how the ACA ensured coverage throughout their treatment.",
"This period featured a notable tweet frequency spike, comparable to the peak during breast cancer awareness month. The burst event peaked on March 23rd and 24th (65k, 57k tweets respectively, see Figure FIGREF2 ). During the peak, 41,983 (34%) posts contained `care' in reference to healthcare, with a viral retweeted meme accounting for 39,183 of these mentions. The tweet read: \"The group proposing to cut breast cancer screening, maternity care, and contraceptive coverage.\" with an embedded photo of a group of predominately male legislators, BIBREF28 . The criticism referenced the absence of female representation in a decision that could deprive many of coverage for breast cancer screenings. The online community condemned the decision to repeal and replace the ACA with the proposed legislation with references to people in treatment who could `die' (n=7,923) without appropriate healthcare insurance coverage. The vote was later postponed and eventually failed, BIBREF29 .",
"Public outcry likely influenced this legal outcome, demonstrating Twitter's innovative potential as a support tool for public lobbying of health benefits. Twitter can further be used to remind, motivate and change individual and population health behavior using messages of encouragement (translated to happiness) or dissatisfaction (translated to diminished happiness), for example, with memes that can have knock on social consequences when they are re-tweeted. Furthermore, Twitter may someday be used to benchmark treatment decisions to align with expressed patient sentiments, and to make or change clinical recommendations based upon the trend histories that evolve with identifiable sources but are entirely in the public domain.",
" Analyzing the fluctuation in average word happiness as well as bursts in the frequency distributions can help identify relevant events for further investigation. These tools helped us extract themes relevant to breast cancer patients in comparison to the global conversation.",
"One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos and emojis with people revealing or conveying their emotions by use of these communication methods. It is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.",
"Lack of widespread patient adoption of social media could be a limiting factor to our analysis. A study of breast cancer patients during 2013–2014, BIBREF30 , found social media was a less prominent form of online communication (N = 2578, 12.3%), however with the advent of smartphones and the internet of things (iot) movement, social media may influence a larger proportion of future patients. Another finding noted that online posts were more likely to be positive about their healthcare decision experience or about survivorship. Therefore we cannot at this time concretely draw population-based assumptions from social media sampling. Nevertheless, understanding this online patient community could serve as a valuable tool for healthcare providers and future studies should investigate current social media usage statistics across patients.",
"Because we trained the content classifier with a relatively small corpus, the model likely over-fit on a few particular word embeddings. For example: 'i have stage iv', `i am * survivor', `i had * cancer'. However, this is similar to the process of recursive keyword searches to gather related content. Also, the power of the CNN allows for multiple relative lingual syntax as opposed to searching for static phrases ('i have breast cancer', 'i am a survivor'). The CNN shows great promise in sifting relevant context from large sets of data.",
"Other social forums for patient self reporting and discussion should be incorporated into future studies. For example, as of 2017, https://community.breastcancer.org has built a population of over 199,000 members spanning 145,000 topics. These tools could help connect healthcare professionals with motivated patients. Labeled posts from patients could also help train future context models and help identify adverse symptoms shared among online social communities.",
"Our study focused primarily on English tweets, since this was the language of our diagnostic training sample. Future studies could incorporate other languages using our proposed framework. It would be important to also expand the API queries with translations of `breast' and `cancer'. This could allow for a cross cultural comparison of how social media influences patients and what patients express on social media."
],
[
" We have demonstrated the potential of using context classifiers for identifying diagnostic tweets related to the experience of breast cancer patients. Our framework provides a proof of concept for integrating machine learning with natural language processing as a tool to help connect healthcare providers with patient experiences. These methods can inform the medical community to provide more personalized treatment regimens by evaluating patient satisfaction using social listening. Twitter has also been shown as a useful medium for political support of healthcare policies as well as spreading awareness. Applying these analyses across other social media platforms could provide comparably rich data-sets. For instance, Instagram has been found to contain indicative markers for depression, BIBREF31 . Integrating these applications into our healthcare system could provide a better means of tracking iPROs across treatment regimens and over time.",
"One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos, and emojis with people revealing or conveying their emotions by use of these communication methods. With augmented reality, virtual reality, and even chatbot interfaces, it is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.",
"Follow-on studies to our work could be intended to further develop these models and apply them to larger streams of data. Online crowd sourcing tools, like Amazon's Mechanical Turk, implemented in, BIBREF22 , can help compile larger sets of human validated labels to improve context classifiers. These methods can also be integrated into delivering online outreach surveys as another tool for validating healthcare providers. Future models, trained on several thousand labeled tweets for various real world applications should be explored. Invisible patient- reported outcomes should be further investigated via sentiment and context analyses for a better understanding of how to integrate the internet of things with healthcare.",
"Twitter has become a powerful platform for amplifying political voices of individuals. The response of the online breast cancer community to the American Healthcare Act as a replacement to the Affordable Care Act was largely negative due to concerns over loss of coverage. A widespread negative public reaction helped influence this political result. Social media opinion mining could present as a powerful tool for legislators to connect with and learn from their constituents. This can lead to positive impacts on population health and societal well-being."
],
[
" The authors wish to acknowledge the Vermont Advanced Computing Core, which is supported by NASA (NNX-08AO96G) at the University of Vermont which provided High Performance Computing resources that contributed to the research results reported within this poster. EMC was supported by the Vermont Complex Systems Center. CMD and PSD were supported by an NSF BIGDATA grant IIS-1447634."
],
[
" There are three types of endpoints to access data from Twitter. The `spritzer' (1%) and `gardenhose' (10%) endpoints were both implemented to collect publicly posted relevant data for our analysis. The third type of endpoint is the `Firehose' feed, a full 100% sample, which can be purchased via subscription from Twitter. This was unnecessary for our analysis, since our set of keywords yielded a high proportion of the true tweet sample. We quantified the sampled proportion of tweets using overflow statistics provided by Twitter. These `limit tweets', INLINEFORM0 , issue a timestamp along with the approximate number of posts withheld from our collected sample, INLINEFORM1 . The sampling percentage, INLINEFORM2 , of keyword tweets is approximated as the collected tweet total, INLINEFORM3 , as a proportion of itself combined with the sum of the limit counts, each INLINEFORM4 : DISPLAYFORM0 ",
"By the end of 2017, Twitter was accumulating an average of 500 million tweets per day, BIBREF32 . Our topics were relatively specific, which allowed us to collect a large sample of tweets. For the singular search term, `cancer', the keyword sampled proportion, INLINEFORM0 , was approximately 65.21% with a sample of 89.2 million tweets. Our separate Twitter spritzer feed searching for keywords `breast AND cancer` OR `lymphedema' rarely surpassed the 1% limit. We calculated a 96.1% sampling proportion while our stream was active (i.e. not accounting for network or power outages). We present the daily overflow limit counts of tweets not appearing in our data-set, and the approximation of the sampling size in Figure A2.",
"",
""
],
[
" Word shift graphs are essential tools for analyzing which terms are affecting the computed average happiness scores between two text distributions, BIBREF33 . The reference word distribution, INLINEFORM0 , serves as a lingual basis to compare with another text, INLINEFORM1 . The top 50 words causing the shift in computed word happiness are displayed along with their relative weight. The arrows ( INLINEFORM2 ) next to each word mark an increase or decrease in the word's frequency. The INLINEFORM3 , INLINEFORM4 , symbols indicate whether the word contributes positively or negatively to the shift in computed average word happiness.",
"In Figure A3, word shift graphs compare tweets mentioning `breast' `cancer' and a random 10% `Gardenhose' sample of non filtered tweets. On the left, `breast',`cancer' tweets were slightly less positive due to an increase in negative words like `fight', `battle', `risk', and `lost'. These distributions had similar average happiness scores, which was in part due to the relatively more positive words `women', mom', `raise', `awareness', `save', `support', and `survivor'. The word shift on the right compares breast cancer patient tweets to non filtered tweets. These were more negative ( INLINEFORM0 = 5.78 v. 6.01) due a relative increase in words like `fighting', `surgery', `against', `dying', `sick', `killing', `radiation', and `hospital'. This tool helped identify words that signal emotional themes and allow us to extract content from large corpora, and identify thematic emotional topics within the data."
],
[
"We built the vocabulary corpus for the logistic model by tokenizing the annotated set of patient tweets by word, removing punctuation, and lowercasing all text. We also included patient unrelated `cancer' tweets collected as a frame of reference to train the classifier. This set of tweets was not annotated, so we made the assumption that tweets not validated by, BIBREF0 were patient unrelated. The proportion, INLINEFORM0 , of unrelated to related tweets has a profound effect on the vocabulary of the logistic model, so we experimented with various ranges of INLINEFORM1 and settled on a 1:10 ratio of patient related to unrelated tweets. We then applied the tf-idf statistic to build the binary classification logistic model.",
"The Tensorflow open source machine learning library has previously shown great promise when applied to NLP benchmark data-sets, BIBREF17 . The CNN loosely works by implementing a filter, called convolution functions, across various subregions of the feature landscape, BIBREF34 , BIBREF35 , in this case the tweet vocabulary. The model tests the robustness of different word embeddings (e.g., phrases) by randomly removing filtered pieces during optimization to find the best predictive terms over the course of training. We divided the input labeled data into training and evaluation to successively test for the best word embedding predictors. The trained model can then be applied for binary classification of text content.",
""
],
[
""
]
],
"section_name": [
"Introduction",
"Data Description",
"Sentiment Analysis and Hedonometrics",
" Relevance Classification: Logistic Model and CNN Architecture",
"Results",
"Discussion",
"Conclusion",
"Acknowledgments",
"Appendix II: Calculating the Tweet Sampling Proportion",
"Appendix III: Interpreting Word Shift Graphs",
" Appendix IV: Sentence Classification Methodology",
" Appendix V: Hashtag Table Sorted by Average Word Happiness"
]
} | {
"answers": [
{
"annotation_id": [
"b19dc7607ae7c9e5e8c50a4e0a8e7428dbc96511"
],
"answer": [
{
"evidence": [
"Our study focused primarily on English tweets, since this was the language of our diagnostic training sample. Future studies could incorporate other languages using our proposed framework. It would be important to also expand the API queries with translations of `breast' and `cancer'. This could allow for a cross cultural comparison of how social media influences patients and what patients express on social media."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our study focused primarily on English tweets, since this was the language of our diagnostic training sample."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2c32956d7554683453e21f9ab06efe585f292fa1"
],
"answer": [
{
"evidence": [
"We collected tweets from two distinct Spritzer endpoints from September 15th, 2016 through December 9th, 2017. The primary feed for the analysis collected INLINEFORM0 million tweets containing the keywords `breast' AND `cancer'. See Figure FIGREF2 for detailed Twitter frequency statistics along with the user activity distribution. Our secondary feed searched just for the keyword `cancer' which served as a comparison ( INLINEFORM1 million tweets, see Appendix 1), and helped us collect additional tweets relevant to cancer from patients. The numeric account ID provided in tweets helps to distinguish high frequency tweeting entities."
],
"extractive_spans": [],
"free_form_answer": "By using keywords `breast' AND `cancer' in tweet collecting process. \n",
"highlighted_evidence": [
"The primary feed for the analysis collected INLINEFORM0 million tweets containing the keywords `breast' AND `cancer'. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"240acb2dbd49f745c358bd0ef91f51c3175f5bce"
],
"answer": [
{
"evidence": [
"Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.",
"It is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollute relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint).",
"Our goal was to analyze content authored only by patients. To help ensure this outcome we removed posts containing a URL for classification, BIBREF19 . Twitter allows users to spread content from other users via `retweets'. We also removed these posts prior to classification to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation. We also only analyzed tweets for which Twitter had identified `en' for the language English."
],
"extractive_spans": [],
"free_form_answer": "ML logistic regression classifier combined with a Convolutional Neural Network (CNN) to identify self-reported diagnostic tweets.\nNLP methods: tweet conversion to numeric word vector, removing tweets containing hyperlinks, removing \"retweets\", removing all tweets containing horoscope indicators, lowercasing and removing punctuation.",
"highlighted_evidence": [
"Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.\n\nIt is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollute relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint).",
"Twitter allows users to spread content from other users via `retweets'. We also removed these posts prior to classification to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation. We also only analyzed tweets for which Twitter had identified `en' for the language English."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do the authors report results only on English datasets?",
"How were breast cancer related posts compiled from the Twitter streaming API?",
"What machine learning and NLP methods were used to sift tweets relevant to breast cancer experiences?"
],
"question_id": [
"bb8f62950acbd4051774f1bfc50e3d424dd33b7c",
"d653d994ef914d76c7d4011c0eb7873610ad795f",
"880a76678e92970791f7c1aad301b5adfc41704f"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"FIG. 1: (left) The distribution of tweets per given user is plotted on a log axis. The tail tends to be high frequency automated accounts, some of which provide daily updates or news related to cancer. (right) A frequency time-series of the tweets collected, binned by day.",
"TABLE I: Diagnostic Training Sample Tweet Phrases: A sample of self-reported diagnostic phrases from tweets used to train the logistic regression content classifier (modified to preserve anonymity).",
"FIG. 2: (left) The distribution of tweets per given patient/survivor is plotted on a log axis along with a statistical summary of patient tweeting behavior. (right) A frequency time-series of patient tweets collected, binned by day.",
"FIG. 3: Computed average word happiness as a function of day (small markers) and month (large markers) for both the ‘breast’,‘cancer’ and patient distributions. The patient monthly average was less positive than the reference distribution (havg = 5.78 v. 5.93).",
"FIG. 4: (Left) Word shift graph comparing collected Breast Cancer Patient Tweets, Tcomp, to all Breast Cancer Tweets, Tref. Patient Tweets were less positive (havg = 5.78 v. 5.97), due to a decrease in positive words ‘mom’, ‘raise’, ‘awareness’, ‘women’, ‘daughter’, ‘pink’, and ‘life’ as well as an increase in the negative words ‘no(t)’, ‘patients, ‘dying’, ‘killing’, ‘surgery’ ‘sick’, ‘sucks’, and ‘bill’. (Right) Word shift graph comparing tweets mentioning the American Healthcare Act (AHCA, 10.5k tweets) to the Affordable Care Act (ACA, 16.9k tweets). AHCA tweets were more negative (havg = 5.48 v. 6.05) due to a relative increase in the negative words ‘scared’, ‘lose’, ‘zombie’, ‘defects’, ‘depression’, ‘harm’, ‘killing’, and ‘worse’.",
"TABLE II: 50 Most Frequently Tweeted Hashtags: A table of the most frequently tweeted hashtags (#) from all collected breast cancer tweets (left) and from sampled breast cancer patients (right). The relative computed ambient happiness havg for each hashtag is colored relative to the group average (blue- negative, orange - positive).",
"TABLE III: Sampled Predicted Diagnostic Tweets: A sample of key phrases from self-reported diagnostic tweets predicted from the CNN classifier with the patient relevant proportional ratio, α = 1 : 10.",
"TABLE IV: 50 Most Frequently Tweeted Hashtags: A table of the most frequently tweeted hashtags (#) from all collected breast cancer tweets (left) and from sampled breast cancer patients (right). The relative computed average happiness havg for each tag is colored relative to the group average (blue- negative, orange - positive). This version is sorted by computed word happiness."
],
"file": [
"3-Figure1-1.png",
"4-TableI-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"8-TableII-1.png",
"9-TableIII-1.png",
"16-TableIV-1.png"
]
} | [
"What machine learning and NLP methods were used to sift tweets relevant to breast cancer experiences?"
] | [
[
"1805.09959-Data Description-6",
"1805.09959-Data Description-3",
"1805.09959-Data Description-2"
]
] | [
"ML logistic regression classifier combined with a Convolutional Neural Network (CNN) to identify self-reported diagnostic tweets.\nNLP methods: tweet conversion to numeric word vector, removing tweets containing hyperlinks, removing \"retweets\", removing all tweets containing horoscope indicators, lowercasing and removing punctuation."
] | 342 |
2003.12738 | Variational Transformers for Diverse Response Generation | Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation. Previous work proposes to capture the variability of dialogue responses with a recurrent neural network (RNN)-based conditional variational autoencoder (CVAE). However, the autoregressive computation of the RNN limits the training efficiency. Therefore, we propose the Variational Transformer (VT), a variational self-attentive feed-forward sequence model. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of the VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables. Then, the proposed models are evaluated on three conversational datasets with both automatic metric and human evaluation. The experimental results show that our models improve standard Transformers and other baselines in terms of diversity, semantic relevance, and human judgment. | {
"paragraphs": [
[
"Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in wide range of NLP tasks. These architectures remove the computational temporal dependency during the training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attention strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. It acts as an effectively global receptive field across the whole sequences which absence in RNNs. Despite the powerful modeling capability of trasnformers, they often fail to model one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic response (e.g., “I am not sure\"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrates latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of aforementioned models limit the efficiency for large scale training.",
"In this paper, we introduce the Variational Transformer (VT) a variational self-attentive feed-forward sequence model to address the aforementioned issues. The VT combine the parallelizability and global receptive field of the transformer with the variational nature of CVAE by incorporating stochastic latent variables into transformers. We explore two types of VT: 1) Global Variational Transformer (GVT), and 2) Sequential Variational Transformer. The GVT is the extension of CVAE in BIBREF2, which modeling the discourse-level diversity with a global latent variable, While SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, SVT uses Non-causal Multi-head Attention, which attend to future tokens for computing posterior latent variables instead of using an additional encoder.",
"The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on a three conversation dataset demonstrate that our models can generate more informative and coherent responses."
],
[
"Conversational systems has been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compare to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry\" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guild model generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line introducing a novel model, the Variational Transformer, to improve dialogue response generation."
],
[
"Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer."
],
[
"Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better result on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence discrete latent variables. Then a parallel decoder decodes the target using discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process."
],
[
"The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given c, according to:",
"The typical CVAE consists of a prior network $p_{\\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\\phi }(z | c, x)$, which is used to approximate posterior distribution $q(z | c, x)$, and a decoder $p_{\\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming z follows multivariate Gaussian distribution with a diagonal co-variance matrix, the evidence lower bound (ELBO) can be written as",
"where $\\mathcal {L}_{REC}$ denotes the reconstruction loss and $\\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior.",
"In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of response encoder. Then the prior network $p_{\\theta }(z | c)$ and the recognition network $p_{\\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\\mathcal {N}\\left(z ; \\mu ^{\\prime }, \\sigma ^{\\prime 2} \\mathbf {I}\\right)$ and posterior latent distribution $\\mathcal {N}\\left(z ; \\mu , \\sigma ^{2} \\mathbf {I}\\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\\mathcal {N}\\left(z ; \\mu ^{\\prime }, \\sigma ^{\\prime 2} \\mathbf {I}\\right)$ and samples of the posterior latent variable (for training) from $\\mathcal {N}\\left(z ; \\mu , \\sigma ^{2} \\mathbf {I}\\right)$. Finally, an RNN decoder use $z$ and $c$ as the initial state to predicts the response $x$.",
"The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by only condition on the previous tokens. Thus the latent variable fails to encode the meaningful information, and the CVAE deteriorates to seq2seq model. To alleviate this issue, KL annealing BIBREF24 and bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16."
],
[
"The aforementioned RNN-based CVAE framework integrate the latent variable into the initial state of RNN decoder, while in transformer, it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state.",
"The overall architecture of GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$:",
"Finally, the transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\\prime }_{SOS}$ of token $SOS$ with latent information.",
"This design enhances the CVAE framework with the global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers the vanishing latent variable problem as RNN-based CVAE because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply the KL annealing, and bag-of-word auxiliary loss $\\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows:"
],
[
"In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables in decoding process. We introduce Sequential Variational Transformer (SVT) with a novel variational decoder layer which generate latent variables for each position: $z=\\left(z_{1}, \\dots , z_{T}\\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, SVT uses a Non-causal Multi-head Attention which leaks the future information to the recognition network for computing the posterior latent variables.",
"As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and prior latent variable respectively. We denote them as Posterior Path and Prior Path."
],
[
"The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head self-attention sub-layer which performs encoder-decoder multi-head attention on the context encoder. The last sub-layer is composed of a MLP prior network which approximates a sequence of prior latent variable for each position, and a Position-wise Feed-Forward Network (FFN) which fuse the latent information $z$ with the observed information representation $o^P$ before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FNN, and the FNN pass the fused representation to the next layer. Same as BIBREF0, in the variational decoder layer, each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$.",
"We decompose the response $x$ as $x = \\left(x_1, \\cdots , x_T\\right)$ and the latent variable $z$ as $z=\\left(z_{1}, \\dots , z_{T}\\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes:",
"where"
],
[
"The only difference between the Posterior Path (dash line in Figure FIGREF13) and Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (casual) multi-head attention become non-casual multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (shared the same weight with prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as:",
"where",
"During the training, the posterior path guides the learning of prior path via KL divergence constraint:",
"In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path will be blocked and the posterior latent variables will be replaced with the prior latent variables from Equation DISPLAY_FORM15.",
"During the decoding process, each response token $x_t$ is generated by conditioning on observed response tokens $x_{1:t-1}$, latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is:"
],
[
"As we expect the latent variables to be a generation plan for the future sequence, we inject such bias into latent variables by using an auxiliary loss: Sequential-Bag-of-Word (SBOW) which proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using latent variable $z_t$. In our case, the succeeding words prediction also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by:",
"where $f_{aux}$ is a feed-forward neural network with the softmax output."
],
[
"The evidence lower bound (ELBO) objective of SVT is the sum of the reconstruction loss $\\mathcal {L}_{REC}(t)$ and Kullback-Leibler divergence loss $\\mathcal {L}_{KL}(t)$ at each position:",
"We regularize the ELBO learning objective with an auxiliary loss $\\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows:",
"where,"
],
[
"We evaluate the proposed models on three conversationet dataset such as MojiTalk BIBREF16, PersonaChat BIBREF11, Empathetic-Dialogues BIBREF26."
],
[
"dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total with unbalanced distribution. We use the preprocessed data and vocabulary released from BIBREF16 and follow the same split of train/validation/test set."
],
[
"are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations are revolve around personas which are established by four to six persona sentences. While in Empathetic-Dialogues (ED), the conversation are mostly about situation that happened to one of the speaker and another speaker is trying to understand the feeling and reply accordingly. Both datasets are about modeling social skills and the goal is to make user more engaging. Therefore, we combine the train/validation/test set of two datasets."
],
[
"We compare the proposed models with the following baselines:"
],
[
"An attention-based sequence-to-sequence model with the emoji vector as additional input as discribed in MojiTalk BIBREF16."
],
[
"An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenate it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16."
],
[
"A transformer BIBREF0 trained by using a Maximum Likelihood Estimation (MLE) objective and can be considered as the base model for both the GVT and SVT."
],
[
"We use a 4-layer Transformer as our base model. The hidden size is set to be 300 everywhere, and the word embedding is initialized with the 300-dimensional pre-trained GloVe embeddings for both encoder and decoder. The multi-head attention sub-layers are made up of 4 attention heads each with embedding dimension 64. The size of latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with 512 hidden dimension. Following the training setup of BIBREF16, we first train our baseline transformer model with the MLE objective and use it to initialize its counterparts in both GVT and SVT. Then the models are trained end-to-end by the Adam optimizer with the initial learning rate $2\\times 10^{-4}$. KL annealing and early stopping strategy are applied as in BIBREF16. In the test time, we use greedy decoding strategy for all models."
],
[
"The evaluation metrics include Perplexity (PPL) and Kullback-Leibler divergence between the posterior and prior (KLD). A well trained model should achieve a low reconstruction and small but non-trivial KL distance BIBREF27."
],
[
"To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-grams ratio indicates more diverse generation."
],
[
"This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of a ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\\textbf {EMB}_\\textbf {FT}$ BIBREF28 that calculates the average of word embeddings in a sentence using FastText BIBREF29 which is trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because it can handle out-of-vocabulary issue. However, representing a sentence by simply taking the average of word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model BERT BIBREF25 to compute the contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representation of both to obtain the sentence embeddings. We denote such contextualized sentence embedding as $\\textbf {EMB}_\\textbf {BERT}$."
],
[
"In the human evaluation, we prepare multiple-choice questions for human evaluators and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). we first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in Mojitalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad\", which means none of the answer is chosen. We compute the rate that each model is chosen to quantify generation quality regarding to the human standard."
],
[
"The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity compared to RNN-based models which indicate that the global receptive field performed by multi-head self-attention boost the modeling capacity. However, deterministic Seq2Seq and Transformer models tends to generate generic responses which leads to a low diversity score. Meanwhile incorporating a stochastic latent variable into both models (CVAE and GVT) promote more diverse generation results and boost the diversity scores such as Dist-1, Dist-2, and Dist-3.",
"Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.",
"On the other hand, SVT achieves the highest score in terms of two semantic relevance-oriented metrics such as $\\textbf {EMB}_\\textbf {FT}$ and $\\textbf {EMB}_\\textbf {BERT}$ in MojiTalk dataset, while in the combined dataset of Persona and ED, we observe performance drop of SVT compare to other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk which collected from Twitter. We hypothesize that the sequential latent variables have no advantage in term of similarity to single, fixed \"gold response\" when model low entropy response. Indeed, in open domain dialogue response generation, automatic metric is not always aligned with the human judgement BIBREF28. In contrast, human evaluation result reported in Table TABREF35 demonstrates the generations of SVT are closer to the human standard in terms of coherence, invoked emotion and engagedness."
],
[
"Table TABREF42 compares the generation of the proposed models with baselines given the same contexts. We observe that the Seq2Seq and vanilla transformer tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk due to their deterministic structure fail to capture the variability in dialogue response. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, GVT and SVT generalize the topic beyong the context which make the dialogue more engaging (e.g., example 4). In general, SVT is able to generate more coherent and informative responses."
],
[
"This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT) which incorporates a global latent variable as additional input to the transformer decoder; and 2) the Sequential Variational Transformer (SVT) which generates latent variables for each position during decoding process. Quantitative and qualitative experimental results shows that our models outperform baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize the pre-training language models BIBREF30 as the back-bone to strengthen the language model of the VT for better generation."
]
],
"section_name": [
"Introduction",
"Related work ::: Neural Conversational Models",
"Related work ::: Conditional Variational Autoencoders",
"Related work ::: Fully Attentional Networks",
"Preliminaries ::: Conditional Variational Autoencoder for Dialogue Generation",
"Preliminaries ::: CVAE with Transformer",
"Sequential Variational Transformer",
"Sequential Variational Transformer ::: Prior Path",
"Sequential Variational Transformer ::: Posterior Path",
"Sequential Variational Transformer ::: Auxiliary Loss",
"Sequential Variational Transformer ::: Learning",
"Experiments ::: Dataset",
"Experiments ::: Dataset ::: MojiTalk",
"Experiments ::: Dataset ::: PersonaChat & Empathetic-Dialogues",
"Experiments ::: Baselines",
"Experiments ::: Baselines ::: Seq2Seq.",
"Experiments ::: Baselines ::: CVAE.",
"Experiments ::: Baselines ::: Transformer.",
"Experiments ::: Hyper-parameters and Training Setup",
"Experiments ::: Automatic Evaluation ::: PPL & KLD.",
"Experiments ::: Automatic Evaluation ::: Diversity.",
"Experiments ::: Automatic Evaluation ::: Embeddings Similarity.",
"Experiments ::: Human Evaluation",
"Results ::: Quantitative Analysis",
"Results ::: Qualitative Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"a88de75bc60cab3829d5fcfd51b41290f8c93e87"
],
"answer": [
{
"evidence": [
"Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.",
"FLOAT SELECTED: Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations."
],
"extractive_spans": [],
"free_form_answer": "PPL: SVT\nDiversity: GVT\nEmbeddings Similarity: SVT\nHuman Evaluation: SVT",
"highlighted_evidence": [
"Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.",
"FLOAT SELECTED: Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2480a1548fe73160bbb792eda5a0f4baeeb2d64a"
],
"answer": [
{
"evidence": [
"An attention-based sequence-to-sequence model with the emoji vector as additional input as discribed in MojiTalk BIBREF16.",
"An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenate it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16."
],
"extractive_spans": [
"attention-based sequence-to-sequence model ",
"CVAE"
],
"free_form_answer": "",
"highlighted_evidence": [
"An attention-based sequence-to-sequence model with the emoji vector as additional input as discribed in MojiTalk BIBREF16.",
"CVAE.\nAn RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenate it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"38d23527a2e40607267e6d7f46d7dc56468f9fb7"
],
"answer": [
{
"evidence": [
"We evaluate the proposed models on three conversationet dataset such as MojiTalk BIBREF16, PersonaChat BIBREF11, Empathetic-Dialogues BIBREF26."
],
"extractive_spans": [
"MojiTalk ",
"PersonaChat ",
"Empathetic-Dialogues"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the proposed models on three conversationet dataset such as MojiTalk BIBREF16, PersonaChat BIBREF11, Empathetic-Dialogues BIBREF26."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What approach performs better in experiments global latent or sequence of fine-grained latent variables?",
"What baselines other than standard transformers are used in experiments?",
"What three conversational datasets are used for evaluation?"
],
"question_id": [
"c69f4df4943a2ca4c10933683a02b179a5e76f64",
"6aed1122050b2d508dc1790c13cdbe38ff126089",
"8740c3000e740ac5c0bc8f329d908309f7ffeff6"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The Global Variational Transformer. During training, The posterior latent variable z by the posterior network is passed to the decoder, while during testing, the target response is absent, and z is replaced by the prior latent variable. The word embeddings, positional encoding, softmax layer and meta vectors are ignored for simplicity",
"Figure 2: The Sequential Variational Transformer. During training, The posterior latent variables z by the posterior network are passed to the decoder, while during testing, the target response is absent, and z is replaced by the prior latent variables z. The word embeddings, positional encoding, softmax layer and meta vectors are ignored for simplicity",
"Table 1: Results of Variational Transformer compared to baselines on automatic and human evaluations.",
"Table 2: Generated responses from proposed models and baseline models. The reference responses (Ref) are given."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"7-Table1-1.png",
"8-Table2-1.png"
]
} | [
"What approach performs better in experiments global latent or sequence of fine-grained latent variables?"
] | [
[
"2003.12738-7-Table1-1.png",
"2003.12738-Results ::: Quantitative Analysis-1"
]
] | [
"PPL: SVT\nDiversity: GVT\nEmbeddings Similarity: SVT\nHuman Evaluation: SVT"
] | 345 |
1906.01183 | Back Attention Knowledge Transfer for Low-resource Named Entity Recognition | In recent years, great success has been achieved in the field of natural language processing (NLP), thanks in part to the considerable amount of annotated resources. For named entity recognition (NER), most languages do not have such an abundance of labeled data, so the performance on those languages is comparatively lower. To improve the performance, we propose a general approach called Back Attention Network (BAN). BAN uses a translation system to translate sentences in other languages into English and utilizes a pre-trained English NER model to obtain task-specific information. After that, BAN applies a new mechanism named back attention knowledge transfer to improve the semantic representation, which aids in the generation of the result. Experiments on three different language datasets indicate that our approach outperforms other state-of-the-art methods. | {
"paragraphs": [
[
"Named entity recognition (NER) is a sequence tagging task that extracts the continuous tokens into specified classes, such as person names, organizations and locations. Current state-of-the-art approaches for NER usually base themselves on long short-term memory recurrent neural networks (LSTM RNNs) and a subsequent conditional random field (CRF) to predict the sequence labels BIBREF0 . Performances of neural NER methods are compromised if the training data are not enough BIBREF1 . This problem is severe for many languages due to a lack of labeled datasets, e.g., German and Spanish. In comparison, NER on English is well developed and there exist abundant labeled data for training purpose. Therefore, in this work, we regard English as a high-resource language, while other languages, even Chinese, as low-resource languages.",
"There is an intractable problem when leveraging English NER system for other languages. The sentences with the same meaning in different languages may have different lengths and the positions of words in these sentences usually do not correspond. Previous work such as BIBREF2 used each single word translation information to enrich the monolingual word embedding. To our knowledge, there is no approach that employs the whole translation information to improve the performance of the monolingual NER system.",
"To address above problem, we introduce an extension to the BiLSTM-CRF model, which could obtain transferred knowledge from a pre-trained English NER system. First, we translate other languages into English. Since the proposed models of BIBREF3 and BIBREF4 , the performance of attention-based machine translation systems is close to the human level. The attention mechanism can make the translation results more accurate. Furthermore, this mechanism has another useful property: the attention weights can represent the alignment information. After translating the low-resource language into English, we utilize the pre-trained English NER model to predict the sentences and record the output states of BiLSTM in this model. The states contain the semantic and task-specific information of the sentences. By using soft alignment attention weights as a transformation matrix, we manage to transfer the knowledge of high resource language — English to other languages. Finally, using both word vectors and the transfer knowledge, we obtain new state-of-the-art results on four datasets."
],
[
"In this section, we will introduce the BAN in three parts. Our model is based on the mainstream NER model BIBREF5 , using BiLSTM-CRF as the basic network structure. Given a sentence INLINEFORM0 and corresponding labels INLINEFORM1 , where INLINEFORM2 denotes the INLINEFORM3 th token and INLINEFORM4 denotes the INLINEFORM5 th label. The NER task is to estimate the probability INLINEFORM6 . Figure FIGREF1 shows the main architecture of our model."
],
[
"Attention-base translation model We use the system of BIBREF6 , a convolutional sequence to sequence model. It divides translation process into two steps. First, in the encoder step, given an input sentence INLINEFORM0 of length INLINEFORM1 , INLINEFORM2 represents each word as word embedding INLINEFORM3 . After that, we obtain the absolute position of input elements INLINEFORM4 . Both vectors are concatenated to get input sentence representations INLINEFORM5 . Similarly, output elements INLINEFORM6 generated from decoder network have the same structure. A convolutional neural network (CNN) is used to get the hidden state of the sentence representation from left to right. Second, in the decoder step, attention mechanism is used in each CNN layer. In order to acquire the attention value, we combine the current decoder state INLINEFORM7 with the embedding of previous decoder output value INLINEFORM8 : DISPLAYFORM0 ",
"For INLINEFORM0 th layer, the attention INLINEFORM1 of the INLINEFORM2 th source element and INLINEFORM3 th state is computed as a dot-product between the decoder state summary INLINEFORM4 and each output INLINEFORM5 of the last encoder layer: DISPLAYFORM0 ",
"Then we follow the normal decoder implementation and get target sentence INLINEFORM0 by beam search algorithm.",
"Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding INLINEFORM0 is concatenated by the CharLM embedding INLINEFORM1 and GLOVE embedding INLINEFORM2 BIBREF8 . A standard BiLSTM-CRF named entity recognition model BIBREF0 takes INLINEFORM3 to address the NER task."
],
[
"The sentences in low-resource languages are used as input to the model. Given a input sentence INLINEFORM0 in low-resource language, we use pre-trained translation model to translate INLINEFORM1 into English and the output is INLINEFORM2 . Simultaneously, we record the average of values for all INLINEFORM3 attention layers: DISPLAYFORM0 ",
"After that, we use the pre-trained English NER model to predict the translated sentence INLINEFORM0 . Then, we have the BiLSTM output states: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 th forward and backward outputs, respectively. INLINEFORM3 contains the semantic and task-specific information of the translated sentence. And the INLINEFORM4 th row of attention weights matrix INLINEFORM5 represents the correlation between source word INLINEFORM6 with all words in target sentence INLINEFORM7 . Thereafter, to obtain the transfer information INLINEFORM8 of source word, we reversely use the attention weights: DISPLAYFORM0 ",
"where INLINEFORM0 represent the whole outputs of BiLSTM, and INLINEFORM1 , INLINEFORM2 . INLINEFORM3 denotes the transfer information of INLINEFORM4 th word in low-resource language and has the same dimensions with INLINEFORM5 ."
],
[
"The low-resource language named entity recognition architecture is based on BIBREF5 . The word embeddings of low-resource language are passed into a BiLSTM-CRF sequence labeling network. The embeddings INLINEFORM0 are used as inputs to the BiLSTM. Then we have: DISPLAYFORM0 ",
"Before passing the forward and backward output states INLINEFORM0 into CRF, we concatenate INLINEFORM1 and INLINEFORM2 as a new representation: DISPLAYFORM0 ",
"CRF model uses INLINEFORM0 to give the final sequence probability on the possible sequence label INLINEFORM1 : DISPLAYFORM0 ",
"At last, the named entity labels are predicted by: DISPLAYFORM0 "
],
[
"We use experiments to evaluate the effectiveness of our proposed method on NER task. On three different low-resource languages, we conducted an experimental evaluation to prove the effectiveness of our back attention mechanism on the NER task. Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . All the annotations are mapped to the BIOES format. Table TABREF14 shows the detailed statistics of the datasets."
],
[
"We implement the basic BiLSTM-CRF model using PyTorch framework. FASTTEXT embeddings are used for generating word embeddings. Translation models are trained on United Nation Parallel Corpus. For pre-trained English NER system, we use the default NER model of Flair."
],
[
"We train our NER model using vanilla SGD with no momentum for 150 epochs, with an initial learning rate of 0.1 and a learning rate annealing method in which the train loss does not fall in 3 consecutive epochs. The hidden size of BiLSTM model is set to 256 and mini-batch size is set to 16. Dropout is applied to word embeddings with a rate of 0.1 and to BiLSTM with a rate of 0.5. We repeat each experiment 5 times under different random seeds and report the average of test set as final performance."
],
[
"Experimental results of German and Spanish are shown in table TABREF20 . Evaluation metric is F1-score. We can find that our method CharLM+BiLSTM-CRF+BAN yields the best performance on two languages. And after adding our network to each of the basic models, the performance of each model has been improved. This suggests that the transfer information, obtained from BAN, is helpful for low-resource NER."
],
[
"Chinese is distinct from Latin-based languages. Thence, there are some tricks when processing Chinese corpus. But we only suppose to verify the validity of our method, so we just use the character-level embeddings.",
"Table TABREF22 shows the results on Chinese OntoNotes 4.0. Adding BAN to baseline model leads to an increase from 63.25% to 72.15% F1-score. In order to further improve the performance, we use the BERT model BIBREF20 to produce word embeddings. With no segmentation, we surpass the previous state-of-the-art approach by 6.33% F1-score. For Weibo dataset, the experiment results are shown in Table TABREF23 , where NE, NM and Overall denote named entities, nominal entities and both. The baseline model gives a 33.18% F1-score. Using the transfer knowledge by BAN, the baseline model achieves an immense improvement in F1-score, rising by 10.39%. We find that BAN still gets consistent improvement on a strong model. With BAN, the F1-score of BERT+BiLSTM+CRF increases to 70.76%."
],
[
" BIBREF21 indicates that the representations from higher-level layers of NLP models are more task-specific. Although we do the same task among different languages, the target domains of different datasets are slightly different. So, to prove that back attention knowledge generated by BAN could capture valuable task-specific information between different languages, we use the back attention knowledge alone as word embedding to predict Weibo dataset. We compare three different word embeddings on the baseline model. Experimental results are shown in Table TABREF25 and illustrate that back attention knowledge from BAN has inherent semantic information."
],
[
"Our proposed approach is the first to leverage hidden states of NER model from another language to improve monolingual NER performance. The training time with or without BAN is almost the same due to the translation module and the English NER module are pre-trained.",
"On large datasets, our model makes a small improvement because some of transfer knowledge obtained from our method is duplicated with the information learned by the monolingual models. On small datasets, e.g., Weibo dataset, a great improvement has been achieved after adding transfer knowledge to the baseline model. The reason maybe is that these datasets are too small to be fully trained and the test datasets have many non-existent characters of the training dataset, even some unrecognized characters. Therefore, some tags labeled incorrectly by monolingual models could be labeled correctly with the additional transfer knowledge which contains task-specific information obtained from BAN. So, the transfer information plays an important role in this dataset."
],
[
"In this paper, we seek to improve the performance of NER on low-resource languages by leveraging the well-trained English NER system. This is achieved by way of BAN, which is a simple but extensible approach. It can transfer information between different languages. Empirical experiments show that, on small datasets, our approach can lead to significant improvement on the performance. This property is of great practical importance for low-resource languages. In future work, we plan to extend our method on other NLP tasks, e.g., relation extraction, coreference resolution."
]
],
"section_name": [
"Introduction",
"Model",
"Pre-trained Translation and NER Model",
"Back Attention Knowledge Transfer",
"Named Entity Recognition Architecture",
"Experiments",
"Experimental Setup",
"Settings",
"German and Spanish NER",
"Chinese NER",
"Task-Specific Information from Back Attention Network",
"Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"7ee6a452970790d614ad6eed09a3d005ad3aca0f"
],
"answer": [
{
"evidence": [
"Attention-base translation model We use the system of BIBREF6 , a convolutional sequence to sequence model. It divides translation process into two steps. First, in the encoder step, given an input sentence INLINEFORM0 of length INLINEFORM1 , INLINEFORM2 represents each word as word embedding INLINEFORM3 . After that, we obtain the absolute position of input elements INLINEFORM4 . Both vectors are concatenated to get input sentence representations INLINEFORM5 . Similarly, output elements INLINEFORM6 generated from decoder network have the same structure. A convolutional neural network (CNN) is used to get the hidden state of the sentence representation from left to right. Second, in the decoder step, attention mechanism is used in each CNN layer. In order to acquire the attention value, we combine the current decoder state INLINEFORM7 with the embedding of previous decoder output value INLINEFORM8 : DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "Attention-based translation model with convolution sequence to sequence model",
"highlighted_evidence": [
"Attention-base translation model We use the system of BIBREF6 , a convolutional sequence to sequence model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4dc8976808f837f745e8635ee5eea1e3ea340bec"
],
"answer": [
{
"evidence": [
"We use experiments to evaluate the effectiveness of our proposed method on NER task. On three different low-resource languages, we conducted an experimental evaluation to prove the effectiveness of our back attention mechanism on the NER task. Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . All the annotations are mapped to the BIOES format. Table TABREF14 shows the detailed statistics of the datasets.",
"Table TABREF22 shows the results on Chinese OntoNotes 4.0. Adding BAN to baseline model leads to an increase from 63.25% to 72.15% F1-score. In order to further improve the performance, we use the BERT model BIBREF20 to produce word embeddings. With no segmentation, we surpass the previous state-of-the-art approach by 6.33% F1-score. For Weibo dataset, the experiment results are shown in Table TABREF23 , where NE, NM and Overall denote named entities, nominal entities and both. The baseline model gives a 33.18% F1-score. Using the transfer knowledge by BAN, the baseline model achieves an immense improvement in F1-score, rising by 10.39%. We find that BAN still gets consistent improvement on a strong model. With BAN, the F1-score of BERT+BiLSTM+CRF increases to 70.76%."
],
"extractive_spans": [
"German",
"Spanish",
"Chinese"
],
"free_form_answer": "",
"highlighted_evidence": [
"Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . ",
"Table TABREF22 shows the results on Chinese OntoNotes 4.0. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"24c6988b3b8cb3ee9c01fece408786c840551858"
],
"answer": [
{
"evidence": [
"Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding INLINEFORM0 is concatenated by the CharLM embedding INLINEFORM1 and GLOVE embedding INLINEFORM2 BIBREF8 . A standard BiLSTM-CRF named entity recognition model BIBREF0 takes INLINEFORM3 to address the NER task.",
"We implement the basic BiLSTM-CRF model using PyTorch framework. FASTTEXT embeddings are used for generating word embeddings. Translation models are trained on United Nation Parallel Corpus. For pre-trained English NER system, we use the default NER model of Flair."
],
"extractive_spans": [],
"free_form_answer": "Bidirectional LSTM based NER model of Flair",
"highlighted_evidence": [
"Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. ",
"For pre-trained English NER system, we use the default NER model of Flair."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which translation system do they use to translate to English?",
"Which languages do they work with?",
"Which pre-trained English NER model do they use?"
],
"question_id": [
"c45feda62f23245f53e855706e2d8ea733b7fd03",
"9785ecf1107090c84c57112d01a8e83418a913c1",
"e051d68a7932f700e6c3f48da57d3e2519936c6d"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The architecture of BAN. The source sentences are translated into English and recorded the attention weights. Then the sentences are put into English NER model. After acquiring the outputs of BiLSTM in the English model, we use back attention mechanism to obtain transfer knowledge to aid in generation of the result.",
"Table 1: statistic of sentences",
"Table 2: Evaluation on low-resource NER",
"Table 3: Evaluation on OntoNotes 4.0",
"Table 4: Evaluation on Weibo NER",
"Table 5: Comparison of different embeddings"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png"
]
} | [
"Which pre-trained English NER model do they use?"
] | [
[
"1906.01183-Experimental Setup-0",
"1906.01183-Pre-trained Translation and NER Model-3"
]
] | [
"Bidirectional LSTM based NER model of Flair"
] | 349 |
1909.06522 | Multilingual Graphemic Hybrid ASR with Massive Data Augmentation | Towards developing high-performing ASR for low-resource languages, approaches to address the lack of resources are to make use of data from multiple languages, and to augment the training data by creating acoustic variations. In this work we present a single grapheme-based ASR model learned on 7 geographically proximal languages, using standard hybrid BLSTM-HMM acoustic models with lattice-free MMI objective. We build the single ASR grapheme set via taking the union over each language-specific grapheme set, and we find such multilingual ASR model can perform language-independent recognition on all 7 languages, and substantially outperform each monolingual ASR model. Secondly, we evaluate the efficacy of multiple data augmentation alternatives within language, as well as their complementarity with multilingual modeling. Overall, we show that the proposed multilingual ASR with various data augmentation can not only recognize any within training set languages, but also provide large ASR performance improvements. | {
"paragraphs": [
[
"It can be challenging to build high-accuracy automatic speech recognition (ASR) systems in real world due to the vast language diversity and the requirement of extensive manual annotations on which the ASR algorithms are typically built. Series of research efforts have thus far been focused on guiding the ASR of a target language by using the supervised data from multiple languages.",
"Consider the standard hidden Markov models (HMM) based ASR system with a phonemic lexicon, where the vocabulary is specified by a pronunciation lexicon. One popular strategy is to make all languages share the same phonemic representations through a universal phonetic alphabet such as International Phonetic Alphabet (IPA) phone set BIBREF0, BIBREF1, BIBREF2, BIBREF3, or X-SAMPA phone set BIBREF4, BIBREF5, BIBREF6, BIBREF7. In this case, multilingual joint training can be directly applied. Given the effective neural network based acoustic modeling, another line of research is to share the hidden layers across multiple languages while the softmax layers are language dependent BIBREF8, BIBREF9; such multitask learning procedure can improve ASR accuracies for both within training set languages, and also unseen languages after language-specific adaptation, i.e., cross-lingual transfer learning. Different nodes in hidden layers have been shown in response to distinct phonetic features BIBREF10, and hidden layers can be potentially transferable across languages. Note that the above works all assume the test language identity to be known at decoding time, and the language specific lexicon and language model applied.",
"In the absence of a phonetic lexicon, building graphemic systems has shown comparable performance to phonetic lexicon-based approaches in extensive monolingual evaluations BIBREF11, BIBREF12, BIBREF13. Recent advances in end-to-end ASR models have attempted to take the union of multiple language-specific grapheme (i.e. orthographic character) sets, and use such union as a universal grapheme set for a single sequence-to-sequence ASR model BIBREF14, BIBREF15, BIBREF16. It allows for learning a grapheme-based model jointly on data from multiple languages, and performing ASR on within training set languages. In various cases it can produce performance gains over monolingual modeling that uses in-language data only.",
"In our work, we aim to examine the same approach of building a multilingual graphemic lexicon, while using a standard hybrid ASR system – based on Bidirectional Long Short-Term Memory (BLSTM) and HMM – learned with lattice-free maximum mutual information (MMI) objective BIBREF17. Our initial attempt is on building a single cascade of an acoustic model, a phonetic decision tree, a graphemic lexicon and a language model – for 7 geographically proximal languages that have little overlap in their character sets. We evaluate it in a low resource context where each language has around 160 hours training data. We find that, despite the lack of explicit language identification (ID) guidance, our multilingual model can accurately produce ASR transcripts in the correct test language scripts, and provide higher ASR accuracies than each language-specific ASR model. We further examine if using a subset of closely related languages – along language family or orthography – can achieve the same performance improvements as using all 7 languages.",
"We proceed with our investigation on various data augmentation techniques to overcome the lack of training data in the above low-resource setting. Given the highly scalable neural network acoustic modeling, extensive alternatives to increasing the amount or diversity of existing training data have been explored in prior works, e.g., applying vocal tract length perturbation and speed perturbation BIBREF18, volume perturbation and normalization BIBREF19, additive noises BIBREF20, reverberation BIBREF19, BIBREF21, BIBREF22, and SpecAugment BIBREF23. In this work we focus particularly on techniques that mostly apply to our wildly collected video datasets. In comparing their individual and complementary effects, we aim to answer: (i) if there is benefit in scaling the model training to significantly larger quantities, e.g., up to 9 times greater than the original training set size, and (ii) if any, is the data augmentation efficacy comparable or complementary with the above multilingual modeling.",
"Improving accessibility to videos “in the wild” such as automatic captioning on YouTube has been studied in BIBREF24, BIBREF25. While allowing for applications like video captions, indexing and retrieval, transcribing the heterogeneous Facebook videos of extensively diverse languages is highly challenging for ASR systems. On the whole, we present empirical studies in building a single multilingual ASR model capable of language-independent decoding on multiple languages, and in effective data augmentation techniques for video datasets."
],
[
"In this section we first briefly describe our deployed ASR architecture based on the weighted finite-state transducers (WFSTs) outlined in BIBREF26. Then we present its extension to multilingual training. Lastly, we discuss its language-independent decoding and language-specific decoding."
],
[
"In the ASR framework of a hybrid BLSTM-HMM, the decoding graph can be interpreted as a composed WFST of cascade $H \\circ C \\circ L \\circ G$. Acoustic models, i.e. BLSTMs, produce acoustic scores over context-dependent HMM (i.e. triphone) states. A WFST $H$, which represents the HMM set, maps the triphone states to context-dependent phones.",
"While in graphemic ASR, the notion of phone is turned to grapheme, and we typically create the grapheme set via modeling each orthographic character as a separate grapheme. Then a WFST $C$ maps each context-dependent grapheme, i.e. tri-grapheme, to an orthographic character. The lexicon $L$ is specified where each word is mapped to a sequence of characters forming that word. $G$ encodes either the transcript during training, or a language model during decoding."
],
[
"To build a single grapheme-based acoustic model for multiple languages, a multilingual graphemic set is obtained by taking a union of each grapheme set from each language considered, each of which can be either overlapping or non-overlapping. In the multilingual graphemic lexicon, each word in any language is mapped to a sequence of characters in that language.",
"A context-dependent acoustic model is constructed using the decision tree clustering of tri-grapheme states, in the same fashion as the context dependent triphone state tying BIBREF27. The graphemic-context decision tree is constructed over all the multilingual acoustic data including each language of interest. The optimal number of leaves for the multilingual model tends to be larger than for a monolingual neural network.",
"The acoustic model is a BLSTM network, using sequence discriminative training with lattice-free MMI objective BIBREF17. The BLSTM model is bootstrapped from a standard Gaussian mixture model (GMM)-HMM system. A multilingual $n$-gram language model is learned over the combined transcripts including each language considered."
],
[
"Given the multilingual lexicon and language model, the multilingual ASR above can decode any within training set language, even though not explicitly given any information about language identity. We refer to it as language-independent decoding or multilingual decoding. Note that such ASR can thus far produce any word in the multilingual lexicon, and the hypothesized word can either be in the vocabulary of the considered test language, or out of test language vocabulary as a mismatched-language error.",
"We further consider applying language-specific decoding, assuming the test language identity to be known at decoding time. Again consider the decoding graph $H \\circ C \\circ L \\circ G$, and $H$ & $C$ are thus multilingual while the lexicon $L$ and language model $G$ can include only the words in test language vocabulary. The multilingual acoustic model can therefore make use of multilingual training data, while its language-specific decoding operation only produces monolingual words matched with test language identity."
],
[
"In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets."
],
[
"Both speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$."
],
[
"To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.",
"Note that in our video datasets, video lengths vary between 10 seconds and 5 minutes, with an average duration of about 2 minutes. Rather than constantly repeating the 10-second sound clip to match the original minute-long audio, we superpose each sound clip on the short utterances via audio segmentation. Specifically, we first use an initial bootstrap model to align each original long audio, and segment each audio into around 10-second utterances via word boundaries.",
"Then for each utterance in the original train set, we can create a new noisy utterance by the steps:",
"Sample a sound clip from AudioSet.",
"Trim or repeat the sound clip as necessary to match the duration of the original utterance.",
"Sample a signal-to-noise ratio (SNR) from a Gaussian distribution with mean 10, and round the SNR up to 0 or down to 20 if the sample is beyond 0-20dB. Then scale the sound clip signal to obtain the target SNR.",
"Superimpose the original utterance signal with the scaled sound clip signal in time domain to create the resulting utterance.",
"Thus for each original utterance, we can create a variable number of new noisy utterances via sampling sound clips. We use a 3-fold augmentation that combines the original train set with two noisy copies."
],
[
"We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.",
"Consider each utterance (i.e. after the audio segmentation in Section SECREF5), and we compute its log mel spectrogram with $\\nu $ dimension and $\\tau $ time steps:",
"Frequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \\nu - f)$.",
"Time masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \\tau - t)$.",
"As in BIBREF23, we increase the training schedule accordingly, i.e., number of epochs."
],
[
""
],
[
"Our multilingual ASR attempt was on 7 geographically proximal languages: Kannada, Malayalam, Sinhala, Tamil, Bengali, Hindi and Marathi. The datasets were a set of public Facebook videos, which were wildly collected and anonymized. We categorized them into four video types:",
"Ads: any video content where the publisher paid for a promo on it.",
"Pages: content published by a page that was not paid content promoted to users.",
"UserLive: live streams from users.",
"UserVOD (video on demand): was-live videos.",
"For each language, the train and test set size are described in Table TABREF10, and most training data were Pages. On each language we also had a small validation set for model parameter tuning. Each monolingual ASR baseline was trained on language-specific data only.",
"The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:",
"all 7 languages, for 1059 training hours in total,",
"4 languages – Kannada, Malayalam, Sinhala and Tamil – for 590 training hours,",
"3 languages – Bengali, Hindi and Marathi – for 469 training hours,",
"which are referred to as 7lang, 4lang, and 3lang respectively. Note that Kannada, Malayalam and Tamil are Dravidian languages, which have rich agglutinative inflectional morphology BIBREF2 and resulted in around 10% OOV token rates on test sets (Hindi had the lowest OOV rate as 2-3%). Such experimental setup was designed to answer the questions:",
"If a single graphemic ASR model could scale its language-independent recognition up to all 7 languages.",
"If including all 7 languages could yield better ASR performance than using a small subset of closely related languages."
],
[
"Each bootstrap model was a GMM-HMM based system with speaker adaptive training, implemented with Kaldi BIBREF29. Each neural network acoustic model was a latency-controlled BLSTM BIBREF30, learned with lattice-free MMI objective and Adam optimizer BIBREF31. All neural networks were implemented with Caffe2 BIBREF32. Due to the production real time factor (RTF) requirements, we used the same model size in all cases – a 4 layer BLSTM network with 600 cells in each layer and direction – except that, the softmax dimensions, i.e. the optimal decision tree leaves, were determined through experiments on validation sets, varying within 7-30k. Input acoustic features were 80-dimensional log-mel filterbank coefficients. We used standard 5-gram language models. After lattice-free MMI training, the model with the best accuracy on validation set was used for evaluation on test set."
],
[
"ASR word error rate (WER%) results are shown in Table TABREF11. We found that, although not explicitly given any information on test language identities, multilingual ASR with language-independent decoding (Section SECREF3) - trained on 3, 4, or 7 languages - substantially outperformed each monolingual ASR in all cases, and on average led to relative WER reductions between 4.6% (Sinhala) and 10.3% (Hindi).",
"Note that the word hypotheses from language-independent decoding could be language mismatched, e.g., part of a Kannada utterance was decoded into Marathi words. So we counted how many word tokens in the decoding transcripts were not in the lexicon of corresponding test language. We found in general only 1-3% word tokens are language mismatched, indicating that the multilingual model was very effective in identifying the language implicitly and jointly recognizing the speech.",
"Consider the scenario that, test language identities are known likewise in each monolingual ASR, and we proceed with language-specific decoding (Section SECREF3) on Kannada and Hindi, via language-specific lexicon and language model at decoding time. We found that, the language-specific decoding provided only moderate gains, presumably as discussed above, the language-independent decoding had given the mismatched-language word token rates as sufficiently low as 1-3%.",
"Additionally, the multilingual ASR of 4lang and 3lang (Section SECREF15) achieved the same, or even slightly better performance as compared to the ASR of 7lang, suggesting that incorporating closely related languages into multilingual training is most useful for improving ASR performance. However, the 7lang ASR by itself still yields the advantage in language-independent recognition of more languages."
],
[
"First, we experimented with monolingual ASR on Kannada and Hindi, and performed comprehensive evaluations of the data augmentation techniques described in Section SECREF3. As in Table TABREF11, the performance gains of using frequency masking were substantial and comparable to those of using speed perturbation, where $m_F = 2$ and $F=15$ (Section SECREF12) worked best. In addition, combining both frequency masking and speed perturbation could provide further improvements. However, applying additional volume perturbation (Section SECREF4) or time masking (Section SECREF12) was not helpful in our preliminary experimentation.",
"Note that after speed perturbation, the training data tripled, to which we could apply another 3-fold augmentation based on additive noise (Section SECREF5), and the final train set was thus 9 times the size of original train set. We found that all 3 techniques were complementary, and in combination led to large fusion gains over each monolingual baseline – relative WER reductions of 8.7% on Kannada, and 14.8% on Hindi.",
"Secondly, we applied the 3 data augmentation techniques to the multilingual ASR of 7lang, and tested their additive effects. We show the resulting WERs on Kannada and Hindi in Table TABREF11. Note that on Kannada, we found around 7% OOV token rate on Ads but around 10-11% on other 3 video types, and we observed more gains on Ads; presumably because the improved acoustic model could only correct the in-vocabulary word errors, lower OOV rates therefore left more room for improvements. Hindi had around 2.5% OOV rates on each video type, and we found incorporating data augmentation into multilingual ASR led to on average 9.0% relative WER reductions.",
"Overall, we demonstrated the multilingual ASR with massive data augmentation – via a single graphemic model even without the use of explicit language ID – allowed for relative WER reductions of 11.0% on Kannada and 18.4% on Hindi."
],
[
"We have presented a multilingual grapheme-based ASR model can effectively perform language-independent recognition on any within training set languages, and substantially outperform each monolingual ASR alternative. Various data augmentation techniques can yield further complementary improvements. Such single multilingual model can not only provide better ASR performance, but also serves as an alternative to the standard production deployment that typically includes extensive monolingual ASR systems and a separate language ID model.",
"Future work will expand the language coverage to include both geographically proximal and distant languages. Additionally, given the identity of a target test language, we will consider the hidden layers of such multilingual acoustic model as a pre-trained model, and thus perform subsequent monolingual fine-tuning, as compared to the multitask learning procedure in BIBREF8, BIBREF9."
]
],
"section_name": [
"Introduction",
"Multilingual ASR",
"Multilingual ASR ::: Graphemic ASR with WFST",
"Multilingual ASR ::: A single multilingual ASR model using lattice-free MMI",
"Multilingual ASR ::: Language-independent and language-specific decoding in the WFST framework",
"Data augmentation",
"Data augmentation ::: Speed and volume perturbation",
"Data augmentation ::: Additive noise",
"Data augmentation ::: SpecAugment",
"Experiments",
"Experiments ::: Data",
"Experiments ::: Model configurations",
"Experiments ::: Results with multilingual ASR",
"Experiments ::: Results with data augmentation",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"a68e54ccc7c2f2543dcb11b0c1cbc4aef3b976ba"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"24c6a69a2e7133fa700ad4ec0bd3b4441a477243"
],
"answer": [
{
"evidence": [
"In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets.",
"To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.",
"We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.",
"Consider each utterance (i.e. after the audio segmentation in Section SECREF5), and we compute its log mel spectrogram with $\\nu $ dimension and $\\tau $ time steps:",
"Frequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \\nu - f)$.",
"Time masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \\tau - t)$.",
"Data augmentation ::: Speed and volume perturbation",
"Both speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$."
],
"extractive_spans": [
"Frequency masking",
"Time masking",
"Additive noise",
"Speed and volume perturbation"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets.",
"To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.",
"We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.\n\nConsider each utterance (i.e. after the audio segmentation in Section SECREF5), and we compute its log mel spectrogram with $\\nu $ dimension and $\\tau $ time steps:\n\nFrequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \\nu - f)$.\n\nTime masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \\tau - t)$.",
"Data augmentation ::: Speed and volume perturbation\nBoth speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5f397cf9a9999c3558286f08f1eaa899d587bd4a"
],
"answer": [
{
"evidence": [
"The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:"
],
"extractive_spans": [],
"free_form_answer": "Little overlap except common basic Latin alphabet and that Hindi and Marathi languages use same script.",
"highlighted_evidence": [
"The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much training data is required for each low-resource language?",
"What are the best within-language data augmentation methods?",
"How much of the ASR grapheme set is shared between languages?"
],
"question_id": [
"9e2e5918608a2911b341d4887f58a4595d7d1429",
"0ec4143a4f1a8f597b435f83c0451145be2ab95b",
"90159e143487505ddc026f879ecd864b7f4f479e"
],
"question_writer": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
],
"search_query": [
"asr",
"asr",
"asr"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1. The amounts of audio data in hours.",
"Table 2. WER results on each video dataset. Frequency masking is denoted by fm, speed perturbation by sp, and additive noise (Section 3.2) by noise. 3lang, 4lang and 7lang denote the multilingual ASR models trained on 3, 4 and 7 languages, respectively, as in Section 4.1. Language-specific decoding denotes using multilingual acoustic model with language-specific lexicon and language model, as in Section 2.3. Average is unweighted average WER across 4 video types. Gain (%) is the relative reduction in the Average WER over each monolingual baseline."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png"
]
} | [
"How much of the ASR grapheme set is shared between languages?"
] | [
[
"1909.06522-Experiments ::: Data-6"
]
] | [
"Little overlap except common basic Latin alphabet and that Hindi and Marathi languages use same script."
] | 350 |
1909.12642 | HateMonitors: Language Agnostic Abuse Detection in Social Media | Reducing hateful and offensive content in online social media poses a dual problem for moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require an efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model achieved first position in the German sub-task A. We have also made our model public at this https URL . | {
"paragraphs": [
[
"In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society.",
"Social media moderators are having a hard time in combating the rampant spread of hate speech as it is closely related to the other forms of abusive language. The evolution of new slangs and multilingualism, further adding to the complexity.",
"Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being the clear indication BIBREF1. Arun et al. BIBREF1 suggests that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian language.",
"For the first time, a shared task on abusive content detection has been released for Hindi language at HASOC 2019. This will fuel the hate speech and offensive language research for Indian languages. The inclusion of datasets for English and German language will give a performance comparison for detection of abusive content in high and low resource language.",
"In this paper, we focus on the detection of multilingual hate speech detection that are written in Hindi, English, and German and describe our submission (HateMonitors) for HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and use machine learning models for classification."
],
[
"Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 uses predefined language element and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum.",
"Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22.",
"One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently."
],
[
"The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages."
],
[
"We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced."
],
[
"Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask.",
"Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task.",
"Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset."
],
[
"In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system."
],
[
"We preprocess the tweets before performing the feature extraction. The following steps were followed:",
"We remove all the URLs.",
"Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters.",
"We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders.",
"Any numerical figure was normalized to a string `number'.",
"We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact."
],
[
"The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier.",
"Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768.",
"LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31.",
"We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model."
],
[
"The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition."
],
[
"The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively."
],
[
"In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21."
],
[
"In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We use an LGBM model to train the embeddings to perform downstream task. Our model for German language got the first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers."
]
],
"section_name": [
"Introduction",
"Related works",
"Dataset and Task description",
"Dataset and Task description ::: Datasets",
"Dataset and Task description ::: Tasks",
"System Description",
"System Description ::: Feature Generation ::: Preprocessing:",
"System Description ::: Feature Generation ::: Feature vectors:",
"System Description ::: Our Model",
"Results",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"250d0c947b58d1efcf26cec7982d76f573520cc2"
],
"answer": [
{
"evidence": [
"The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively."
],
"extractive_spans": [
"macro F1 score of 0.62"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model got the first position in the German sub-task with a macro F1 score of 0.62."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"57201ae9022529dac728d67a273fe9dea8d435df"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"33ce497452d64699349e71697d7737b16238bd3e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6498931fccd85e3bbbeaa40d24be4841e3ce51a4"
],
"answer": [
{
"evidence": [
"The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages.",
"In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21."
],
"extractive_spans": [],
"free_form_answer": "Hindi, English and German (German task won)",
"highlighted_evidence": [
"The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages.",
"The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the performance of the model for the German sub-task A?",
"Is the model tested for language identification?",
"Is the model compared to a baseline model?",
"What are the languages used to test the model?"
],
"question_id": [
"d10e256f2f724ad611fd3ff82ce88f7a78bad7f7",
"c691b47c0380c9529e34e8ca6c1805f98288affa",
"892e42137b14d9fabd34084b3016cf3f12cac68a",
"dc69256bdfe76fa30ce4404b697f1bedfd6125fe"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"language identification",
"language identification",
"language identification",
"language identification"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1. This table shows the initial statistics about the training and test data",
"Fig. 1. Architecture of our system",
"Table 2. This table gives the language wise result of sub-task A by comparing the macro F1 values",
"Table 3. This table gives the language wise result of sub-task B by comparing the macro F1 values",
"Table 4. This table gives the language wise result of sub-task C by comparing the macro F1 values"
],
"file": [
"3-Table1-1.png",
"5-Figure1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png"
]
} | [
"What are the languages used to test the model?"
] | [
[
"1909.12642-Discussion-0",
"1909.12642-Dataset and Task description-0"
]
] | [
"Hindi, English and German (German task won)"
] | 351 |
1902.10525 | Fast Multi-language LSTM-based Online Handwriting Recognition | We describe an online handwriting system that is able to support 102 languages using a deep neural network architecture. This new system has completely replaced our previous Segment-and-Decode-based system and reduced the error rate by 20%-40% relative for most languages. Further, we report new state-of-the-art results on IAM-OnDB for both the open and closed dataset setting. The system combines methods from sequence recognition with a new input encoding using B\'ezier curves. This leads to up to 10x faster recognition times compared to our previous system. Through a series of experiments we determine the optimal configuration of our models and report the results of our setup on a number of additional public datasets. | {
"paragraphs": [
[
"In this paper we discuss online handwriting recognition: Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. A stroke is a sequence of points INLINEFORM0 with position INLINEFORM1 and timestamp INLINEFORM2 .",
"Figure FIGREF1 illustrates example inputs to our online handwriting recognition system in different languages and scripts. The left column shows examples in English with different writing styles, with different types of content, and that may be written on one or multiple lines. The center column shows examples from five different alphabetic languages similar in structure to English: German, Russian, Vietnamese, Greek, and Georgian. The right column shows scripts that are significantly different from English: Chinese has a much larger set of more complex characters, and users often overlap characters with one another. Korean, while an alphabetic language, groups letters in syllables leading to a large “alphabet” of syllables. Hindi writing often contains a connecting ‘Shirorekha’ line and characters can form larger structures (grapheme clusters) which influence the written shape of the components. Arabic is written right-to-left (with embedded left-to-right sequences used for numbers or English names) and characters change shape depending on their position within a word. Emoji are non-text Unicode symbols that we also recognize.",
"Online handwriting recognition has recently been gaining importance for multiple reasons: (a) An increasing number of people in emerging markets are obtaining access to computing devices, many exclusively using mobile devices with touchscreens. Many of these users have native languages and scripts that are not as easily typed as English, e.g. due to the size of the alphabet or the use of grapheme clusters which make it difficult to design an intuitive keyboard layout BIBREF0 . (b) More and more large mobile devices with styluses are becoming available, such as the iPad Pro, Microsoft Surface devices, and Chromebooks with styluses.",
"Early work in online handwriting recognition looked at segment-and-decode classifiers, such as the Newton BIBREF1 . Another line of work BIBREF2 focused on solving online handwriting recognition by making use of Hidden Markov Models (HMMs) BIBREF3 or hybrid approaches combining HMMs and Feed-forward Neural Networks BIBREF4 . The first HMM-free models were based on Time Delay Neural Networks (TDNNs) BIBREF5 , BIBREF6 , BIBREF7 , and more recent work focuses on Recurrent Neural Network (RNN) variants such as Long-Short-Term-Memory networks (LSTMs) BIBREF8 , BIBREF9 .",
"How to represent online handwriting data has been a research topic for a long time. Early approaches were feature-based, where each point is represented using a set of features BIBREF6 , BIBREF10 , BIBREF1 , or using global features to represent entire characters BIBREF6 . More recently, the deep learning revolution has swept away most feature engineering efforts and replaced them with learned representations in many domains, e.g. speech BIBREF11 , computer vision BIBREF12 , and natural language processing BIBREF13 .",
"Together with architecture changes, training methodologies also changed, moving from relying on explicit segmentation BIBREF7 , BIBREF1 , BIBREF14 to implicit segmentation using the Connectionist Temporal Classification (CTC) loss BIBREF15 , or Encoder-Decoder approaches trained with Maximum Likelihood Estimation BIBREF16 . Further recent work is also described in BIBREF17 .",
"The transition to more complex network architectures and end-to-end training can be associated with breakthroughs in related fields focused on sequence understanding where deep learning methods have outperformed “traditional” pattern recognition methods, e.g. in speech recognition BIBREF18 , BIBREF19 , OCR BIBREF20 , BIBREF21 , offline handwriting recognition BIBREF22 , and computer vision BIBREF23 .",
"In this paper we describe our new online handwriting recognition system based on deep learning methods. It replaces our previous segment-and-decode system BIBREF14 , which first over-segments the ink, then groups the segments into character hypotheses, and computes features for each character hypothesis which are then classified as characters using a rather shallow neural network. The recognition result is then obtained using a best path search decoding algorithm on the lattice of hypotheses incorporating additional knowledge sources such as language models. This system relies on numerous pre-processing, segmentation, and feature extraction heuristics which are no longer present in our new system. The new system reduces the amount of customization required, and consists of a simple stack of bidirectional LSTMs (BLSTMs), a single Logits layer, and the CTC loss BIBREF24 (Sec. SECREF2 ) trained for each script (Sec. SECREF3 ). To support potentially many languages per script (see Table TABREF5 ), language-specific language models and feature functions are used during decoding (Sec. SECREF38 ). E.g. we have a single recognition model for Arabic script which is combined with specific language models and feature functions for our Arabic, Persian, and Urdu language recognizers. Table TABREF5 shows the full list of scripts and languages that we currently support.",
"The new models are more accurate (Sec. SECREF4 ), smaller, and faster (Table TABREF68 ) than our previous segment-and-decode models and eliminate the need for a large number of engineered features and heuristics.",
"We present an extensive comparison of the differences in recognition accuracy for eight languages (Sec. SECREF5 ) and compare the accuracy of models trained on publicly available datasets where available (Sec. SECREF4 ). In addition, we propose a new standard experimental protocol for the IBM-UB-1 dataset BIBREF25 (Sec. SECREF50 ) to enable easier comparison between approaches in the future.",
"The main contributions of our paper are as follows:"
],
[
"Our handwriting recognition model draws its inspiration from research aimed at building end-to-end transcription models in the context of handwriting recognition BIBREF24 , optical character recognition BIBREF21 , and acoustic modeling in speech recognition BIBREF18 . The model architecture is constructed from common neural network blocks, i.e. bidirectional LSTMs and fully-connected layers (Figure FIGREF12 ). It is trained in an end-to-end manner using the CTC loss BIBREF24 .",
"Our architecture is similar to what is often used in the context of acoustic modeling for speech recognition BIBREF19 , in which it is referred to as a CLDNN (Convolutions, LSTMs, and DNNs), yet we differ from it in four points. Firstly, we do not use convolution layers, which in our own experience do not add value for large networks trained on large datasets of relatively short (compared to speech input) sequences typically seen in handwriting recognition. Secondly, we use bidirectional LSTMs, which due to latency constraints is not feasible in speech recognition systems. Thirdly, our architecture does not make use of additional fully-connected layers before and after the bidirectional LSTM layers. And finally, we train our system using the CTC loss, as opposed to the HMMs used in BIBREF19 .",
"This structure makes many components of our previous system BIBREF14 unnecessary, e.g. for feature extraction and segmentation. The heuristics that were hard-coded into our previous system, e.g. stroke-reordering and character hypothesis building, are now implicitly learned from the training data.",
"The model takes as input a time series INLINEFORM0 of length INLINEFORM1 encoding the user input (Sec. SECREF13 ) and passes it through several bidirectional LSTM layers BIBREF26 which learn the structure of characters (Sec. SECREF34 ).",
"The output of the final LSTM layer is passed through a softmax layer (Sec. SECREF35 ) leading to a sequence of probability distributions over characters for each time step.",
"For CTC decoding (Sec. SECREF44 ) we use beam search to combine the softmax outputs with character-based language models, word-based language models, and information about language-specific characters as in our previous system BIBREF14 ."
],
[
"In our earlier paper BIBREF14 we presented results on our datasets with a model similar to the one proposed in BIBREF24 . In that model we used 23 per-point features (similarly to BIBREF6 ) as described in our segment-and-decode system to represent the input. In further experimentation we found that in substantially deeper and wider models, engineered features are unnecessary and their removal leads to better results. This confirms the observation that learned representations often outperform handcrafted features in scenarios in which sufficient training data is available, e.g. in computer vision BIBREF27 and in speech recognition BIBREF28 . In the experiments presented here, we use two representations:",
"The simplest representation of stroke data is as a sequence of touch points. In our current system, we use a sequence of 5-dimensional points INLINEFORM0 where INLINEFORM1 are the coordinates of the INLINEFORM2 th touchpoint, INLINEFORM3 is the timestamp of the touchpoint since the first touch point in the current observation in seconds, INLINEFORM4 indicates whether the point corresponds to a pen-up ( INLINEFORM5 ) or pen-down ( INLINEFORM6 ) stroke, and INLINEFORM7 indicates the start of a new stroke ( INLINEFORM8 otherwise).",
"In order to keep the system as flexible as possible with respect to differences in the writing surface, e.g. area shape, size, spatial resolution, and sampling rate, we perform some minimal preprocessing:",
"Normalization of INLINEFORM0 and INLINEFORM1 coordinates, by shifting in INLINEFORM2 such that INLINEFORM3 , and shifting and scaling the writing area isometrically such that the INLINEFORM4 coordinate spans the range between 0 and 1. In cases where the bounding box of the writing area is unknown we use a surrogate area 20% larger than the observed range of touch points.",
"Equidistant linear resampling along the strokes with INLINEFORM0 , i.e. a line of length 1 will have 20 points.",
"We do not assume that words are written on a fixed baseline or that the input is horizontal. As in BIBREF24 , we use the differences between consecutive points for the INLINEFORM0 coordinates and the time INLINEFORM1 such that our input sequence is INLINEFORM2 for INLINEFORM3 , and INLINEFORM4 for INLINEFORM5 .",
"However simple, the raw input data has some drawbacks, i.e.",
"Resolution: Not all input devices sample inputs at the same rate, resulting in different point densities along the input strokes, requiring resampling which may inadvertently normalize-out details in the input.",
"Length: We choose the (re-)sampling rate such as to represent the smallest features well, which leads to over-sampling in less interesting parts of the stroke, e.g. in straight lines.",
"Model complexity: The model has to learn to map small consecutive steps to larger global features.",
"Bézier curves are a natural way to describe trajectories in space, and have been used to represent online handwriting data in the past, yet mostly as a means of removing outliers in the input data BIBREF29 , up-sampling sparse data BIBREF6 , or for rendering handwriting data smoothly on a screen BIBREF30 . Since a sequence of Bézier curves can represent a potentially long point sequence compactly, irrespective of the original sampling rate, we experiment with representing a sequence of input points as a sequence of parametric cubic polynomials, and using these as inputs to the recognition model.",
"These Bézier curves for INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are cubic polynomials in INLINEFORM3 , i.e..: DISPLAYFORM0 ",
"We start by normalizing the size of the entire ink such that the INLINEFORM0 values are within the range INLINEFORM1 , similar to how we process it for raw points. The time values are scaled linearly to match the length of the ink such that DISPLAYFORM0 ",
"in order to obtain values in the same numerical range as INLINEFORM0 and INLINEFORM1 . This sets the time difference between the first and last point of the stroke to be equal to the total spatial length of the stroke.",
"For each stroke in an ink, the coefficients INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are computed by minimizing the sum of squared errors (SSE) between each observed point INLINEFORM3 and its corresponding closest point (defined by INLINEFORM4 ) on the Bézier curve: DISPLAYFORM0 ",
"Where INLINEFORM0 is the number of points in the stroke. Given a set of coordinates INLINEFORM1 , computing the coefficients corresponds to solving the following linear system of equations: DISPLAYFORM0 ",
"which can be solved exactly for INLINEFORM0 , and in the least-squares sense otherwise, e.g. by solving the normalized equations DISPLAYFORM0 ",
"for the coefficients INLINEFORM0 . We alternate between minimizing the SSE in eq. ( EQREF24 ) and finding the corresponding points INLINEFORM1 , until convergence. The coordinates INLINEFORM2 are updated using a Newton step on DISPLAYFORM0 ",
"which is zero when INLINEFORM0 is orthogonal to the direction of the curve INLINEFORM1 .",
"If (a) the curve cannot fit the points well (SSE error is too large) or if (b) the curve has too sharp bends (arc length longer than 3 times the endpoint distance) we split the curve into two parts. We determine the split point in case (a) by finding the triplet of consecutive points with the smallest angle, and in case (b) as the point closest to the maximum local curvature along the entire Bézier curve. This heuristic is applied recursively until both the curve matching criteria are met.",
"As a final step, to remove spurious breakpoints, consecutive curves that can be represented by a single curve are stitched back together, resulting in a compact set of Bézier curves representing the data within the above constraints. For each consecutive pair of curves, we try to fit a single curve using the combined set of underlying points. If the fit agrees with the above criteria, we replace the two curves by the new one. This is applied repeatedly until no merging happens anymore.",
"Since the Bézier coefficients INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 may vary significantly in range, each curve is fed to the network as a 10-dimensional vector consisting of:",
"the vector between the endpoints (Figure FIGREF28 , blue vector, 2 values),",
"the distance between the control points and the endpoints relative to the distance between the endpoints (green dashed lines, 2 values),",
"the two angles between each control point and the endpoints (green arcs, 2 values),",
"the time coefficients INLINEFORM0 , INLINEFORM1 and INLINEFORM2 (not shown),",
"a boolean value indicating whether this is a pen-up or pen-down curve (not shown).",
"Due to the normalization of the INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 coordinates, as well as the constraints on the curves themselves, most of the resulting values are in the range INLINEFORM3 .",
"The input data is of higher dimension than the raw inputs described in Sec. UID14 , i.e. 10 vs. 5 dimensional, but the input sequence itself is roughly INLINEFORM0 shorter, making them a good choice for latency-sensitive models.",
"In most of the cases, as highlighted through the experimental sections in this paper, the curve representations contribute to better recognition accuracy and speed of our models. However, there are also situations where the curve representation introduces mistakes: punctuation marks become more similar to each other and sometimes are wrongly recognized, capitalization errors appear from time to time and in some cases the candidate recognitions corresponding to higher language model scores are preferred."
],
[
"LSTMs BIBREF31 have become one of the most commonly used RNN cells because they are easy to train and give good results BIBREF32 . In all experiments we use bidirectional LSTMs, i.e. we process the input sequence forward and backward and merge the output states of each layer before feeding them to the next layer. The exact number of layers and nodes is determined empirically for each script. We give an overview of the impact of the number of nodes and layers in section SECREF4 . We also list the configurations for several scripts in our production system, as of this writing."
],
[
"The output of the LSTM layers at each timestep is fed into a softmax layer to get a probability distribution over the INLINEFORM0 possible characters in the script (including spaces, punctuation marks, numbers or other special characters), plus a blank label required by the CTC loss and decoder."
],
[
"The output of the softmax layer is a sequence of INLINEFORM0 time steps of INLINEFORM1 classes that we decode using CTC decoding BIBREF15 . The logits from the softmax layer are combined with language-specific prior knowledge (cp. Sec. SECREF38 ). For each of these additional knowledge sources we learn a weight (called “decoder weight” in the following) and combine them linearly (cp. Sec. SECREF3 ). The learned combination is used as described in BIBREF33 to guide the beam search during decoding.",
"This combination of different knowledge sources allows us to train one recognition model per script (e.g. Latin script, or Cyrillic script) and then use it to serve multiple languages (see Table TABREF5 )."
],
[
"Similarly to our previous work BIBREF14 , we define several scoring functions, which we refer to as feature functions. The goal of these feature functions is to introduce prior knowledge about the underlying language into the system. The introduction of recurrent neural networks has reduced the need for many of them and we now use only the following three:",
"Character Language Models: For each language we support, we build a 7-gram language model over Unicode codepoints from a large web-mined text corpus using Stupid back-off BIBREF35 . The final files are pruned to 10 million 7-grams each. Compared to our previous system BIBREF14 , we found that language model size has a smaller impact on the recognition accuracy, which is likely due to the capability of recurrent neural networks to capture dependencies between consecutive characters. We therefore use smaller language models over shorter contexts.",
"Word Language Models: For languages using spaces to separate words, we also use a word-based language model trained on a similar corpus as the character language models BIBREF36 , BIBREF37 , using 3-grams pruned to between 1.25 million and 1.5 million entries.",
"Character Classes: We add a scoring heuristic which boosts the score of characters from the language's alphabet. This feature function provides a strong signal for rare characters that may not be recognized confidently by the LSTM, and which the other language models might not weigh heavily enough to be recognized. This feature function was inspired by our previous system BIBREF14 .",
"In Section SECREF4 we provide an experimental evaluation of how much each of these feature functions contributes to the final result for several languages."
],
[
"The training of our system happens in two stages, on two different datasets:",
"Using separate datasets is important because the neural network learns the local appearance as well as an implicit language model from the training data. It will be overconfident on its training data and thus learning the decoder weights on the same dataset could result in weights biased towards the neural network model."
],
[
"As our training data does not contain frame-aligned labels, we rely on the CTC loss BIBREF15 for training which treats the alignment between inputs and labels as a hidden variable. CTC training introduces an additional blank label which is used internally for learning alignments jointly with character hypotheses, as described in BIBREF15 .",
"We train all neural network weights jointly using the standard TensorFlow BIBREF34 implementation of CTC training using the Adam Optimizer BIBREF39 with a batch size of 8, a learning rate of INLINEFORM0 , and gradient clipping such that the gradient INLINEFORM1 -norm is INLINEFORM2 . Additionally, to improve the robustness of our models and prevent overfitting, we train our models using random dropout BIBREF40 , BIBREF41 after each LSTM layer with a dropout rate of INLINEFORM3 . We train until the error rate on the evaluation dataset no longer improves for 5 million steps."
],
[
"To optimize the decoder weights, we rely on the Google Vizier service and its default algorithm, specifically batched Gaussian process bandits, and expected improvement as the acquisition function BIBREF38 .",
"For each recognizer training we start 7 Vizier studies, each performing 500 individual trials, and then we pick the configuration that performed best across all of these trials. We experimentally found that using 7 separate studies with different random initializations regularly leads to better results than running a single study once. We found that using more than 500 trials per study does not lead to any additional improvement.",
"For each script we train these weights on a subset of the languages for which we have sufficient data, and transfer the weights to all the other languages. E.g. for the Latin-script languages, we train the decoder weights on English and German, and use the resulting weights for all languages in the first row of Table TABREF5 ."
],
[
"In the following, where possible, we present results for public datasets in a closed data scenario, i.e. training and testing models on the public dataset using a standard protocol. In addition we present evaluation results for public datasets in an open data scenario against our production setup, i.e. in which the model is trained on our own data. Finally, we show experimental results for some of the major languages on our internal datasets. Whenever possible we compare these results to the state of the art and to our previous system BIBREF14 ."
],
[
"The IAM-OnDB dataset BIBREF42 is probably the most used evaluation dataset for online handwriting recognition. It consists of 298 523 characters in 86 272 word instances from a dictionary of 11 059 words written by 221 writers. We use the standard IAM-OnDB dataset separation: one training set, two validations sets and a test set containing 5 363, 1 438, 1 518 and 3 859 written lines, respectively. We tune the decoder weights using the validation set with 1 438 items and report error rates on the test set.",
"We perform a more extensive study of the number of layers and nodes per layer for both the raw and curve input formats to determine the optimal size of the bidirectional LSTM network (see Figure FIGREF48 , Table TABREF47 ). We first run experiments without additional feature functions (Figure FIGREF48 , solid lines), then re-compute the results with tuned weights for language models and character classes (Figure FIGREF48 , dashed lines). We observe that for both input formats, using 3 or 5 layers outperforms more shallow networks, and using more layers gives hardly any improvement. Furthermore, using 64 nodes per layer is sufficient, as wider networks give only small improvements, if at all.",
"Finally, we show a comparison of our old and new systems with the literature on the IAM-OnDB dataset in Table TABREF49 . Our method establishes a new state of the art result when relying on closed data using IAM-OnDB, as well as when relying on our in-house data that we use for our production system, which was not tuned for the IAM-OnDB data and for which none of the IAM-OnDB data was used for training.",
"To better understand where the improvements come from, we discuss the differences between the previous state-of-the-art system (Graves et al. BLSTM BIBREF24 ) and this work across four dimensions: input pre-processing and feature extraction, neural network architecture, CTC training and decoding, and model training methodology.",
"Our input pre-processing (Sec SECREF13 ) differs only in minor ways: the INLINEFORM0 -coordinate used is not first transformed using a high-pass filter, we don't split text-lines using gaps and we don't remove delayed strokes, nor do we do any skew and slant correction or other pre-processing.",
"The major difference comes from feature extraction. In contrast to the 25 features per point uesd in BIBREF24 , we use either 5 features (raw) or 10 features (curves). While the 25 features included both temporal (position in the time series) and spatial features (offline representation), our work uses only the temporal structure. In contrast also to our previous system BIBREF14 , using a more compact representation (and reducing the number of points for curves) allows a feature representation, including spatial structure, to be learned in the first or upper layers of the neural network.",
"The neural network architecture differs both in internal structure of the LSTM cell as well as in the architecture configuration. Our internal structure differs only in that we do not use peephole connections BIBREF44 .",
"As opposed to relying on a single bidirectional LSTM layer of width 100, we experiment with a number of configuration variants as detailed in Figure FIGREF48 . We note that it is particularly important to have more than one layer in order to learn a meaningful representation without feature extraction.",
"We use the CTC forward-backward training algorithm as described in BIBREF24 , and implemented in TensorFlow. The training hyperparameters are described in Section SECREF44 .",
"The CTC decoding algorithm incorporates feature functions similarly to how the dictionary is incorporated in the previous state-of-the-art system. However, we use more feature functions, our language models are trained on a different corpus, and the combination weights are optimized separately as described in Sec SECREF45 ."
],
[
"Another publicly-accessible English-language dataset is the IBM-UB-1 dataset BIBREF25 . From the available datasets therein, we use the English query dataset, which consists of 63 268 handwritten English words. As this dataset has not been used often in the academic literature, we propose an evaluation protocol. We split this dataset into 4 parts with non-overlapping writer IDs: 47 108 items for training, 4 690 for decoder weight tuning, 6 134 for validation and 5 336 for testing.",
"We perform a similar set of experiments as we did for IAM-OnDB to determine the right depth and width of our neural network architecture. The results of these experiments are shown in Figure FIGREF52 . The conclusion for this dataset is similar to the conclusions we drew for the IAM-OnDB: using networks with 5 layers of bidirectional LSTMs with 64 cells each is sufficient for good accuracy. Less deep and less wide networks perform substantially worse, but larger networks only give small improvements. This is true regardless of the input processing method chosen.",
"We give some exemplary results and a comparison with our current production system as well as results for our previous system in Table TABREF53 . We note that our current system is about 38% and 32% better (relative) in CER and WER, respectively, when compared to the previous segment-and-decode approach. The lack of improvement in error rate when evaluating on our production system is due to the fact that our datasets contain spaces while the same setup trained solely on IBM-UB-1 does not."
],
[
"We provide an evaluation of our production system trained on our in-house datasets applied to a number of publicly available benchmark datasets from the literature. Note that for all experiments presented in this section we evaluate our current live system without any tuning specifec to the tasks at hand.",
"The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45 introduced a dataset for classifying the most common Chinese characters. We report the error rates in comparison to published results from the competition and more recent work done by others in Table TABREF56 .",
"We evaluate our live production system on this dataset. Our system was not tuned to the task at hand and was trained as a multi-character recognizer, thus it is not even aware that each sample only contains a single character. Further, our system supports 12 363 different characters while the competition data only contains 3 755 characters. Note that our system did not have access to the training data for this task at all.",
"Whenever our system returns more than one character for a sample, we count this as an error (this happened twice on the entire test set of 224 590 samples). Despite supporting almost four times as many characters than needed for the CASIA data and not having been tuned to the task, the accuracy of our system is still competitive with systems that were tuned for this data specifically.",
"In the ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 , our production system was evaluated against other systems. The system used in the competition is the one reported and described in this paper. Due to licensing restrictions we were unable to do any experiments on the competition training data, or specific tuning for the competition, which was not the case for the other systems mentioned here.",
"We participated in the two tasks that best suited the purpose of our system, specifically the \"Word\" (ref. table TABREF58 ) and the \"Text line\" (ref. table TABREF59 ) recognition levels. Even though we can technically process paragraph level inputs, our system was not built with this goal in mind.",
"In contrast to us, the other teams used the training and validation sets to tune their systems:",
"The IVTOV team's system is very similar to our system. It makes use of bidirectional LSTM layers trained end-to-end with the CTC loss. The inputs used are delta INLINEFORM0 and INLINEFORM1 coordinates, together with pen-up strokes (boolean feature quantifying whether a stroke has ended or not). They report using a two-layer network of 100 cells each and additional preprocessing for better handling the dataset.",
"The MyScript team submitted two systems. The first system has an explicit segmentation component along with a feed-forward network for recognizing character hypotheses, similar in formulation to our previous system BIBREF14 . In addition, they also make use of a bidirectional LSTM system trained end-to-end with the CTC loss. They do not provide additional details on which system is which.",
"We note that the modeling stacks of the systems out-performing ours in this competition are not fundamentally different (to the best of our knowledge, according to released descriptions). We therefore believe that our system might perform comparably if trained on the competition training dataset as well.",
"On our internal testset of Vietnamese data, our new system obtains a CER of 3.3% which is 54% relative better than the old Segment-and-Decode system which had a CER of 7.2% (see also Table FIGREF69 )."
],
[
"Our in-house datasets consist of various types of training data, the amount of which varies by script. Sources of training data include data collected through prompting, commercially available data, artificially inflated data, and labeled/self-labeled anonymized recognition requests (see BIBREF14 for a more detailed description). The number of training samples varies from tens of thousands to several million per script, depending on the complexity and usage.",
"The best configuration for our production systems were identified by running multiple experiments over a range of layer depths and widths on our Latin script datasets. For the Latin script experiments shown in Figure FIGREF63 , the training set we used was a mixture of data from all the Latin-script languages we support and evaluation is done on an English validation dataset, also used for the English evaluation in Table TABREF68 .",
"Similarly to experiments depicted in Figure FIGREF48 and Figure FIGREF52 , increasing the depth and width of the network architecture brings diminishing returns fairly quickly. However, overfitting is less pronounced, particularly when relying on Bézier curve inputs, highlighting that our datasets are more complex in nature.",
"In all our experiments using our production datasets, the Bézier curve inputs outperformed the raw inputs both in terms of accuracy and recognition latency, and are thus used throughout in our production models. We hypothesize that this is due to the implicit normalization of sampling rates and thus line smoothness of the input data. The input data of our production datasets come from a wide variety of data sources including data collection and crowd sourcing from many different types of devices, unlike academic datasets such as IBM-UB-1 or IAM-OnDB which were collected under standardized conditions."
],
[
"The setup described throughout this paper that obtained the best results relies on input processing with Bézier spline interpolation (Sec. UID18 ), followed by 4–5 layers of varying width bidirectional LSTMs, followed by a final softmax layer. For each script, we experimentally determined the best configuration through multiple training runs.",
"We performed an ablation study with the best configurations for each of the eight most important scripts by number of users and compare the results with our previous work BIBREF14 (Table TABREF68 ). The largest relative improvement comes from the overall network architecture stack, followed by the use of the character language model and the other feature functions.",
"In addition, we show the relative improvement in error rates on the languages for which we have evaluation datasets of more than 2 000 items (Figure FIGREF69 ). The new architecture performs between 20%–40% (relative) better over almost all languages."
],
[
"To understand how the different datasets relate to each other, we performed a set of experiments and evaluations with the goal of better characterizing the differences between the datasets.",
"We trained a recognizer on each of the three training sets separately, then evaluated each system on all three test sets (Table TABREF65 ). The neural network architecture is the same as the one we determined earlier (5 layers bidirectional LSTMs of 64 cells each) with the same feature functions, with weights tuned on the corresponding tuning dataset. The inputs are processed using Bézier curves.",
"To better understand the source of discrepancy when training on IAM-OnDB and evaluating on IBM-UB-1, we note the different characteristics of the datasets:",
"IBM-UB-1 has predominantly cursive writing, while IAM-OnDB has mostly printed writing",
"IBM-UB-1 contains single words, while IAM-OnDB has lines of space-separated words",
"This results in models trained on the IBM-UB-1 dataset not being able to predict spaces as they are not present in the dataset's alphabet. In addition, the printed writing style of IAM-OnDB makes recognition harder when evaluating cursive writing from IBM-UB-1. It is likely that the lack of structure through words-only data makes recognizing IAM-OnDB on a system trained on IBM-UB-1 harder than vice-versa.",
"Systems trained on IBM-UB-1 or IAM-OnDB alone perform significantly worse on our internal datasets, as our data distribution covers a wide range of use-cases not necessarily relevant to, or present, in the two academic datasets: sloppy handwriting, overlapping characters for handling writing on small input surfaces, non-uniform sampling rates, and partially rotated inputs.",
"The network trained on the internal dataset performs well on all three datasets. It performs better on IAM-OnDB than the system trained only thereon, but worse for IBM-UB-1. We believe that using only cursive words when training allows the network to better learn the sample characteristics, than when learning about space separation and other structure properties not present in IBM-UB-1."
],
[
"We describe the online handwriting recognition system that we currently use at Google for 102 languages in 26 scripts. The system is based on an end-to-end trained neural network and replaces our old Segment-and-Decode system. Recognition accuracy of the new system improves by 20% to 40% relative depending on the language while using smaller and faster models. We encode the touch inputs using a Bézier curve representation which performs at least as well as raw touch inputs but which also allows for a faster recognition because the input sequence representation is shorter.",
"We further compare the performance of our system to the state of the art on publicly available datasets such as IAM-OnDB, IBM-UB-1, and CASIA and improve over the previous best published result on IAM-OnDB."
]
],
"section_name": [
"Introduction",
"End-to-end Model Architecture",
"Input Representation",
"Bidirectional Long-Short-Term-Memory Recurrent Neural Networks",
"Softmax Layer",
"Decoding",
"Feature Functions: Language Models and Character Classes",
"Training",
"Connectionist Temporal Classification Loss",
"Bayesian Optimization for Tuning Decoder Weights",
"Experimental Evaluation",
"IAM-OnDB",
"IBM-UB-1",
"Additional public datasets",
"Tuning neural network parameters on our internal data",
"System Performance and Discussion",
"Differences Between IAM-OnDB, IBM-UB-1 and our internal datasets",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"cd6aea1f1b35ea50d820594d98f12de4a601e545"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 9 Character error rates on the validation data using successively more of the system components described above for English (en), Spanish (es), German (de), Arabic (ar), Korean (ko), Thai (th), Hindi (hi), and Chinese (zh) along with the respective number of items and characters in the test sets. Average latencies for all languages and models were computed on an Intel Xeon E5-2690 CPU running at 2.6GHz."
],
"extractive_spans": [],
"free_form_answer": "thai",
"highlighted_evidence": [
"FLOAT SELECTED: Table 9 Character error rates on the validation data using successively more of the system components described above for English (en), Spanish (es), German (de), Arabic (ar), Korean (ko), Thai (th), Hindi (hi), and Chinese (zh) along with the respective number of items and characters in the test sets. Average latencies for all languages and models were computed on an Intel Xeon E5-2690 CPU running at 2.6GHz."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"253345e4f8450ad0f5fa8d0c69f87a91fd4df55b"
],
"answer": [
{
"evidence": [
"We present an extensive comparison of the differences in recognition accuracy for eight languages (Sec. SECREF5 ) and compare the accuracy of models trained on publicly available datasets where available (Sec. SECREF4 ). In addition, we propose a new standard experimental protocol for the IBM-UB-1 dataset BIBREF25 (Sec. SECREF50 ) to enable easier comparison between approaches in the future.",
"The IAM-OnDB dataset BIBREF42 is probably the most used evaluation dataset for online handwriting recognition. It consists of 298 523 characters in 86 272 word instances from a dictionary of 11 059 words written by 221 writers. We use the standard IAM-OnDB dataset separation: one training set, two validations sets and a test set containing 5 363, 1 438, 1 518 and 3 859 written lines, respectively. We tune the decoder weights using the validation set with 1 438 items and report error rates on the test set.",
"We provide an evaluation of our production system trained on our in-house datasets applied to a number of publicly available benchmark datasets from the literature. Note that for all experiments presented in this section we evaluate our current live system without any tuning specifec to the tasks at hand.",
"The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45 introduced a dataset for classifying the most common Chinese characters. We report the error rates in comparison to published results from the competition and more recent work done by others in Table TABREF56 .",
"In the ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 , our production system was evaluated against other systems. The system used in the competition is the one reported and described in this paper. Due to licensing restrictions we were unable to do any experiments on the competition training data, or specific tuning for the competition, which was not the case for the other systems mentioned here."
],
"extractive_spans": [
"IBM-UB-1 dataset BIBREF25",
"IAM-OnDB dataset BIBREF42",
"The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45",
"ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50"
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition, we propose a new standard experimental protocol for the IBM-UB-1 dataset BIBREF25 (Sec. SECREF50 ) to enable easier comparison between approaches in the future.",
"The IAM-OnDB dataset BIBREF42 is probably the most used evaluation dataset for online handwriting recognition. It consists of 298 523 characters in 86 272 word instances from a dictionary of 11 059 words written by 221 writers. We use the standard IAM-OnDB dataset separation: one training set, two validations sets and a test set containing 5 363, 1 438, 1 518 and 3 859 written lines, respectively. We tune the decoder weights using the validation set with 1 438 items and report error rates on the test set.",
"We provide an evaluation of our production system trained on our in-house datasets applied to a number of publicly available benchmark datasets from the literature. Note that for all experiments presented in this section we evaluate our current live system without any tuning specifec to the tasks at hand.\n\nThe ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45 introduced a dataset for classifying the most common Chinese characters. We report the error rates in comparison to published results from the competition and more recent work done by others in Table TABREF56 .",
"In the ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 , our production system was evaluated against other systems. The system used in the competition is the one reported and described in this paper. Due to licensing restrictions we were unable to do any experiments on the competition training data, or specific tuning for the competition, which was not the case for the other systems mentioned here."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"Which language has the lowest error rate reduction?",
"What datasets did they use?"
],
"question_id": [
"097ab15f58cb1fce5b5ffb5082b8d7bbee720659",
"b8d5e9fa08247cb4eea835b19377262d86107a9d"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Fig. 1 Example inputs for online handwriting recognition in different languages. See text for details.",
"Table 1 List of languages supported in our system grouped by script.",
"Fig. 2 An overview our recognition models. In our architecture the input representation is passed through one or more bidirectional LSTM layers, and a final softmax layer makes a classification decision for the output at each time step.",
"Fig. 3 Parameterization of each Bézier curve used to feed the network. Namely: vector between the endpoints (blue), distance between the control points and the endpoints (green dashed lines, 2 values), and the two angles between each control point and the endpoints (green arcs, 2 values).",
"Table 2 Comparison of character error rates (lower is better) on the IAM-OnDB test set for different LSTM layers configurations. For each LSTM width and input type, we show the best result in bold.",
"Table 3 Error rates on the IAM-OnDB test set in comparison to the state of the art and our previous system [24]. A \"*\" in the \"system\" column indicates the use of an open training set. \"FF\" stands for \"feature functions\" as described in sec. 2.4.",
"Fig. 4 CER of models trained on the IAM-OnDB dataset with different numbers of LSTM layers and LSTM nodes using raw (left) and curve (right) inputs. Solid lines indicate results without any language models or feature functions in decoding, dashed lines indicate results with the fully-tuned system.",
"Table 4 Error rates on IBM-UB-1 test set in comparison to our previous system [24]. A \"*\" in the \"system\" column indicates the use of an open training set.",
"Fig. 5 CER of models trained on the IBM-UB-1 dataset with different numbers of LSTM layers and LSTM nodes using raw (left) and curve (right) inputs. Solid lines indicate results without any language models or feature functions in decoding, dashed lines indicate results with the fully-tuned system.",
"Table 7 Results on the VNONDB-Line dataset.",
"Table 6 Results on the VNONDB-Word dataset.",
"Table 8 CER comparison when training and evaluating IAMOnDB, IBM-UB-1 and our Latin training/eval set. We want to highlight the fundamental differences between the different datasets.",
"Fig. 6 CER of models trained on our internal datasets evaluated on our English-language validation set with different numbers of LSTM layers and LSTM nodes using raw (left) and curve (right) inputs. Solid lines indicate results without any language models or feature functions in decoding, dashed lines indicate results with the fully-tuned system.",
"Table 9 Character error rates on the validation data using successively more of the system components described above for English (en), Spanish (es), German (de), Arabic (ar), Korean (ko), Thai (th), Hindi (hi), and Chinese (zh) along with the respective number of items and characters in the test sets. Average latencies for all languages and models were computed on an Intel Xeon E5-2690 CPU running at 2.6GHz.",
"Fig. 7 A comparison of the CERs for the LSTM and SD (Segment-and-Decode) system for all languages on our internal test sets with more than 2000 items. The scatter plot shows the ISO language code at a position corresponding to the CER for the SD system (x-axis) and LSTM system (y-axis). Points below the diagonal are improvements of LSTM over SD. The plot also shows the lines of 20% and 40% relative improvement."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure4-1.png",
"8-Table4-1.png",
"9-Figure5-1.png",
"9-Table7-1.png",
"9-Table6-1.png",
"10-Table8-1.png",
"11-Figure6-1.png",
"11-Table9-1.png",
"12-Figure7-1.png"
]
} | [
"Which language has the lowest error rate reduction?"
] | [
[
"1902.10525-11-Table9-1.png"
]
] | [
"thai"
] | 352 |
1912.05238 | BERT has a Moral Compass: Improvements of ethical and moral values of machines | Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? Jentzsch et al.(2019) showed that applying machine learning to human texts can extract deontological ethical reasoning about "right" and "wrong" conduct by calculating a moral bias score on a sentence level using sentence embeddings. The machine learned that it is objectionable to kill living beings, but it is fine to kill time; It is essential to eat, yet one might not eat dirt; it is important to spread information, yet one should not spread misinformation. However, the evaluated moral bias was restricted to simple actions -- one verb -- and a ranking of actions with surrounding context. Recently BERT ---and variants such as RoBERTa and SBERT--- has set a new state-of-the-art performance for a wide range of NLP tasks. But has BERT also a better moral compass? In this paper, we discuss and show that this is indeed the case. Thus, recent improvements of language representations also improve the representation of the underlying ethical and moral values of the machine. We argue that through an advanced semantic representation of text, BERT allows one to get better insights of moral and ethical values implicitly represented in text. This enables the Moral Choice Machine (MCM) to extract more accurate imprints of moral choices and ethical values. | {
"paragraphs": [
[
"There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?",
"Unfortunately, aligning social, ethical, and moral norms to structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions, that are driven by strong emotions of participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse.",
"In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesis that the improvement of language representations models also improves the representation of the underlying ethical and moral values in these models.",
"To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out, whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextual isolated actions are represented by verbs. Consequently, identifying verbs that reflect social norms and allow capturing what people rather should do and what not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15(Appendix). The templates include questions, such as \"Should I kill people?\", \"Is it allowed to murder people?\", etc. with answer templates of \"Yes/no, I should (not).\"",
"Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias.",
"We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results."
],
[
"In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.",
"Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention.",
"With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings.",
"Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language.",
"Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12.",
"Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference."
],
[
"Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.",
"Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger is its cosine similarity expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:",
"where $\\vec{q}$ is the vector representation of the question and $\\vec{a}$ and $\\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$.",
"This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral.",
"The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time."
],
[
"As BIBREF0 (BIBREF0) showed the question/answer template is an appropriate method to extract moral biases. However as BIBREF13 (BIBREF13) showed, one is also able to even adapt the model's bias, e.g. debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.",
"To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and computed its principal components (PCs) and found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand.",
"Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action."
],
[
"This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.",
"Datasets and Embeddings Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages and augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used—$A$ and $B$ in Eq. DISPLAY_FORM18—. The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20).",
"Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positive and most negative associated verbs in vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.",
"Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) lists the most positive and negative associated verbs (in decreasing order).",
"Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.",
"Replicating Atomic Moral Choices. Next, based on the verbs extractions and the question/answer templates, we show that social norms are present in text embeddings and a text embedding network known to achieve high score in unsupervised scenarios —such as semantic textual similarity via cosine-similarity, clustering or semantic search— improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test consistency of findings. It is hypothesised that resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient:",
"where $m_x$ and $m_y$ are the the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\\%$, $1\\%$ and $0.1\\%$, indicated by one, two or three starlets.",
"The correlation between WEAT value and the moral bias gets tangible, when inspecting their correlation graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Fig. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis. The Pearson's Correlation Coefficient using USE as embedding (Top) $r = 0.73$ with $p = 2.3732e^{-16}$ is indicating a significant positive correlation. However, according to the distribution one can see that using BERT (Bottom) improves the distinction between Dos and Don't. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices.",
"Replicating Complex Moral Choices in the Moral Subspace.",
"The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”.",
"First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction.",
"Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. Instead traveling to the United States is appropriate."
],
[
"We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account.",
"Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from an user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this."
],
[
"BIBREF0 (BIBREF0) developed Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value."
],
[
"Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\\vec{w}$ to a word set is defined as the mean cosine similarity between $\\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as",
"A word with representation $\\vec{w}$ that is stronger associated to concept $A$ yields a positive value and representation related to $B$ a negative value."
],
[
"The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15."
],
[
"Tab. TABREF22 lists the most positive associated verbs (in decreasing order).",
"Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, Tab. TABREF23 presents the most negative associated verbs (in decreasing order) we found in our vocabulary.",
"Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. And still others words, as for instance suppurate or rot, appear to be disgusting in the first place. Exculpate is not a bad behaviour per se. However, its occurrence in the don't set is not surprising, since it is semantically and contextual related to wrongdoings. Some of the words are of surprisingly repugnant nature as it was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include few words are rather common as a noun or adjectives, as joy, long, gift or bad. Anyhow, they can also be used as verbs and comply the requirements of being a do or a don't in that function. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale of $-5$ and 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$).",
"When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test statistic yielded values of $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), with again highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentimental rating is completely in line with the allocation of Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positive and negative connoted verbs, respectively, that are reasonable to represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems."
],
[
"The following results were computed with the MCM version of BIBREF0 (BIBREF0) using both USE and BERT as sentence embedding. Specifically, to investigate whether the sentiments of the extracted Dos and Don'ts also hold for more complex sentence level, we inserted them into the question/answer templates of Moral Choice Machine BIBREF0. The resulting moral biases scores/choices are summarized in Tab. TABREF28. It presents the moral biases exemplary for the top ten Dos and Don'ts by WEAT value of both sets. The threshold between the groups is not 0, but slightly shifted negatively (Using USE further shifted than Using BERT). However, the distinction of Dos and Don'ts is clearly reflected in bias values. Using USE the mean bias of all considered elements is $-0.018$ ($std=0.025$), whereat the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT the mean bias of all considered elements is $-0.054$ ($std=0.11$), whereat the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$).",
"Furthermore Tab. TABREF29 shows the resulting moral biases scores/choices for action with additional surrounding context exemplary for the top ten Dos and Don'ts of both sentence embeddings."
],
[
"To create a the moral subspace projection a Principal Component Analysis (PCA) was computed. The used atomic actions are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions which were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score."
]
],
"section_name": [
"Introduction",
"Assumptions and Background",
"Human-like Moral Choices from Human Text",
"Moral Subspace Projection",
"Experimental Results",
"Conclusions",
"Appendix ::: Moral Choice Machine",
"Appendix ::: Implicit Associations in Word Embeddings",
"Appendix ::: Association Sets",
"Appendix ::: Dos and Don’ts for the Moral Choice Machine",
"Appendix ::: Moral Bias of USE and BERT",
"Appendix ::: Moral Subspace Projection"
]
} | {
"answers": [
{
"annotation_id": [
"ed6630cf594af9ecb6ba6ba4e77e543ec347a640"
],
"answer": [
{
"evidence": [
"Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positive and most negative associated verbs in vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.",
"Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.",
"Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"25346c23bda2b07ae27e0f2faaa315eeebf32173"
],
"answer": [
{
"evidence": [
"BIBREF0 (BIBREF0) developed Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value."
],
"extractive_spans": [
"Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF0 (BIBREF0) developed Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"de2f7345f72dbc3367300aefc51821585338092c"
],
"answer": [
{
"evidence": [
"Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger is its cosine similarity expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:",
"where $\\vec{q}$ is the vector representation of the question and $\\vec{a}$ and $\\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (formula 1) bias(q, a, b) = cos(a, q) − cos(b, q)\nBias is calculated as substraction of cosine similarities of question and some answer for two opposite answers.",
"highlighted_evidence": [
"Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger is its cosine similarity expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:\n\nwhere $\\vec{q}$ is the vector representation of the question and $\\vec{a}$ and $\\vec{b}$ the representations of the two answers/choices."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"69545be88b4f5b9812d8c192eb63efe57331e069"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2ddca5b9c6a3d7e0972c36306ea5951fa06de0ca"
],
"answer": [
{
"evidence": [
"With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings."
],
"extractive_spans": [
"These ask which choices are morally required, forbidden, or permitted",
"norms are understood as universal rules of what to do and what not to do"
],
"free_form_answer": "",
"highlighted_evidence": [
"With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What is the Moral Choice Machine?",
"How is moral bias measured?",
"What sentence embeddings were used in the previous Jentzsch paper?",
"How do the authors define deontological ethical reasoning?"
],
"question_id": [
"8de64483ae96c0a03a8e527950582f127b43dceb",
"4d062673b714998800e61f66b6ccbf7eef5be2ac",
"f4238f558d6ddf3849497a130b3a6ad866ff38b3",
"3582fac4b2705db056f75a14949db7b80cbc3197",
"96dcabaa8b6bd89b032da609e709900a1569a0f9"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: BERT has a moral dimension: PCA of its embeddings projected to 2D. The top PC is the x axis, its moral dimension m.",
"Figure 2: Correlation of moral bias score and WEAT Value for general Dos and Don’ts. (Blue line) Correlation, the Pearson’s Correlation Coefficient using USE as embedding (Top) r = 0.73 with p = 2.3732e−16 is indicating a significant positive correlation. However, according to the distribution, one can see that using BERT (Bottom) improves the distinction between Dos and Don’t, and also the Pearson’s Correlation Coefficient r = 0.88 with p = 1.1054e−29 indicates a higher positive correlation.",
"Figure 3: The percentage of variance explained in the PCA of the vector differences (a-b) and the of the action embedding (c-d). If MCM is based on BERT, the top component explains significantly more variance than any other.",
"Figure 4: Context-based actions projected —based on PCA computed by selected atomic actions— along two axes: x (top PC) defines the moral direction m (Left: Dos and right: Don’ts). Compare Tab. 9(Appendix) for detailed moral bias scores.",
"Table 1: Question/Answer template of the Moral Choice Machine.",
"Figure 5: The Moral Choice Machine illustrated for the choice of murdering people and the exemplary question Should I . . . ? from the question template.",
"Table 6: The context-based actions to extract the bias from a moral subspace",
"Table 7: Comparison of MCM with the two different text embeddings USE and BERT on atomic actions. The extracted moral bias scores of the top ten Dos and Don’ts are shown.",
"Table 8: Comparison of MCM with the two different text embeddings USE and BERT on actions with additional surrounding context. The extracted moral bias scores of the top ten Dos and Don’ts are shown.",
"Table 9: Resulting moral direction m using the moral subspace projection. All tested atomic and context based actions are listed. m < 0 corresponds to a positive moral score and m > 0 corresponds to a negative moral score. The visualization based on the first two top PCs, using BERT as sentence embedding, can be found in Fig.1 and Fig.4."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"8-Table1-1.png",
"8-Figure5-1.png",
"10-Table6-1.png",
"11-Table7-1.png",
"11-Table8-1.png",
"12-Table9-1.png"
]
} | [
"How is moral bias measured?"
] | [
[
"1912.05238-Human-like Moral Choices from Human Text-1",
"1912.05238-Human-like Moral Choices from Human Text-2"
]
] | [
"Answer with content missing: (formula 1) bias(q, a, b) = cos(a, q) − cos(b, q)\nBias is calculated as substraction of cosine similarities of question and some answer for two opposite answers."
] | 353 |
2002.11268 | A Density Ratio Approach to Language Model Fusion in End-to-End Automatic Speech Recognition | This article describes a density ratio approach to integrating external Language Models (LMs) into end-to-end models for Automatic Speech Recognition (ASR). Applied to a Recurrent Neural Network Transducer (RNN-T) ASR model trained on a given domain, a matched in-domain RNN-LM, and a target domain RNN-LM, the proposed method uses Bayes' Rule to define RNN-T posteriors for the target domain, in a manner directly analogous to the classic hybrid model for ASR based on Deep Neural Networks (DNNs) or LSTMs in the Hidden Markov Model (HMM) framework (Bourlard & Morgan, 1994). The proposed approach is evaluated in cross-domain and limited-data scenarios, for which a significant amount of target domain text data is used for LM training, but only limited (or no) {audio, transcript} training data pairs are used to train the RNN-T. Specifically, an RNN-T model trained on paired audio & transcript data from YouTube is evaluated for its ability to generalize to Voice Search data. The Density Ratio method was found to consistently outperform the dominant approach to LM and end-to-end ASR integration, Shallow Fusion. | {
"paragraphs": [
[
"End-to-end models such as Listen, Attend & Spell (LAS) BIBREF0 or the Recurrent Neural Network Transducer (RNN-T) BIBREF1 are sequence models that directly define $P(W | X)$, the posterior probability of the word or subword sequence $W$ given an audio frame sequence $X$, with no chaining of sub-module probabilities. State-of-the-art, or near state-of-the-art results have been reported for these models on challenging tasks BIBREF2, BIBREF3.",
"End-to-end ASR models in essence do not include independently trained symbols-only or acoustics-only sub-components. As such, they do not provide a clear role for language models $P(W)$ trained only on text/transcript data. There are, however, many situations where we would like to use a separate LM to complement or modify a given ASR system. In particular, no matter how plentiful the paired {audio, transcript} training data, there are typically orders of magnitude more text-only data available. There are also many practical applications of ASR where we wish to adapt the language model, e.g., biasing the recognition grammar towards a list of specific words or phrases for a specific context.",
"The research community has been keenly aware of the importance of this issue, and has responded with a number of approaches, under the rubric of “Fusion”. The most popular of these is “Shallow Fusion” BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, which is simple log-linear interpolation between the scores from the end-to-end model and the separately-trained LM. More structured approaches, “Deep Fusion” BIBREF9, “Cold Fusion” BIBREF10 and “Component Fusion” BIBREF11 jointly train an end-to-end model with a pre-trained LM, with the goal of learning the optimal combination of the two, aided by gating mechanisms applied to the set of joint scores. These methods have not replaced the simple Shallow Fusion method as the go-to method in most of the ASR community. Part of the appeal of Shallow Fusion is that it does not require model retraining – it can be applied purely at decoding time. The Density Ratio approach proposed here can be seen as an extension of Shallow Fusion, sharing some of its simplicity and practicality, but offering a theoretical grounding in Bayes' rule.",
"After describing the historical context, theory and practical implementation of the proposed Density Ratio method, this article describes experiments comparing the method to Shallow Fusion in a cross-domain scenario. An RNN-T model was trained on large-scale speech data with semi-supervised transcripts from YouTube videos, and then evaluated on data from a live Voice Search service, using an RNN-LM trained on Voice Search transcripts to try to boost performance. Then, exploring the transition between cross-domain and in-domain, limited amounts of Voice Search speech data were used to fine-tune the YouTube-trained RNN-T model, followed by LM fusion via both the Density Ratio method and Shallow Fusion. The ratio method was found to produce consistent gains over Shallow Fusion in all scenarios examined."
],
[
"Generative models and Bayes' rule. The Noisy Channel Model underlying the origins of statistical ASR BIBREF12 used Bayes' rule to combine generative models of both the acoustics $p(X|W)$ and the symbol sequence $P(W)$:",
"for an acoustic feature vector sequence $X = {\\mbox{\\bf x}}_1, ..., {\\mbox{\\bf x}}_T$ and a word or sub-word sequence $W = s_1, ..., s_U$ with possible time alignments $S_W = \\lbrace ..., {\\bf s}, ...\\rbrace $. ASR decoding then uses the posterior probability $P(W|X)$. A prior $p({\\bf s}| W)$ on alignments can be implemented e.g. via a simple 1st-order state transition model. Though lacking in discriminative power, the paradigm provides a clear theoretical framework for decoupling the acoustic model (AM) $p(X|W)$ and LM $P(W)$.",
"Hybrid model for DNNs/LSTMs within original ASR framework. The advent of highly discriminative Deep Neural Networks (DNNs) BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17 and Long Short Term Memory models (LSTMs) BIBREF18, BIBREF19 posed a challenge to the original Noisy Channel Model, as they produce phoneme- or state- level posteriors $P({\\bf s}(t) | {\\mbox{\\bf x}}_t)$, not acoustic likelihoods $p({\\mbox{\\bf x}}_t | {\\bf s}(t))$. The “hybrid” model BIBREF20 proposed the use of scaled likelihoods, i.e. posteriors divided by separately estimated state priors $P(w)$. For bidirectional LSTMs, the scaled-likelihood over a particular alignment ${\\bf s}$ is taken to be",
"using $k(X)$ to represent a $p(X)$-dependent term shared by all hypotheses $W$, that does not affect decoding. This “pseudo-generative” score can then be plugged into the original model of Eq. (DISPLAY_FORM2) and used for ASR decoding with an arbitrary LM $P(W)$. For much of the ASR community, this approach still constitutes the state-of-the-art BIBREF2, BIBREF21, BIBREF22.",
"Shallow Fusion. The most popular approach to LM incorporation for end-to-end ASR is a linear interpolation,",
"with no claim to direct interpretability according to probability theory, and often a reward for sequence length $|W|$, scaled by a factor $\\beta $ BIBREF5, BIBREF7, BIBREF8, BIBREF23."
],
[
"The model makes the following assumptions:",
"The source domain $\\psi $ has some true joint distribution $P_{\\psi }(W, X)$ over text and audio;",
"The target domain $\\tau $ has some other true joint distribution $P_{\\tau }(W, X)$;",
"A source domain end-to-end model (e.g. RNN-T) captures $P_{\\psi }(W | X)$ reasonably well;",
"Separately trained LMs (e.g. RNN-LMs) capture $P_{\\psi }(W)$ and $P_{\\tau }(W)$ reasonably well;",
"$p_{\\psi }(X | W)$ is roughly equal to $p_{\\tau }(X | W)$, i.e. the two domains are acoustically consistent; and",
"The target domain posterior, $P_{\\tau }(W | X)$, is unknown.",
"The starting point for the proposed Density Ratio Method is then to express a “hybrid” scaled acoustic likelihood for the source domain, in a manner paralleling the original hybrid model BIBREF20:",
"Similarly, for the target domain:",
"Given the stated assumptions, one can then estimate the target domain posterior as:",
"with $k(X) = p_{\\psi }(X) / p_{\\tau }(X)$ shared by all hypotheses $W$, and the ratio $P_{\\tau }(W) / {P_{\\psi }(W)}$ (really a probablity mass ratio) giving the proposed method its name.",
"In essence, this model is just an application of Bayes' rule to end-to-end models and separate LMs. The approach can be viewed as the sequence-level version of the classic hybrid model BIBREF20. Similar use of Bayes' rule to combine ASR scores with RNN-LMs has been described elsewhere, e.g. in work connecting grapheme-level outputs with word-level LMs BIBREF6, BIBREF24, BIBREF25. However, to our knowledge this approach has not been applied to end-to-end models in cross-domain settings, where one wishes to leverage a language model from the target domain. For a perspective on a “pure” (non-hybrid) deep generative approach to ASR, see BIBREF26."
],
[
"The RNN Transducer (RNN-T) BIBREF1 defines a sequence-level posterior $P(W|X)$ for a given acoustic feature vector sequence $X = {\\mbox{\\bf x}}_1, ..., {\\mbox{\\bf x}}_T$ and a given word or sub-word sequence $W = s_1, ..., s_U$ in terms of possible alignments $S_W = \\lbrace ..., ({\\bf s}, {\\bf t}), ... \\rbrace $ of $W$ to $X$. The tuple $({\\bf s}, {\\bf t})$ denotes a specific alignment sequence, a symbol sequence and corresponding sequence of time indices, consistent with the sequence $W$ and utterance $X$. The symbols in ${\\bf s}$ are elements of an expanded symbol space that includes optional, repeatable blank symbols used to represent acoustics-only path extensions, where the time index is incremented, but no non-blank symbols are added. Conversely, non-blank symbols are only added to a partial path time-synchronously. (I.e., using $i$ to index elements of ${\\bf s}$ and ${\\bf t}$, $t_{i+1} = t_i + 1$ if $s_{i+1}$ is blank, and $t_{i + 1} = t_i$ if $s_{i+1}$ is non-blank). $P(W|X)$ is defined by summing over alignment posteriors:",
"Finally, $P(s_{i+1} | X, t_i, s_{1:i})$ is defined using an LSTM-based acoustic encoder with input $X$, an LSTM-based label encoder with non-blank inputs $s$, and a feed-forward joint network combining outputs from the two encoders to produce predictions for all symbols $s$, including the blank symbol.",
"The Forward-Backward algorithm can be used to calculate Eq. (DISPLAY_FORM16) efficiently during training, and Viterbi-based beam search (based on the argmax over possible alignments) can be used for decoding when $W$ is unknown BIBREF1, BIBREF27."
],
[
"Shallow Fusion (Eq. (DISPLAY_FORM4)) can be implemented in RNN-T for each time-synchronous non-blank symbol path extension. The LM score corresponding to the same symbol extension can be “fused” into the log-domain score used for decoding:",
"This is only done when the hypothesized path extension $s_{i+1}$ is a non-blank symbol; the decoding score for blank symbol path extensions is the unmodified $\\log P(s_{i+1} | X, t_i, s_{1:i})$."
],
[
"Eq. (DISPLAY_FORM14) can be implemented via an estimated RNN-T “pseudo-posterior”, when $s_{i+1}$ is a non-blank symbol:",
"This estimate is not normalized over symbol outputs, but it plugs into Eq. () and Eq. (DISPLAY_FORM16) to implement the RNN-T version of Eq. (DISPLAY_FORM14). In practice, scaling factors $\\lambda _\\psi $ and $\\lambda _\\tau $ on the LM scores, and a non-blank reward $\\beta $, are used in the final decoding score:"
],
[
"The ratio method is very simple to implement. The procedure is essentially to:",
"Train an end-to-end model such as RNN-T on a given source domain training set $\\psi $ (paired audio/transcript data);",
"Train a neural LM such as RNN-LM on text transcripts from the same training set $\\psi $;",
"Train a second RNN-LM on the target domain $\\tau $;",
"When decoding on the target domain, modify the RNN-T output by the ratio of target/training RNN-LMs, as defined in Eq. (DISPLAY_FORM21), and illustrated in Fig. FIGREF1.",
"The method is purely a decode-time method; no joint training is involved, but it does require tuning of the LM scaling factor(s) (as does Shallow Fusion). A held-out set can be used for that purpose."
],
[
"The following data sources were used to train the RNN-T and associated RNN-LMs in this study.",
"Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.",
"Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).",
"Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.",
"Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively."
],
[
"The following data sources were used to choose scaling factors and/or evaluate the final model performance.",
"Source-domain Eval Set (YouTube). The in-domain performance of the YouTube-trained RNN-T baseline was measured on speech data taken from Preferred Channels on YouTube BIBREF29. The test set is taken from 296 videos from 13 categories, with each video averaging 5 minutes in length, corresponding to 25 hours of audio and 250,000 word tokens in total.",
"Target-domain Dev & Eval sets (Voice Search). The Voice Search dev and eval sets each consist of approximately 7,500 anonymized utterances (about 33,000 words and corresponding to about 8 hours of audio), distinct from the fine-tuning data described earlier, but representative of the same Voice Search service."
],
[
"The first set of experiments uses an RNN-T model trained on {audio, transcript} pairs taken from segmented YouTube videos, and evaluates the cross-domain generalization of this model to test utterances taken from a Voice Search dataset, with and without fusion to an external LM."
],
[
"The overall structure of the models used here is as follows:",
"",
"RNN-T:",
"Acoustic features: 768-dimensional feature vectors obtained from 3 stacked 256-dimensional logmel feature vectors, extracted every 20 msec from 16 kHz waveforms, and sub-sampled with a stride of 3, for an effective final feature vector step size of 60 msec.",
"Acoustic encoder: 6 LSTM layers x (2048 units with 1024-dimensional projection); bidirectional.",
"Label encoder (aka “decoder” in end-to-end ASR jargon): 1 LSTM layer x (2048 units with 1024-dimensional projection).",
"RNN-T joint network hidden dimension size: 1024.",
"Output classes: 10,000 sub-word “morph” units BIBREF30 , input via a 512-dimensional embedding.",
"Total number of parameters: approximately 340M",
"RNN-LMs for both source and target domains were set to match the RNN-T decoder structure and size:",
"1 layer x (2048 units with 1024-dimensional projection).",
"Output classes: 10,000 morphs (same as the RNN-T).",
"Total number of parameters: approximately 30M.",
"The RNN-T and the RNN-LMs were independently trained on 128-core tensor processing units (TPUs) using full unrolling and an effective batch size of 4096. All models were trained using the Adam optimization method BIBREF31 for 100K-125K steps, corresponding to about 4 passes over the 120M utterance YouTube training set, and 20 passes over the 21M utterance Voice Search training set. The trained RNN-LM perplexities (shown in Table TABREF28) show the benefit to Voice Search test perplexity of training on Voice Search transcripts."
],
[
"In the first set of experiments, the constraint $\\lambda _\\psi = \\lambda _\\tau $ was used to simplify the search for the LM scaling factor in Eq. DISPLAY_FORM21. Fig. FIGREF40 and Fig. FIGREF41 illustrate the different relative sensitivities of WER to the LM scaling factor(s) for Shallow Fusion and the Density Ratio method, as well as the effect of the RNN-T sequence length scaling factor, measured on the dev set.",
"The LM scaling factor affects the relative value of the symbols-only LM score vs. that of the acoustics-aware RNN-T score. This typically alters the balance of insertion vs. deletion errors. In turn, this effect can be offset (or amplified) by the sequence length scaling factor $\\beta $ in Eq. (DISPLAY_FORM4), in the case of RNN-T, implemented as a non-blank symbol emission reward. (The blank symbol only consumes acoustic frames, not LM symbols BIBREF1). Given that both factors have related effects on overall WER, the LM scaling factor(s) and the sequence length scaling factor need to be tuned jointly.",
"Fig. FIGREF40 and Fig. FIGREF41 illustrate the different relative sensitivities of WER to these factors for Shallow Fusion and the Density Ratio method, measured on the dev set.",
"In the second set of experiments, $\\beta $ was fixed at -0.1, but the constraint $\\lambda _\\psi = \\lambda _\\tau $ was lifted, and a range of combinations was evaluated on the dev set. The results are shown in Fig. FIGREF43. The shading in Figs. FIGREF40, FIGREF41 and FIGREF43 uses the same midpoint value of 15.0 to highlight the results.",
"The best combinations of scaling factors from the dev set evaluations (see Fig. FIGREF40, Fig. FIGREF41 and Fig. FIGREF43) were used to generate the final eval set results, WERs and associated deletion, insertion and substitution rates, shown in Table TABREF44. These results are summarized in Table TABREF45, this time showing the exact values of LM scaling factor(s) used."
],
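As a reference for how the tuned factors enter decoding, the following is a schematic log-domain hypothesis scorer rather than the paper's exact Eq. DISPLAY_FORM21 (which is only referenced by label here): lam_tau scales the target-domain RNN-LM that is added, lam_psi scales the source-domain "normalizing" RNN-LM that is subtracted, and beta acts as a non-blank symbol emission reward. Function and argument names are illustrative.

```python
def shallow_fusion_score(log_p_rnnt, log_p_target_lm, n_nonblank, lam, beta):
    """Shallow Fusion: RNN-T log-score plus a scaled target-domain LM score
    and a per-non-blank-symbol emission reward."""
    return log_p_rnnt + lam * log_p_target_lm + beta * n_nonblank


def density_ratio_score(log_p_rnnt, log_p_source_lm, log_p_target_lm,
                        n_nonblank, lam_psi, lam_tau, beta):
    """Density Ratio: additionally subtract the scaled source-domain
    ('normalizing') LM score before adding the scaled target-domain LM score."""
    return (log_p_rnnt
            - lam_psi * log_p_source_lm
            + lam_tau * log_p_target_lm
            + beta * n_nonblank)
```

Setting lam_psi equal to lam_tau recovers the constrained search used in the first set of experiments; the second set sweeps the two factors independently with beta fixed at -0.1.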
[
"The experiments in Section SECREF5 showed that an LM trained on text from the target Voice Search domain can boost the cross-domain performance of an RNN-T. The next experiments examined fine-tuning the original YouTube-trained RNN-T on varied, limited amounts of Voice Search {audio, transcript} data. After fine-tuning, LM fusion was applied, again comparing Shallow Fusion and the Density Ratio method.",
"Fine-tuning simply uses the YouTube-trained RNN-T model to warm-start training on the limited Voice Search {audio, transcript} data. This is an effective way of leveraging the limited Voice Search audio data: within a few thousand steps, the fine-tuned model reaches a decent level of performance on the fine-tuning task – though beyond that, it over-trains. A held-out set can be used to gauge over-training and stop training for varying amounts of fine-tuning data.",
"The experiments here fine-tuned the YouTube-trained RNN-T baseline using 10 hours, 100 hours and 1000 hours of Voice Search data, as described in Section SECREF27. (The source domain RNN-LM was not fine-tuned). For each fine-tuned model, Shallow Fusion and the Density Ratio method were used to evaluate incorporation of the Voice Search RNN-LM, described in Section SECREF5, trained on text transcripts from the much larger set of 21M Voice Search utterances. As in Section SECREF5, the dev set was used to tune the LM scaling factor(s) and the sequence length scaling factor $\\beta $. To ease parameter tuning, the constraint $\\lambda _\\psi = \\lambda _\\tau $ was used for the Density Ratio method. The best combinations of scaling factors from the dev set were then used to generate the final eval results, which are shown in Table TABREF45"
],
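The warm-start-and-monitor loop described above can be sketched as follows. Every helper here (load_checkpoint, train_step, wer_on_heldout) is a hypothetical placeholder for the corresponding piece of the actual training stack, and the step counts are illustrative.

```python
def fine_tune(load_checkpoint, batches, train_step, wer_on_heldout,
              eval_every=1000, max_steps=20000):
    """Warm-start from the YouTube-trained RNN-T and fine-tune on the limited
    Voice Search {audio, transcript} pairs, keeping the model with the best
    held-out WER and stopping once held-out WER starts to degrade."""
    model = load_checkpoint()
    best_model, best_wer = model, float("inf")
    for step, batch in enumerate(batches):
        model = train_step(model, batch)
        if (step + 1) % eval_every == 0:
            wer = wer_on_heldout(model)
            if wer < best_wer:
                best_model, best_wer = model, wer
            else:
                break                    # over-training: stop early
        if step + 1 >= max_steps:
            break
    return best_model
```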
[
"The experiments described here examined the generalization of a YouTube-trained end-to-end RNN-T model to Voice Search speech data, using varying quantities (from zero to 100%) of Voice Search audio data, and 100% of the available Voice Search text data. The results show that in spite of the vast range of acoustic and linguistic patterns covered by the YouTube-trained model, it is still possible to improve performance on Voice Search utterances significantly via Voice Search specific fine-tuning and LM fusion. In particular, LM fusion significantly boosts performance when only a limited quantity of Voice Search fine-tuning data is used.",
"The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario.",
"Notably, the “sweet spot” of effective combinations of LM scaling factor and sequence length scaling factor is significantly larger for the Density Ratio method than for Shallow Fusion (see Fig. FIGREF40 and Fig. FIGREF41). Compared to Shallow Fusion, larger absolute values of the scaling factor can be used.",
"A full sweep of the LM scaling factors ($\\lambda _\\psi $ and $\\lambda _\\tau $) can improve over the constrained setting $\\lambda _\\psi = \\lambda _\\tau $, though not by much. Fig. FIGREF43 shows that the optimal setting of the two factors follows a roughly linear pattern along an off-diagonal band.",
"Fine-tuning using transcribed Voice Search audio data leads to a large boost in performance over the YouTube-trained baseline. Nonetheless, both fusion methods give gains on top of fine-tuning, especially for the limited quantities of fine-tuning data. With 10 hours of fine-tuning, the Density Ratio method gives a 20% relative gain in WER, compared to 12% relative for Shallow Fusion. For 1000 hours of fine-tuning data, the Density Ratio method gives a 10.5% relative gave over the fine-tuned baseline, compared to 7% relative for Shallow Fusion. Even for 21,000 hours of fine-tuning data, i.e. the entire Voice Search training set, the Density Ratio method gives an added boost, from 7.8% to 7.4% WER, a 5% relative improvement.",
"A clear weakness of the proposed method is the apparent need for scaling factors on the LM outputs. In addition to the assumptions made (outlined in Section SECREF5), it is possible that this is due to the implicit LM in the RNN-T being more limited than the RNN-LMs used."
],
[
"This article proposed and evaluated experimentally an alternative to Shallow Fusion for incorporation of an external LM into an end-to-end RNN-T model applied to a target domain different from the source domain it was trained on. The Density Ratio method is simple conceptually, easy to implement, and grounded in Bayes' rule, extending the classic hybrid ASR model to end-to-end models. In contrast, the most commonly reported approach to LM incorporation, Shallow Fusion, has no clear interpretation from probability theory. Evaluated on a YouTube $\\rightarrow $ Voice Search cross-domain scenario, the method was found to be effective, with up to 28% relative gains in word error over the non-fused baseline, and consistently outperforming Shallow Fusion by a significant margin. The method continues to produce gains when fine-tuning to paired target domain data, though the gains diminish as more fine-tuning data is used. Evaluation using a variety of cross-domain evaluation scenarios is needed to establish the general effectiveness of the method."
],
[
"The authors thank Matt Shannon and Khe Chai Sim for valuable feedback regarding this work."
]
],
"section_name": [
"Introduction",
"A Brief History of Language Model incorporation in ASR",
"Language Model incorporation into End-to-end ASR, using Bayes' rule ::: A Sequence-level Hybrid Pseudo-Generative Model",
"Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Top-down fundamentals of RNN-T",
"Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of Shallow Fusion to RNN-T",
"Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of the Density Ratio Method to RNN-T",
"Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Implementation",
"Training, development and evaluation data ::: Training data",
"Training, development and evaluation data ::: Dev and Eval Sets",
"Cross-domain evaluation: YouTube-trained RNN-T @!START@$\\rightarrow $@!END@ Voice Search",
"Cross-domain evaluation: YouTube-trained RNN-T @!START@$\\rightarrow $@!END@ Voice Search ::: RNN-T and RNN-LM model settings",
"Cross-domain evaluation: YouTube-trained RNN-T @!START@$\\rightarrow $@!END@ Voice Search ::: Experiments and results",
"Fine-tuning a YouTube-trained RNN-T using limited Voice Search audio data",
"Discussion",
"Summary",
"Summary ::: Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2c01c2f087320332635fe50162452442ce58ef42"
],
"answer": [
{
"evidence": [
"The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario."
],
"extractive_spans": [],
"free_form_answer": "word error rate",
"highlighted_evidence": [
"Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"2568fbda050e7911894f7be13933236476af53fa"
],
"answer": [
{
"evidence": [
"The following data sources were used to train the RNN-T and associated RNN-LMs in this study.",
"Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.",
"Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).",
"Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.",
"Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively."
],
"extractive_spans": [],
"free_form_answer": "163,110,000 utterances",
"highlighted_evidence": [
"The following data sources were used to train the RNN-T and associated RNN-LMs in this study.\n\nSource-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.\n\nSource-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).\n\nTarget-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.\n\nTarget-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"3b83d7305865f8f702042f258a9c5c82cd8291a9"
],
"answer": [
{
"evidence": [
"The following data sources were used to train the RNN-T and associated RNN-LMs in this study.",
"Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.",
"Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).",
"Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.",
"Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively."
],
"extractive_spans": [
"from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering",
"from a Voice Search service"
],
"free_form_answer": "",
"highlighted_evidence": [
"The following data sources were used to train the RNN-T and associated RNN-LMs in this study.\n\nSource-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.\n\nSource-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).\n\nTarget-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.\n\nTarget-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"472de8abc791a1ef89f3a0b9b9adadccfaaf5d29"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What metrics are used for evaluation?",
"How much training data is used?",
"How is the training data collected?",
"What language(s) is the model trained/tested on?"
],
"question_id": [
"9ae084e76095194135cd602b2cdb5fb53f2935c1",
"67ee7a53aa57ce0d0bc1a20d41b64cb20303f4b7",
"7eb3852677e9d1fb25327ba014d2ed292184210c",
"4f9a8b50903deb1850aee09c95d1b6204a7410b4"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
],
"search_query": [
"markov model",
"markov model",
"markov model",
"markov model"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Estimating a target domain pseudo-posterior via combination of source domain RNN-T, source domain RNN-LM, and target domain RNN-LM.",
"Fig. 2. Dev set WERs for Shallow Fusion LM scaling factor λ vs. sequence length scaling factor β.",
"Table 1. Training set size and test set perplexity for the morph-level RNN-LMs (training domain → testing domain) used in this study.",
"Fig. 3. Dev set WERs for Density Ratio LM scaling factor λ vs. sequence length scaling factor β. Here λ = λψ = λτ .",
"Fig. 4. Dev set WERs for different combinations of λτ and λψ; sequence length scaling factor β = −0.1",
"Table 2. In-domain and target domain performance of a YouTube-trained RNN-T, evaluated with and without fusion to a Voice Search LM (and normalizing YouTube LM in the case of the Density Ratio method).",
"Table 3. Fine tuning the YouTube-trained RNN-T baseline to the voice search target domain for different quantities of Voice Search fine-tuning data, evaluated with and without LM fusion on Voice Search test utterances. (Results for the “no fine-tuning” baseline carried over from Table 2)."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"4-Table1-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"5-Table2-1.png",
"6-Table3-1.png"
]
} | [
"What metrics are used for evaluation?",
"How much training data is used?"
] | [
[
"2002.11268-Discussion-1"
],
[
"2002.11268-Training, development and evaluation data ::: Training data-2",
"2002.11268-Training, development and evaluation data ::: Training data-4",
"2002.11268-Training, development and evaluation data ::: Training data-1",
"2002.11268-Training, development and evaluation data ::: Training data-3",
"2002.11268-Training, development and evaluation data ::: Training data-0"
]
] | [
"word error rate",
"163,110,000 utterances"
] | 356 |
1905.13497 | Attention Is (not) All You Need for Commonsense Reasoning | The recently introduced BERT model exhibits strong performance on several language understanding benchmarks. In this paper, we describe a simple re-implementation of BERT for commonsense reasoning. We show that the attentions produced by BERT can be directly utilized for tasks such as the Pronoun Disambiguation Problem and Winograd Schema Challenge. Our proposed attention-guided commonsense reasoning method is conceptually simple yet empirically powerful. Experimental analysis on multiple datasets demonstrates that our proposed system performs remarkably well on all cases while outperforming the previously reported state of the art by a margin. While results suggest that BERT seems to implicitly learn to establish complex relationships between entities, solving commonsense reasoning tasks might require more than unsupervised models learned from huge text corpora. | {
"paragraphs": [
[
"Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .",
"Below is a popular example from the binary-choice pronoun coreference problem BIBREF5 of WSC:",
"Sentence: The trophy doesn't fit in the suitcase because it is too small.",
"Answers: A) the trophy B) the suitcase",
"Humans resolve the pronoun “it” to “the suitcase” with no difficulty, whereas a system without commonsense reasoning would be unable to distinguish “the suitcase” from the otherwise viable candidate, “the trophy”.",
"Previous attempts at solving WSC usually involve heavy utilization of annotated knowledge bases (KB), rule-based reasoning, or hand-crafted features BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . There are also some empirical works towards solving WSC making use of learning BIBREF11 , BIBREF12 , BIBREF1 . Recently, BIBREF13 proposed to use a language model (LM) to score the two sentences obtained when replacing the pronoun by the two candidates. The sentence that is assigned higher probability under the model designates the chosen candidate. Probability is calculated via the chain rule, as the product of the probabilities assigned to each word in the sentence. Very recently, BIBREF14 proposed the knowledge hunting method, which is a rule-based system that uses search engines to gather evidence for the candidate resolutions without relying on the entities themselves. Although these methods are interesting, they need fine-tuning, or explicit substitution or heuristic-based rules. See also BIBREF15 for a discussion.",
"The BERT model is based on the “Transformer” architecture BIBREF16 , which relies purely on attention mechanisms, and does not have an explicit notion of word order beyond marking each word with its absolute-position embedding. This reliance on attention may lead one to expect decreased performance on commonsense reasoning tasks BIBREF17 , BIBREF18 compared to RNN (LSTM) models BIBREF19 that do model word order directly, and explicitly track states across the sentence. However, the work of BIBREF20 suggests that bidirectional language models such as BERT implicitly capture some notion of coreference resolution.",
"In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for the sake of commonsense reasoning tasks while achieving state-of-the-art results on the multiple task. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. As of today, state-of-the-art accuracy on the WSC-273 for single model performance is around 57%, BIBREF14 and BIBREF13 . These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora."
],
[
"In this section we first review the main aspects of the BERT approach, which are important to understand our proposal and we introduce notations used in the rest of the paper. Then, we introduce Maximum Attention Score (MAS), and explain how it can be utilized for commonsense reasoning."
],
[
"The concept of BERT is built upon two key ingredients: (a) the transformer architecture and (b) unsupervised pre-training.",
"The transformer architecture consists of two main building blocks, stacked encoders and decoders, which are connected in a cascaded fashion. The encoder is further divided into two components, namely a self-attention layer and a feed-forward neural network. The self-attention allows for attending to specific words during encoding and therefore establishing a focus context w.r.t. to each word. In contrast to that, the decoder has an additional encoder-decoder layer that switches between self-attention and a feed-forward network. It allows the decoder to attend to specific parts of the input sequence. As attention allows for establishing a relationship between words, it is very important for tasks such as coreference resolution and finding associations. In the specific context of pronouns, attention gives rise to links to $m$ candidate nouns, which we denote in the following as $\\mathcal {C}=\\left\\lbrace c_1,..,c_m\\right\\rbrace $ . The concept of self-attention is further expanded within BERT by the idea of so called multi-head outputs that are incorporated in each layer. In the following, we will denote heads and layers with $h\\in H$ and $l\\in L$ , respectively. Multi-heads serve several purposes. On the one hand, they allow for dispersing the focus on multiple positions. On the other hand, they constitute an enriched representation by expanding the embedding space. Leveraging the nearly unlimited amount of data available, BERT learns two novel unsupervised prediction tasks during training. One of the tasks is to predict tokens that were randomly masked given the context, notably with the context being established in a bi-directional manner. The second task constitutes next sentence prediction, whereby BERT learns the relationship between two sentences, and classifies whether they are consecutive."
],
[
"In order to exploit the associative leverage of self-attention, the computation of MAS follows the notion of max-pooling on attention level between a reference word $s$ (e.g. pronoun) and candidate words $c$ (e.g. multiple choice pronouns). The proposed approach takes as input the BERT attention tensor and produces for each candidate word a score, which indicates the strength of association. To this end, the BERT attention tensor $A\\in \\mathbb {R}^{H\\times L \\times \\mid \\mathcal {C}\\mid }$ is sliced into several matrices $A_c\\in \\mathbb {R}^{H\\times L}$ , each of them corresponding to the attention between the reference word and a candidate $c$ . Each $A_c$ is associated with a binary mask matrix $M_c$ . The mask values of $M_c$ are obtained at each location tuple $\\left(l,h\\right)$ , according to: ",
"$$M_{c}(l,h)=\n\\begin{dcases}\n1 & \\operatornamewithlimits{argmax}A(l,h)=c \\\\\n0 & \\text{otherwise} \\\\\n\\end{dcases}$$ (Eq. 7) ",
"Mask entries are non-zero only at locations where the candidate word $c$ is associated with maximum attention. Limiting the impact of attention by masking allows to accommodate for the most salient parts. Given the $A_c$ and $M_c$ matrix pair for each candidate $c$ , the MAS can be computed. For this purpose, the sum of the Hadamard product for each pair is calculated first. Next, the actual score is obtained by computing the ratio of each Hadamard sum w.r.t. all others according to, ",
"$$MAS(c)=\\frac{\\sum _{l,h}A_c \\circ M_c }{\\sum _{c \\in \\mathcal {C}} \\sum _{l,h}A_c \\circ M_c} \\in \\left[0,1\\right].$$ (Eq. 8) ",
"Thus MAS retains the attention of each candidate only where it is most dominant, coupling it with the notion of frequency of occurrence to weight the importance. See Fig. 1 for a schematic illustration of the computation of MAS, and the matrices involved."
],
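The MAS definition in eqns. (7)-(8) boils down to a masked sum over the attention tensor. The NumPy sketch below assumes the attentions from the reference word (the pronoun) to the candidates have already been gathered into a single (heads x layers x candidates) array; extracting that array from a particular BERT implementation is not shown, and the indexing order is an implementation choice.

```python
import numpy as np

def maximum_attention_score(A):
    """A[h, l, c]: attention from the reference word to candidate c at head h
    and layer l.  Returns MAS(c) for every candidate; the values sum to 1."""
    H, L, C = A.shape
    winner = A.argmax(axis=-1)                  # candidate with max attention at (h, l)
    scores = np.zeros(C)
    for c in range(C):
        M_c = (winner == c)                     # binary mask of eqn. (7)
        scores[c] = (A[:, :, c] * M_c).sum()    # sum of the Hadamard product A_c o M_c
    return scores / scores.sum()                # normalization of eqn. (8)
```

The candidate with the largest MAS is then taken as the resolver's answer.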
[
"We evaluate our method on two commonsense reasoning tasks, PDP and WSC.",
"On the former task, we use the original set of 60 questions (PDP-60) as the main benchmark. The second task (WSC-273) is qualitatively much more difficult. The recent best reported result are not much above random guess. This task consists of 273 questions and is designed to work against traditional linguistic techniques, common heuristics or simple statistical tests over text corpora BIBREF4 ."
],
[
"In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning. Specifically, we use the PyTorch implementation of pre-trained $bert-base-uncased$ models supplied by Google. This model has 12 layers (i.e., Transformer blocks), a hidden size of 768, and 12 self-attention heads. In all cases we set the feed-forward/filter size to be 3072 for the hidden size of 768. The total number of parameters of the model is 110M."
],
[
"We first examine our method on PDP-60 for the Pronoun Disambiguation task. In Tab. 1 (top), our method outperforms all previous unsupervised results sharply. Next, we allow other systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. As reported in Tab. 1 (bottom), our method outperforms the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 68.3% accuracy, better than the more recently reported results from BIBREF24 (66.7%), who makes use of three KBs and a supervised deep network."
],
[
"On the harder task WSC-273, our method also outperforms the current state-of-the-art, as shown in Tab. 2. Namely, our method achieves an accuracy of 60.3%, nearly 3% of accuracy above the previous best result. This is a drastic improvement considering the best system based on language models outperforms random guess by only 4% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems are much lower than that of PDP-60. Second, incorporating supervised learning and expensive annotated KBs to USSM provides insignificant gain this time (+3%), comparing to the large gain on PDP-60 (+19%). Finally, for the sake of completeness, BIBREF13 report that their single language model trained on a customized dataset built from CommonCrawl based on questions used in comonsense reasoning achieves an higher accuracy than the proposed approach with 62.6%.",
"We visualize the MAS to have more insights into the decisions of our resolvers. Fig. 2 displays some samples of correct and incorrect decisions made by our proposed method. MAS score of different words are indicated with colors, where the gradient from blue to red represents the score transition from low to high."
],
[
"Pursuing commonsense reasoning in a purely unsupervised way seems very attractive for several reasons. On the one hand, this implies tapping the nearly unlimited resources of unannotated text and leveraging the wealth of information therein. On the other hand, tackling the commonsense reasoning objective in a (more) supervised fashion typically seems to boost performance for very a specific task as concurrent work shows BIBREF25 . However, the latter approach is unlikely to generalize well beyond this task. That is because covering the complete set of commonsense entities is at best extremely hard to achieve, if possible at all. The data-driven paradigm entails that the derived model can only make generalizations based on the data it has observed. Consequently, a supervised machine learning approach will have to be exposed to all combinations, i.e. replacing lexical items with semantically similar items in order to derive various concept notions. Generally, this is prohibitively expensive and therefore not viable. In contrast, in the proposed (unsupervised self-attention guided) approach this problem is alleviated. This can be largely attributed to the nearly unlimited text corpora on which the model originally learns, which makes it likely to cover a multitude of concept relations, and the fact that attention implicitly reduces the search space. However, all these approaches require the answer to explicitly exist in the text. That is, they are unable to resolve pronouns in light of abstract/implicit referrals that require background knowledge - see BIBREF26 for more detail. However, this is beyond the task of WSC. Last, the presented results suggest that BERT models the notion of complex relationship between entities, facilitating commonsense reasoning to a certain degree."
],
[
"Attracted by the success of recently proposed language representation model BERT, in this paper, we introduce a simple yet effective re-implementation of BERT for commonsense reasoning. Specifically, we propose a method which exploits the attentions produced by BERT for the challenging tasks of PDP and WSC. The experimental analysis demonstrates that our proposed system outperforms the previous state of the art on multiple datasets. However, although BERT seems to implicitly establish complex relationships between entities facilitating tasks such as coreference resolution, the results also suggest that solving commonsense reasoning tasks might require more than leveraging a language model trained on huge text corpora. Future work will entail adaption of the attentions, to further improve the performance."
]
],
"section_name": [
"Introduction",
"Attention Guided Reasoning",
"BERT and Notation",
"Maximum Attention Score (MAS)",
"Experimental Results",
"BERT Model Details",
"Pronoun Disambiguation Problem",
"Winograd Schema Challenge",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"25f1b9e8397c9e2edbfad470ba6231c717c2ca45"
],
"answer": [
{
"evidence": [
"Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .",
"In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for the sake of commonsense reasoning tasks while achieving state-of-the-art results on the multiple task. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. As of today, state-of-the-art accuracy on the WSC-273 for single model performance is around 57%, BIBREF14 and BIBREF13 . These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora."
],
"extractive_spans": [
"PDP-60",
"WSC-273"
],
"free_form_answer": "",
"highlighted_evidence": [
"Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC).",
"On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7cc9583cb09655017796ac307d5f04cde25a77f7"
]
},
{
"annotation_id": [
"d276b2f24da525f720c9f91a69bfcb912da75427"
],
"answer": [
{
"evidence": [
"In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning. Specifically, we use the PyTorch implementation of pre-trained $bert-base-uncased$ models supplied by Google. This model has 12 layers (i.e., Transformer blocks), a hidden size of 768, and 12 self-attention heads. In all cases we set the feed-forward/filter size to be 3072 for the hidden size of 768. The total number of parameters of the model is 110M."
],
"extractive_spans": [],
"free_form_answer": "Their model does not differ from BERT.",
"highlighted_evidence": [
"In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which datasets do they evaluate on?",
"How does their model differ from BERT?"
],
"question_id": [
"d6d29040e7fafceb188e62afba566016b119b23c",
"21663d2744a28e0d3087fbff913c036686abbb9a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"commonsense",
"commonsense"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Maximum Attention Score (MAS) for a particular sentence, where colors show attention maps for different words (best shown in color). Squares with blue/red frames correspond to specific sliced attentions Ac for candidates c, establishing the relationship to the reference pronoun indicated with green. Attention is color-coded in blue/ red for candidates “trophy”/ “suitcase”; the associated pronoun “it” is indicated in green. Attention values are compared elementwise (black double arrow), and retain only the maximum achieved by a masking operation. Matrices on the outside with red background elements correspond to the masked attentions Ac ◦Mc.",
"Table 1: Pronoun Disambiguation Problem: Results on (top) Unsupervised method performance on PDP-60 and (bottom) Supervised method performance on PDP-60. Results other than ours are taken from (Trinh and Le, 2018).",
"Table 2: Results for Winograd Schema Challenge. The other results are taken from (Trichelair et al., 2018) and (Trinh and Le, 2018).",
"Figure 2: Maximum Attention Score (MAS) for some sample questions from WSC-273: The last example is an example of failure of the method, where the coreference is predicted incorrectly."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Figure2-1.png"
]
} | [
"How does their model differ from BERT?"
] | [
[
"1905.13497-BERT Model Details-0"
]
] | [
"Their model does not differ from BERT."
] | 358 |
1909.13668 | On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation | Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training, and the generative capacity of the model. | {
"paragraphs": [
[
"Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation.",
"The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\\mathcal {L}(\\theta , \\phi ; x,z)$:",
"where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution.",
"With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\\phi ({z}|{x})\\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13.",
"All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alternation of the objective function. As exceptions to these, $\\delta $-VAE BIBREF14 and $\\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\\delta $-VAE aims to impose a lower bound on the divergence term, $\\beta $-VAE (betavae) controls the impact of regularization via an additional hyperparameter (i.e., $\\beta D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )$). A special case of $\\beta $-VAE is annealing BIBREF2, where $\\beta $ increases from 0 to 1 during training.",
"In this study, we propose to use an extension of $\\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments."
],
[
"We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantities the overall performance of the communication in encoding a message at sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\\text{I}({x};{z})$ BIBREF17."
],
[
"The reconstruction loss can naturally measure distortion ($D := - \\big \\langle \\log p_\\theta ({x}|{z}) \\big \\rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\\phi (z|x)$.",
"BIBREF18 introduced the $H-D \\le \\text{I}({x};{z}) \\le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\\text{I}({x};{z})=0$, where encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper-bound (KL term) can be seen as the mean to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\\rightarrow H$). A similar effect on the lower-bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased."
],
[
"Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\\beta $-VAE offers regularizing the ELBO via an additional coefficient $\\beta \\in {\\rm I\\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,",
"where $C\\!\\! \\in \\!\\! {\\rm I\\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\\text{KL}\\!\\!=\\!\\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\\max \\big (C,D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )\\big )$ at the risk of breaking the ELBO when $\\text{KL}\\!\\!<\\!\\!C$ BIBREF22."
],
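Read as a loss to minimize, the objective in eqn. DISPLAY_FORM6 is the reconstruction term plus beta times the absolute deviation of the KL term from the target rate C. The sketch below assumes a diagonal-Gaussian posterior and a standard-normal prior, so the KL term takes its usual closed form; variable names are illustrative.

```python
import torch

def beta_c_vae_loss(recon_nll, mu, logvar, beta=1.0, C=15.0):
    """recon_nll: per-example negative log-likelihood of the decoder, shape (B,).
    mu, logvar: parameters of q(z|x) = N(mu, diag(exp(logvar))), shape (B, d).
    Returns the loss to minimize and the mean KL (to check that KL stays near C)."""
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1)
    loss = recon_nll + beta * torch.abs(kl - C)
    return loss.mean(), kl.mean()
```

With beta = 1, as used in the experiments below, the reported behaviour is that the trained models end up with C - 1 <= KL <= C + 1.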
[
"We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\\beta =1$. We do not use larger $\\beta $s because the constraint $\\text{KL}=C$ is always satisfied."
],
[
"We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab."
],
[
"We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state."
],
[
"To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\\beta _C$-VAEGRU, $\\beta _C$-VAELSTM, and $\\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\\!-\\!1\\!\\le KL\\!\\le \\! C\\!+\\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue.",
"The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$.",
"As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores."
],
[
"To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\\phi (z)=\\sum _{x\\sim q(x)} q_\\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior.",
"We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \\sim q_\\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\\log \\det (\\mathrm {Cov}[q_\\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\\log \\det (\\mathrm {Cov}[q_\\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\\phi ({z})$ and $p(z)$ shrinks further as $C$ grows.",
"The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section."
],
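Both diagnostics above can be computed from a batch of posterior samples by fitting a Gaussian to q(z) via its empirical mean and covariance; the closed-form KL between that fit and the standard-normal prior is standard. How many samples were used is not stated here, so the sample size is left to the caller.

```python
import numpy as np

def aggregated_posterior_diagnostics(z):
    """z: (N, d) array of samples z ~ q(z|x) for x drawn from the data.
    Returns log det Cov[q(z)] and KL(N(mu, Sigma) || N(0, I)) for the
    moment-matched Gaussian fit of the aggregated posterior."""
    mu = z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    d = z.shape[1]
    _, logdet = np.linalg.slogdet(cov)
    kl = 0.5 * (np.trace(cov) + mu @ mu - d - logdet)
    return logdet, kl
```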
[
"To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$.",
"During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\\lbrace 0.5, 0.9\\rbrace )$ and Top-k $(k=\\lbrace 5, 15\\rbrace )$."
],
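A single-step sketch of the three decoding schemes, assuming the decoder's next-word distribution is available as a NumPy probability vector; tie-breaking and efficiency details are omitted.

```python
import numpy as np

def sample_next_word(probs, scheme="nucleus", k=15, p=0.9, rng=None):
    """Greedy takes the argmax; Top-k samples from the k most probable words;
    Nucleus Sampling (NS) samples from the smallest prefix of the sorted
    vocabulary whose cumulative mass reaches p (p = 1 is the full distribution)."""
    rng = rng or np.random.default_rng()
    if scheme == "greedy":
        return int(np.argmax(probs))
    order = np.argsort(probs)[::-1]
    if scheme == "topk":
        keep = order[:k]
    else:  # nucleus
        cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
        keep = order[:cutoff]
    renorm = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=renorm))
```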
[
"We follow the settings of homotopy experiment BIBREF2 where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \\sim p(z)$ and $z_2 \\sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine how neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\\lbrace 3,15,100\\rbrace $.",
"Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding."
],
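The homotopy experiment reduces to decoding evenly spaced points on the segment between two prior samples. A minimal sketch, assuming a decode(z) helper that runs the trained decoder p(x|z) under a chosen decoding scheme:

```python
import numpy as np

def latent_homotopy(decode, z1, z2, steps=8):
    """Decode evenly spaced points on the segment between two prior samples
    z1, z2 ~ p(z); reusing the same interpolated codes across decoding schemes
    and values of C keeps the generated outputs comparable."""
    return [decode((1.0 - a) * z1 + a * z2) for a in np.linspace(0.0, 1.0, steps)]
```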
[
"To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generate with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: Given that AU increases with increase of $C$ one would expect that activation pattern of a latent variable becomes more complex as it comprises more information. Therefore small change in the pattern would have a greater effect on the decoder."
],
[
"We observe that the model trained with large values of $C$ compromises sequences' coherence during the sampling. This is especially evident when we compare $C=3$ with $C=100$. Analysis of Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\\mu ||^2_2$. One can notice that as $C$ increases LogDetCov decreases and $||\\mu ||^2_2$ increases. This indicates that the aggregated posterior becomes further apart from the prior, hence the latent codes seen during the training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s."
],
[
"Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposal such as self-BLEU BIBREF32, forward cross entropy BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences are generated by sampling $z\\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human generated sentences.",
"We generated synthetic corpora using trained models from Table TABREF12 with different C and decoding schemes and using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s we measure the FCE score three times, each time we sampled a new training corpus from a $\\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora.",
"In the qualitative analysis we observed that the text generated by the $\\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus.",
"The average sentence length, in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern on FCE scores was caused by difference in sentence length. However, we observe that for Greedy decoding more than $30\\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\\%$unk increases on almost all corpora as $C$ grows, which is then translated into getting a better FCE score at test. Therefore, we believe that FCE at high $\\%$unk is not a reliable quantitative metric to assess the quality of the generated syntactic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher value of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE.",
"In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size for the generated corpora, for all values of $C$ is close to the original corpus (the corpus we used to train the $\\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with three values of $C$ is very close to each other. As a result, minimum replacement of the words with the 〈unk〉 symbol is required, making the experiment to be more reflective of the quality of the generated text. Similarly, self-BLEU for the NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE."
],
[
"In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24 which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomenon. For example, a pair in subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on the $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\\beta _C$-VAELSTM model trained with $C=\\lbrace 3,100\\rbrace $. Both the p1 and p2 are similar to the accuracy and correspond to how many times a grammatical sentence was assigned a higher probability.",
"As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$.",
"However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\\bar{z}^+$ and $\\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences.",
"As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes."
],
[
"In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder.",
"The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied.",
"We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments.",
"In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences.",
"Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors."
],
[
"The authors would like to thank the anonymous reviewers for their helpful suggestions. This research was supported by an EPSRC Experienced Researcher Fellowship (N. Collier: EP/M005089/1), an MRC grant (M.T. Pilehvar: MR/M025160/1) and E. Shareghi is supported by the ERC Consolidator Grant LEXICAL (648909). We gratefully acknowledge the donation of a GPU from the NVIDIA."
]
],
"section_name": [
"Introduction",
"Kullback-Leibler Divergence in VAE",
"Kullback-Leibler Divergence in VAE ::: Reconstruction vs. KL",
"Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\\beta $@!END@-VAE",
"Experiments",
"Experiments ::: Corpora",
"Experiments ::: Models",
"Experiments ::: Rate and Distortion",
"Experiments ::: Aggregated Posterior",
"Experiments ::: Text Generation",
"Experiments ::: Text Generation ::: Qualitative Analysis",
"Experiments ::: Text Generation ::: Qualitative Analysis ::: Sensitivity of Decoder",
"Experiments ::: Text Generation ::: Qualitative Analysis ::: Coherence of Sequences",
"Experiments ::: Text Generation ::: Quantitative Analysis",
"Experiments ::: Syntactic Test",
"Discussion and Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9a030eb6914f2b1e7de542c2c50aa6cf8907545c"
],
"answer": [
{
"evidence": [
"We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\\beta =1$. We do not use larger $\\beta $s because the constraint $\\text{KL}=C$ is always satisfied."
],
"extractive_spans": [
"interdependence between rate and distortion",
"impact of KL on the sharpness of the approximated posteriors",
"demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities",
"some experiments to find if any form of syntactic information is encoded in the latent space"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2651b61617317c7ae6e9b91b48e9343dbb16ce71"
],
"answer": [
{
"evidence": [
"The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied."
],
"extractive_spans": [
"by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$)"
],
"free_form_answer": "",
"highlighted_evidence": [
"he immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"40f15b5a5b393e4c66cab1f46a1628c6ace7f132"
],
"answer": [
{
"evidence": [
"Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\\beta $-VAE offers regularizing the ELBO via an additional coefficient $\\beta \\in {\\rm I\\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,",
"where $C\\!\\! \\in \\!\\! {\\rm I\\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\\text{KL}\\!\\!=\\!\\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\\max \\big (C,D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )\\big )$ at the risk of breaking the ELBO when $\\text{KL}\\!\\!<\\!\\!C$ BIBREF22."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Formula 2) Formula 2 is an answer: \n\\big \\langle\\! \\log p_\\theta({x}|{z}) \\big \\rangle_{q_\\phi({z}|{x})} - \\beta |D_{KL}\\big(q_\\phi({z}|{x}) || p({z})\\big)-C|",
"highlighted_evidence": [
"While $\\beta $-VAE offers regularizing the ELBO via an additional coefficient $\\beta \\in {\\rm I\\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,\n\nwhere $C\\!\\! \\in \\!\\! {\\rm I\\!R}^+$ and $| . |$ denotes the absolute value."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What different properties of the posterior distribution are explored in the paper?",
"Why does proposed term help to avoid posterior collapse?",
"How does explicit constraint on the KL divergence term that authors propose looks like?"
],
"question_id": [
"dd2f21d60cfca3917a9eb8b192c194f4de85e8b2",
"ccf7415b515fe5c59fa92d4a8af5d2437c591615",
"fee5aef7ae521ccd1562764a91edefecec34624d"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Rate-Distortion and LogDetCov for C = {10, 20, ..., 100} on Yahoo and Yelp corpora.",
"Table 1: βC-VAELSTM performance with C = {3, 15, 100} on the test sets of CBT, WIKI, and WebText. Each bucket groups sentences of certain length. Bucket 1: length ≤ 10; Bucket 2: 10 < length ≤ 20; Bucket 3: 20 < length ≤ 30, and All contains all sentences. BL2/RG2 denotes BLEU-2/ROUGE-2, BL4/RG4 denotes BLEU2/ROUGE-2 BLEU-4/ROUGE-4, AU denotes active units, D denotes distortion, and R denotes rate.",
"Table 2: Homotopy (CBT corpus) - The three blocks correspond to C = {3, 15, 100} values used for training βC-VAELSTM. The columns correspond to the three decoding schemes: greedy, top-k (with k=15), and the nucleus sampling (NS; with p=0.9). Initial two latent variables z were sampled from a the prior distribution i.e. z ∼ p(z) and the other five latent variables were obtained by interpolation. The sequences that highlighted in gray are the one that decoded into the same sentences condition on different latent variable. Note: Even though the learned latent representation should be quite different for different models (trained with different C) in order to be consistent all the generated sequences presented in the table were decoded from the same seven latent variables.",
"Table 3: Forward Cross Entropy (FCE). Columns represent stats for Greedy and NS decoding schemes for βCVAELSTM models trained with C = {3, 15, 100} on CBT, WIKI or WebText. Each entry in the table is a mean of negative log likelihood of an LM. The values in the brackets are the standard deviations. |V| is the vocabulary size; Test stands for test set; %unk is the percentage of 〈unk〉 symbols in a corpora; len. is the average length of a sentence in the generated corpus; SB is the self-BLEU:4 score calculated on the 10K sentences in the generated corpus.",
"Table 4: p1: p(x−|z+) < p(x+|z+) and p2: p(x−|z−) < p(x+|z−); p̄1: p(x−|z̄+) < p(x+|z̄+) and p̄2: p(x−|z̄−) < p(x+|z̄−); βC=3-VAELSTM (D:103, R:3); βC=100-VAELSTM (D:39, R:101)."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"8-Table4-1.png"
]
} | [
"How does explicit constraint on the KL divergence term that authors propose looks like?"
] | [
[
"1909.13668-Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\\beta $@!END@-VAE-1",
"1909.13668-Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\\beta $@!END@-VAE-0"
]
] | [
"Answer with content missing: (Formula 2) Formula 2 is an answer: \n\\big \\langle\\! \\log p_\\theta({x}|{z}) \\big \\rangle_{q_\\phi({z}|{x})} - \\beta |D_{KL}\\big(q_\\phi({z}|{x}) || p({z})\\big)-C|"
] | 360 |
1802.05322 | Classifying movie genres by analyzing text reviews | This paper proposes a method for classifying movie genres by only looking at text reviews. The data used are from Large Movie Review Dataset v1.0 and IMDb. This paper compared a K-nearest neighbors (KNN) model and a multilayer perceptron (MLP) that uses tf-idf as input features. The paper also discusses different evaluation metrics used when doing multi-label classification. For the data used in this research, the KNN model performed the best with an accuracy of 55.4\% and a Hamming loss of 0.047. | {
"paragraphs": [
[
"By only reading a single text review of a movie it can be difficult to say what the genre of that movie is, but by using text mining techniques on thousands of movie reviews is it possible to predict the genre?",
"This paper explores the possibility of classifying genres of a movie based only on a text review of that movie. This is an interesting problem because to the naked eye it may seem difficult to predict the genre by only looking at a text review. One example of a review can be seen in the following example:",
"I liked the film. Some of the action scenes were very interesting, tense and well done. I especially liked the opening scene which had a semi truck in it. A very tense action scene that seemed well done. Some of the transitional scenes were filmed in interesting ways such as time lapse photography, unusual colors, or interesting angles. Also the film is funny is several parts. I also liked how the evil guy was portrayed too. I'd give the film an 8 out of 10.",
"http://www.imdb.com/title/tt0211938/reviews",
"From the quoted review, one could probably predict the movie falls in the action genre; however, it would be difficult to predict all three of the genres (action, comedy, crime) that International Movie Database (IMDB) lists. With the use of text mining techniques it is feasible to predict multiple genres based on a review.",
"There are numerous previous works on classifying the sentiment of reviews, e.g., maas-EtAl:2011:ACL-HLT2011 by BIBREF0 . There are fewer scientific papers available on specifically classifying movie genres based on reviews; therefore, inspiration for this paper comes from papers describing classification of text for other or general contexts. One of those papers is DBLP:journals/corr/cmp-lg-9707002 where BIBREF1 describe how to use a multilayer perceptron (MLP) for genre classification.",
"All data, in the form of reviews and genres, used in this paper originates from IMDb."
],
[
"In this section all relevant theory and methodology is described. Table TABREF1 lists basic terminology and a short description of their meaning."
],
[
"Data preprocessing is important when working with text data because it can reduce the number of features and it formats the data into the desired form BIBREF2 .",
"Removing stop words is a common type of filtering in text mining. Stop words are words that usually contain little or no information by itself and therefore it is better to remove them. Generally words that occur often can be considered stop words such as the, a and it. BIBREF2 ",
"Lemmatization is the process of converting verbs into their infinitive tense form and nouns into their singular form. The reason for doing this is to reduce words into their basic forms and thus simplify the data. For example am, are and is are converted to be. BIBREF2 ",
"A way of representing a large corpus is to calculate the Term Frequency Inverse Document Frequency (tf-idf) of the corpus and then feed the models the tf-idf. As described in ramos2003using by BIBREF3 tf-idf is both efficient and simple for matching a query of words with a document in a corpus. Tf-idf is calculated by multiplying the Term Frequency (tf) with the Inverse Document Frequency (idf) , which is formulated as DISPLAYFORM0 ",
"where INLINEFORM0 is a document in corpus INLINEFORM1 and INLINEFORM2 is a term. INLINEFORM3 is defined as DISPLAYFORM0 ",
"and INLINEFORM0 is defined as DISPLAYFORM0 ",
"where INLINEFORM0 is the number of times INLINEFORM1 occurs in INLINEFORM2 and INLINEFORM3 total number of documents in the corpus."
],
[
"MLP is a class of feedforward neural network built up by a layered acyclic graph. An MLP consists of at least three layers and non-linear activations. The first layer is called input layer, the second layer is called hidden layer and the third layer is called output layer. The three layers are fully connected which means that every node in the hidden layer is connected to every node in the other layers. MLP is trained using backpropagation, where the weights are updated by calculating the gradient descent with respect to an error function. BIBREF4 ",
"K-nearest Neighbors (KNN) works by evaluating similarities between entities, where INLINEFORM0 stands for how many neighbors are taken into account during the classification. KNN is different from MLP in the sense that it does not require a computationally heavy training step; instead, all of the computation is done at the classification step. There are multiple ways of calculating the similarity, one way is to calculate the Minkowski distance. The Minkowski distance between two points DISPLAYFORM0 ",
"and DISPLAYFORM0 ",
"is defined by DISPLAYFORM0 ",
"where INLINEFORM0 which is equal to the Euclidean distance. BIBREF2 "
],
[
"When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the the four terms true positive ( INLINEFORM0 ), true negative ( INLINEFORM1 ), false positive ( INLINEFORM2 ) and false negative ( INLINEFORM3 ) which can be seen in table TABREF16 .",
"Accuracy is a measurement of how correct a model's predictions are and is defined as DISPLAYFORM0 ",
".",
"Precision is a ratio of how often positive predictions actually are positve and is defined as DISPLAYFORM0 ",
".",
"Recall is a measurement of how good the model is to find all true positives and is defined as DISPLAYFORM0 ",
". BIBREF5 ",
"It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision are expressed as DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is label index and INLINEFORM1 is number of labels.",
"Hamming loss is different in the sense that it is a loss and it is defined as the fraction of wrong labels to the total number of labels. Hamming loss can be a good measurement when it comes to evaluating multi-label classifiers. the hamming loss is expressed as DISPLAYFORM0 ",
"where INLINEFORM0 is number of documents, INLINEFORM1 number of labels, INLINEFORM2 is the target value and INLINEFORM3 is predicted value. BIBREF7 ",
"For evaluation the INLINEFORM0 and INLINEFORM1 was calculated as defined in section SECREF15 for both the MLP model and the KNN model. For precision and recall formulas EQREF20 and EQREF21 were used because of their advantage in multi-label classification. The distribution of predicted genres was also shown in a histogram and compared to the target distribution of genres.",
"Furthermore the ratio of reviews that got zero genres predicted was also calculated and can be expressed as DISPLAYFORM0 ",
"where INLINEFORM0 is the number of reviews without any predicted genre and INLINEFORM1 is the total amount of predicted reviews."
],
[
"Data used in this paper comes from two separate sources. The first source was Large Movie Review Dataset v1.0 BIBREF0 which is a dataset for binary sentiment analysis of moview reviews. The dataset contains a total of 50000 reviews in raw text together with information on whether the review is positive or negative and a URL to the movie on IMDb. The sentiment information was not used in this paper. Out of the 50000, reviews only 7000 were used because of limitations on computational power, resulting in a corpus of 7000 documents.",
"The second source of data was the genres for all reviews which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be find in Appendix SECREF8 . A review can have one genre or multiple genres. For example a review can be for a movie that is both Action, Drama and Thriller at the same time while another move only falls into Drama."
],
[
"This section presents all steps needed to reproduce the results presented in this paper."
],
[
"In this paper the data comes from two sources where the first is a collection of text reviews. Those reviews were downloaded from Large Movie Review Datasets website . Because only 7000 reviews was used in this paper all of them were from the `train` folder and split evenly between positive reviews and negative reviews.",
"The genres for the reviews where obtained by iterating through all reviews and doing the following steps:",
"Save the text of the review.",
"Retrieve IMDb URL to the movie from the Large Movie Review Datasets data.",
"Scrape that movie website for all genres and download the genres.",
"The distribution of genres was plotted in a histogram to check that the scraped data looked reasonable and can be seen in figure FIGREF27 . All genres with less than 50 reviews corresponding to that genre were removed.",
"The number of genres per review can be seen in figure FIGREF28 and it shows that it is most common for a review to have three different genres; furthermore, it shows that no review has more than three genres.",
"http://ai.stanford.edu/ amaas/data/sentiment"
],
[
"All reviews were preprocessed according to the following steps:",
"Remove all non-alphanumeric characters.",
"Lower case all tokens.",
"Remove all stopwords.",
"Lemmatize all tokens.",
"Both the removal of stopwords and lemmatization were done with Python's Natural Language Toolkit (NLTK). Next the reviews and corresponding genres were split into a training set and a test set with INLINEFORM0 devided into the train set and INLINEFORM1 into the test set.",
"The preprocessed corpus was then used to calculate a tf-idf representing all reviews. The calculation of the tf-idf was done using scikit-learn'smodule TfidfVectorizer. Both transform and fit were run on the training set and only the transform was run on the test set. The decision to use tf-idf as a data representation is supported by BIBREF3 in ramos2003using which concludes that tf-idf is both simple and effective at categorizing relevant words.",
"https://www.python.org http://www.nltk.org http://scikit-learn.org"
],
[
"This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in their paper DBLP:journals/corr/cmp-lg-9707002 where they used an MLP for text genre detection. The model used in this paper comes from scikit-learn's neural_network module and is called MLPClassifier. Table TABREF35 shows all parameters that were changed from the default values.",
"The second model was a KNN which was chosen because of it is simple and does not require the pre-training that the MLP needs. The implementation of this model comes from scikit-learn's neighbors module and is called KNeighborsClassifier. The only parameter that was changed after some trial and error was the k-parameter which was set to 3.",
"Both models were fitted using the train set and then predictions were done for the test set."
],
[
"Table TABREF38 shows the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 for the models. The KNN model had a higher accuracy of INLINEFORM3 compared to MPL's accuracy of INLINEFORM4 and the KNN model had a higher recall but slightly lower precision than the MLP model.",
"Table TABREF39 shows the INLINEFORM0 and INLINEFORM1 for the models, it shows that the KNN model had lower values for both the INLINEFORM2 and INLINEFORM3 compared to the MLP model.",
"Figure FIGREF40 shows the distribution of the genres for the predicted values when using MLP and the test set. The same comparison between KNN and the test set can be seen in figure FIGREF41 ."
],
[
"When looking at the results it is apparent that KNN is better than MLP in these experiments. In particular, the INLINEFORM0 stands out between KNN and MLP where KNN got INLINEFORM1 and MLP got INLINEFORM2 which is considered a significant difference. Given that the INLINEFORM3 was relatively high for both models, this result hints that the models only predicted genres when the confidence was high, which resulted in fewer genres being predicted than the target. This can also be confirmed by looking at the figures FIGREF40 and FIGREF41 where the absolute number of reviews predicted for most genres was lower than the target. This unsatisfyingly low INLINEFORM4 can be explained by the multi-label nature of the problem in this paper. Even if the model correctly predicted 2 out of three genres it is considered a misclassification. A reason for the low accuracy could be that the models appeared to be on the conservative side when predicting genres.",
"Another factor that affected the performance of the models was the INLINEFORM0 which confirmed that over INLINEFORM1 of the reviews for the KNN model and over INLINEFORM2 of the reviews for the MLP model did not receive any predicted genre. Because no review had zero genres all predictions with zero genres are misclassified and this could be a good place to start when improving the models.",
"Furthermore, when looking at the INLINEFORM0 it shows that when looking at the individual genres for all reviews the number of wrong predictions are very low which is promising when trying to answer this paper's main question: whether it is possible to predict the genre of the movie associated with a text review. It should be taken into account that this paper only investigated about 7000 movie reviews and the results could change significantly, for better or for worse, if a much larger data set was used. In this paper, some of the genres had very low amounts of training data, which could be why those genres were not predicted in the same frequency as the target. An example of that can be seen by looking at genre Sci-Fi in figure FIGREF40 ."
],
[
"This paper demonstrates that by only looking at text reviews of a movie, there is enough information to predict its genre with an INLINEFORM0 of INLINEFORM1 . This result implies that movie reviews carry latent information about genres. This paper also shows the complexity of doing prediction on multi-label problems, both in implementation and data processing but also when it comes to evaluation. Regular metrics typically work, but they mask the entire picture and the depth of how good a model is.",
"Finally this paper provides an explanation of the whole process needed to conduct an experiment like this. The process includes downloading a data set, web scraping for extra information, data preprocessing, model tuning and evaluation of the results."
],
[
"Action",
"Adult",
"Adventure",
"Animation",
"Biography",
"Comedy",
"Crime",
"Documentary",
"Drama",
"Family",
"Fantasy",
"Film-Noir",
"Game-Show",
"History",
"Horror",
"Music",
"Musical",
"Mystery",
"Reality-TV",
"Romance",
"Sci-Fi",
"Short",
"Sport",
"Talk-Show",
"Thriller",
"War",
"Western"
]
],
"section_name": [
"Introduction",
"Theory",
"Preprocessing",
"Models",
"Evaluation",
"Data",
"Method",
"Data collection",
"Data preprocessing",
"Model",
"Result",
"Discussion",
"Conclusion",
"All genres"
]
} | {
"answers": [
{
"annotation_id": [
"602b6f6182ba06c3ae6b17680b5b8b0f500196c9"
],
"answer": [
{
"evidence": [
"This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in their paper DBLP:journals/corr/cmp-lg-9707002 where they used an MLP for text genre detection. The model used in this paper comes from scikit-learn's neural_network module and is called MLPClassifier. Table TABREF35 shows all parameters that were changed from the default values."
],
"extractive_spans": [],
"free_form_answer": "There is no baseline.",
"highlighted_evidence": [
"This paper experimented with two different models and compared them against each other. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"268b71cc6e97bb32d0db1a387d3884421d6e534f"
],
"answer": [
{
"evidence": [
"The second source of data was the genres for all reviews which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be find in Appendix SECREF8 . A review can have one genre or multiple genres. For example a review can be for a movie that is both Action, Drama and Thriller at the same time while another move only falls into Drama."
],
"extractive_spans": [
"27 "
],
"free_form_answer": "",
"highlighted_evidence": [
"A total of 27 different genres were scraped."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4d795cf29b5807d7938bf1e609fc89f695896f3d"
],
"answer": [
{
"evidence": [
"When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the the four terms true positive ( INLINEFORM0 ), true negative ( INLINEFORM1 ), false positive ( INLINEFORM2 ) and false negative ( INLINEFORM3 ) which can be seen in table TABREF16 .",
"It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision are expressed as DISPLAYFORM0 DISPLAYFORM1"
],
"extractive_spans": [
"precision ",
"recall ",
"Hamming loss",
"micro averaged precision and recall "
],
"free_form_answer": "",
"highlighted_evidence": [
"When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. ",
"It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision are expressed as DISPLAYFORM0 DISPLAYFORM1"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what was the baseline?",
"how many movie genres do they explore?",
"what evaluation metrics are discussed?"
],
"question_id": [
"a7adb63db5066d39fdf2882d8a7ffefbb6b622f0",
"980568848cc8e7c43f767da616cf1e176f406b05",
"f1b738a7f118438663f9d77b4ccd3a2c4fd97c01"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: List of basic terminology.",
"Figure 1: Histogram showing the distribution of genres.",
"Figure 2: Histogram showing the distribution of genres per review.",
"Table 4: accuracy, precisionmicro and recallmicro for the models.",
"Table 3: Values of non-default parameters for the MLP model.",
"Table 5: Hamming loss and No genre ratio for the models.",
"Figure 3: Distribution of genres in MLP predictions and test set.",
"Figure 4: Distribution of genres in KNN predictions and test set."
],
"file": [
"2-Table1-1.png",
"5-Figure1-1.png",
"6-Figure2-1.png",
"7-Table4-1.png",
"7-Table3-1.png",
"7-Table5-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png"
]
} | [
"what was the baseline?"
] | [
[
"1802.05322-Model-0"
]
] | [
"There is no baseline."
] | 364 |
2004.01878 | News-Driven Stock Prediction With Attention-Based Noisy Recurrent State Transition | We consider direct modeling of underlying stock value movement sequences over time in the news-driven stock movement prediction. A recurrent state transition model is constructed, which better captures a gradual process of stock movement continuously by modeling the correlation between past and future price movements. By separating the effects of news and noise, a noisy random factor is also explicitly fitted based on the recurrent states. Results show that the proposed model outperforms strong baselines. Thanks to the use of attention over news events, our model is also more explainable. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction. | {
"paragraphs": [
[
"Stock movement prediction is a central task in computational and quantitative finance. With recent advances in deep learning and natural language processing technology, event-driven stock prediction has received increasing research attention BIBREF0, BIBREF1. The goal is to predict the movement of stock prices according to financial news. Existing work has investigated news representation using bag-of-words BIBREF2, named entities BIBREF3, event structures BIBREF4 or deep learning BIBREF1, BIBREF5.",
"Most previous work focuses on enhancing news representations, while adopting a relatively simple model on the stock movement process, casting it as a simple response to a set of historical news. The prediction model can therefore be viewed as variations of a classifier that takes news as input and yields stock movement predictions. In contrast, work on time-series based stock prediction BIBREF6, BIBREF7, BIBREF5, BIBREF8, aims to capture continuous movements of prices themselves.",
"We aim to introduce underlying price movement trends into news-driven stock movement prediction by casting the underlaying stock value as a recurrent state, integrating the influence of news events and random noise simultaneously into the recurrent state transitions. In particular, we take a LSTM with peephole connections BIBREF9 for modeling a stock value state over time, which can reflect the fundamentals of a stock. The influence of news over a time window is captured in each recurrent state transition by using neural attention to aggregate representations of individual news. In addition, all other factors to the stock price are modeled using a random factor component, so that sentiments, expectations and noise can be dealt with explicitly.",
"Compared with existing work, our method has three salient advantages. First, the process in which the influence of news events are absorbed into stock price changes is explicitly modeled. Though previous work has attempted towards this goal BIBREF1, existing models predict each stock movement independently, only modeling the correlation between news in historical news sequences. As shown in Figure FIGREF1, our method can better capture a continuous process of stock movement by modeling the correlation between past and future stock values directly. In addition, non-linear compositional effects of multiple events in a time window can be captured.",
"Second, to our knowledge, our method allows noise to be explicitly addressed in a model, therefore separating the effects of news and other factors. In contrast, existing work trains a stock prediction model by fitting stock movements to events, and therefore can suffer from overfitting due to external factors and noise.",
"Third, our model is also more explainable thanks to the use of attention over news events, which is similar to the work of BIBREF10 and BIBREF11. Due to the use of recurrent states, we can visualize past events over a large time window. In addition, we propose a novel future event prediction module to factor in likely next events according to natural events consequences. The future event module is trained over gold “future” data over historical events. Therefore, it can also deal with insider trading factors to some extent.",
"Experiments over the benchmark of BIBREF1 show that our method outperforms strong baselines, giving the best reported results in the literature. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction. Note that unlike time-series stock prediction models BIBREF12, BIBREF5, we do not take explicit historical prices as part of model inputs, and therefore our research still focuses on the influence of news information alone, and are directly comparable to existing work on news-driven stock prediction."
],
[
"There has been a line of work predicting stock markets using text information from daily news. We compare this paper with previous work from the following two perspectives.",
"",
"Modeling Price Movements Correlation",
"Most existing work treats the modeling of each stock movement independently using bag-of-words BIBREF2, named entities BIBREF3, semantic frames BIBREF0, event structures BIBREF4, event embeddings BIBREF1 or knowledge bases BIBREF13. Differently, we study modeling the correlation between past and future stock value movements.",
"There are also some work modeling the correlations between samples by sparse matrix factorization BIBREF14, hidden Markov model BIBREF8 and Bi-RNNs BIBREF5, BIBREF11 using both news and historical price data. Some work models the correlations among different stocks by pre-defined correlation graph BIBREF15 and tensor factorization BIBREF12. Our work is different from this line of work in that we use only news events as inputs, and our recurrent states are combined with impact-related noises.",
"",
"Explainable Prediction",
"Rationalization is an important problem for news-driven stock price movement prediction, which is to find the most important news event along with the model's prediction. Factorization, such as sparse matrix factorization BIBREF14 and tensor factorization BIBREF12, is a popular method where results can be traced back upon the input features. While this type of method are limited because of the dimension of input feature, our attention-based module has linear time complexity on feature size.",
"BIBREF11 apply dual-layer attention to predict the stock movement by using news published in the previous six days. Each day's news embeddings and seven days' embeddings are summed by the layer. Our work is different from BIBREF11 in that our news events attention is query-based, which is more strongly related to the noisy recurrent states. In contrast, their attention is not query-based and tends to output the same result for each day even if the previous day's decision is changed."
],
[
"Following previous work BIBREF4, BIBREF1, the task is formalized as a binary classification task for each trading day. Formally, given a history news set about a targeted stock or index, the input of the task is a trading day $x$ and the output is a label $y \\in \\lbrace +1, -1\\rbrace $ indicating whether the adjusted closing price $p_x$ will be greater than $p_{x-1}$ ($y=+1$) or not ($y=-1$)."
],
[
"The framework of our model is shown in Figure FIGREF2. We explicitly model both events and noise over a recurrent stock value state, which is modeled using LSTM. For each trading day, we consider the news events happened in that day as well as the past news events using neural attention BIBREF16. Considering the impacts of insider trading, we also involve future news in the training procedure. To model the high stochasticity of stock market, we sample an additive noise using a neural module. Our model is named attention-based noisy recurrent states transition (ANRES).",
"Considering the general principle of sample independence, building temporal connections between individual trading days in training set is not suitable for training BIBREF5 and we find it easy to overfit. We notice that a LSTM usually takes several steps to generate a more stable hidden state. As an alternative method, we extended the time span of one sample to $T$ previous continuous trading days (${t-T+1, t-T+2, ..., t-1, t}$), which we call a trading sequence, is used as the basic training element in this paper."
],
[
"ANRES uses LSTM with peephole connections BIBREF9. The underlying stock value trends are represented as a recurrent state $z$ transited over time, which can reflect the fundamentals of a stock. In each trading day, we consider the impact of corresponding news events and a random noise as:",
"where $v_t$ is the news events impact vector on the trading day $t$ and $f$ is a function in which random noise will be integrated.",
"By using this basic framework, the non-linear compositional effects of multiple events can also be captured in a time window. Then we use the sequential state $z_t$ to make binary classification as:",
"where $\\hat{p}_t$ is the estimated probabilities, $\\hat{y}_t$ is the predicted label and $x_t$ is the input trading day."
],
[
"For a trading day $t$ in a trading sequence, we model both long-term and short-term impact of news events. For short-term impact, we use the news published after the previous trading day $t-1$ and before the trading day $t$ as the present news set. Similarly, for long-term impact, we use the news published no more than thirty calendar days ago as the past news set.",
"For each news event, we extract its headline and use ELMo BIBREF17 to transform it to $V$-dim hidden state by concatenating the output bidirectional hidden states of the last words as the basic representation of a news event. By stacking those vectors accordingly, we obtain two embedding matrices $C^{\\prime }_t$ and $B^{\\prime }_t$ for the present and past news events as:",
"where ${hc}^i_t$ is one of the news event headline in the present news set, ${ec}^i_t$ is the headline representation of ${hc}^i_t$, $L_c$ is the size of present news set; while ${hb}^j_t$, ${eb}^j_t$ and $L_b$ are for the past news set.",
"To make the model more numerically stable and avoiding overfitting, we apply the over-parameterized component of BIBREF18 to the news events embedding matrices, where",
"$\\odot $ is element-wise multiplication and $\\sigma (\\cdot )$ is the sigmoid function.",
"Due to the unequal importance news events contribute to the stock price movement in $t$, we use scaled dot-product attention BIBREF16 to capture the influence of news over a period for the recurrent state transition. In practical, we first transform the last trading day's stock value $z_{t-1}$ to a query vector $q_t$, and then calculate two attention score vectors $\\gamma _t$ and $\\beta _t$ for the present and past news events as:",
"We sum the news events embedding matrices to obtain news events impact vectors $c_t$ and $b_t$ on the trading day $t$ according to the weights $\\gamma _t$ and $\\beta _t$, respectively:"
],
[
"In spite of the long-term and short-term impact, we find that some short-term future news events will exert an influence on the stock price movement before the news release, which can be attributed to news delay or insider trading BIBREF19 factors to some extent.",
"We propose a novel future event prediction module to consider likely next events according to natural consequences. In this paper, we define future news events as those that are published within seven calendar days after the trading day $t$.",
"Similarly to the past and present news events, we stack the headline ELMo embeddings of future news events to an embedding matrix $A^{\\prime }_t$. Then adapting the over-parameterized component and summing the stacked embedding vectors by scaled dot-product attention. We calculate the future news events impact vector $a_t$ on the trading day $t$ as:",
"Although the above steps can work in the training procedure, where the future event module is trained over gold “future” data over historical events, at test time, future news events are not accessible. To address this issue, we use a non-linear transformation to estimate a future news events impact vector $\\hat{a}_t$ with the past and present news events impact vectors $b_t$ and $c_t$ as:",
"where $[,]$ is the vector concatenation operation.",
"We concatenate the above-mentioned three types of news events impact vectors to obtain the input $v_t$ for LSTM-based recurrent state transition on trading day $t$ as:",
"where $[,]$ is the vector concatenation operation."
],
[
"In this model, all other factors to the stock price such as sentiments, expectations and noise are explicitly modeled as noise using a random factor. We sample a random factor from a normal distribution $\\mathcal {N}(\\textbf {0}, \\sigma _t)$ parameterized by $z^{\\prime }_t$ as:",
"However, in practice, the model can face difficulty of back propagating gradients if we directly sample a random factor from $\\mathcal {N}(\\textbf {0}, \\sigma _t)$. We use re-parameterization BIBREF20 for normal distributions to address the problem and enhance the transition result $z^{\\prime }_t$ with sample random factor to obtain the noisy recurrent state $z_t$ as:"
],
[
"For training, there are two main terms in our loss function. The first term is a cross entropy loss for the predicted probabilities $\\hat{p}_t$ and gold labels $y_t$, and the second term is the mean squared error between the estimated future impact vector $\\hat{a}_t$ and the true future impact vector $a_t$.",
"The total loss for a trading sequence containing $T$ trading days with standard $L_2$ regularization is calculated as:",
"where $\\theta $ is a hyper-parameter which indicates how much important $L_{mse}$ is comparing to $L_{ce}$, $\\Phi $ is the set of trainable parameters in the entire ANRES model and $\\lambda $ is the regularization weight."
],
[
"We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters."
],
[
"The hyper-parameters of our ANRES model are shown in Table TABREF11. We use mini-batches and stochastic gradient descent (SGD) with momentum to update the parameters. Most of the hyper-parameters are chosen according to development experiments, while others like dropout rate $r$ and SGD momentum $\\mu $ are set according to common values.",
"Following previous work BIBREF0, BIBREF4, BIBREF5, we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) to evaluate S&P 500 index prediction and selected individual stock prediction. MCC is applied because it avoids bias due to data skew. Given the confusion matrix which contains true positive, false positive, true negative and false negative values, MCC is calculated as:"
],
[
"As the first set of development experiments, we try different ways to initialize the noisy recurrent states of our ANRES model to find a suitable approach. For each trading day, we compare the results whether states transitions are modeled or not. Besides, we also compare the methods of random initialization and zero initialization. Note that the random initialization method we use here returns a tensor filled with random numbers from the standard normal distribution $\\mathcal {N}(0, 1)$. In summary, the following four baselines are designed:",
"ANRES_Sing_R: randomly initializing the states for each single trading day.",
"ANRES_Sing_Z: initializing the states as zeros for each single trading day.",
"ANRES_Seq_R: randomly initializing the first states for each trading sequence only.",
"ANRES_Seq_Z: initializing the first states as zeros for each trading sequence only.",
"Development set results on predicting S&P 500 index are shown in Table TABREF13. We can see that modeling recurrent value sequences performs better than treating each trading day separately, which shows that modeling trading sequences can capture the correlations between trading days and the non-linear compositional effects of multiple events. From another perspective, the models ANRES_Sing_R and ANRES_Sing_Z also represent the strengths of our basic representations of news events in isolation. Therefore, we can also see that using only the basic news events representations is not sufficient for index prediction, while combining with our states transition module can achieve strong results.",
"By comparing the results of ANRES_Seq_R and ANRES_Seq_Z, we decide to use zero initialization for our ANRES models, including the noisy recurrent states also in the remaining experiments."
],
[
"We use the development set to find a suitable length $T$ for trading sequence, which is searched from $\\lbrace 1, 3, 5, 7, 9, 11, 13, 15\\rbrace $. The S&P 500 index prediction results of accuracy, MCC and consumed minutes per training epoch on the development set are shown in Figure FIGREF19.",
"We can see that the accuracy and MCC are positively correlated with the growth of $T$, while the change of accuracy is smaller than MCC. When $T \\ge 7$, the growth of MCC becomes slower than that when $T < 7$. Also considering the running time per training epoch, which is nearly linear w.r.t. $T$, we choose the hyper-parameter $T=7$ and use it in the remaining experiments."
],
[
"We compare our approach with the following strong baselines on predicting the S&P 500 index, which also only use financial news:",
"BIBREF21 uses bags-of-words to represent news documents, and constructs the prediction model by using Support Vector Machines (SVMs).",
"BIBREF1 uses event embeddings as input and convolutional neural network prediction model.",
"BIBREF13 empowers event embeddings with knowledge bases like YAGO and also adopts convolutional neural networks as the basic prediction framework.",
"BIBREF22 uses fully connected model and character-level embedding input with LSTM to encode news texts.",
"BIBREF23 uses recurrent neural networks with skip-thought vectors to represent news text.",
"Table TABREF26 shows the test set results on predicting the S&P 500 index. From the table we can see that our ANRES model achieves the best results on the test sets. By comparing with BIBREF21, we can find that using news event embeddings and deep learning modules can be better representative and also flexible when dealing with high-dimension features.",
"When comparing with BIBREF1 and the knowledge-enhanced BIBREF13, we find that extracting structured events may suffer from error propagation. And more importantly, modeling the correlations between trading days can better capture the compositional effects of multiple news events.",
"By comparing with BIBREF22 and BIBREF23, despite that modeling the correlations between trading days can bring better results, we also find that modeling the noise by using a state-related random factor may be effective because of the high market stochasticity."
],
[
"We explore the effects of different types of news events and the introduced random noise factor with ablation on the test set. More specifically, we disable the past news, the present news, future news and the noise factor, respectively. The S&P 500 index prediction results of the ablated models are shown in Table TABREF28. First, without using the past news events, the result becomes the lowest. The reason may be that history news contains the biggest amount of news events. In addition, considering the trading sequence length and the time windows of future news, if we disable the past news, most of them will not be involved in our model at any chance, while the present or the past news will be input on adjacent trading days.",
"Second, it is worth noticing that using the future news events is more effective than using the present news events. On the one hand, it confirms the importances to involve the future news in our ANRES model, which can deal with insider trading factors to some extent. On the other hand, the reason may be the news impact redundancy in sequence, as the future news impact on the $t-1$-th day should be transited to the $t$-th day to compensate the absent loss of the present news events.",
"The effect of modeling the noise factor is lower only to modeling the past news events, but higher than the other ablated models, which demonstrates the effectiveness of the noise factor module. We think the reason may because that modeling such an additive noise can separate the effects of news event impacts from other factors, which makes modeling the stock price movement trends more clearly."
],
[
"Other than predicting the S&P 500 index, we also investigate the effectiveness of our approach on the problem of individual stock prediction using the test set. We count the amounts of individual company related news events for each company by name matching, and select five well known companies with sufficient news, Apple, Citigroup, Boeing Company, Google and Wells Fargo from four different sectors, which is classified by the Global Industry Classification Standard. For each company, we prepare not only news events about itself, but also news events about the whole companies in the sector. We use company news, sector news and all financial news to predict individual stock price movements, respectively. The experimental results and news statistics are listed in Table TABREF30.",
"The result of individual stock prediction by only using company news dramatically outperforms that of sector news and all news, which presents a negative correlation between total used amounts of news events and model performance. The main reason maybe that company-related news events can more directly affect the volatility of company shares, while sector news and all news contain many irrelevant news events, which would obstruct our ANRES model's learning the underlaying stock price movement trends.",
"Note that BIBREF1, BIBREF13 and BIBREF11 also reported results on individual stocks. But we cannot directly compare our results with them because the existing methods used different individual stocks on different data split to report results, and BIBREF1, BIBREF13 reported only development set results. This is reasonable since the performance of each model can vary from stock to stock over the S&P 500 chart and comparison over the whole index is more indicative."
],
[
"To look into what news event contributes the most to our prediction result, we further analyze the test set results of predicting Apple Inc.'s stock price movements only using company news, which achieves the best results among the five selected companies mentioned before.",
"As shown in Figure FIGREF31, we take the example trading sequence from 07/15/2013 to 07/23/2013 for illustration. The table on the left shows the selected top-ten news events, while attention visualization and results are shown on the right chart. Note that there are almost fifty different past news events in total for the trading sequence, and the news events listed on the left table are selected by ranking attention scores from the past news events, which are the most effective news according to the ablation study. There are some zeros in the attention heat map because these news do not belong to the corresponding trading days.",
"We can find that the news event No. 1 has been correlated with the stock price rises on 07/15/2013, but for the next two trading days, its impact fades out. On 07/18/2013, the news event No. 7 begins to show its impact. However, our ANRES model pays too much attention in it and makes the incorrect prediction that the stock price decreases. On the next trading day, our model infers that the impact of the news event No. 2 is bigger than that of the news event No. 7, which makes an incorrect prediction again. From these findings, we can see that our ANRES model tends to pay more attention to a new event when it first occurs, which offers us a potential improving direction in the future."
],
[
"We investigated explicit modeling of stock value sequences in news-driven stock prediction by suing an LSTM state to model the fundamentals, adding news impact and noise impact by using attention and noise sampling, respectively. Results show that our method is highly effective, giving the best performance on a standard benchmark. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction."
]
],
"section_name": [
"Introduction",
"Related Work",
"Task Definition",
"Method",
"Method ::: LSTM-based Recurrent State Transition",
"Method ::: Modeling News Events",
"Method ::: Modeling Future News",
"Method ::: Modeling Noise",
"Method ::: Training Objective",
"Experiments",
"Experiments ::: Settings",
"Experiments ::: Initializing Noisy Recurrent States",
"Experiments ::: Study on Trading Sequence Length",
"Experiments ::: Predicting S&P 500 Index",
"Experiments ::: Ablation Study on News and Noise",
"Experiments ::: Predicting Individual Stock Movements",
"Experiments ::: Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"26a45d5e989bebbf93de405fb8fe347fca8ae71d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Statistics of the datasets."
],
"extractive_spans": [],
"free_form_answer": "553,451 documents",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Statistics of the datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"8a85d1d432f7e8683326d0e6619852f43fca912b"
],
"answer": [
{
"evidence": [
"We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters."
],
"extractive_spans": [
"the public financial news dataset released by BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How big is dataset used?",
"What is dataset used for news-driven stock movement prediction?"
],
"question_id": [
"5a23f436a7e0c33e4842425cf86d5fd8ba78ac92",
"2f4acd34eb2d09db9b5ad9b1eb82cb4a88c13f5b"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Example of news impacts on 3M Company. Over the first and the second periods (from Oct. 24 to Nov. 1, 2006 and from Sep. 21 to Oct. 1, 2007), there was only one event. In the third period (from Nov. 10 to Nov. 18, 2008), there were two events affecting the stock price movements simultaneously.",
"Figure 2: The ANRES model framework for trading day t in a trading sequence. The black solid elbows are used both in the training and the evaluating procedures. The red solid elbows are only used in the training procedure, while the blue dotted elbows in the evaluating procedure.",
"Table 1: Statistics of the datasets.",
"Table 2: Hyper-parameters setting.",
"Figure 3: Development set results of different trading sequence length T .",
"Table 4: Test set results on predicting S&P 500 index.",
"Table 5: Test set results of ablation study.",
"Table 6: Test set results of individual stock price movement prediction.",
"Figure 4: Attention visualization and test set results comparison of the trading sequence [07/15/2013, 07/23/2013] when predicting Apple Inc.’s stock price movements using only company news."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Figure3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png",
"9-Figure4-1.png"
]
} | [
"How big is dataset used?"
] | [
[
"2004.01878-6-Table1-1.png"
]
] | [
"553,451 documents"
] | 365 |
1905.07471 | Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets | The relationship between two entities in a sentence is often implied by word order and common sense, rather than an explicit predicate. For example, it is evident that"Fed chair Powell indicates rate hike"implies (Powell, is a, Fed chair) and (Powell, works for, Fed). These tuples are just as significant as the explicit-predicate tuple (Powell, indicates, rate hike), but have much lower recall under traditional Open Information Extraction (OpenIE) systems. Implicit tuples are our term for this type of extraction where the relation is not present in the input sentence. There is very little OpenIE training data available relative to other NLP tasks and none focused on implicit relations. We develop an open source, parse-based tool for converting large reading comprehension datasets to OpenIE datasets and release a dataset 35x larger than previously available by sentence count. A baseline neural model trained on this data outperforms previous methods on the implicit extraction task. | {
"paragraphs": [
[
"Open Information Extraction (OpenIE) is the NLP task of generating (subject, relation, object) tuples from unstructured text e.g. “Fed chair Powell indicates rate hike” outputs (Powell, indicates, rate hike). The modifier open is used to contrast IE research in which the relation belongs to a fixed set. OpenIE has been shown to be useful for several downstream applications such as knowledge base construction BIBREF0 , textual entailment BIBREF1 , and other natural language understanding tasks BIBREF2 . In our previous example an extraction was missing: (Powell, works for, Fed). Implicit extractions are our term for this type of tuple where the relation (“works for” in this example) is not contained in the input sentence. In both colloquial and formal language, many relations are evident without being explicitly stated. However, despite their pervasiveness, there has not been prior work targeted at implicit predicates in the general case. Implicit information extractors for some specific implicit relations such as noun-mediated relations, numerical relations, and others BIBREF3 , BIBREF4 , BIBREF5 have been researched. While specific extractors are important, there are a multiplicity of implicit relation types and it would be intractable to categorize and design extractors for each one.",
"Past general OpenIE systems have been plagued by low recall on implicit relations BIBREF6 . In OpenIE's original application – web-scale knowledge base construction – this low recall is tolerable because facts are often restated in many ways BIBREF7 . However, in downstream NLU applications an implied relationship may be significant and only stated once BIBREF2 .",
"The contribution of this work is twofold. In Section 4, we introduce our parse-based conversion tool and convert two large reading comprehension datasets into implicit OpenIE datasets. In Section 5 and 6, we train a simple neural model on this data and compare to previous systems on precision-recall curves using a new gold test set for implicit tuples."
],
[
"We suggest that OpenIE research focus on producing implicit relations where the predicate is not contained in the input span. Formally, we define implicit tuples as (subject, relation, object) tuples that:",
"These “implicit” or “common sense” tuples reproduce the relation explicitly, which may be important for downstream NLU applications using OpenIE as an intermediate schema. For example, in Figure 1, the input sentence tells us that the Norsemen swore fealty to Charles III under “their leader Rollo”. From this our model outputs (The Norse leader, was, Rollo) despite the relation never being contained in the input sentence. Our definition of implicit tuples corresponds to the “frequently occurring recall errors” identified in previous OpenIE systems BIBREF6 : noun-mediated, sentence-level inference, long sentence, nominalization, noisy informal, and PP-attachment. We use the term implicit tuple to collectively refer to all of these situations where the predicate is absent or very obfuscated."
],
[
"Due to space constraints, see Niklaus et al. Survey for a survey of of non-neural methods. Of these, several works have focused on pattern-based implicit information extractors for noun-mediated relations, numerical relations, and others BIBREF3 , BIBREF4 , BIBREF5 . In this work we compare to OpenIE-4 , ClausIE BIBREF8 , ReVerb BIBREF9 , OLLIE BIBREF10 , Stanford OpenIE BIBREF11 , and PropS BIBREF12 ."
],
[
"Stanovsky et al. SupervisedOIE frame OpenIE as a BIO-tagging problem and train an LSTM to tag an input sentence. Tuples can be derived from the tagger, input, and BIO CFG parser. This method outperforms traditional systems, though the tagging scheme inherently constrains the relations to be part of the input sentence, prohibiting implicit relation extraction. Cui et al. NeuralOpenIE bootstrap (sentence, tuple) pairs from OpenIE-4 and train a standard seq2seq with attention model using OpenNMT-py BIBREF13 . The system is inhibited by its synthetic training data which is bootstrapped from a rule-based system."
],
[
"Due to the lack of large datasets for OpenIE, previous works have focused on generating datasets from other tasks. These have included QA-SRL datasets BIBREF14 and QAMR datasets BIBREF6 . These methods are limited by the size of the source training data which are an order of magnitude smaller than existing reading comprehension datasets."
],
[
"Span-based Question-Answer datasets are a type of reading comprehension dataset where each entry consists of a short passage, a question about the passage, and an answer contained in the passage. The datasets used in this work are the Stanford Question Answering Dataset (SQuADv1.1) BIBREF15 and NewsQA BIBREF16 . These QA datasets were built to require reasoning beyond simple pattern-recognition, which is exactly what we desire for implicit OpenIE. Our goal is to convert the QA schema to OpenIE, as was successfully done for NLI BIBREF17 . The repository of software and converted datasets is available at http://toAppear."
],
[
"We started by examining SQuAD and noticing that each answer, $A$ , corresponds to either the subject, relation, or object in an implicit extraction. The corresponding question, $Q$ , contains the other two parts, i.e. either the (1) subject and relation, (2) subject and object, or (3) relation and object. Which two pieces the question contains depends on the type of question. For example, “who was... factoid” type questions contain the relation (“was”) and object (the factoid), which means that the answer is the subject. In Figure 1, “Who was Rollo” is recognized as a who was question and caught by the whoParse() parser. Similarly, a question in the form of “When did person do action” expresses a subject and a relation, with the answer containing the object. For example, “When did Einstein emigrate to the US“ and answer 1933, would convert to (Einstein, when did emigrate to the US, 1933). In cases like these the relation might not be grammatically ideal, but nevertheless captures the meaning of the input sentence.",
"In order to identify generic patterns, we build our parse-based tool on top of a dependency parser BIBREF18 . It uses fifteen rules, with the proper rule being identified and run based on the question type. The rule then uses its pre-specified pattern to parse the input QA pair and output a tuple. These fifteen rules are certainly not exhaustive, but cover around eighty percent of the inputs. The tool ignores questions greater than 60 characters and complex questions it cannot parse, leaving a dataset smaller than the original (see Table 1).",
"Each rule is on average forty lines of code that traverses a dependency parse tree according to its pre-specified pattern, extracting the matching spans at each step. A master function parse() determines which rule to apply based on the question type which is categorized by nsubj presence, and the type of question (who/what/etc.). Most questions contain an nsubj which makes the parse task easier, as this will also be the subject of the tuple. We allow the master parse() method try multiple rules. It first tries very specific rules (e.g. a parser for how questions where no subject is identified), then falls down to more generic rules. If no output is returned after all the methods are tried we throw the QA pair out. Otherwise, we find the appropriate sentence in the passage based on the index."
],
[
"Following QA to tuple conversion, the tuple must be aligned with a sentence in the input passage. We segment the passage into sentences using periods as delimiters. The sentence containing the answer is taken as the input sentence for the tuple. Outputted sentences predominantly align with their tuple, but some exhibit partial misalignment in the case of some multi-sentence reasoning questions. 13.6% of questions require multi-sentence reasoning, so this is an upper bound on the number of partially misaligned tuples/sentences BIBREF15 . While there may be heuristics that can be used to check alignment, we didn't find a significant number of these misalignments and so left them in the corpus. Figure 1 demonstrates the conversion process."
],
[
"Examining a random subset of one hundred generated tuples in the combined dataset we find 12 noun-mediated, 33 sentence-level inference, 11 long sentence, 7 nominzalization, 0 noisy informal, 3 pp-attachment, 24 explicit, and 10 partially misaligned. With 66% implicit relations, this dataset shows promise in improving OpenIE's recall on implicit relations."
],
[
"Our implicit OpenIE extractor is implemented as a sequence to sequence model with attention BIBREF19 . We use a 2-Layer LSTM Encoder/Decoder with 500 parameters, general attention, SGD optimizer with adaptive learning rate, and 0.33 dropout BIBREF20 . The training objective is to maximize the likelihood of the output tuple given the input sentence. In the case of a sentence having multiple extractions, it appears in the dataset once for each output tuple. At test time, beam search is used for decoding to produce the top-10 outputs and an associated log likelihood value for each tuple (used to generate the precision-recall curves in Section 7)."
],
[
"We make use of the evaluation tool developed by Stanovsky and Dagan benchmark to test the precision and recall of our model against previous methods. We make two changes to the tool as described below."
],
[
"The test corpus contained no implicit data, so we re-annotate 300 tuples from the CoNLL-2009 English training data to use as gold data. Both authors worked on different sentence sets then pruned the other set to ensure only implicit relations remained. We note that this is a different dataset than our training data so should be a good test of generalizability; the training data consists of Wikipedia and news articles, while the test data resembles corporate press release headlines."
],
[
"We implement a new matching function (i.e. the function that decides if a generated tuple matches a gold tuple). The included matching functions used BoW overlap or BLEU, which aren't appropriate for implicit relations; our goal is to assess whether the meaning of the predicted tuple matches the gold, not the only tokens. For example, the if the gold relation is “is employed by” we want to accept “works for”. Thus, we instead compute the cosine similarity of the subject, relation, and object embeddings to our gold tuple. All three must be above a threshold to evaluate as a match. The sequence embeddings are computed by taking the average of the GloVe embeddings of each word (i.e. BoW embedding) BIBREF21 ."
],
[
"The results on our implicit corpus are shown in Figure 2 (our method in blue). For continuity with prior work, we also compare our model on the origional corpus but using our new matching function in Figure 3.",
"Our model outperforms at every point in the implicit-tuples PR curve, accomplishing our goal of increasing recall on implicit relations. Our system performs poorly on explicit tuples, as we would expect considering our training data. We tried creating a multi-task model, but found the model either learned to produce implit or explicit tuples. Creating a multi-task network would be ideal, though it is sufficient for production systems to use both systems in tandem."
],
[
"We created a large training corpus for implicit OpenIE extractors based on SQuAD and NewsQA, trained a baseline on this dataset, and presented promising results on implicit extraction. We see this as part of a larger body of work in text-representation schemes which aim to represent meaning in a more structured form than free text. Implicit information extraction goes further than traditional OpenIE to elicit relations not contained in the original free text. This allows maximally-shortened tuples where common sense relations are made explicit. Our model should improve further as more QA datasets are released and converted to OpenIE data using our conversion tool."
]
],
"section_name": [
"Introduction",
"Problem Statement",
"Traditional Methods",
"Neural Network Methods",
"Dataset Conversion Methods",
"Dataset Conversion Method",
"QA Pairs to OpenIE Tuples",
"Sentence Alignment",
"Tuple Examination",
"Our model",
"Evaluation",
"Creating a Gold Dataset",
"Matching function for implicit tuples",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"26df92ca3004b2f750fbff14cd0d2b5a611fdbee"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: PR curve on our implicit tuples dataset."
],
"extractive_spans": [],
"free_form_answer": "The model outperforms at every point in the\nimplicit-tuples PR curve reaching almost 0.8 in recall",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: PR curve on our implicit tuples dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
""
],
"paper_read": [
"no"
],
"question": [
"How much better does this baseline neural model do?"
],
"question_id": [
"e7329c403af26b7e6eef8b60ba6fefbe40ccf8ce"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"information extraction"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Dataset statistics.",
"Figure 1: Tuple conversion and alignment process flow.",
"Figure 2: PR curve on our implicit tuples dataset.",
"Figure 3: PR curve on the explicit tuples dataset."
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png"
]
} | [
"How much better does this baseline neural model do?"
] | [
[
"1905.07471-4-Figure2-1.png"
]
] | [
"The model outperforms at every point in the\nimplicit-tuples PR curve reaching almost 0.8 in recall"
] | 366 |
1603.00968 | MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification | We introduce a novel, simple convolution neural network (CNN) architecture - multi-group norm constraint CNN (MGNC-CNN) that capitalizes on multiple sets of word embeddings for sentence classification. MGNC-CNN extracts features from input embedding sets independently and then joins these at the penultimate layer in the network to form a final feature vector. We then adopt a group regularization strategy that differentially penalizes weights associated with the subcomponents generated from the respective embedding sets. This model is much simpler than comparable alternative architectures and requires substantially less training time. Furthermore, it is flexible in that it does not require input word embeddings to be of the same dimensionality. We show that MGNC-CNN consistently outperforms baseline models. | {
"paragraphs": [
[
"Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .",
"An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.",
"Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.",
"Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.",
"Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.",
""
],
[
"",
"Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity, however their focus was not on classification.",
"More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement)."
],
[
"We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.",
"Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .",
"MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.",
"MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings."
],
[
"Stanford Sentiment Treebank Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.",
"Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.",
"TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.",
"Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced."
],
[
"We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed\" nodes with prepositions and notated inverse relations separately, e.g., “dog barks\" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters."
],
[
"We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .",
"We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filters sizes of 3, 4 and 5 and we created 100 feature maps for each filter size. We applied 1 max-pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches and used AdaDelta as the stochastic gradient descent (SGD) update rule, and set mini-batch size as 50. In this work, we treat word embeddings as part of the parameters of the model, and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters."
],
[
"",
"We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.",
"We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.",
"We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this work requires training the optimal projection matrix on laebled data first, which again incurs large overhead.",
"Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for all the word embeddings. As the number of word embedding increases, this will increase the running time. However, this tuning procedure is embarrassingly parallel.",
""
],
[
" We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, and also a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Furthermore, our model is much more flexible than previous approaches, because it can accommodate variable-size word embeddings.",
""
],
[
"",
"This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model Description",
"Datasets",
"Pre-trained Word Embeddings",
"Setup",
"Results and Discussion",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"271b515571b41377124029ee4375c5ac5f08a926"
],
"answer": [
{
"evidence": [
"Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.",
"More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).",
"We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour."
],
"extractive_spans": [
"It is an order of magnitude more efficient in terms of training time.",
"his model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour"
],
"free_form_answer": "",
"highlighted_evidence": [
"It is an order of magnitude more efficient in terms of training time.",
"The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).",
"MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"610bedbb9f6c09e6015cd5632a69196ced15464c"
],
"answer": [
{
"evidence": [
"We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .",
"FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these."
],
"extractive_spans": [],
"free_form_answer": "MC-CNN\nMVCNN\nCNN",
"highlighted_evidence": [
"We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .",
"FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"59007c842a036aa6b271235a656664f1771ea8cd"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.",
"We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC."
],
"extractive_spans": [],
"free_form_answer": "In terms of Subj the Average MGNC-CNN is better than the average score of baselines by 0.5. Similarly, Scores of SST-1, SST-2, and TREC where MGNC-CNN has similar improvements. \nIn case of Irony the difference is about 2.0. \n",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.",
"We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"e1b1fb202ea4cfb7496a0f43f85225be87eb1c35"
],
"answer": [
{
"evidence": [
"Stanford Sentiment Treebank Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.",
"Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.",
"TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.",
"Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced."
],
"extractive_spans": [
" SST-1",
"SST-2",
"Subj ",
"TREC ",
"Irony "
],
"free_form_answer": "",
"highlighted_evidence": [
"Stanford Sentiment Treebank Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.",
"Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.",
"TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.",
"Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"b1c9523adebe1eb47b7769676489464b7a983042"
],
"answer": [
{
"evidence": [
"We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .",
"More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).",
"FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these."
],
"extractive_spans": [
"standard CNN",
"C-CNN",
"MVCNN "
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embedding: word2vec+Glove, and word2vec+syntactic, and one three sets of embedding: word2vec+Glove+syntactic. ",
"More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. ",
"FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How much faster is training time for MGNC-CNN over the baselines?",
"What are the baseline models?",
"By how much of MGNC-CNN out perform the baselines?",
"What dataset/corpus is this evaluated over?",
"What are the comparable alternative architectures?"
],
"question_id": [
"f7d67d6c6fbc62b2953ab74db6871b122b3c92cc",
"085147cd32153d46dd9901ab0f9195bfdbff6a85",
"c0035fb1c2b3de15146a7ce186ccd2e366fb4da2",
"a8e4a67dd67ae4a9ebf983a90b0d256f4b9ff6c6",
"34dd0ee1374a3afd16cf8b0c803f4ef4c6fec8ac"
],
"question_writer": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustration of MG-CNN and MGNC-CNN. The filters applied to the respective embeddings are completely independent. MG-CNN applies a max norm constraint to o, while MGNC-CNN applies max norm constraints on o1 and o2 independently (group regularization). Note that one may easily extend the approach to handle more than two embeddings at once.",
"Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.",
"Table 2: Best λ2 value on the validation set for each method w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What are the baseline models?",
"By how much of MGNC-CNN out perform the baselines?"
] | [
[
"1603.00968-4-Table1-1.png",
"1603.00968-Setup-0"
],
[
"1603.00968-4-Table1-1.png",
"1603.00968-Results and Discussion-1"
]
] | [
"MC-CNN\nMVCNN\nCNN",
"In terms of Subj the Average MGNC-CNN is better than the average score of baselines by 0.5. Similarly, Scores of SST-1, SST-2, and TREC where MGNC-CNN has similar improvements. \nIn case of Irony the difference is about 2.0. \n"
] | 368 |
2004.01980 | Hooks in the Headline: Learning to Generate Headlines with Controlled Styles | Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references. | {
"paragraphs": [
[
"Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”",
"To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.",
"SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style.",
"In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.",
"The main contributions of our paper are listed below:",
"To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.",
"Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.",
"Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box."
],
[
"Our work is related to summarization and text style transfer."
],
[
"Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.",
"Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles."
],
[
"Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem."
],
[
"The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\\lbrace (\\mathbf {a^{(i)}},\\mathbf {h^{(i)}})\\rbrace _{i=1}^N$ consists of pairs of a news article $\\mathbf {a}$ and its plain headline $\\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\\lbrace \\mathbf {a^{(i)}}\\rbrace _{i=1}^N$, and $H=\\lbrace \\mathbf {h^{(i)}}\\rbrace _{i=1}^N$. The target corpus $T=\\lbrace \\mathbf {t^{(i)}}\\rbrace _{i=1}^{M}$ comprises of sentences $\\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.",
"Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$."
],
[
"For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\\mathbf {\\cdot }; \\mathbf {\\theta _E})$ and a 6-layer decoder $G(\\mathbf {\\cdot }; \\mathbf {\\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG."
],
[
"To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10)."
],
[
"With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is",
"where $\\mathbf {\\theta _{E_S}}$ and $\\mathbf {\\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\\mathbf {h}|\\mathbf {a})$ denotes the overall probability of generating an output sequence $\\mathbf {h}$ given the input article $\\mathbf {a}$, which can be further expanded as follows:",
"where $L$ is the sequence length."
],
[
"For the target style corpus $T$, since we only have the sentence $\\mathbf {t}$ without paired news articles, we train $\\mathbf {z_T}=E_T(\\mathbf {\\tilde{t}})$ and $\\mathbf {t}=G_T(\\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\\mathbf {z_T}$ is the learned latent representation in the target domain, and $\\mathbf {\\tilde{t}}$ is the corrupted version of $\\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\\mathcal {L}_T$:",
"where $\\mathbf {\\theta _{E_T}}$ and $\\mathbf {\\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\\mathcal {L}_T$ via multitask learning, so the total loss becomes",
"where $\\lambda $ is a hyper-parameter."
],
[
"More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\\mathbf {\\theta _{\\mathrm {ind}}}$ and style-dependent parameters $\\mathbf {\\theta _{\\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below."
],
[
"Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\\mathbf {x}$ into a normalized activation $\\mathbf {z}$ specific to the style $s$:",
"where $\\mu $ and $\\sigma $ are the mean and standard deviation of the batch of $\\mathbf {x}$, and $\\gamma _s$ and $\\beta _s$ are style-specific parameters learned from data.",
"Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers."
],
[
"Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows:",
"where $\\mathbf {\\mathrm {query}}$, $\\mathbf {\\mathrm {key}}$, and $\\mathbf {\\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\\mathbf {W_q^s}$, $\\mathbf {W_k}$, and $\\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\\mathbf {W_q^s}$ of the query for different styles, so that $\\mathbf {Q}$ can be different to induce diverse attention patterns."
],
[
"We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively."
],
[
"The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.",
"We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treat the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs.",
"We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages, and in total collected 90,236 news abstract-headline pairs."
],
[
"For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets."
],
[
"We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use.",
"Some examples from each style corpus are listed in Table TABREF32."
],
[
"We compared the proposed TitleStylist against the following five strong baseline approaches."
],
[
"We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data."
],
[
"We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles."
],
[
"It breaks down the task into two steps, which first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can refer to the official website."
],
[
"We first train the NHG model as mentioned above, then further fine-tuned it on the target style corpus via DAE training."
],
[
"We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG."
],
[
"To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation."
],
[
"We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices."
],
[
"Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performances are necessary proofs to compliment human evaluations on the model effectiveness."
],
[
"We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit."
],
[
"We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs."
],
[
"We used the fairseq code base BIBREF52. During training, we use Adam optimizer with an initial learning rate of $5\\times 10^{-4}$, and the batch size is set as 3072 tokens for each GPU with the parameters update frequency set as 4. For the random corruption for DAE training, we follow the standard practice to randomly delete or blank the word with a uniform probability of $0.2$, and randomly shuffled the word order within 5 tokens. All datasets are lower-cased. $\\lambda $ is set as 0.5 in experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows the uniform distribution with the probability being equal to $\\lambda $."
],
[
"The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters."
],
[
"We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity."
],
[
"In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores."
],
[
"The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability."
],
[
"We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57."
],
[
"Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complimentary proof to ensure that the model has an acceptable level of summarization ability.",
"Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.",
"From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on more than 20 times larger dataset. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step is absent of the summarization task, which prevents the model from maintaining its summarization capability.",
"In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation.",
"We find that in Table TABREF59 TitleStylist-F achieves the best summarization performance. This implicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization.",
"It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch more focuses on bringing in stylistic linguistic patterns into the generated summaries, thus the outputs would deviate from the pure summarization to some degree. However, the relevance degree of them remains close to the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the new article through human evaluation.",
"We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from baselines NHG and Multitask and our proposed TitleStylist show similar PPL compared with the test set (used in the fine-tuning stage) PPL 42.5, indicating that they are all fluent expressions for news headlines."
],
[
"We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature."
],
[
"We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed the parameters sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models."
],
[
"We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Headline Generation as Summarization",
"Related Work ::: Text Style Transfer",
"Methods ::: Problem Formulation",
"Methods ::: Seq2Seq Model Architecture",
"Methods ::: Multitask Training Scheme",
"Methods ::: Multitask Training Scheme ::: Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@",
"Methods ::: Multitask Training Scheme ::: DAE Training for @!START@$\\mathbf {\\theta _{E_T}}$@!END@ and @!START@$\\mathbf {\\theta _{G_T}}$@!END@",
"Methods ::: Parameter-Sharing Scheme",
"Methods ::: Parameter-Sharing Scheme ::: Type 1. Style Layer Normalization",
"Methods ::: Parameter-Sharing Scheme ::: Type 2. Style-Guided Encoder Attention",
"Experiments ::: Datasets",
"Experiments ::: Datasets ::: Source Dataset",
"Experiments ::: Datasets ::: Three Target Style Corpora ::: Humor and Romance",
"Experiments ::: Datasets ::: Three Target Style Corpora ::: Clickbait",
"Experiments ::: Baselines",
"Experiments ::: Baselines ::: Neural Headline Generation (NHG)",
"Experiments ::: Baselines ::: Gigaword-MASS",
"Experiments ::: Baselines ::: Neural Story Teller (NST)",
"Experiments ::: Baselines ::: Fine-Tuned",
"Experiments ::: Baselines ::: Multitask",
"Experiments ::: Evaluation Metrics",
"Experiments ::: Evaluation Metrics ::: Setup of Human Evaluation",
"Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation",
"Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Summarization Quality",
"Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency",
"Experiments ::: Experimental Details",
"Results and Discussion ::: Human Evaluation Results",
"Results and Discussion ::: Human Evaluation Results ::: Relevance",
"Results and Discussion ::: Human Evaluation Results ::: Attraction",
"Results and Discussion ::: Human Evaluation Results ::: Fluency",
"Results and Discussion ::: Human Evaluation Results ::: Style Strength",
"Results and Discussion ::: Automatic Evaluation Results",
"Results and Discussion ::: Extension to Multi-Style",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"4b49212f42e25384c3a9f1535f1d64f8056fc0dc"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset.",
"In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores."
],
"extractive_spans": [
"pure summarization model NHG"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset.",
"In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f9429f69c0c10710e3d0f7a08e1402b4332774a0"
],
"answer": [
{
"evidence": [
"The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters.",
"FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset.",
"FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset."
],
"extractive_spans": [],
"free_form_answer": "Humor in headlines (TitleStylist vs Multitask baseline):\nRelevance: +6.53% (5.87 vs 5.51)\nAttraction: +3.72% (8.93 vs 8.61)\nFluency: 1,98% (9.29 vs 9.11)",
"highlighted_evidence": [
"We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57.",
"FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset.",
"FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"486b71fe732fb45a3fd517ed1c075fcb75958785"
],
"answer": [
{
"evidence": [
"We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices."
],
"extractive_spans": [
"annotators are asked how attractive the headlines are",
"Likert scale from 1 to 10 (integer values)"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values).",
"For attractiveness, annotators are asked how attractive the headlines are."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"273ca2b1c037678e8512d41f9bc295001da7805b"
],
"answer": [
{
"evidence": [
"We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices."
],
"extractive_spans": [
"human evaluation task about the style strength"
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7d39d8fb6b7d29b3f69adea2fe3f029c7a156cda"
],
"answer": [
{
"evidence": [
"Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency",
"We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs."
],
"extractive_spans": [
"fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency\nWe fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"Which state-of-the-art model is surpassed by 9.68% attraction score?",
"What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?",
"How is attraction score measured?",
"How is presence of three target styles detected?",
"How is fluency automatically evaluated?"
],
"question_id": [
"53377f1c5eda961e438424d71d16150e669f7072",
"f37ed011e7eb259360170de027c1e8557371f002",
"41d3750ae666ea5a9cea498ddfb973a8366cccd6",
"90b2154ec3723f770c74d255ddfcf7972fe136a2",
"f3766c6937a4c8c8d5e954b4753701a023e3da74"
],
"question_writer": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Given a news article, current HG models can only generate plain, factual headlines, failing to learn from the original human reference. It is also much less attractive than the headlines with humorous, romantic and click-baity styles.",
"Figure 2: The Transformer-based architecture of our model.",
"Figure 3: Training scheme. Multitask training is adopted to combine the summarization and DAE tasks.",
"Table 1: Examples of three target style corpora: humor, romance, and clickbait.",
"Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dataset.",
"Table 3: Examples of style-carrying headlines generated by TitleStylist.",
"Table 4: Percentage of choices (%) for the most humorous or romantic headlines among TitleStylist and two baselines NHG and Multitask.",
"Table 5: Automatic evaluation results of our TitleStylist and baselines. The test set of each style is the same, but the training set is different depending on the target style as shown in the “Style Corpus” column. “None” means no style-specific dataset, and “Humor”, “Romance” and “Clickbait” corresponds to the datasets we introduced in Section 4.1.2. During the inference phase, our TitleStylist can generate two outputs: one from GT and the other from GS . Outputs from GT are style-carrying, so we denote it as “TitleStylist”; outputs from GS are plain and factual, thus denoted as “TitleStylist-F.” The last column “Len. Ratio” denotes the average ratio of abstract length to the generated headline length by the number of words.",
"Table 6: Comparison between TitleStylist-Versatile and TitleStylist. “RG-L” denotes ROUGE-L, and “Pref.” denotes preference."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png"
]
} | [
"What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?"
] | [
[
"2004.01980-7-Table2-1.png",
"2004.01980-Results and Discussion ::: Human Evaluation Results-0"
]
] | [
"Humor in headlines (TitleStylist vs Multitask baseline):\nRelevance: +6.53% (5.87 vs 5.51)\nAttraction: +3.72% (8.93 vs 8.61)\nFluency: 1,98% (9.29 vs 9.11)"
] | 369 |
1809.08510 | Towards Language Agnostic Universal Representations | When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problems in both languages the student is fluent in, even if the math lessons were only taught in one language. However, current representations in machine learning are language dependent. In this work, we present a method to decouple the language from the problem by learning language agnostic representations, therefore allowing us to train a model in one language and apply it to a different one in a zero-shot fashion. We learn these representations by taking inspiration from linguistics and formalizing Universal Grammar as an optimization process (Chomsky, 2014; Montague, 1970). We demonstrate the capabilities of these representations by showing that the models trained on a single language using language agnostic representations achieve very similar accuracies in other languages. | {
"paragraphs": [
[
"Anecdotally speaking, fluent bilingual speakers rarely face trouble translating a task learned in one language to another. For example, a bilingual speaker who is taught a math problem in English will trivially generalize to other known languages. Furthermore there is a large collection of evidence in linguistics arguing that although separate lexicons exist in multilingual speakers the core representations of concepts and theories are shared in memory BIBREF2 , BIBREF3 , BIBREF4 . The fundamental question we're interested in answering is on the learnability of these shared representations within a statistical framework.",
"We approached this problem from a linguistics perspective. Languages have vastly varying syntactic features and rules. Linguistic Relativity studies the impact of these syntactic variations on the formations of concepts and theories BIBREF5 . Within this framework of study, the two schools of thoughts are linguistic determinism and weak linguistic influence. Linguistic determinism argues that language entirely forms the range of cognitive processes, including the creation of various concepts, but is generally agreed to be false BIBREF6 , BIBREF5 . Although there exists some weak linguistic influence, it is by no means fundamental BIBREF7 . The superfluous nature of syntactic variations across languages brings forward the argument of principles and parameters (PnP) which hypothesizes the existence of a small distributed parameter representation that captures the syntactic variance between languages denoted by parameters (e.g. head-first or head-final syntax), as well as common principles shared across all languages BIBREF8 . Universal Grammar (UG) is the study of principles and the parameters that are universal across languages BIBREF1 .",
"The ability to learn these universalities would allow us to learn representations of language that are fundamentally agnostic of the specific language itself. Doing so would allow us to learn a task in one language and reap the benefits of all other languages without needing multilingual datasets. Our attempt to learn these representations begins by taking inspiration from linguistics and formalizing UG as an optimization problem.",
"We train downstream models using language agnostic universal representations on a set of tasks and show the ability for the downstream models to generalize to languages that we did not train on."
],
[
"Our work attempts to unite universal (task agnostic) representations with multilingual (language agnostic) representations BIBREF9 , BIBREF10 . The recent trend in universal representations has been moving away from context-less unsupervised word embeddings to context-rich representations. Deep contextualized word representations (ELMo) trains an unsupervised language model on a large corpus of data and applies it to a large set of auxiliary tasks BIBREF9 . These unsupervised representations boosted the performance of models on a wide array of tasks. Along the same lines BIBREF10 showed the power of using latent representations of translation models as features across other non-translation tasks. In general, initializing models with pre-trained language models shows promise against the standard initialization with word embeddings. Even further, BIBREF11 show that an unsupervised language model trained on a large corpus will contain a neuron that strongly correlates with sentiment without ever training on a sentiment task implying that unsupervised language models maybe picking up informative and structured signals.",
"In the field of multilingual representations, a fair bit of work has been done on multilingual word embeddings. BIBREF12 explored the possibility of training massive amounts of word embeddings utilizing either parallel data or bilingual dictionaries via the SkipGram paradigm. Later on an unsupervised approach to multilingual word representations was proposed by BIBREF13 which utilized an adversarial training regimen to place word embeddings into a shared latent space. Although word embeddings show great utility, they fall behind methods which exploit sentence structure as well as words. Less work has been done on multilingual sentence representations. Most notably both BIBREF14 and BIBREF15 propose a way to learn multilingual sentence representation through a translation task.",
"We propose learning language agnostic representations through constrained language modeling to capture the power of both multilingual and universal representations. By decoupling language from our representations we can train downstream models on monolingual data and automatically apply the models to other languages."
],
[
"Statistical language models approximate the probability distribution of a series of words by predicting the next word given a sequence of previous words. $\np(w_0,...,w_n) = \\prod _{i=1}^n p(w_i \\mid w_0,...,w_{i-1})\n$ ",
"where $w_i$ are indices representing words in an arbitrary vocabulary.",
"Learning grammar is equivalent to language modeling, as the support of $p$ will represent the set of all grammatically correct sentences. Furthermore, let $p_j(\\cdot )$ represent the language model for the jth language and $w^j$ represents a word from the jth language. Let $k_j$ represent a distributed representation of a specific language along the lines of the PnP argument BIBREF8 . UG, through the lens of statistical language modeling, hypothesizes the existence of a factorization of $p_j(\\cdot )$ containing a language agnostic segment. The factorization used throughout this paper is the following:",
"b = u ej(wj0,...,wji)",
"pj(wi w0,...,wi-1) = ej-1(h(b,kj))",
"s.t. d(p(bj) p(bj))",
"The distribution matching constraint $d$ , insures that the representations across languages are common as hypothesized by the UG argument.",
"Function $e_j: \\mathbb {N}^i \\rightarrow \\mathbb {R}^{i \\times d}$ is a language specific function which takes an ordered set of integers representing tokens and outputs a vector of size $d$ per token. Function $u: \\mathbb {R}^{i \\times d} \\rightarrow \\mathbb {R}^{i \\times d}$ takes the language specific representation and attempts to embed into a language agnostic representation. Function $h: (\\mathbb {R}^{i \\times d}, \\mathbb {R}^{f}) \\rightarrow \\mathbb {R}^{i \\times d}$ takes the universal representation as well as a distributed representation of the language of size $f$ and returns a language specific decoded representation. $e^{-1}$ maps our decoded representation back to the token space.",
"For the purposes of distribution matching we utilize the GAN framework. Following recent successes we use Wasserstein-1 as our distance function $d$ BIBREF16 .",
"Given two languages $j_\\alpha $ and $j_\\beta $ the distribution of the universal representations should be within $\\epsilon $ with respect to the $W_1$ of each other. Using the Kantarovich-Rubenstein duality we define ",
"$$d(\\mathbf {p}(b\\mid j_\\alpha ) \\mid \\mid \\mathbf {p}(b\\mid j_\\beta )) =\n\\sup _{||f_{\\alpha ,\\beta }||_L \\le 1} \\mathbb {E}_{x\\sim \\mathbf {p}(b\\mid j_\\alpha )}\\left[f_{\\alpha ,\\beta }(x)\\right] - \\mathbb {E}_{x\\sim \\mathbf {p}(b\\mid j_\\beta )}\\left[f_{\\alpha ,\\beta }(x)\\right]$$ (Eq. 2) ",
"where $L$ is the Lipschitz constant of $f$ . Throughout this paper we satisfy the Lipschitz constraint by clamping the parameters to a compact space, as done in the original WGAN paper BIBREF16 . Therefore the complete loss function for $m$ languages each containing $N$ documents becomes: ",
"$$\\max _{\\theta } \\sum _{\\alpha =0}^m \\sum _{i=0}^N \\log p_{j_\\alpha }(w_{i,0}^\\alpha ,...,w_{i,n}^\\alpha ; \\theta )\\nonumber - \\frac{\\lambda }{m^2}\\sum _{\\alpha =0}^m \\sum _{\\beta =0}^m d(\\mathbf {p}(b\\mid j_\\alpha ) \\mid \\mid \\mathbf {p}(b\\mid j_\\beta ))$$ (Eq. 3) ",
" $\\lambda $ is a scaling factor for the distribution constraint loss."
],
[
"Our specific implementation of this optimization problem we denote as UG-WGAN. Each function described in the previous section we implement using neural networks. For $e_j$ in equation \"Universal Grammar as an Optimization Problem\" we use a language specific embedding table followed by a LSTM BIBREF17 . Function $u$ in equation \"Universal Grammar as an Optimization Problem\" is simply stacked LSTM's. Function $h$ in equation \"Universal Grammar as an Optimization Problem\" takes input from $u$ as well as a PnP representation of the language via an embedding table. Calculating the real inverse of $e^{-1}$ is non trivial therefore we use another language specific LSTM whose outputs we multiply by the transpose of the embedding table of $e$ to obtain token probabilities. For regularization we utilized dropout and locked dropout where appropriate BIBREF18 .",
"The critic, adopting the terminology from BIBREF16 , takes the input from $u$ , feeds it through a stacked LSTM, aggregates the hidden states using linear sequence attention as described in DrQA BIBREF19 . Once we have the aggregated state we map to a $m \\times m$ matrix from where we can compute the total Wasserstein loss. A Batch Normalization layer is appended to the end of the critic BIBREF20 . The $\\alpha , \\beta $ th index in the matrix correspond to the function output of $f$ in calculating $W_1(\\mathbf {p}(b\\mid j_\\alpha ) \\mid \\mid \\mathbf {p}(b\\mid j_\\beta ))$ .",
"We trained UG-WGAN with a variety of languages depending on the downstream task. For each language we utilized the respective Wikipedia dump. From the wikipedia dump we extract all pages using the wiki2text utility and build language specific vocabularies consisting of 16k BPE tokens BIBREF21 . During each batch we sample documents from our set of languages which are approximately the same length. We train our language model via BPTT where the truncation length progressively grows from 15 to 50 throughout training. The critic is updated 10 times for every update of the language model. We trained each language model for 14 days on a NVidia Titan X. For each language model we would do a sweep over $\\lambda $ , but in general we have found that $\\lambda =0.1$ works sufficiently well for minimizing both perplexity and Wasserstein distance."
],
[
"A couple of interesting questions arise from the described training procedure. Is the distribution matching constraint necessary or will simple joint language model training exhibit the properties we're interested in? Can this optimization process fundamentally learn individual languages grammar while being constrained by a universal channel? What commonalities between languages can we learn and are they informative enough to be exploited?",
"We can test out the usefulness of the distribution matching constraint by running an ablation study on the $\\lambda $ hyper-parameter. We trained UG-WGAN on English, Spanish and Arabic wikidumps following the procedure described above. We kept all the hyper-parameters consistent apart for augmenting $\\lambda $ from 0 to 10. The results are shown in Figure 2 . Without any weight on the distribution matching term the critic trivially learns to separate the various languages and no further training reduces the wasserstein distance. The joint language model internally learns individual language models who are partitioned in the latent space. We can see this by running a t-SNE plot on the universal ( $u(\\cdot )$ ) representation of our model and seeing existence of clusters of the same language as we did in Figure 3 BIBREF22 . An universal model satisfying the distribution matching constrain would mix all languages uniformly within it's latent space.",
"To test the universality of UG-WGAN representations we will apply them to a set of orthogonal NLP tasks. We will leave the discussion on the learnability of grammar to the Discussion section of this paper."
],
[
"By introducing a universal channel in our language model we reduced a representations dependence on a single language. Therefore we can utilize an arbitrary set of languages in training an auxiliary task over UG encodings. For example we can train a downstream model only on one languages data and transfer the model trivially to any other language that UG-WGAN was trained on."
],
[
"To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" . The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used and trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the english IMDB Large Movie Review dataset and tested it on the chinese ChnSentiCorp dataset and german SB-10K BIBREF24 , BIBREF25 . We binarize the label's for all the datasets.",
"Our sentiment analysis model ran a bi-directional LSTM on top of fixed UG representations from where we took the last hidden state and computed a logistic regression. This was trained using standard SGD with momentum.",
"We also compare against encodings learned as a by-product of multi-encoder and decoder neural machine translation as a baseline BIBREF28 . We see that UG representations are useful in situations when there is a lack of data in an specific language. The language agnostics properties of UG embeddings allows us to do successful zero-shot learning without needing any parallel corpus, furthermore the ability to generalize from language modeling to sentiment attests for the universal properties of these representations. Although we aren't able to improve over the state of the art in a single language we are able to learn a model that does surprisingly well on a set of languages without multilingual data."
],
[
"A natural language inference task consists of two sentences; a premise and a hypothesis which are either contradictions, entailments or neutral. Learning a NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features. For this task we use the Stanford NLI (sNLI) dataset as our training data in english BIBREF29 . To test the zero-shot learning capabilities we created a russian sNLI test set by random sampling 400 sNLI test samples and having a native russian speaker translate both premise and hypothesis to russian. The label was kept the same.",
"For this experiment we trained UG-WGAN on the English and Russian language following the procedure described in Section \"UG-WGAN\" . We kept the hyper-parameters equivalent to the Sentiment Analysis experiment. All of the NLI model tested were run over the fixed UG embeddings. We trained two different models from literature, Densely-Connected Recurrent and Co-Attentive Network by BIBREF30 and Multiway Attention Network by BIBREF31 . Please refer to this papers for further implementation details.",
"UG representations contain enough information to non-trivially generalize the NLI task to unseen languages. That being said, we do see a relatively large drop in performance moving across languages which hints that either our calculation of the Wasserstein distance may not be sufficiently accurate or the universal representations are biased toward specific languages or tasks.",
"One hypothesis might be that as we increase $\\lambda $ the cross lingual generalization gap (difference in test error on a task across languages) will vanish. To test this hypothesis we conducted the same experiment where UG-WGAN was trained with a $\\lambda $ ranging from 0 to 10. From each of the experiments we picked the model epoch which showed the best perplexity. The NLI specific model was the Densely-Connected Recurrent and Co-Attentive Network.",
"Increasing $\\lambda $ doesn't seem to have a significant impact on the generalization gap but has a large impact on test error. Our hypothesis is that a large $\\lambda $ doesn't provide the model with enough freedom to learn useful representations since the optimizations focus would largely be on minimizing the Wasserstein distance, while a small $\\lambda $ permits this freedom. One reason we might be seeing this generalization gap might be due to the way we satisfy the Lipschitz constraint. It's been shown that there are better constraints than clipping parameters to a compact space such as a gradient penalty BIBREF32 . This is a future direction that can be explored."
],
[
"Universal Grammar also comments on the learnability of grammar, stating that statistical information alone is not enough to learn grammar and some form of native language faculty must exist, sometimes titled the poverty of stimulus (POS) argument BIBREF33 , BIBREF34 . From a machine learning perspective, we're interested in extracting informative features and not necessarily a completely grammatical language model. That being said it is of interest to what extent language models capture grammar and furthermore the extent to which models trained toward the universal grammar objective learn grammar.",
"One way to measure universality is by studying perplexity of our multi-lingual language model as we increase the number of languages. To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French. We maintain the same procedure as described above. The hidden size of the language model was increased to 1024 with 16K BPE tokens being used. The first model was trained on English Russian, second was trained on English Russian Arabic and so on. For arabic we still trained from left to right even though naturally the language is read from right to left. We report the results in Figure 5 . As the number of languages increases the gap between a UG-WGAN without any distribution matching and one with diminishes. This implies that the efficiency and representative power of UG-WGAN grows as we increase the number of languages it has to model.",
"We see from Figure 2 that perplexity worsens proportional to $\\lambda $ . We explore the differences by sampling sentences from an unconstrained language model and $\\lambda =0.1$ language model trained towards English and Spanish in Table 3 . In general there is a very small difference between a language model trained with a Universal Grammar objective and one without. The Universal Grammar model tends to make more gender mistakes and mistakes due to Plural-Singular Form in Spanish. In English we saw virtually no fundamental differences between the language models. This seems to hint the existence of an universal set of representations for languages, as hypothesized by Universal Grammar. And although completely learning grammar from statistical signals might be improbable, we can still extract useful information."
],
[
"In this paper we introduced an unsupervised approach toward learning language agnostic universal representations by formalizing Universal Grammar as an optimization problem. We showed that we can use these representations to learn tasks in one language and automatically transfer them to others with no additional training. Furthermore we studied the importance of the Wasserstein constraint through the $\\lambda $ hyper-parameter. And lastly we explored the difference between a standard multi-lingual language model and UG-WGAN by studying the generated outputs of the respective language models as well as the perplexity gap growth with respect to the number of languages."
]
],
"section_name": [
"Introduction",
"Related Work",
"Universal Grammar as an Optimization Problem",
"UG-WGAN",
"Exploration",
"Experiments",
"Sentiment Analysis",
"NLI",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"2772affc98683a66d3dcce5f2de60e09b6766a71"
],
"answer": [
{
"evidence": [
"To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" . The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used and trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the english IMDB Large Movie Review dataset and tested it on the chinese ChnSentiCorp dataset and german SB-10K BIBREF24 , BIBREF25 . We binarize the label's for all the datasets.",
"For this experiment we trained UG-WGAN on the English and Russian language following the procedure described in Section \"UG-WGAN\" . We kept the hyper-parameters equivalent to the Sentiment Analysis experiment. All of the NLI model tested were run over the fixed UG embeddings. We trained two different models from literature, Densely-Connected Recurrent and Co-Attentive Network by BIBREF30 and Multiway Attention Network by BIBREF31 . Please refer to this papers for further implementation details.",
"One way to measure universality is by studying perplexity of our multi-lingual language model as we increase the number of languages. To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French. We maintain the same procedure as described above. The hidden size of the language model was increased to 1024 with 16K BPE tokens being used. The first model was trained on English Russian, second was trained on English Russian Arabic and so on. For arabic we still trained from left to right even though naturally the language is read from right to left. We report the results in Figure 5 . As the number of languages increases the gap between a UG-WGAN without any distribution matching and one with diminishes. This implies that the efficiency and representative power of UG-WGAN grows as we increase the number of languages it has to model."
],
"extractive_spans": [],
"free_form_answer": "The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French",
"highlighted_evidence": [
"To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" .",
"For this experiment we trained UG-WGAN on the English and Russian language following the procedure described in Section \"UG-WGAN\" . ",
"To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, French."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"78e7e55b4239b4bdf6caf2fdca75525d6a178563"
],
"answer": [
{
"evidence": [
"To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" . The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used and trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the english IMDB Large Movie Review dataset and tested it on the chinese ChnSentiCorp dataset and german SB-10K BIBREF24 , BIBREF25 . We binarize the label's for all the datasets.",
"A natural language inference task consists of two sentences; a premise and a hypothesis which are either contradictions, entailments or neutral. Learning a NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features. For this task we use the Stanford NLI (sNLI) dataset as our training data in english BIBREF29 . To test the zero-shot learning capabilities we created a russian sNLI test set by random sampling 400 sNLI test samples and having a native russian speaker translate both premise and hypothesis to russian. The label was kept the same."
],
"extractive_spans": [],
"free_form_answer": "They experimented with sentiment analysis and natural language inference task",
"highlighted_evidence": [
" Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the english IMDB Large Movie Review dataset and tested it on the chinese ChnSentiCorp dataset and german SB-10K BIBREF24 , BIBREF25 .",
"A natural language inference task consists of two sentences; a premise and a hypothesis which are either contradictions, entailments or neutral. Learning a NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are the languages they consider in this paper?",
"Did they experiment with tasks other than word problems in math?"
],
"question_id": [
"fa9df782d743ce0ce1a7a5de6a3de226a7e423df",
"6270d5247f788c4627be57de6cf30112560c863f"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"few shot",
"few shot"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Architecture of UG-WGAN. The amount of languages can be trivially increased by increasing the number of language agnostic segments kj and ej .",
"Figure 2: Ablation study of λ. Both Wasserstein and Perplexity estimates were done on a held out test set of documents.",
"Figure 3: T-SNE Visualization of u(·). Same colored dots represent the same language.",
"Table 1: Zero-shot capability of UG and OpenNMT representation from English training. For all other methods we trained on the available training data. Table shows error of sentiment model.",
"Table 2: Error in terms of accuracy for the following methods. For Unlexicalized features + Unigram + Bigram features we trained on 200 out of the 400 Russian samples and tested on the other 200 as a baseline.",
"Figure 4: Cross-Lingual Generalization gap and performance",
"Figure 5: Perplexity calculations on a held out test set for UG-WGAN trained on a varying number of languages.",
"Table 3: Example of samples from UG-WGAN with λ = 0.0 and λ = 0.1"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png",
"8-Table3-1.png"
]
} | [
"What are the languages they consider in this paper?",
"Did they experiment with tasks other than word problems in math?"
] | [
[
"1809.08510-NLI-1",
"1809.08510-Sentiment Analysis-0",
"1809.08510-Discussion-1"
],
[
"1809.08510-Sentiment Analysis-0",
"1809.08510-NLI-0"
]
] | [
"The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French",
"They experimented with sentiment analysis and natural language inference task"
] | 371 |
1804.08139 | Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks | Distributed representation plays an important role in deep learning based natural language processing. However, the representation of a sentence often varies in different tasks, which is usually learned from scratch and suffers from the limited amounts of training data. In this paper, we claim that a good sentence representation should be invariant and can benefit the various subsequent tasks. To achieve this purpose, we propose a new scheme of information sharing for multi-task learning. More specifically, all tasks share the same sentence representation and each task can select the task-specific information from the shared sentence representation with attention mechanism. The query vector of each task's attention could be either static parameters or generated dynamically. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our architecture. | {
"paragraphs": [
[
"The distributed representation plays an important role in deep learning based natural language processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 . On word level, many successful methods have been proposed to learn a good representation for single word, which is also called word embedding, such as skip-gram BIBREF3 , GloVe BIBREF4 , etc. There are also pre-trained word embeddings, which can easily used in downstream tasks. However, on sentence level, there is still no generic sentence representation which is suitable for various NLP tasks.",
"Currently, most of sentence encoding models are trained specifically for a certain task in a supervised way, which results to different representations for the same sentence in different tasks. Taking the following sentence as an example for domain classification task and sentiment classification task,",
"",
"general text classification models always learn two representations separately. For domain classification, the model can learn a better representation of “infantile cart” while for sentiment classification, the model is able to learn a better representation of “easy to use”.",
"However, to train a good task-specific sentence representation from scratch, we always need to prepare a large dataset which is always unavailable or costly. To alleviate this problem, one approach is pre-training the model on large unlabeled corpora by unsupervised learning tasks, such as language modeling BIBREF0 . This unsupervised pre-training may be helpful to improve the final performance, but the improvement is not guaranteed since it does not directly optimize the desired task.",
"Another approach is multi-task learning BIBREF5 , which is an effective approach to improve the performance of a single task with the help of other related tasks. However, most existing models on multi-task learning attempt to divide the representation of a sentence into private and shared spaces. The shared representation is used in all tasks, and the private one is different for each task. The two typical information sharing schemes are stacked shared-private scheme and parallel shared-private scheme (as shown in Figure SECREF2 and SECREF3 respectively). However, we cannot guarantee that a good sentence encoding model is learned by the shared layer.",
"To learn a better shareable sentence representation, we propose a new information-sharing scheme for multi-task learning in this paper. In our proposed scheme, the representation of every sentence is fully shared among all different tasks. To extract the task-specific feature, we utilize the attention mechanism and introduce a task-dependent query vector to select the task-specific information from the shared sentence representation. The query vector of each task can be regarded as learnable parameters (static) or be generated dynamically. If we take the former example, in our proposed model these two classification tasks share the same representation which includes both domain information and sentiment information. On top of this shared representation, a task-specific query vector will be used to focus “infantile cart” for domain classification and “easy to use” for sentiment classification.",
"The contributions of this papers can be summarized as follows."
],
[
"The primary role of sentence encoding models is to represent the variable-length sentence or paragraphs as fixed-length dense vector (distributed representation). Currently, the effective neural sentence encoding models include neural Bag-of-words (NBOW), recurrent neural networks (RNN) BIBREF2 , BIBREF6 , convolutional neural networks (CNN) BIBREF1 , BIBREF7 , BIBREF8 , and syntactic-based compositional model BIBREF9 , BIBREF10 , BIBREF11 .",
"Given a text sequence INLINEFORM0 , we first use a lookup layer to get the vector representation (word embedding) INLINEFORM1 of each word INLINEFORM2 . Then we can use CNN or RNN to calculate the hidden state INLINEFORM3 of each position INLINEFORM4 . The final representation of a sentence could be either the final hidden state of the RNN or the max (or average) pooling from all hidden states of RNN (or CNN).",
"We use bidirectional LSTM (BiLSTM) to gain some dependency between adjacent words. The update rule of each LSTM unit can be written as follows: DISPLAYFORM0 ",
" where INLINEFORM0 represents all the parameters of BiLSTM. The representation of the whole sequence is the average of the hidden states of all the positions, where INLINEFORM1 denotes the concatenation operation."
],
[
"Multi-task Learning BIBREF5 utilizes the correlation between related tasks to improve classification by learning tasks in parallel, which has been widely used in various natural language processing tasks, such as text classification BIBREF12 , semantic role labeling BIBREF13 , machine translation BIBREF14 , and so on.",
"To facilitate this, we give some explanation for notations used in this paper. Formally, we refer to INLINEFORM0 as a dataset with INLINEFORM1 samples for task INLINEFORM2 . Specifically, DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 denote a sentence and corresponding label for task INLINEFORM2 .",
"A common information sharing scheme is to divide the feature spaces into two parts: one is used to store task-specific features, the other is used to capture task-invariant features. As shown in Figure SECREF2 and SECREF3 , there are two schemes: stacked shared-private (SSP) scheme and parallel shared-private (PSP) scheme.",
"In stacked scheme, the output of the shared LSTM layer is fed into the private LSTM layer, whose output is the final task-specific sentence representation. In parallel scheme, the final task-specific sentence representation is the concatenation of outputs from the shared LSTM layer and the private LSTM layer.",
"For a sentence INLINEFORM0 and its label INLINEFORM1 in task INLINEFORM2 , its final representation is ultimately fed into the corresponding task-specific softmax layer for classification or other tasks. DISPLAYFORM0 ",
" where INLINEFORM0 is prediction probabilities; INLINEFORM1 is the final task-specific representation; INLINEFORM2 and INLINEFORM3 are task-specific weight matrix and bias vector respectively.",
"The total loss INLINEFORM0 can be computed as: DISPLAYFORM0 ",
" where INLINEFORM0 (usually set to 1) is the weights for each task INLINEFORM1 respectively; INLINEFORM2 is the cross-entropy of the predicted and true distributions."
],
[
"The key factor of multi-task learning is the information sharing scheme in latent representation space. Different from the traditional shared-private scheme, we introduce a new scheme for multi-task learning on NLP tasks, in which the sentence representation is shared among all the tasks, the task-specific information is selected by attention mechanism.",
"In a certain task, not all information of a sentence is useful for the task, therefore we just need to select the key information from the sentence. Attention mechanism BIBREF15 , BIBREF16 is an effective method to select related information from a set of candidates. The attention mechanism can effectively solve the capacity problem of sequence models, thereby is widely used in many NLP tasks, such as machine translation BIBREF17 , textual entailment BIBREF18 and summarization BIBREF19 ."
],
[
"We first introduce the static task-attentive sentence encoding model, in which the task query vector is a static learnable parameter. As shown in Figure FIGREF19 , our model consists of one shared BiLSTM layer and an attention layer. Formally, for a sentence in task INLINEFORM0 , we first use BiLSTM to calculate the shared representation INLINEFORM1 . Then we use attention mechanism to select the task-specific information from a generic task-independent sentence representation. Following BIBREF17 , we use the dot-product attention to compute the attention distribution. We introduce a task-specific query vector INLINEFORM2 to calculate the attention distribution INLINEFORM3 over all positions. DISPLAYFORM0 ",
" where the task-specific query vector INLINEFORM0 is a learned parameter. The final task-specific representation INLINEFORM1 is summarized by DISPLAYFORM0 ",
"At last, a task-specific fully connected layer followed by a softmax non-linear layer processes the task-specific context INLINEFORM0 and predicts the probability distribution over classes."
],
[
"Different from the static task-attentive sentence encoding model, the query vectors of the dynamic task-attentive sentence encoding model are generated dynamically. When each task belongs to a different domain, we can introduce an auxiliary domain classifier to predict the domain (or task) of the specific sentence. Thus, the domain information is also included in the shared sentence representation, which can be used to generate the task-specific query vector of attention.",
"The original tasks and the auxiliary task of domain classification (DC) are joint learned in our multi-task learning framework.",
"The query vector INLINEFORM0 of DC task is static and needs be learned in training phrase. The domain information is also selected with attention mechanism. DISPLAYFORM0 ",
" where INLINEFORM0 is attention distribution of auxiliary DC task, and INLINEFORM1 is the attentive information for DC task, which is fed into the final classifier to predict its domain INLINEFORM2 .",
"Since INLINEFORM0 contains the domain information, we can use it to generate a more flexible query vector DISPLAYFORM0 ",
" where INLINEFORM0 is a shared learnable weight matrix and INLINEFORM1 is a task-specific bias vector. When we set INLINEFORM2 , the dynamic query is equivalent to the static one."
],
[
"In this section, we investigate the empirical performances of our proposed architectures on three experiments."
],
[
"We first conduct a multi-task experiment on sentiment classification.",
"We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.",
"All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. The detailed statistics about all the datasets are listed in Table TABREF27 .",
"We compare our proposed two information sharing schemes, static attentive sentence encoding (SA-MTL) and dynamic attentive sentence encoding (DA-MTL), with the following multi-task learning frameworks.",
"FS-MTL: This model is a combination of a fully shared BiLSTM and a classifier.",
"SSP-MTL: This is the stacked shared-private model as shown in Figure SECREF2 whose output of the shared BiLSTM layer is fed into the private BiLSTM layer.",
"PSP-MTL: The is the parallel shared-private model as shown in Figure SECREF3 . The final sentence representation is the concatenation of both private and shared BiLSTM.",
"ASP-MTL: This model is proposed by BIBREF20 based on PSP-MTL with uni-directional LSTM. The model uses adversarial training to separate task-invariant and task-specific features from different tasks.",
"",
"We initialize word embeddings with the 200d GloVe vectors (840B token version, BIBREF4 ). The other parameters are initialized by randomly sampling from uniform distribution in [-0.1, 0.1]. The mini-batch size is set to 32. For each task, we take hyperparameters which achieve the best performance on the development set via a small grid search. We use ADAM optimizer BIBREF21 with the learning rate of INLINEFORM0 . The BiLSTM models have 200 dimensions in each direction, and dropout with probability of INLINEFORM1 . During the training step of multi-task models, we select different tasks randomly. After the training step, we fix the parameters of the shared BiLSTM and fine tune every task.",
"Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models.",
"We also present the convergence properties of our models on the development datasets compared to other multi-task models in Figure FIGREF36 . We can see that PSP-MTL converges much more slowly than the rest four models because each task-specific classifier should consider the output of shared layer which is quite unstable during the beginning of training phrase. Moreover, benefit from the attention mechanism which is useful in feature extraction, SA-TML and DA-MTL are converged much more quickly than the rest of models.",
"Since all the tasks share the same sentence encoding layer, the query vector INLINEFORM0 of each task determines which part of the sentence to attend. Thus, similar tasks should have the similar query vectors. Here we simply calculate the Frobenius norm of each pair of tasks' INLINEFORM1 as the similarity. Figure FIGREF38 shows the similarity matrix of different task's query vector INLINEFORM2 in static attentive model. A darker cell means the higher similarity of the two task's INLINEFORM3 . Since the cells in the diagnose of the matrix denotes the similarity of one task, we leave them blank because they are meaningless. It's easy to find that INLINEFORM4 of “DVD”, “Video” and “IMDB” have very high similarity. It makes sense because they are all reviews related to movie. However, another movie review “MR” has very low similarity to these three task. It's probably that the text in “MR” is very short that makes it different from these tasks. The similarity of INLINEFORM5 from “Books” and “Video” is also very high because these two datasets share a lot of similar sentiment expressions.",
"As shown in Figure FIGREF40 , we also show the attention distributions on a real example selected from the book review dataset. This piece of text involves two domains. The review is negative in the book domain while it is positive from the perspective of movie review. In our SA-MTL model, the “Books” review classifier from SA-MTL focus on the negative aspect of the book and evaluate the text as negative. In contrast, the “DVD” review classifier focuses on the positive part of the movie and produce the result as positive. In case of DA-MTL, the model first focuses on the two domain words “book” and “movie” and judge the text is a book review because “book” has a higher weight. Then, the model dynamically generates a query INLINEFORM0 and focuses on the part of the book review in this text, thereby finally predicting a negative sentiment."
],
[
"With attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks.",
"To test the transferability of our learned shared representation, we also design an experiment shown in Table TABREF46 . The multi-task learning results are derived by training the first 6 tasks in general multi-task learning. For transfer learning, we choose the last 10 tasks to train our model with multi-task learning, then the learned shared sentence encoding layer are kept frozen and transferred to train the first 6 tasks.",
"",
"As shown in Table TABREF46 , we can see that SA-MTL and DA-MTL achieves better transfer learning performances compared to SSP-MTL and PSP-MTL. The reason is that by using attention mechanism, richer information can be captured into the shared representation layer, thereby benefiting the other task."
],
[
"A good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification). The auxiliary task shares the sentence encoding layer with the primary tasks and connected to a private fully connected layer followed by a softmax non-linear layer to process every hidden state INLINEFORM0 and predicts the labels.",
"",
"We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks. There are 8774 sentences in training data, 500 sentences in development data and 1512 sentences in test data. The average sentence length is 24 and has a total vocabulary size as 17k.",
"",
"The experiment results are shown in Table TABREF51 . We use the same hyperparameters and training procedure as the former experiments. The result shows that by leveraging auxiliary tasks, the performances of SA-MTL and DA-MTL achieve more improvement than PSP-MTL and SSP-MTL.",
"For further analysis, Figure FIGREF53 shows the attention distribution produced by models trained with and without Chunking task on two pieces of texts. In the first piece of text, both of the models attend to the first “like” because it represents positive sentiment on the book. The model trained with Chunking task also labels the three “like” as 'B-VP' (beginning of verb phrase) correctly. However, in the second piece of text, the same work “like” denotes a preposition and has no sentiment meaning. The model trained without Chunking task fails to tell the difference with the former text and focuses on it and produces the result as positive. Meanwhile, the model trained with Chunking task successfully labels the “like” as 'B-PP' (beginning of prepositional phrase) and pays little attention to it and produces the right answer as negative. This example shows how the model trained with auxiliary task helps the primary tasks."
],
[
"Neural networks based multi-task learning has been proven effective in many NLP problems BIBREF13 , BIBREF23 , BIBREF12 , BIBREF20 , BIBREF24 In most of these models, there exists a task-dependent private layer separated from the shared layer. The private layers play more important role in these models. Different from them, our model encodes all information into a shared representation layer, and uses attention mechanism to select the task-specific information from the shared representation layer. Thus, our model can learn a better generic sentence representation, which also has a strong transferability.",
"Some recent work have also proposed sentence representation using attention mechanism. BIBREF25 uses a 2-D matrix, whose each row attending on a different part of the sentence, to represent the embedding. BIBREF26 introduces multi-head attention to jointly attend to information from different representation subspaces at different positions. BIBREF27 introduces human reading time as attention weights to improve sentence representation. Different from these work, we use attention vector to select the task-specific information from a shared sentence representation. Thus the learned sentence representation is much more generic and easy to transfer information to new tasks."
],
[
"In this paper, we propose a new information-sharing scheme for multi-task learning, which uses attention mechanism to select the task-specific information from a shared sentence encoding layer. We conduct extensive experiments on 16 different sentiment classification tasks, which demonstrates the benefits of our models. Moreover, the shared sentence encoding model can be transferred to other tasks, which can be further boosted by introducing auxiliary tasks."
]
],
"section_name": [
"Introduction",
"Neural Sentence Encoding Model",
"Shared-Private Scheme in Multi-task Learning",
"A New Information-Sharing Scheme for Multi-task Learning",
"Static Task-Attentive Sentence Encoding",
"Dynamic Task-Attentive Sentence Encoding",
"Experiment",
"Exp I: Sentiment Classification",
"Exp II: Transferability of Shared Sentence Representation ",
"Exp III: Introducing Sequence Labeling as Auxiliary Task",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"7ffaf78b75616ebcfde905144671f6df281b64eb"
],
"answer": [
{
"evidence": [
"Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models.",
"FLOAT SELECTED: Table 2: Performances on 16 tasks. The column of “Single Task” includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of “Multiple Tasks” shows several multi-task models. * is from [Liu et al., 2017] ."
],
"extractive_spans": [],
"free_form_answer": "Accuracy on each dataset and the average accuracy on all datasets.",
"highlighted_evidence": [
"Table TABREF34 shows the performances of the different methods.",
"FLOAT SELECTED: Table 2: Performances on 16 tasks. The column of “Single Task” includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of “Multiple Tasks” shows several multi-task models. * is from [Liu et al., 2017] ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"55ff602f71e781c24d15b207ca06922566ecbdc8"
],
"answer": [
{
"evidence": [
"We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.",
"We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks. There are 8774 sentences in training data, 500 sentences in development data and 1512 sentences in test data. The average sentence length is 24 and has a total vocabulary size as 17k."
],
"extractive_spans": [
"16 different datasets from several popular review corpora used in BIBREF20",
"CoNLL 2000 BIBREF22"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.",
"We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2787b84ea8ef482fcaaede304ea38633e6e3a59c"
],
"answer": [
{
"evidence": [
"Exp I: Sentiment Classification",
"We first conduct a multi-task experiment on sentiment classification.",
"We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.",
"All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. The detailed statistics about all the datasets are listed in Table TABREF27 .",
"Exp II: Transferability of Shared Sentence Representation ",
"With attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks.",
"Exp III: Introducing Sequence Labeling as Auxiliary Task",
"A good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification). The auxiliary task shares the sentence encoding layer with the primary tasks and connected to a private fully connected layer followed by a softmax non-linear layer to process every hidden state INLINEFORM0 and predicts the labels."
],
"extractive_spans": [
"Sentiment Classification",
"Transferability of Shared Sentence Representation",
"Introducing Sequence Labeling as Auxiliary Task"
],
"free_form_answer": "",
"highlighted_evidence": [
"Sentiment Classification\nWe first conduct a multi-task experiment on sentiment classification.\n\nWe use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets.\n\nAll the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively.",
"Transferability of Shared Sentence Representation\nWith attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks.",
"Introducing Sequence Labeling as Auxiliary Task\nA good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What evaluation metrics are used?",
"What dataset did they use?",
"What tasks did they experiment with?"
],
"question_id": [
"0fd678d24c86122b9ab27b73ef20216bbd9847d1",
"b556fd3a9e0cff0b33c63fa1aef3aed825f13e28",
"0db1ba66a7e75e91e93d78c31f877364c3724a65"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Three schemes of information sharing in multi-task leaning. (a) stacked shared-private scheme, (b) parallel shared-private scheme, (c) our proposed attentive sharing scheme.",
"Figure 2: Static Task-Attentive Sentence Encoding",
"Figure 3: Dynamic Task-Attentive Sentence Encoding",
"Figure 4: Convergence on the development datasets.",
"Table 1: Statistics of the 16 datasets. The columns 2-5 denote the number of samples in training, development and test sets. The last two columns represent the average length and vocabulary size of the corresponding dataset.",
"Table 2: Performances on 16 tasks. The column of “Single Task” includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of “Multiple Tasks” shows several multi-task models. * is from [Liu et al., 2017] .",
"Table 3: Results of first 6 tasks with multi-task learning and transfer learning",
"Figure 5: Similarity Matrix of Different Task’s query vector qk",
"Figure 6: Attention Distributions of four classifiers from two models on the same text",
"Figure 7: Attention distributions of two example texts from models trained with and without Chunking task",
"Table 4: Average precision of multi-task models with auxiliary tasks."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Figure5-1.png",
"6-Figure6-1.png",
"6-Figure7-1.png",
"6-Table4-1.png"
]
} | [
"What evaluation metrics are used?"
] | [
[
"1804.08139-Exp I: Sentiment Classification-10",
"1804.08139-5-Table2-1.png"
]
] | [
"Accuracy on each dataset and the average accuracy on all datasets."
] | 373 |
1808.08850 | WiSeBE: Window-based Sentence Boundary Evaluation | Sentence Boundary Detection (SBD) has been a major research topic since Automatic Speech Recognition transcripts have been used for further Natural Language Processing tasks like Part of Speech Tagging, Question Answering or Automatic Summarization. But what about evaluation? Are standard evaluation metrics like precision, recall, F-score or classification error enough, and, more importantly, is evaluating an automatic system against a unique reference enough to conclude how well an SBD system is performing given the final application of the transcript? In this paper we propose Window-based Sentence Boundary Evaluation (WiSeBE), a semi-supervised metric for evaluating Sentence Boundary Detection systems based on multi-reference (dis)agreement. We evaluate and compare the performance of different SBD systems over a set of YouTube transcripts using WiSeBE and standard metrics. This double evaluation gives an understanding of how WiSeBE is a more reliable metric for the SBD task. | {
"paragraphs": [
[
"The goal of Automatic Speech Recognition (ASR) is to transform spoken data into a written representation, thus enabling natural human-machine interaction BIBREF0 with further Natural Language Processing (NLP) tasks. Machine translation, question answering, semantic parsing, POS tagging, sentiment analysis and automatic text summarization; originally developed to work with formal written texts, can be applied over the transcripts made by ASR systems BIBREF1 , BIBREF2 , BIBREF3 . However, before applying any of these NLP tasks a segmentation process called Sentence Boundary Detection (SBD) should be performed over ASR transcripts to reach a minimal syntactic information in the text.",
"To measure the performance of a SBD system, the automatically segmented transcript is evaluated against a single reference normally done by a human. But given a transcript, does it exist a unique reference? Or, is it possible that the same transcript could be segmented in five different ways by five different people in the same conditions? If so, which one is correct; and more important, how to fairly evaluate the automatically segmented transcript? These questions are the foundations of Window-based Sentence Boundary Evaluation (WiSeBE), a new semi-supervised metric for evaluating SBD systems based on multi-reference (dis)agreement.",
"The rest of this article is organized as follows. In Section SECREF2 we set the frame of SBD and how it is normally evaluated. WiSeBE is formally described in Section SECREF3 , followed by a multi-reference evaluation in Section SECREF4 . Further analysis of WiSeBE and discussion over the method and alternative multi-reference evaluation is presented in Section SECREF5 . Finally, Section SECREF6 concludes the paper."
],
[
"Sentence Boundary Detection (SBD) has been a major research topic science ASR moved to more general domains as conversational speech BIBREF4 , BIBREF5 , BIBREF6 . Performance of ASR systems has improved over the years with the inclusion and combination of new Deep Neural Networks methods BIBREF7 , BIBREF8 , BIBREF0 . As a general rule, the output of ASR systems lacks of any syntactic information such as capitalization and sentence boundaries, showing the interst of ASR systems to obtain the correct sequence of words with almost no concern of the overall structure of the document BIBREF9 .",
"Similar to SBD is the Punctuation Marks Disambiguation (PMD) or Sentence Boundary Disambiguation. This task aims to segment a formal written text into well formed sentences based on the existent punctuation marks BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . In this context a sentence is defined (for English) by the Cambridge Dictionary as:",
"“a group of words, usually containing a verb, that expresses a thought in the form of a statement, question, instruction, or exclamation and starts with a capital letter when written”.",
"PMD carries certain complications, some given the ambiguity of punctuation marks within a sentence. A period can denote an acronym, an abbreviation, the end of the sentence or a combination of them as in the following example:",
"The U.S. president, Mr. Donald Trump, is meeting with the F.B.I. director Christopher A. Wray next Thursday at 8p.m.",
"However its difficulties, DPM profits of morphological and lexical information to achieve a correct sentence segmentation. By contrast, segmenting an ASR transcript should be done without any (or almost any) lexical information and a flurry definition of sentence.",
"The obvious division in spoken language may be considered speaker utterances. However, in a normal conversation or even in a monologue, the way ideas are organized differs largely from written text. This differences, added to disfluencies like revisions, repetitions, restarts, interruptions and hesitations make the definition of a sentence unclear thus complicating the segmentation task BIBREF14 . Table TABREF2 exemplifies some of the difficulties that are present when working with spoken language.",
"Stolcke & Shriberg BIBREF6 considered a set of linguistic structures as segments including the following list:",
"In BIBREF4 , Meteer & Iyer divided speaker utterances into segments, consisting each of a single independent clause. A segment was considered to begin either at the beginning of an utterance, or after the end of the preceding segment. Any dysfluency between the end of the previous segments and the begging of current one was considered part of the current segments.",
"Rott & Červa BIBREF15 aimed to summarize news delivered orally segmenting the transcripts into “something that is similar to sentences”. They used a syntatic analyzer to identify the phrases within the text.",
"A wide study focused in unbalanced data for the SBD task was performed by Liu et al. BIBREF16 . During this study they followed the segmentation scheme proposed by the Linguistic Data Consortium on the Simple Metadata Annotation Specification V5.0 guideline (SimpleMDE_V5.0) BIBREF14 , dividing the transcripts in Semantic Units.",
"A Semantic Unit (SU) is considered to be an atomic element of the transcript that manages to express a complete thought or idea on the part of the speaker BIBREF14 . Sometimes a SU corresponds to the equivalent of a sentence in written text, but other times (the most part of them) a SU corresponds to a phrase or a single word.",
"SUs seem to be an inclusive conception of a segment, they embrace different previous segment definitions and are flexible enough to deal with the majority of spoken language troubles. For these reasons we will adopt SUs as our segment definition."
],
[
"SBD research has been focused on two different aspects; features and methods. Regarding the features, some work focused on acoustic elements like pauses duration, fundamental frequencies, energy, rate of speech, volume change and speaker turn BIBREF17 , BIBREF18 , BIBREF19 .",
"The other kind of features used in SBD are textual or lexical features. They rely on the transcript content to extract features like bag-of-word, POS tags or word embeddings BIBREF20 , BIBREF18 , BIBREF21 , BIBREF22 , BIBREF15 , BIBREF6 , BIBREF23 . Mixture of acoustic and lexical features have also been explored BIBREF24 , BIBREF25 , BIBREF19 , BIBREF26 , which is advantageous when both audio signal and transcript are available.",
"With respect to the methods used for SBD, they mostly rely on statistical/neural machine translation BIBREF18 , BIBREF27 , language models BIBREF9 , BIBREF16 , BIBREF22 , BIBREF6 , conditional random fields BIBREF21 , BIBREF28 , BIBREF23 and deep neural networks BIBREF29 , BIBREF20 , BIBREF13 .",
"Despite their differences in features and/or methodology, almost all previous cited research share a common element; the evaluation methodology. Metrics as Precision, Recall, F1-score, Classification Error Rate and Slot Error Rate (SER) are used to evaluate the proposed system against one reference. As discussed in Section SECREF1 , further NLP tasks rely on the result of SBD, meaning that is crucial to have a good segmentation. But comparing the output of a system against a unique reference will provide a reliable score to decide if the system is good or bad?",
"Bohac et al. BIBREF24 compared the human ability to punctuate recognized spontaneous speech. They asked 10 people (correctors) to punctuate about 30 minutes of ASR transcripts in Czech. For an average of 3,962 words, the punctuation marks placed by correctors varied between 557 and 801; this means a difference of 244 segments for the same transcript. Over all correctors, the absolute consensus for period (.) was only 4.6% caused by the replacement of other punctuation marks as semicolons (;) and exclamation marks (!). These results are understandable if we consider the difficulties presented previously in this section.",
"To our knowledge, the amount of studies that have tried to target the sentence boundary evaluation with a multi-reference approach is very small. In BIBREF24 , Bohac et al. evaluated the overall punctuation accuracy for Czech in a straightforward multi-reference framework. They considered a period (.) valid if at least five of their 10 correctors agreed on its position.",
"Kolář & Lamel BIBREF25 considered two independent references to evaluate their system and proposed two approaches. The fist one was to calculate the SER for each of one the two available references and then compute their mean. They found this approach to be very strict because for those boundaries where no agreement between references existed, the system was going to be partially wrong even the fact that it has correctly predicted the boundary. Their second approach tried to moderate the number of unjust penalizations. For this case, a classification was considered incorrect only if it didn't match either of the two references.",
"These two examples exemplify the real need and some straightforward solutions for multi-reference evaluation metrics. However, we think that it is possible to consider in a more inclusive approach the similarities and differences that multiple references could provide into a sentence boundary evaluation protocol."
],
[
"Window-Based Sentence Boundary Evaluation (WiSeBE) is a semi-automatic multi-reference sentence boundary evaluation protocol which considers the performance of a candidate segmentation over a set of segmentation references and the agreement between those references.",
"Let INLINEFORM0 be the set of all available references given a transcript INLINEFORM1 , where INLINEFORM2 is the INLINEFORM3 word in the transcript; a reference INLINEFORM4 is defined as a binary vector in terms of the existent SU boundaries in INLINEFORM5 . DISPLAYFORM0 ",
"where INLINEFORM0 ",
"Given a transcript INLINEFORM0 , the candidate segmentation INLINEFORM1 is defined similar to INLINEFORM2 . DISPLAYFORM0 ",
"where INLINEFORM0 "
],
[
"A General Reference ( INLINEFORM0 ) is then constructed to calculate the agreement ratio between all references in. It is defined by the boundary frequencies of each reference INLINEFORM1 . DISPLAYFORM0 ",
"where DISPLAYFORM0 ",
"The Agreement Ratio ( INLINEFORM0 ) is needed to get a numerical value of the distribution of SU boundaries over INLINEFORM1 . A value of INLINEFORM2 close to 0 means a low agreement between references in INLINEFORM3 , while INLINEFORM4 means a perfect agreement ( INLINEFORM5 ) in INLINEFORM6 . DISPLAYFORM0 ",
"In the equation above, INLINEFORM0 corresponds to the ponderated common boundaries of INLINEFORM1 and INLINEFORM2 to its hypothetical maximum agreement. DISPLAYFORM0 DISPLAYFORM1 "
],
[
"In Section SECREF2 we discussed about how disfluencies complicate SU segmentation. In a multi-reference environment this causes disagreement between references around a same SU boundary. The way WiSeBE handle disagreements produced by disfluencies is with a Window-boundaries Reference ( INLINEFORM0 ) defined as: DISPLAYFORM0 ",
"where each window INLINEFORM0 considers one or more boundaries INLINEFORM1 from INLINEFORM2 with a window separation limit equal to INLINEFORM3 . DISPLAYFORM0 "
],
[
"WiSeBE is a normalized score dependent of 1) the performance of INLINEFORM0 over INLINEFORM1 and 2) the agreement between all references in INLINEFORM2 . It is defined as: DISPLAYFORM0 ",
"where INLINEFORM0 corresponds to the harmonic mean of precision and recall of INLINEFORM1 with respect to INLINEFORM2 (equation EQREF23 ), while INLINEFORM3 is the agreement ratio defined in ( EQREF15 ). INLINEFORM4 can be interpreted as a scaling factor; a low value will penalize the overall WiSeBE score given the low agreement between references. By contrast, for a high agreement in INLINEFORM5 ( INLINEFORM6 ), INLINEFORM7 . DISPLAYFORM0 DISPLAYFORM1 ",
"Equations EQREF24 and EQREF25 describe precision and recall of INLINEFORM0 with respect to INLINEFORM1 . Precision is the number of boundaries INLINEFORM2 inside any window INLINEFORM3 from INLINEFORM4 divided by the total number of boundaries INLINEFORM5 in INLINEFORM6 . Recall corresponds to the number of windows INLINEFORM7 with at least one boundary INLINEFORM8 divided by the number of windows INLINEFORM9 in INLINEFORM10 ."
],
[
"To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference enviroment. The first system (S1) employs a Convolutional Neural Network to determine if the middle word of a sliding window corresponds to a SU boundary or not BIBREF30 . The second approach (S2) by contrast, introduces a bidirectional Recurrent Neural Network model with attention mechanism for boundary detection BIBREF31 .",
"In a first glance we performed the evaluation of the systems against each one of the references independently. Then, we implemented a multi-reference evaluation with INLINEFORM0 ."
],
[
"We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics with a length variation between 2 and 10 minutes. To encourage the diversity of content format we included newscasts, interviews, reports and round tables.",
"During the transcription phase we opted for a manual transcription process because we observed that using transcripts from an ASR system will difficult in a large degree the manual segmentation process. The number of words per transcript oscilate between 271 and 1,602 with a total number of 8,080.",
"We gave clear instructions to three evaluators ( INLINEFORM0 ) of how segmentation was needed to be perform, including the SU concept and how punctuation marks were going to be taken into account. Periods (.), question marks (?), exclamation marks (!) and semicolons (;) were considered SU delimiters (boundaries) while colons (:) and commas (,) were considered as internal SU marks. The number of segments per transcript and reference can be seen in Table TABREF27 . An interesting remark is that INLINEFORM1 assigns about INLINEFORM2 less boundaries than the mean of the other two references."
],
[
"We ran both systems (S1 & S2) over the manually transcribed videos obtaining the number of boundaries shown in Table TABREF29 . In general, it can be seen that S1 predicts INLINEFORM0 more segments than S2. This difference can affect the performance of S1, increasing its probabilities of false positives.",
"Table TABREF30 condenses the performance of both systems evaluated against each one of the references independently. If we focus on F1 scores, performance of both systems varies depending of the reference. For INLINEFORM0 , S1 was better in 5 occasions with respect of S2; S1 was better in 2 occasions only for INLINEFORM1 ; S1 overperformed S2 in 3 occasions concerning INLINEFORM2 and in 4 occasions for INLINEFORM3 (bold).",
"Also from Table TABREF30 we can observe that INLINEFORM0 has a bigger similarity to S1 in 5 occasions compared to other two references, while INLINEFORM1 is more similar to S2 in 7 transcripts (underline).",
"After computing the mean F1 scores over the transcripts, it can be concluded that in average S2 had a better performance segmenting the dataset compared to S1, obtaining a F1 score equal to 0.510. But... What about the complexity of the dataset? Regardless all references have been considered, nor agreement or disagreement between them has been taken into account.",
"",
"All values related to the INLINEFORM0 score are displayed in Table TABREF31 . The Agreement Ratio ( INLINEFORM1 ) between references oscillates between 0.525 for INLINEFORM2 and 0.767 for INLINEFORM3 . The lower the INLINEFORM4 , the bigger the penalization INLINEFORM5 will give to the final score. A good example is S2 for transcript INLINEFORM6 where INLINEFORM7 reaches a value of 0.800, but after considering INLINEFORM8 the INLINEFORM9 score falls to 0.462.",
"It is feasible to think that if all references are taken into account at the same time during evaluation ( INLINEFORM0 ), the score will be bigger compared to an average of independent evaluations ( INLINEFORM1 ); however this is not always true. That is the case of S1 in INLINEFORM2 , which present a slight decrease for INLINEFORM3 compared to INLINEFORM4 .",
"An important remark is the behavior of S1 and S2 concerning INLINEFORM0 . If evaluated without considering any (dis)agreement between references ( INLINEFORM1 ), S2 overperforms S1; this is inverted once the systems are evaluated with INLINEFORM2 ."
],
[
"In Section SECREF3 we described the INLINEFORM0 score and how it relies on the INLINEFORM1 value to scale the performance of INLINEFORM2 over INLINEFORM3 . INLINEFORM4 can intuitively be consider an agreement value over all elements of INLINEFORM5 . To test this hypothesis, we computed the Pearson correlation coefficient ( INLINEFORM6 ) BIBREF32 between INLINEFORM7 and the Fleiss' Kappa BIBREF33 of each video in the dataset ( INLINEFORM8 ).",
"A linear correlation between INLINEFORM0 and INLINEFORM1 can be observed in Table TABREF33 . This is confirmed by a INLINEFORM2 value equal to INLINEFORM3 , which means a very strong positive linear correlation between them."
],
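This check could be replicated with off-the-shelf implementations as sketched below; the per-video values are placeholders, not the paper's numbers.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.inter_rater import fleiss_kappa, aggregate_raters

# Placeholder data: for each video, a (candidate_position x rater) matrix of 0/1
# boundary decisions, plus the video's agreement ratio from the earlier sketch.
rng = np.random.default_rng(0)
per_video_ratings = [rng.integers(0, 2, size=(50, 3)) for _ in range(4)]
agreement_ratios = [0.53, 0.61, 0.70, 0.77]

kappas = []
for ratings in per_video_ratings:
    table, _ = aggregate_raters(ratings)   # rater counts per category per position
    kappas.append(fleiss_kappa(table))

r, p = pearsonr(agreement_ratios, kappas)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```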
[
"Results form Table TABREF31 may give an idea that INLINEFORM0 is just an scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider the most profitable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 to provide a more inclusive reference against to whom be evaluated and then, the computation of INLINEFORM6 , which scales the result depending of the agreement between references."
],
[
"In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol based on the necessity of having a more reliable way for evaluating the SBD task. We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. According to your point of view, this inclusivity is very important given the difficulties that are present when working with spoken language and the possible disagreements that a task like SBD could provoke.",
" INLINEFORM0 shows to be correlated with standard SBD metrics, however we want to measure its correlation with extrinsic evaluations techniques like automatic summarization and machine translation."
],
[
"We would like to acknowledge the support of CHIST-ERA for funding this work through the Access Multilingual Information opinionS (AMIS), (France - Europe) project.",
"We also like to acknowledge the support given by the Prof. Hanifa Boucheneb from VERIFORM Laboratory (École Polytechnique de Montréal)."
]
],
"section_name": [
"Introduction",
"Sentence Boundary Detection",
"Sentence Boundary Evaluation",
"Window-Based Sentence Boundary Evaluation",
"General Reference and Agreement Ratio",
"Window-Boundaries Reference",
"WiSeBEWiSeBE",
"Evaluating with WiSeBEWiSeBE",
"Dataset",
"Evaluation",
"R G AR R_{G_{AR}} and Fleiss' Kappa correlation",
"F1 mean F1_{mean} vs. WiSeBEWiSeBE",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"e10bf13313ef392947f468f04f9da956e8e27efc"
],
"answer": [
{
"evidence": [
"We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics with a length variation between 2 and 10 minutes. To encourage the diversity of content format we included newscasts, interviews, reports and round tables."
],
"extractive_spans": [],
"free_form_answer": "youtube video transcripts on news covering different topics like technology, human rights, terrorism and politics",
"highlighted_evidence": [
"We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics with a length variation between 2 and 10 minutes. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ce72ff204530cabef93db25d5da51b9b6f4bb57d"
],
"answer": [
{
"evidence": [
"To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference enviroment. The first system (S1) employs a Convolutional Neural Network to determine if the middle word of a sliding window corresponds to a SU boundary or not BIBREF30 . The second approach (S2) by contrast, introduces a bidirectional Recurrent Neural Network model with attention mechanism for boundary detection BIBREF31 ."
],
"extractive_spans": [
"Convolutional Neural Network ",
"bidirectional Recurrent Neural Network model with attention mechanism"
],
"free_form_answer": "",
"highlighted_evidence": [
"To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference enviroment. The first system (S1) employs a Convolutional Neural Network to determine if the middle word of a sliding window corresponds to a SU boundary or not BIBREF30 . The second approach (S2) by contrast, introduces a bidirectional Recurrent Neural Network model with attention mechanism for boundary detection BIBREF31 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"27f031ef451ecee8919363d3492b08d83c4cb02f"
],
"answer": [
{
"evidence": [
"Results form Table TABREF31 may give an idea that INLINEFORM0 is just an scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider the most profitable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 to provide a more inclusive reference against to whom be evaluated and then, the computation of INLINEFORM6 , which scales the result depending of the agreement between references.",
"In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol based on the necessity of having a more reliable way for evaluating the SBD task. We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. According to your point of view, this inclusivity is very important given the difficulties that are present when working with spoken language and the possible disagreements that a task like SBD could provoke."
],
"extractive_spans": [],
"free_form_answer": "It takes into account the agreement between different systems",
"highlighted_evidence": [
"Results form Table TABREF31 may give an idea that INLINEFORM0 is just an scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider the most profitable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 to provide a more inclusive reference against to whom be evaluated and then, the computation of INLINEFORM6 , which scales the result depending of the agreement between references.",
"We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What kind of Youtube video transcripts did they use?",
"Which SBD systems did they compare?",
"What makes it a more reliable metric?"
],
"question_id": [
"99c50d51a428db09edaca0d07f4dab0503af1b94",
"d1747b1b56fddb05bb1225e98fd3c4c043d74592",
"5a29b1f9181f5809e2b0f97b4d0e00aea8996892"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [],
"file": []
} | [
"What kind of Youtube video transcripts did they use?",
"What makes it a more reliable metric?"
] | [
[
"1808.08850-Dataset-0"
],
[
"1808.08850-F1 mean F1_{mean} vs. WiSeBEWiSeBE-0",
"1808.08850-Conclusions-0"
]
] | [
"youtube video transcripts on news covering different topics like technology, human rights, terrorism and politics",
"It takes into account the agreement between different systems"
] | 375 |
1909.02560 | Adversarial Examples with Difficult Common Words for Paraphrase Identification | Despite the success of deep models for paraphrase identification on benchmark datasets, these models are still vulnerable to adversarial examples. In this paper, we propose a novel algorithm to generate a new type of adversarial examples to study the robustness of deep paraphrase identification models. We first sample an original sentence pair from the corpus and then adversarially replace some word pairs with difficult common words. We take multiple steps and use beam search to find a modification solution that makes the target model fail, and thereby obtain an adversarial example. The word replacement is also constrained by heuristic rules and a language model, to preserve the label and grammaticality of the example during modification. Experiments show that our algorithm can generate adversarial examples on which the performance of the target model drops dramatically. Meanwhile, human annotators are much less affected, and the generated sentences retain a good grammaticality. We also show that adversarial training with generated adversarial examples can improve model robustness. | {
"paragraphs": [
[
"Paraphrase identification is to determine whether a pair of sentences are paraphrases of each other BIBREF0. It is important for applications such as duplicate post matching on social media BIBREF1, plagiarism detection BIBREF2, and automatic evaluation for machine translation BIBREF3 or text summarization BIBREF4.",
"Paraphrase identification can be viewed as a sentence matching problem. Many deep models have recently been proposed and their performance has been greatly advanced on benchmark datasets BIBREF5, BIBREF6, BIBREF7. However, previous research shows that deep models are vulnerable to adversarial examples BIBREF8, BIBREF9 which are particularly constructed to make models fail. Adversarial examples are of high value for revealing the weakness and robustness issues of models, and can thereby be utilized to improve the model performance for challenging cases, robustness, and also security.",
"In this paper, we propose a novel algorithm to generate a new type of adversarial examples for paraphrase identification. To generate an adversarial example that consists of a sentence pair, we first sample an original sentence pair from the dataset, and then adversarially replace some word pairs with difficult common words respectively. Here each pair of words consists of two words from the two sentences respectively. And difficult common words are words that we adversarially select to appear in both sentences such that the example becomes harder for the target model. The target model is likely to be distracted by difficult common words and fail to judge the similarity or difference in the context, thereby making a wrong prediction.",
"Our adversarial examples are motivated by two observations. Firstly, for a sentence pair with a label matched, when some common word pairs are replaced with difficult common words respectively, models can be fooled to predict an incorrect label unmatched. As the first example in Figure FIGREF1 shows, we can replace two pairs of common words, “purpose” and “life”, with another common words “measure” and “value” respectively. The modified sentence pair remains matched but fools the target model. It is mainly due to the bias between different words and some words are more difficult for the model. When such words appear in the example, the model fails to combine them with the unmodified context and judge the overall similarity of the sentence pair. Secondly, for an unmatched sentence pair, when some word pairs, not necessarily common words, are replaced with difficult common words, models can be fooled to predict an incorrect label matched. As the second example in Figure FIGREF1 shows, we can replace words “Gmail” and “school” with a common word “credit”, and replace words “account” and “management” with ”score”. The modified sentences remain unmatched, but the target model can be fooled to predict matched for being distracted by the common words while ignoring the difference in the unmodified context.",
"Following these observations, we focus on robustness issues regarding capturing semantic similarity or difference in the unmodified part when distracted by difficult common words in the modified part. We try to modify an original example into an adversarial one with multiple steps. In each step, for a matched example, we replace some pair of common words together, with another word adversarially selected from the vocabulary; and for an unmatched example, we replace some word pair, not necessarily a common word pair, with a common word. In this way, we replace a pair of words together from two sentences respectively with an adversarially selected word in each step. To preserve the original label and grammaticality, we impose a few heuristic constraints on replaceable positions, and apply a language model to generate substitution words that are compatible with the context. We aim to adversarially find a word replacement solution that maximizes the target model loss and makes the model fail, using beam search.",
"We generate valid adversarial examples that are substantially different from those in previous work for paraphrase identification. Our adversarial examples are not limited to be semantically equivalent to original sentences and the unmodified parts of the two sentences are of low lexical similarity. To the best of our knowledge, none of previous work is able to generate such kind of adversarial examples. We further discuss our difference with previous work in Section 2.2.",
"In summary, we mainly make the following contributions:",
"We propose an algorithm to generate new adversarial examples for paraphrase identification. Our adversarial examples focus on robustness issues that are substantially different from those in previous work.",
"We reveal a new type of robustness issues in deep paraphrase identification models regarding difficult common words. Experiments show that the target models have a severe performance drop on the adversarial examples, while human annotators are much less affected and most modified sentences retain a good grammaticality.",
"Using our adversarial examples in adversarial training can mitigate the robustness issues, and these examples can foster future research."
],
[
"Paraphrase identification can be viewed as a problem of sentence matching. Recently, many deep models for sentence matching have been proposed and achieved great advancements on benchmark datasets. Among those, some approaches encode each sentence independently and apply a classifier on the embeddings of two sentences BIBREF10, BIBREF11, BIBREF12. In addition, some models make strong interactions between two sentences by jointly encoding and matching sentences BIBREF5, BIBREF13, BIBREF14 or hierarchically extracting matching features from the interaction space of the sentence pair BIBREF15, BIBREF16, BIBREF6. Notably, BERT pre-trained on large-scale corpora achieved even better results BIBREF7. In this paper, we study the robustness of recent typical deep models for paraphrase identification and generate new adversarial examples for revealing their robustness issues and improving their robustness."
],
[
"Many methods have been proposed to find different types of adversarial examples for NLP tasks. We focus on those that can be applied to paraphrase identification. Some of them generate adversarial examples by adding semantic-preserving perturbations to the input sentences. BIBREF17 added perturbations to word embeddings. BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 employed several character-level or word-level manipulations. BIBREF23 used syntactically controlled paraphrasing, and BIBREF24 paraphrased sentences with extracted rules. However, for some tasks including paraphrase identification, adversarial examples can be semantically different from original sentences, to study other robustness issues tailored to the corresponding tasks.",
"For sentence matching and paraphrase identification, other types of adversarial examples can be obtained by considering the relation and the correspondence between two sentences. BIBREF25 considered logical rules of sentence relations but can only generate unlabelled adversarial examples. BIBREF26 and BIBREF27 generated a sentence pair by modifying a single original sentence. They combined both original and modified sentences to form a pair. They modified the original sentence using back translation, word swapping, or single word replacement with lexical knowledge. Among them, back translation still aimed to produce semantically equivalent sentences; the others generated pairs of sentences with large Bag-of-Words (BOW) similarities, and the unmodified parts of the two sentences are exactly the same, so these same unmodified parts required little matching by target models. By contrast, we generate new adversarial examples with targeted labels by modifying a pair of original sentences together, using difficult common words. The modified sentences can be semantically different from original ones but still valid. The generated sentence pairs have much lower BOW similarities, and the unmodified parts are lexically diverse to reveal robustness issues regarding matching these parts when distracted by difficult common words in the modified parts. Thereby we study a new kind of robustness issues in paraphrase identification."
],
[
"For a certain type of adversarial examples, adversarial attacks or adversarial example generation aim to find examples that are within the defined type and make existing models fail. Some work has no access to the target model until an adversarial dataset is generated BIBREF28, BIBREF26, BIBREF23, BIBREF24, BIBREF29, BIBREF27. However, in many cases including ours, finding successful adversarial examples, i.e. examples on which the target model fails, is challenging, and employing an attack algorithm with access to the target model during generation is often necessary to ensure a high success rate.",
"Some prior work used gradient-based methods BIBREF30, BIBREF19, BIBREF31, requiring the model gradients to be accessible in addition to the output, and thus are inapplicable in black-box settings BIBREF21 where only model outputs are accessible. Though, the beam search in BIBREF19 can be adapted to black-box settings.",
"Gradient-free methods for NLP generally construct adversarial examples by querying the target model for output scores and making generation decisions to maximize the model loss. BIBREF25 searched in the solution space. One approach in BIBREF28 greedily made word replacements and queried the target model in several steps. BIBREF21 employed a genetic algorithm. BIBREF32 proposed a two-stage greedy algorithm and a method with gumbel softmax to improve the efficiency. In this work, we also focus on a black-box setting, which is more challenging than white-box settings. We use a two-stage beam search to find adversarial examples in multiple steps. We clarify that the major focus of this work is on studying new robustness issues and a new type of adversarial examples, instead of attack algorithms for an existing certain type of adversarial examples. Therefore, the choice of the attack algorithm is minor for this work as long as the success rates are sufficiently high."
],
[
"Paraphrase identification can be formulated as follows: given two sentences $P=p_1p_2\\cdots p_n$ and $Q=q_1q_2\\cdots q_m$, the goal is to predict whether $P$ and $Q$ are paraphrases of each other, by estimating a probability distribution",
"where $y\\in \\mathcal {Y} = \\lbrace matched, unmatched \\rbrace $. For each label $y$, the model outputs a score $[Z (P, Q)]_{y}$ which is the predicted probability of this label.",
"We aim to generate an adversarial example by adversarially modifying an original sentence pair $(P, Q)$ while preserving the label and grammaticality. The goal is to make the target model fail on the adversarially modified example $(\\hat{P}, \\hat{Q})$:",
"where $y$ indicates the gold label and $\\overline{y}$ is the wrong label opposite to the gold one."
],
[
"Figure FIGREF12 illustrates the work flow of our algorithm. We generate an adversarial example by firstly sampling an original example from the corpus and then constructing adversarial modifications. We use beam search and take multiple steps to modify the example, until the target model fails or the step number limit is reached. In each step, we modify the sentences by replacing a word pair with a difficult common word. There are two stages in deciding the word replacements. We first determine the best replaceable position pairs in the sentence pair, and next determine the best substitution words for the corresponding positions. We evaluate different options according to the target model loss they raise, and we retain $B$ best options after each stage of each step during beam search. Finally, the adversarially modified example is returned."
],
[
"To sample an original example from the dataset for subsequent adversarial modifications, we consider two different cases regarding whether the label is unmatched or matched. For the unmatched case, we sample two different sentence pairs $(P_1, Q_1)$ and $(P_2, Q_2)$ from the original data, and then form an unmatched example $(P_1, Q_2, unmatched)$ with sentences from two sentence pairs respectively. We also limit the length difference $||P_1|-|Q_2||$ and resample until the limit is satisfied, since sentence pairs with large length difference inherently tend to be unmatched and are too easy for models. By sampling two sentences from different examples, the two sentences tend to have less in common originally, which can help better preserve the label during adversarial modifications, while this also makes it more challenging for our algorithm to make the target model fail. On the other hand, matched examples cannot be sampled in this way, and thus for the matched case, we simply sample an example with a matched label from the dataset, namely, $(P, Q, matched)$."
],
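A sketch of the unmatched-pair sampling described above, with the length-difference limit enforced by resampling; the limit of 3 tokens mirrors the implementation details reported later, and the tiny corpus is illustrative only.

```python
import random

def sample_unmatched(corpus, max_len_diff=3, rng=random.Random(0)):
    """Pair sentences from two different examples to build an unmatched instance,
    resampling until the length difference is within the limit."""
    while True:
        (p1, _q1), (_p2, q2) = rng.sample(corpus, 2)
        if abs(len(p1) - len(q2)) <= max_len_diff:
            return p1, q2, "unmatched"

corpus = [
    ("how do i reset my gmail password".split(), "how can i recover a gmail account".split()),
    ("what is the best school management software".split(), "which software manages schools best".split()),
]
print(sample_unmatched(corpus))
```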
[
"During adversarial modifications, we replace a word pair at each step. We set heuristic rules on replaceable position pairs to preserve the label and grammaticality. First of all, we require the words on the replaceable positions to be one of nouns, verbs, or adjectives, and not stopwords meanwhile. We also require a pair of replaceable words to have similar Part-of-Speech (POS) tags, i.e. the two words are both nouns, both verbs, or both adjectives. For a matched example, we further require the two words on each replaceable position pair to be exactly the same.",
"Figure FIGREF15 shows two examples of determining replaceable positions. For the first example (matched), only common words “purpose” and “life” can be replaced. And since they are replaced simultaneously with another common words, the modified sentences are likely to talk about another same thing, e.g. changing from “purpose of life” to “measure of value”, and thereby the new sentences tend to remain matched. As for the second example (unmatched), each noun in the first sentence, “Gmail” and “account”, can form replaceable word pairs with each noun in the second sentence, “school”, “management” and “software”. The irreplaceable part determines that the modified sentences are “How can I get $\\cdots $ back ? ” and “What is the best $\\cdots $ ?” respectively. Sentences based on these two templates are likely to discuss about different things or different aspects, even when filled with common words, and thus they are likely to remain unmatched. In this way, the labels can be preserved in most cases."
],
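A sketch of the heuristic position filter using NLTK POS tags and a stopword list; bucketing tags into noun/verb/adjective is one way to realize the "similar POS" requirement and is an assumption about the exact rule.

```python
import nltk
from nltk.corpus import stopwords

for pkg in ("averaged_perceptron_tagger", "averaged_perceptron_tagger_eng", "stopwords"):
    nltk.download(pkg, quiet=True)   # tagger resource name differs across NLTK versions

STOP = set(stopwords.words("english"))
BUCKETS = {"N": "noun", "V": "verb", "J": "adj"}

def content_positions(tokens):
    """Indices of non-stopword nouns/verbs/adjectives, with a coarse POS bucket."""
    tags = nltk.pos_tag(tokens)
    return [(i, BUCKETS[t[0]]) for i, (w, t) in enumerate(tags)
            if t[0] in BUCKETS and w.lower() not in STOP]

def replaceable_pairs(p, q, matched):
    pairs = []
    for i, pos_p in content_positions(p):
        for j, pos_q in content_positions(q):
            if pos_p != pos_q:
                continue
            if matched and p[i].lower() != q[j].lower():
                continue  # matched examples: only common words are replaceable
            pairs.append((i, j))
    return pairs

p = "What is the purpose of life ?".split()
q = "What is the purpose of existence ?".split()
print(replaceable_pairs(p, q, matched=True))  # [(3, 3)] for the shared word "purpose"
```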
[
"For a pair of replaceable positions, we generate candidate substitution words that can replace the current words on the two positions. To preserve the grammaticality and keep the modified sentences like human language, substitution words should be compatible with the context. Therefore, we apply a BERT language model BIBREF7 to generate candidate substitution words. Specifically, when some words in a text are masked, the BERT masked language model can predict the masked words based on the context. For a sentence $x_1x_2\\cdots x_l$ where the $k$-th token is masked, the BERT masked language model gives the following probability distribution:",
"Thereby, to replace word $p_i$ and $q_j$ from the two sentences respectively, we mask $p_i$ and $q_j$ and present each sentence to the BERT masked language model. We aim to replace $p_i$ and $q_j$ with a common word $w$, which can be regarded as the masked word to be predicted. From the language model output, we obtain a joint probability distribution as follows:",
"We rank all the words within the vocabulary of the target model and choose top $K$ words with the largest probabilities, as the candidate substitution words for the corresponding positions."
],
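A simplified sketch of this step with Hugging Face's pretrained BERT masked language model: mask one position in each sentence, multiply the two predicted distributions to obtain a joint score, and keep the top-K candidates. It ignores subword issues and the intersection with the target model's vocabulary.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def masked_distribution(tokens, position):
    """Probability distribution over the vocabulary for one masked position."""
    masked = list(tokens)
    masked[position] = tokenizer.mask_token
    enc = tokenizer(" ".join(masked), return_tensors="pt")
    mask_idx = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_idx]
    return torch.softmax(logits, dim=-1)

def common_candidates(p_tokens, i, q_tokens, j, k=25):
    """Top-k words scored by the product of the two masked-position distributions."""
    joint = masked_distribution(p_tokens, i) * masked_distribution(q_tokens, j)
    top = torch.topk(joint, k)
    return [tokenizer.convert_ids_to_tokens(idx.item()) for idx in top.indices]

p = "what is the purpose of life ?".split()
q = "what is the purpose of our life ?".split()
print(common_candidates(p, 3, q, 3))
```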
[
"Once the replaceable positions and candidate substitution words can be determined, we use beam search with beam size $B$ to find optimal adversarial modifications in multiple steps. At step $t$, we perform a modification in two stages to determine replaceable positions and the corresponding substitution words respectively, based on the two-stage greedy framework by BIBREF32.",
"To determine the best replaceable positions, we enumerate all the possible position pairs, and obtain a set of candidate intermediate examples, $C_{pos}^{(t)}$, by replacing words on each position pair with a special token [PAD] respectively. We then query the target model with the examples in $C_{pos}^{(t)}$ to obtain the model output. We take top $B$ examples that maximize the output score of the opposite label $\\overline{y}$ (we define this operation as $\\mathop {\\arg {\\rm top}B}$), obtaining a set of intermediate examples $\\lbrace (\\hat{P}_{pos}^{(t,k)}, \\hat{Q}_{pos}^{(t,k)}) \\rbrace _{k=1}^{B}$, as follows:",
"We then determine difficult common words to replace the [PAD] placeholders. For each example in $\\lbrace (\\hat{P}_{pos}^{(t, k)}, \\hat{Q}_{pos}^{(t, k)}) \\rbrace _{k=1}^B$, we enumerate all the words in the candidate substitution word set of the corresponding positions with [PAD]. We obtain a set of candidate examples, $C^{(t)}$, by replacing the [PAD] placeholders with each candidate substitution word respectively. Similarly to the first stage, we take top $B$ examples that maximize the output score of the opposite label $\\overline{y}$. This yields a set of modified example after step $t$, $\\lbrace (\\hat{P}^{(t, k)}, \\hat{Q}^{(t, k)}) \\rbrace _{k=1}^{B}$, as follows:",
"After $t$ steps, for some modified example $(\\hat{P}^{(t,k)}, \\hat{Q}^{(t,k)})$, if the label predicted by the target model is already $\\overline{y}$, i.e. $[Z(\\hat{P}^{(t,k)}, \\hat{Q}^{(t,k)})]_{\\overline{y}} > [Z(\\hat{P}^{(t,k)},\\hat{Q}^{(t,k)})]_y$, this example is a successful adversarial example and thus we terminate the modification process. Otherwise, we continue taking another step, until the step number limit $S$ is reached and in case of that an unsuccessful adversarial example is returned."
],
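A condensed sketch of the two-stage beam search against a black-box target model; target_model_score(P, Q, label), replaceable_pairs(...) and candidate_words(...) are stand-ins for the model's output probability, the position filter and the MLM candidate generator sketched above.

```python
def attack(p, q, gold, target_model_score, replaceable_pairs, candidate_words,
           beam_size=25, max_steps=5):
    """Two-stage beam search: probe positions with [PAD], then fill both positions
    with the same candidate word, keeping the B variants that most raise the
    score of the wrong label."""
    wrong = "unmatched" if gold == "matched" else "matched"
    beam = [(list(p), list(q))]
    for _ in range(max_steps):
        # Stage 1: rank replaceable position pairs by scoring [PAD]-masked variants
        staged = []
        for cp, cq in beam:
            for i, j in replaceable_pairs(cp, cq, gold == "matched"):
                mp, mq = list(cp), list(cq)
                mp[i], mq[j] = "[PAD]", "[PAD]"
                staged.append((target_model_score(mp, mq, wrong), mp, mq, i, j))
        staged.sort(key=lambda x: -x[0])

        # Stage 2: fill both placeholders with the same candidate common word
        filled = []
        for _, mp, mq, i, j in staged[:beam_size]:
            for w in candidate_words(mp, i, mq, j):
                np_, nq_ = list(mp), list(mq)
                np_[i] = nq_[j] = w
                filled.append((target_model_score(np_, nq_, wrong), np_, nq_))
        filled.sort(key=lambda x: -x[0])
        if not filled:
            break
        beam = [(cp, cq) for _, cp, cq in filled[:beam_size]]
        if filled[0][0] > 0.5:          # target model now prefers the wrong label
            return beam[0][0], beam[0][1], True
    return beam[0][0], beam[0][1], False  # unsuccessful adversarial example
```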
[
"We adopt the following two datasets:",
"Quora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test set respectively.",
"MRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test set respectively."
],
[
"We adopt the following typical deep models as the target models in our experiments:",
"BiMPM BIBREF5, the Bilateral Multi-Perspective Matching model, matches two sentences on all combinations of time stamps from multiple perspectives, with BiLSTM layers to encode the sentences and aggregate matching results.",
"DIIN BIBREF6, the Densely Interactive Inference Network, creates a word-by-word interaction matrix by computing similarities on sentence representations encoded by a highway network and self-attention, and then adopts DenseNet BIBREF35 to extract interaction features for matching.",
"BERT BIBREF7, the Bidirectional Encoder Representations from Transformers, is pre-trained on large-scale corpora, and then fine-tuned on this task. The matching result is obtained by applying a classifier on the encoded hidden states of the two sentences."
],
[
"We adopt existing open source codes for target models BiMPM, DIIN and BERT, and also the BERT masked language model. For Quora, the step number limit $S$ is set to 5; the number of candidate substitution words generated using the language model $K$ and the beam size $B$ are both set to 25. $S$, $K$ and $B$ are doubled for MRPC where sentences are generally longer. The length difference between unmatched sentence pairs is limited to be no more than 3."
],
[
"We train each target model on the original training data, and then generate adversarial examples for the target models. For each dataset, we sample 1,000 original examples with balanced labels from the corresponding test set, and adversarially modify them for each target model. We evaluate the accuracies of target models on the corresponding adversarial examples, compared with their accuracies on the original examples. Let $s$ be the success rate of generating adversarial examples that the target model fails, the accuracy of the target model on the returned adversarial examples is $1-s$. Table TABREF18 presents the results.",
"The target models have high overall accuracies on the original examples, especially on the sampled ones since we form an unmatched original example with independently sampled sentences. The models have relatively lower accuracies on the unmatched examples in the full original test set of MRPC because MRPC is relatively small while the two labels are imbalanced in the original data (3,900 matched examples and 1,901 unmatched examples). Therefore, we generate adversarial examples with balanced labels instead of following the original distribution.",
"After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. Particularly, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and also revealing the corresponding robustness issues. Moreover, we present some generated adversarial examples in the appendix.",
"We notice that the original models are more vulnerable to unmatched adversarial examples, because there are generally more replaceable position choices during the generation. Nevertheless, the results of the matched case are also sufficiently strong to reveal the robustness issues. We do not quantitatively compare the performance drop of the target models on the adversarial examples with previous work, because we generate a new type of adversarial examples that previous methods are not capable of. We have different experiment settings, including original example sampling and constraints on adversarial modifications, which are tailored to the robustness issues we study. Performance drop on different kinds of adversarial examples with little overlap is not comparable, and thus surpassing other adversarial examples on model performance drop is unnecessary and irrelevant to support our contributions. Therefore, such comparisons are not included in this paper."
],
[
"To verify the validity our generated adversarial examples, we further perform a manual evaluation. For each dataset, using BERT as the target model, we randomly sample 100 successful adversarial examples on which the target model fails, with balanced labels. We blend these adversarial examples with the corresponding original examples, and present each example to three workers on Amazon Mechanical Turk. We ask the workers to label the examples and also rate the grammaticality of the sentences with a scale of 1/2/3 (3 for no grammar error, 2 for minor errors, and 1 for vital errors). We integrate annotations from different workers with majority voting for labels and averaging for grammaticality.",
"Table TABREF35 shows the results. Unlike target models whose performance drops dramatically on adversarial examples, human annotators retain high accuracies with a much smaller drop, while the accuracies of the target models are 0 on these adversarial examples. This demonstrates that the labels of most adversarial examples are successfully preserved to be consistent with original examples. Results also show that the grammaticality difference between the original examples and adversarial examples is also small, suggesting that most adversarial examples retain a good grammaticality. This verifies the validity of our adversarial examples."
],
[
"Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is minor in adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18.",
"After adversarial training, the performance of all the target models raises significantly, while that on the original examples remain comparable. Note that since the focus of this paper is on model robustness which can hardly be reflected in original data, we do not expect performance improvement on original data. The results demonstrate that adversarial training with our adversarial examples can significantly improve the robustness we focus on without remarkably hurting the performance on original data. Moreover, although the adversarial example generation is constrained by a BERT language model, BiMPM and DIIN which do not use the BERT language model can also significantly benefit from the adversarial examples, further demonstrating the effectiveness of our method."
],
[
"To quantitatively demonstrate the difference between the adversarial examples we generate and those by previous work BIBREF26, BIBREF27, we compute the average BOW cosine similarity between the generated pairs of sentences. We only compare with previous methods that also aim to generate labeled adversarial examples that are not limited to be semantically equivalent to original sentences. Results are shown in Table TABREF38. Each pair of adversarial sentences by BIBREF26 differ by only one word. And in BIBREF27, sentence pairs generated with word swapping have exactly the same BOW. These two approaches both have high BOW similarities. By contrast, our method generates sentence pairs with much lower BOW similarities. This demonstrates a significant difference between our examples and the others. Unlike previous methods, we generate adversarial examples that can focus on robustness issues regarding the distraction from modified words that are the same for both sentences, towards matching the unmodified parts that are diverse for two sentences."
],
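The bag-of-words cosine similarity used for this comparison can be computed with scikit-learn, for example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def bow_cosine(sent_a, sent_b):
    """Cosine similarity between the raw count vectors of two sentences."""
    X = CountVectorizer().fit_transform([sent_a, sent_b])
    return float(cosine_similarity(X[0], X[1])[0, 0])

print(bow_cosine("how can i get my gmail account back",
                 "what is the best school management software"))  # low BOW overlap
```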
[
"We further analyse the necessity and effectiveness of modifying sentences with paired common words. We consider another version that replaces one single word independently at each step without using paired common words, namely the unpaired version. Firstly, for matched adversarial examples that can be semantically different from original sentences, the unpaired version is inapplicable, because the matched label can be easily broken if common words from two sentences are changed into other words independently. And for the unmatched case, we show that the unpaired version is much less effective. For a more fair comparison, we double the step number limit for the unpaired version. As shown in Table TABREF41, the performance of target models on unmatched examples generated by the unpaired version, particularly that of BERT, is mostly much higher than those by our full algorithm, except for BiMPM on MRPC but its accuracies have almost reached 0 (0.0% for unpaired and 0.2% for paired). This demonstrates that our algorithm using paired common words are more effective in generating adversarial examples, on which the performance of the target model is generally much lower. An advantage of using difficult common words for unmatched examples is that such words tend to make target models over-confident about common words and distract the models on recognizing the semantic difference in the unmodified part. Our algorithm explicitly utilizes this property and thus can well reveal such a robustness issue. Moreover, although there is no such a property for the matched case, replacing existing common words with more difficult ones can still distract the target model on judging the semantic similarity in the unmodified part, due to the bias between different words learned by the model, and thus our algorithm for generating adversarial examples with difficult common words works for both matched and unmatched cases."
],
[
"In this paper, we propose a novel algorithm to generate new adversarial examples for paraphrase identification, by adversarially modifying original examples with difficult common words. We generate labeled adversarial examples that can be semantically different from original sentences and the BOW similarity between each pair of sentences is generally low. Such examples reveal robustness issues that previous methods are not able for. The accuracies of the target models drop dramatically on our adversarial examples, while human annotators are much less affected and the modified sentences retain a good grammarticality. We also show that model robustness can be improved using adversarial training with our adversarial examples. Moreover, our adversarial examples can foster future research for further improving model robustness."
]
],
"section_name": [
"Introduction",
"Related Work ::: Deep Paraphrase Identification",
"Related Work ::: Adversarial Examples for NLP",
"Related Work ::: Adversarial Example Generation",
"Methodology ::: Task Definition",
"Methodology ::: Algorithm Framework",
"Methodology ::: Original Example Sampling",
"Methodology ::: Replaceable Position Pairs",
"Methodology ::: Candidate Substitution Word Generation",
"Methodology ::: Beam Search for Finding Adversarial Examples",
"Experiments ::: Datasets",
"Experiments ::: Target Models",
"Experiments ::: Implementation Details",
"Experiments ::: Main Results",
"Experiments ::: Manual Evaluation",
"Experiments ::: Adversarial Training",
"Experiments ::: Sentence Pair BOW Similarity",
"Experiments ::: Effectiveness of Paired Common Words",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"681a882f9993ee910dcfa813a103b6ff2967c52d"
],
"answer": [
{
"evidence": [
"Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is minor in adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18.",
"After adversarial training, the performance of all the target models raises significantly, while that on the original examples remain comparable. Note that since the focus of this paper is on model robustness which can hardly be reflected in original data, we do not expect performance improvement on original data. The results demonstrate that adversarial training with our adversarial examples can significantly improve the robustness we focus on without remarkably hurting the performance on original data. Moreover, although the adversarial example generation is constrained by a BERT language model, BiMPM and DIIN which do not use the BERT language model can also significantly benefit from the adversarial examples, further demonstrating the effectiveness of our method."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 1) The performance of all the target models raises significantly, while that on the original\nexamples remain comparable (e.g. the overall accuracy of BERT on modified examples raises from 24.1% to 66.0% on Quora)",
"highlighted_evidence": [
"We evaluate the adversarially trained models, as shown in Table TABREF18.\n\nAfter adversarial training, the performance of all the target models raises significantly, while that on the original examples remain comparable."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"32da0d3092a058c804fd58ca164a453a2b96691e"
],
"answer": [
{
"evidence": [
"After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. Particularly, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and also revealing the corresponding robustness issues. Moreover, we present some generated adversarial examples in the appendix."
],
"extractive_spans": [
"BERT on Quora drops from 94.6% to 24.1%"
],
"free_form_answer": "",
"highlighted_evidence": [
"After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f206d6b15d83b7c3542b222bde44a6f5e372994b"
],
"answer": [
{
"evidence": [
"Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is minor in adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18."
],
"extractive_spans": [
" current model"
],
"free_form_answer": "",
"highlighted_evidence": [
"At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"28186cfeae54e04cfaeb297b27443527ec2871bc"
],
"answer": [
{
"evidence": [
"We adopt the following two datasets:",
"Quora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test set respectively.",
"MRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test set respectively."
],
"extractive_spans": [
"Quora",
"MRPC"
],
"free_form_answer": "",
"highlighted_evidence": [
"We adopt the following two datasets:\n\nQuora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test set respectively.\n\nMRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test set respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How much in experiments is performance improved for models trained with generated adversarial examples?",
"How much dramatically results drop for models on generated adversarial examples?",
"What is discriminator in this generative adversarial setup?",
"What are benhmark datasets for paraphrase identification?"
],
"question_id": [
"f5db12cd0a8cd706a232c69d94b2258596aa068c",
"2c8d5e3941a6cc5697b242e64222f5d97dba453c",
"78102422a5dc99812739b8dd2541e4fdb5fe3c7a",
"930c51b9f3936d936ee745716536a4b40f531c7f"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Two examples with labels matched and unmatched respectively, originally from the Quora Question Pairs corpus (Iyer, Dandekar, and Csernai, 2017). “(P)” and “(Q)” are original sentences, and “(P’)” and “(Q’)” are adversarially modified sentences. Modified words are highlighted in bold. “Output” indicates the output change given by target model BERT (Devlin et al., 2018).",
"Figure 2: Work flow of our algorithm for generating adversarial examples.",
"Table 1: Accuracies (%) of target models on Quora and MRPC respectively, evaluated on both original and adversarial examples. “Original Full” indicates the full original test set, “Original Sampled” indicates the sampled original examples before adversarial modifications, and “Adversarial” indicates the adversarial examples generated by our algorithm. “Pos” and “Neg” indicate matched and unmatched examples respectively. Target models with suffix “-adv”are further fine-tuned with adversarial training. We highlight the performance drop of the original models on adversarially modified examples compared to sampled original examples in bold.",
"Table 2: Manual evaluation results, including human performance on both original and adversarial examples, and the grammaticality ratings of the generated sentences.",
"Table 3: Comparison of average BOW cosine similarities between pairs of sentences generated by our algorithm and previous work respectively. For Zhang, Baldridge, and He (2019), “WS” stands for “word swapping”.",
"Table 4: Accuracies of target models (%) on unmatched adversarial examples generated without using paired common words (unpaired), compared with those by our full algorithm (paired). There is no comparison for matched adversarial examples due to the inapplicability of the unpaired version.",
"Table 5: Typical adversarial examples generated using BERT as the target model on Quora. “(P)” and “(Q)” indicate original sentences, and “(P’)” and “(Q’)” indicate adversarially modified sentences. Modified words are highlighted in bold.",
"Table 6: Typical adversarial examples generated using BERT as the target model on MRPC."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png"
]
} | [
"How much in experiments is performance improved for models trained with generated adversarial examples?"
] | [
[
"1909.02560-Experiments ::: Adversarial Training-1",
"1909.02560-Experiments ::: Adversarial Training-0"
]
] | [
"Answer with content missing: (Table 1) The performance of all the target models raises significantly, while that on the original\nexamples remain comparable (e.g. the overall accuracy of BERT on modified examples raises from 24.1% to 66.0% on Quora)"
] | 376 |
2001.02380 | A Neural Approach to Discourse Relation Signal Detection | Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized words embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification. | {
"paragraphs": [
[
"The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3).",
". [If you work for a company,]$_{\\textsc {condition}}$ [they pay you that money.]",
". [Albeit limited,]$_{\\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.]",
". [not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\\textsc {cause}}$",
"The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12.",
"At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15).",
"In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time.",
"Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals.",
"In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'.",
"In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research."
],
[
"A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank.",
"This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations.",
"Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance.",
"Finally we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable."
],
[
"Discourse relation signals are broadly classified into two categorizes: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens.",
"The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions.",
"Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used.",
"Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next."
],
[
"In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data.",
"The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type.",
"The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels:",
"signal class, denoting the signal's degree of complexity",
"signal type, indicating the linguistic system to which it belongs",
"specific signal, which gives the most fine-grained subtypes of signals within each type",
"It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels.",
"The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below.",
"In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words.",
"In order to get a better sense of how the annotations work, we consider example SECREF7.",
". [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination]",
"In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations.",
"In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper."
],
[
"From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features.",
"At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn:",
"Whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token",
"Whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit",
"Whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure",
"Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain.",
"The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify."
],
[
"To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus.",
"More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation.",
"If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right.",
"Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be.",
"Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation:",
". [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\\textsc {sequence}}$",
". [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\\textsc {sequence}}$",
"These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph."
],
[
"Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals.",
"Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below.",
"As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30.",
"Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training.",
"Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1,..,x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation:",
"where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \\ldots t$, $\\delta \\in \\lbrace b,f\\rbrace $ is the direction of the respective LSTMs, $c_t^\\delta $ is the recurrent context in each direction and $\\theta = {W,b}$ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation.",
"In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29.",
". $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not . $<$n$>$",
"Label: concession",
"In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shard across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels."
],
[
"Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses).",
"Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with with fewer relations and some easy cases that are absent from GUM.",
"Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further."
],
[
"The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem:",
". [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\\xrightarrow[\\text{pred:preparation}]{\\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230.",
"Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow.",
"Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal.",
"To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36.",
". Original: <$s$>$ To\\quad \\ p̄rovide īnformation .. <$sep$>$ .. <$n$>$ Original: \\: $<$s$>$ \\: \\ To \\: provide \\: information \\: ... \\: $<$sep$>$ \\: ... \\: $<$n$>$ \\\\ Masked1: \\: $<$s$>$ \\: $<$X$>$ \\: provide \\: information \\: ... \\: $<$sep$>$ \\: ... \\: $<$n$>$ \\\\ Masked2: \\: $<$s$>$ \\: \\ To \\: \\ $<$X$>$ \\: information \\: ... \\: $<$sep$>$ \\: ... \\: $<$n$>$ \\\\ Masked3: \\: $<$s$>$ \\: \\ To \\: provide \\: \\ $<$X$>$ \\: ... \\: $<$sep$>$ \\: ... \\: $<$n$>$ $ Label: purpose",
"We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\\Delta }_s$ (for delta-softmax), which can be written as:",
"where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \\in 1 \\ldots N$ ignoring separators, or $\\phi $, the empty set).",
"To visualize the model's predictions, we compare ${\\Delta }_s$ for a particular token to two numbers: the maximum ${\\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines).",
". [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\\xrightarrow[\\text{pred:preparation}]{\\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163.",
". [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\\xleftarrow[\\text{pred:contrast}]{\\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230.",
". [RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\\xleftarrow[\\text{pred:evaluation}]{\\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely",
"The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall.",
"In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6).",
"Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next."
],
[
"To evaluate the neural model, we would like to know how well ${\\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength).",
"The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\\Delta }_s>$0.15 are predicted to be signals.",
"The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible.",
"For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines.",
"The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16.",
"Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline.",
"A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline.",
"Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses."
],
[
"Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose.",
". [RGB]230, 230, 230For [RGB]230, 230, 230the [RGB]230, 230, 230present [RGB]230, 230, 230analysis [RGB]230, 230, 230, [RGB]230, 230, 230these [RGB]230, 230, 230responses [RGB]230, 230, 230were [RGB]230, 230, 230recoded [RGB]230, 230, 230into [RGB]230, 230, 230nine [RGB]230, 230, 230mutually [RGB]230, 230, 230exclusive [RGB]230, 230, 230categories $\\xleftarrow[\\text{pred:elaboration}]{\\text{gold:result}}$ [RGB]63, 63, 63capturing [RGB]219, 219, 219the [RGB]230, 230, 230following [RGB]230, 230, 230options [RGB]135, 135, 135:",
". [RGB]185, 185, 185Professor [RGB]219, 219, 219Eastman [RGB]223, 223, 223said [RGB]207, 207, 207he [RGB]194, 194, 194is [RGB]64, 64, 64alarmed [RGB]230, 230, 230by [RGB]230, 230, 230what [RGB]230, 230, 230they [RGB]230, 230, 230found [RGB]230, 230, 230. $\\xrightarrow[\\text{pred:preparation}]{\\text{gold:evaluation}}$ [RGB]230, 230, 230\" [RGB]230, 230, 230Pregnant [RGB]229, 229, 229women [RGB]187, 187, 187in [RGB]230, 230, 230Australia [RGB]98, 98, 98are [RGB]213, 213, 213getting [RGB]230, 230, 230about [RGB]230, 230, 230half [RGB]171, 171, 171as [RGB]159, 159, 159much [RGB]230, 230, 230as [RGB]230, 230, 230what [RGB]155, 155, 155they [RGB]155, 155, 155require [RGB]223, 223, 223on [RGB]214, 214, 214a [RGB]109, 109, 109daily [RGB]176, 176, 176basis [RGB]111, 111, 111.",
". [RGB]195, 195, 195Even [RGB]230, 230, 230so [RGB]230, 230, 230, [RGB]230, 230, 230estimates [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230prevalence [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]230, 230, 230discrimination [RGB]219, 219, 219remains [RGB]230, 230, 230rare $\\xleftarrow[\\text{pred:evidence}]{\\text{gold:concession}}$ [RGB]111, 111, 111At [RGB]63, 63, 63least [RGB]230, 230, 230one [RGB]230, 230, 230prior [RGB]230, 230, 230study [RGB]230, 230, 230by [RGB]230, 230, 230Kessler [RGB]225, 225, 225and [RGB]230, 230, 230colleagues [RGB]230, 230, 230[ [RGB]230, 230, 23015 [RGB]161, 161, 161] [RGB]200, 200, 200, [RGB]136, 136, 136however [RGB]222, 222, 222, [RGB]228, 228, 228using [RGB]230, 230, 230measures [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]224, 224, 224discrimination [RGB]217, 217, 217in [RGB]230, 230, 230a [RGB]230, 230, 230large [RGB]218, 218, 218American [RGB]230, 230, 230sample [RGB]230, 230, 230, [RGB]230, 230, 230reported [RGB]230, 230, 230that [RGB]230, 230, 230approximately [RGB]230, 230, 23033 [RGB]212, 212, 212% [RGB]230, 230, 230of [RGB]230, 230, 230respondents [RGB]156, 156, 156reported [RGB]169, 169, 169some [RGB]122, 122, 122form [RGB]168, 168, 168of [RGB]230, 230, 230discrimination",
"Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators.",
". [RGB]216, 216, 216The [RGB]99, 99, 99agreement [RGB]89, 89, 89was [RGB]230, 230, 230that [RGB]131, 131, 131Gorbachev [RGB]102, 102, 102agreed [RGB]230, 230, 230to [RGB]230, 230, 230a [RGB]230, 230, 230quite [RGB]230, 230, 230remarkable [RGB]125, 125, 125concession [RGB]230, 230, 230: $\\xrightarrow[\\text{pred:preparation}]{\\text{gold:evaluation}}$ [RGB]64, 64, 64he [RGB]81, 81, 81agreed [RGB]230, 230, 230to [RGB]230, 230, 230let [RGB]220, 220, 220a [RGB]143, 143, 143united [RGB]149, 149, 149Germany [RGB]230, 230, 230join [RGB]83, 83, 83the [RGB]230, 230, 230NATO [RGB]230, 230, 230military [RGB]230, 230, 230alliance [RGB]230, 230, 230.",
". [RGB]230, 230, 230The [RGB]220, 220, 220opening [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230joke [RGB]230, 230, 230— [RGB]230, 230, 230or [RGB]230, 230, 230setup [RGB]230, 230, 230— [RGB]230, 230, 230should [RGB]230, 230, 230have [RGB]230, 230, 230a [RGB]230, 230, 230basis [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230real [RGB]200, 200, 200world $\\xleftarrow[\\text{pred:purpose}]{\\text{gold:purpose}}$ [RGB]7, 7, 7so [RGB]73, 73, 73your [RGB]230, 230, 230audience [RGB]230, 230, 230can [RGB]230, 230, 230relate [RGB]230, 230, 230to [RGB]230, 230, 230it [RGB]230, 230, 230,",
"In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:",
". [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\\xrightarrow[\\text{pred:solutionhood}]{\\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183.",
"From the model's perspective, the question mark, which scores ${\\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but were are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues.",
"Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\\Delta }_s<$-0.2 are underlined).",
". [RGB]230, 230, 230How [RGB]230, 230, 230do [RGB]230, 230, 230they [RGB]201, 201, 201treat [RGB]167, 167, 167those [RGB]210, 210, 210not [RGB]190, 190, 190like [RGB]230, 230, 230themselves [RGB]100, 100, 100? $\\xrightarrow[\\text{pred:solutionhood}]{\\text{gold:preparation}}$ [RGB]52, 52, 52then [RGB]230, 230, 230they [RGB]230, 230, 230're [RGB]230, 230, 230either [RGB]230, 230, 230over-zealous [RGB]230, 230, 230, [RGB]230, 230, 230ignorant [RGB]230, 230, 230of [RGB]230, 230, 230other [RGB]230, 230, 230people [RGB]230, 230, 230or [RGB]230, 230, 230what [RGB]230, 230, 230to [RGB]230, 230, 230avoid [RGB]230, 230, 230those [RGB]230, 230, 230that [RGB]230, 230, 230contradict [RGB]230, 230, 230their [RGB]230, 230, 230fantasy [RGB]230, 230, 230land [RGB]230, 230, 230that [RGB]220, 220, 220caters [RGB]230, 230, 230to [RGB]230, 230, 230them [RGB]230, 230, 230and [RGB]230, 230, 230them [RGB]230, 230, 230only [RGB]230, 230, 230.",
". [RGB]230, 230, 230God [RGB]230, 230, 230, [RGB]230, 230, 230I [RGB]230, 230, 230do [RGB]230, 230, 230n't [RGB]230, 230, 230know [RGB]51, 51, 51! $\\xrightarrow[\\text{pred:preparation}]{\\text{gold:preparation}}$ [RGB]230, 230, 230but [RGB]230, 230, 230nobody [RGB]230, 230, 230will [RGB]230, 230, 230go [RGB]230, 230, 230to [RGB]230, 230, 230fight [RGB]230, 230, 230for [RGB]230, 230, 230noses [RGB]230, 230, 230any [RGB]219, 219, 219more [RGB]169, 169, 169.",
"In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast.",
"In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another."
],
[
"To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type.",
"Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words is actually noticed, which both belong to the same stem (decline/declining):",
". [RGB]230, 230, 230The [RGB]230, 230, 230report [RGB]209, 209, 209says [RGB]213, 213, 213the [RGB]172, 172, 172decline [RGB]220, 220, 220in [RGB]228, 228, 228iodine [RGB]230, 230, 230intake [RGB]215, 215, 215appears [RGB]230, 230, 230to [RGB]230, 230, 230be [RGB]230, 230, 230due [RGB]230, 230, 230to [RGB]230, 230, 230changes [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230dairy [RGB]230, 230, 230industry [RGB]230, 230, 230, [RGB]230, 230, 230where [RGB]230, 230, 230chlorine-containing [RGB]230, 230, 230sanitisers [RGB]226, 226, 226have [RGB]230, 230, 230replaced [RGB]230, 230, 230iodine-containing [RGB]230, 230, 230sanitisers [RGB]230, 230, 230. $\\xleftarrow[\\text{pred:background}]{\\text{gold:justify}}$ [RGB]193, 193, 193Iodine [RGB]230, 230, 230released [RGB]230, 230, 230from [RGB]230, 230, 230these [RGB]230, 230, 230chemicals [RGB]230, 230, 230into [RGB]216, 216, 216milk [RGB]230, 230, 230has [RGB]230, 230, 230been [RGB]230, 230, 230the [RGB]230, 230, 230major [RGB]230, 230, 230source [RGB]230, 230, 230of [RGB]226, 226, 226dietary [RGB]206, 206, 206iodine [RGB]230, 230, 230in [RGB]230, 230, 230Australia [RGB]230, 230, 230for [RGB]230, 230, 230at [RGB]230, 230, 230least [RGB]230, 230, 230four [RGB]230, 230, 230decades [RGB]202, 202, 202, [RGB]153, 153, 153but [RGB]230, 230, 230is [RGB]230, 230, 230now [RGB]63, 63, 63declining [RGB]79, 79, 79.",
"We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data).",
"Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them.",
"Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44.",
". [RGB]230, 230, 230On [RGB]230, 230, 230a [RGB]230, 230, 230new [RGB]230, 230, 230website [RGB]230, 230, 230, [RGB]230, 230, 230\" [RGB]230, 230, 230The [RGB]230, 230, 230Internet [RGB]230, 230, 230Explorer [RGB]230, 230, 2306 [RGB]230, 230, 230Countdown [RGB]230, 230, 230\" [RGB]230, 230, 230, [RGB]230, 230, 230Microsoft [RGB]230, 230, 230has [RGB]230, 230, 230launched [RGB]230, 230, 230an [RGB]230, 230, 230aggressive [RGB]230, 230, 230campaign [RGB]230, 230, 230to [RGB]230, 230, 230persuade [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230stop [RGB]171, 171, 171using [RGB]133, 133, 133IE6 $\\xleftarrow[\\text{pred:elaboration}]{\\text{gold:elaboration}}$ [RGB]56, 56, 56Its [RGB]197, 197, 197goal [RGB]167, 167, 167is [RGB]230, 230, 230to [RGB]230, 230, 230decrease [RGB]230, 230, 230IE6 [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230less [RGB]230, 230, 230than [RGB]230, 230, 230one [RGB]124, 124, 124percent [RGB]229, 229, 229.",
"Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast).",
"Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date.",
". [RGB]230, 230, 230NASA [RGB]230, 230, 230celebrates [RGB]230, 230, 23030th [RGB]230, 230, 230anniversary [RGB]230, 230, 230of [RGB]230, 230, 230first [RGB]230, 230, 230shuttle [RGB]230, 230, 230launch [RGB]230, 230, 230; $\\xleftarrow[\\text{pred:circumstance}]{\\text{gold:circumstance}}$ [RGB]11, 11, 11Wednesday [RGB]186, 186, 186, [RGB]115, 115, 115April [RGB]153, 153, 15313 [RGB]219, 219, 219, [RGB]230, 230, 2302011"
],
[
"This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work.",
"The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types.",
"Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results.",
"To see how ambiguity is reflected in multiple measurements of ${\\Delta }_s$, we can consider Figure FIGREF47. The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their bar plots indicate that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions.",
"For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to its overall high string frequency and low specificity.",
"Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\\Delta }_s$ suggests?",
"Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25).",
"Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\\Delta }_s$ will overlap increasingly with human annotations of anchored signals.",
"In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset."
]
],
"section_name": [
"Introduction",
"Previous Work ::: Data-driven Approaches",
"Previous Work ::: Discourse Relation Signal Annotations",
"Data ::: Anchored Signals in the GUM Corpus",
"Data ::: A Taxonomy of Anchored Signals",
"Automatic Signal Extraction ::: A Contextless Frequentist Approach",
"Automatic Signal Extraction ::: A Contextualized Neural Model ::: Task and Model Architecture",
"Automatic Signal Extraction ::: A Contextualized Neural Model ::: Relation Classification Performance",
"Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric",
"Evaluation and Error Analysis ::: Evaluation Metric",
"Evaluation and Error Analysis ::: Qualitative Analysis",
"Evaluation and Error Analysis ::: Performance on Signal Types",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"bc99e782390d90ced46c5f522324a7aad5e1e4e4"
],
"answer": [
{
"evidence": [
"We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\\Delta }_s$ (for delta-softmax), which can be written as:",
"where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \\in 1 \\ldots N$ ignoring separators, or $\\phi $, the empty set)."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Formula) Formula is the answer.",
"highlighted_evidence": [
"We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\\Delta }_s$ (for delta-softmax), which can be written as:\n\nwhere $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \\in 1 \\ldots N$ ignoring separators, or $\\phi $, the empty set)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"f3b6a0edc6b86340a8c7a1fb970dce92b5f82e62"
],
"answer": [
{
"evidence": [
"As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e535d3b7edcf7d60702da2f31211dfd2fb60d9b1"
],
"answer": [
{
"evidence": [
"In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:",
". [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\\xrightarrow[\\text{pred:solutionhood}]{\\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183.",
"Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators."
],
"extractive_spans": [
"model points out plausible signals which were passed over by an annotator",
"it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action"
],
"free_form_answer": "",
"highlighted_evidence": [
"In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:\n\n. [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\\xrightarrow[\\text{pred:solutionhood}]{\\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183.",
"However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2873eba8fbdbead32bbb60ebb05e8dbdf3f199e9"
],
"answer": [
{
"evidence": [
"Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose."
],
"extractive_spans": [
"influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments"
],
"free_form_answer": "",
"highlighted_evidence": [
"It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How is the delta-softmax calculated?",
"Are some models evaluated using this metric, what are the findings?",
"Where does proposed metric differ from juman judgement?",
"Where does proposed metric overlap with juman judgement?"
],
"question_id": [
"4059c6f395640a6acf20a0ed451d0ad8681bc59b",
"99d7bef0ef395360b939a3f446eff67239551a9d",
"a1097ce59270d6f521d92df8d2e3a279abee3e67",
"56e58bdf0df76ad1599021801f6d4c7b77953e29"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Russian discourse relation signals, reproduced from Toldova et al. (2017).",
"Table 2: RST relations and their frequencies in the GUM corpus.",
"Figure 1: A visualization of an RST analysis of (4) with the signal tokens highlighted.",
"Table 3: An overview of the taxonomy and the distribution of the corresponding anchorable signals attested in the subset of the GUM corpus from Liu (2019).",
"Figure 2: Example of a manually annotated signal with token positions corresponding to categories I and IV in the taxonomy with respect to the PREPARATION relation of unit 50.",
"Figure 3: RST fragment with signal obeying strong nuclearity: the DM thus indicating RESULT is in the head EDU of the satellite block.",
"Table 4: Most distinctive lexemes for some relations in GUM, with different frequency thresholds.",
"Figure 4: Model Architecture.",
"Table 5: Model performance on relation classification.",
"Figure 5: Signal recall with 1, 2 or 3 guesses for the neural model and a random guess baseline. Left: all signals included; middle: only endocentric cases, restricted to head EDUs shown to the model; right: only discourse markers (DMs).",
"Figure 6: Exocentric signal not detectable by the model. The ELABORATION pointing from unit [23] to [22] is signaled by a coreferential phrase appearing in another satellite of [22].",
"Table 6: Anchored token detection accuracy for signal types attested over 20 times.",
"Figure 7: Boxplots for all ∆s values of several signal token types across the test corpus."
],
"file": [
"3-Table1-1.png",
"6-Table2-1.png",
"7-Figure1-1.png",
"8-Table3-1.png",
"9-Figure2-1.png",
"10-Figure3-1.png",
"11-Table4-1.png",
"13-Figure4-1.png",
"14-Table5-1.png",
"18-Figure5-1.png",
"19-Figure6-1.png",
"22-Table6-1.png",
"25-Figure7-1.png"
]
} | [
"How is the delta-softmax calculated?"
] | [
[
"2001.02380-Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric-7",
"2001.02380-Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric-6"
]
] | [
"Answer with content missing: (Formula) Formula is the answer."
] | 378 |
1809.02494 | Meteorologists and Students: A resource for language grounding of geographical descriptors | We present a data resource which can be useful for research purposes on language grounding tasks in the context of geographical referring expression generation. The resource is composed of two data sets that encompass 25 different geographical descriptors and a set of associated graphical representations, drawn as polygons on a map by two groups of human subjects: teenage students and expert meteorologists. | {
"paragraphs": [
[
"Language grounding, i.e., understanding how words and expressions are anchored in data, is one of the initial tasks that are essential for the conception of a data-to-text (D2T) system BIBREF0 , BIBREF1 . This can be achieved through different means, such as using heuristics or machine learning algorithms on an available parallel corpora of text and data BIBREF2 to obtain a mapping between the expressions of interest and the underlying data BIBREF3 , getting experts to provide these mappings, or running surveys on writers or readers that provide enough data for the application of mapping algorithms BIBREF4 .",
"Performing language grounding allows ensuring that generated texts include words whose meaning is aligned with what writers understand or what readers would expect BIBREF0 , given the variation that is known to exist among writers and readers BIBREF5 . Moreover, when contradictory data appears in corpora or any other resource that is used to create the data-to-words mapping, creating models that remove inconsistencies can also be a challenging part of language grounding which can influence the development of a successful system BIBREF3 .",
"This paper presents a resource for language grounding of geographical descriptors. The original purpose of this data collection is the creation of models of geographical descriptors whose meaning is modeled as graded or fuzzy BIBREF6 , BIBREF7 , to be used for research on generation of geographical referring expressions, e.g., BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . However, we believe it can be useful for other related research purposes as well."
],
[
"The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ). However, the surveys were run with different purposes, and the subject groups that participated in each survey and the list of descriptors provided were accordingly different.",
"The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name). Figure FIGREF2 shows a representation of the answers given by the students for “Northern Galicia” and a contour map that illustrates the percentages of overlapping answers.",
"The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 . Its purpose was to gather data to create fuzzy models that will be used in a future NLG system in the weather domain. Eight meteorologists completed the survey, which included a list of 24 descriptors. For instance, Figure FIGREF3 shows a representation of the answers given by the meteorologists for “Eastern Galicia” and a contour map that illustrates the percentage of overlapping answers.",
"Table TABREF4 includes the complete list of descriptors for both groups of subjects. 20 out of the 24 descriptors are commonly used in the writing of weather forecasts by experts and include cardinal directions, proper names, and other kinds of references such as mountainous areas, parts of provinces, etc. The remaining four were added to study intersecting combinations of cardinal directions (e.g. exploring ways of combining “north” and “west” for obtaining a model that is similar to “northwest”).",
"The data for the descriptors from the surveys is focused on a very specific geographical context. However, the conjunction of both data sets provides a very interesting resource for performing a variety of more general language grounding-oriented and natural language generation research tasks, such as:"
],
[
"The two data sets were gathered for different purposes and only coincide in a few descriptors, so providing a direct comparison is not feasible. However, we can discuss general qualitative insights and a more detailed analysis of the descriptors that both surveys share in common.",
"At a general level, we had hypothesized that experts would be much more consistent than students, given their professional training and the reduced number of meteorologists participating in the survey. Comparing the visualizations of both data sets we have observed that this is clearly the case; the polygons drawn by the experts are more concentrated and therefore there is a higher agreement among them. On top of these differences, some students provided unexpected drawings in terms of shape, size, or location of the polygon for several descriptors.",
"If we focus on single descriptors, one interesting outcome is that some of the answers for “Northern Galicia” and “Southern Galicia” overlap for both subject groups. Thus, although `north' and `south' are natural antonyms, if we take into account the opinion of each group as a whole, there exists a small area where points can be considered as belonging to both descriptors at the same time (see Fig. FIGREF9 ). In the case of “west” and “east”, the drawings made by the experts were almost divergent and showed no overlapping between those two descriptors.",
"Regarding “Inland Galicia”, the unions of the answers for each group occupy approximately the same area with a similar shape, but there is a very high overlapping among the answers of the meteorologists. A similar situation is found for the remaining descriptor “Rías Baixas”, where both groups encompass a similar area. In this case, the students' answers cover a more extensive region and the experts coincide within a more restricted area."
],
[
"As in any survey that involves a task-based collection of data, some of the answers provided by the subjects for the described data sets can be considered erroneous or misleading due to several reasons. Here we describe for each subject group some of the most relevant issues that any user of this resource should take into account.",
"In the case of the students, we have identified minor drawing errors appearing in most of the descriptors, which in general shouldn't have a negative impact in the long term thanks to the high number of participants in the original survey. For some descriptors, however, there exist polygons drawn by subjects that clearly deviate from what could be considered a proper answer. The clearest example of this problem involves the `west' and `east' descriptors, which were confused by some of the students who drew them inversely (see Fig. FIGREF11 , around 10-15% of the answers).",
"In our case, given their background, some of the students may have actually confused the meaning of + “west” and “east”. However, the most plausible explanation is that, unlike in English and other languages, in Spanish both descriptors are phonetically similar (“este” and “oeste”) and can be easily mistaken for one another if read without attention.",
"As for the expert group, a similar case is found for “Northeastern Galicia” (see Fig. FIGREF12 ), where some of the given answers (3/8) clearly correspond to “Northwestern Galicia”. However, unlike the issue related to “west” and “east” found for the student group, this problem is not found reciprocally for the “northwestern” answers."
],
[
"The resource is available at BIBREF13 under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Both data sets are provided as SQLite databases which share the same table structure, and also in a compact JSON format. Polygon data is encoded in GeoJSON format BIBREF14 . The data sets are well-documented in the repository's README, and several Python scripts are provided for data loading, using Shapely BIBREF15 ; and for visualization purposes, using Cartopy BIBREF16 ."
],
[
"The data sets presented provide a means to perform different research tasks that can be useful from a natural language generation point of view. Among them, we can highlight the creation of models of geographical descriptors, comparing models between both subject groups, studying combinations of models of cardinal directions, and researching on geographical referring expression generation. Furthermore, insights about the semantics of geographical concepts could be inferred under a more thorough analysis.",
"One of the inconveniences that our data sets present is the appearance of the issues described in Sec. SECREF10 . It could be necessary to filter some of the answers according to different criteria (e.g., deviation of the centroid location, deviation of size, etc.). For more applied cases, manually filtering can also be an option, but this would require a certain knowledge of the geography of Galicia. In any case, the squared-like shape of this region may allow researchers to become rapidly familiar with many of the descriptors listed in Table TABREF4 .",
"As future work, we believe it would be invaluable to perform similar data gathering tasks for other regions from different parts of the world. These should provide a variety of different shapes (both regular and irregular), so that it can be feasible to generalize (e.g., through data-driven approaches) the semantics of some of the more common descriptors, such as cardinal points, coastal areas, etc. The proposal of a shared task could help achieve this objective."
],
[
"This research was supported by the Spanish Ministry of Economy and Competitiveness (grants TIN2014-56633-C3-1-R and TIN2017-84796-C2-1-R) and the Galician Ministry of Education (grants GRC2014/030 and \"accreditation 2016-2019, ED431G/08\"). All grants were co-funded by the European Regional Development Fund (ERDF/FEDER program). A. Ramos-Soto is funded by the “Consellería de Cultura, Educación e Ordenación Universitaria” (under the Postdoctoral Fellowship accreditation ED481B 2017/030). J.M. Alonso is supported by RYC-2016-19802 (Ramón y Cajal contract).",
"The authors would also like to thank Juan Taboada for providing the list of most frequently used geographical expressions by MeteoGalicia, and José Manuel Ramos for organizing the survey at the high school IES Xunqueira I in Pontevedra, Spain. "
]
],
"section_name": [
"Introduction",
"The resource and its interest",
"Qualitative analysis of the data sets",
"A further analysis: apparent issues",
"Resource materials",
"Concluding remarks",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"28b6cedb469ce68555dbd02a6518fc81ea4b3068"
],
"answer": [
{
"evidence": [
"The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ). However, the surveys were run with different purposes, and the subject groups that participated in each survey and the list of descriptors provided were accordingly different.",
"The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name). Figure FIGREF2 shows a representation of the answers given by the students for “Northern Galicia” and a contour map that illustrates the percentages of overlapping answers.",
"The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 . Its purpose was to gather data to create fuzzy models that will be used in a future NLG system in the weather domain. Eight meteorologists completed the survey, which included a list of 24 descriptors. For instance, Figure FIGREF3 shows a representation of the answers given by the meteorologists for “Eastern Galicia” and a contour map that illustrates the percentage of overlapping answers."
],
"extractive_spans": [],
"free_form_answer": "two surveys by two groups - school students and meteorologists to draw on a map a polygon representing a given geographical descriptor",
"highlighted_evidence": [
"The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ).",
"The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name).",
"The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"Which two datasets does the resource come from?"
],
"question_id": [
"a4ff1b91643e0c8a0d4cc1502d25ca85995cf428"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 1: Snapshot of the version of the survey answered by the meteorologists (translated from Spanish).",
"Figure 3: Representation of polygon drawings by experts and associated contour plot showing the percentage of overlapping answers for “Eastern Galicia”.",
"Table 1: List of geographical descriptors in the resource.",
"Figure 2: Representation of polygon drawings by students and associated contour plot showing the percentage of overlapping answers for “Northern Galicia”.",
"Figure 4: Areas overlapping “north” and “south” for both subject groups (in blue).",
"Figure 5: Contour maps of student answers for “Western Galicia” and “Eastern Galicia”.",
"Figure 6: Representation of polygon drawings by experts and associated contour plots showing the percentage of overlapping answers for “Northeastern Galicia”."
],
"file": [
"2-Figure1-1.png",
"2-Figure3-1.png",
"2-Table1-1.png",
"2-Figure2-1.png",
"3-Figure4-1.png",
"3-Figure5-1.png",
"4-Figure6-1.png"
]
} | [
"Which two datasets does the resource come from?"
] | [
[
"1809.02494-The resource and its interest-2",
"1809.02494-The resource and its interest-1",
"1809.02494-The resource and its interest-0"
]
] | [
"two surveys by two groups - school students and meteorologists to draw on a map a polygon representing a given geographical descriptor"
] | 381 |
1909.07734 | SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats | We present an overview of the EmotionX 2019 Challenge, held at the 7th International Workshop on Natural Language Processing for Social Media (SocialNLP), in conjunction with IJCAI 2019. The challenge entailed predicting emotions in spoken and chat-based dialogues using augmented EmotionLines datasets. EmotionLines contains two distinct datasets: the first includes excerpts from a US-based TV sitcom episode scripts (Friends) and the second contains online chats (EmotionPush). A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their predictions performance evaluation. The top-scoring team achieved a micro-F1 score of 81.5% for the spoken-based dialogues (Friends) and 79.5% for the chat-based dialogues (EmotionPush). | {
"paragraphs": [
[
"Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0.",
"Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts.",
"The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1.",
"In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3.",
"For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score.",
"A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports."
],
[
"The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table .",
"We employed workers using Amazon Mechanical Turk (aka AMT or MTurk) to annotate the dialogues BIBREF5. Each complete dialogue was offered as a single MTurk Human Intelligence Task (HIT), within which each utterance was read and annotated for emotions by the worker. Each HIT was assigned to five workers. To ensure workers were qualified for the annotation task, we set up a number of requirements: workers had to be from an English-speaking country (Australia, Canada, Great Britain, Ireland, New Zealand, or the US), have a high HIT approval rate (at least 98%), and have already performed a minimum of 2,000 HITs.",
"In the datasets, each utterance is accompanied by an annotation and emotion. The annotation contains the raw count of votes for each emotion by the five annotators, with the order of the emotions being Neutral, Joy, Sadness, Fear, Anger, Surprise, Disgust. For example, an annotation of “2000030” denotes that two annotators voted for “neutral”, and three voted for “surprise”.",
"The labeled emotion is calculated using the absolute majority of votes. Thus, if a specific emotion received three or more votes, then that utterance is labeled with that emotion. If there is no majority vote, the utterance is labeled with “non-neutral” label. In addition to the utterance, annotation, and label, each line in each dialogue includes the speaker's name (in the case of EmotionPush, a speaker ID was used). The emotion distribution for Friends and EmotionPush, for both training and evaluation data, is shown in Table .",
"We used Fleiss' kappa measure to assess the reliability of agreement between the annotators BIBREF6. The value for $\\kappa $-statistic is $0.326$ and $0.342$ for Friends and EmotionPush, respectively. For the combined datasets the value of the $\\kappa $-statistic is $0.345$.",
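Assuming each utterance carries exactly five votes spread over the seven categories, the reported agreement can be recomputed with an off-the-shelf implementation such as the one in statsmodels; the table below is a toy example, not the actual data.

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Rows are utterances, columns the 7 emotion categories, and each cell the
# number of the 5 annotators who voted for that category (toy example).
table = np.array([
    [2, 0, 0, 0, 0, 3, 0],   # the "2000030" annotation shown above
    [5, 0, 0, 0, 0, 0, 0],
    [1, 3, 0, 0, 0, 1, 0],
])
print(f"Fleiss' kappa: {fleiss_kappa(table):.3f}")
```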
"Sample excerpts from the two datasets, with their annotations and labels, are given in Table ."
],
[
"NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation."
],
[
"A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets.",
"The label distribution of emotions in our data are highly unbalanced, as can be seen in Figure FIGREF6. Due to the small number of three of the labels, participants were instructed to use only four emotions for labels: joy, sadness, anger, and neutral. Evaluation of submissions was done using only utterances with these four labels. Utterances with labels other than the above four (i.e., surprise, disgust, fear or non-neutral) were discarded and not used in the evaluation.",
"Scripts for verifying and evaluating the submissions were made available online. We used micro-F1 as the comparison metric."
],
[
"A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports."
],
[
"BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function."
],
[
"BIBREF10 BERT is post-trained via Masked Language Model (MLM) and Next Sentence Prediction (NSP) on a corpus consisting of the complete and augmented dialogues of Friends, and the EmotionPush training data. The resulting token embeddings are max-pooled and fed into a dense network for classification. A $K$-fold cross-validation ensemble with majority voting was used for prediction. To deal with the class imbalance problem, weighted cross entropy was used as a training loss function."
],
[
"BIBREF11 A pre-trained BERT is fine-tuned using filtered training data which only included the desired labels. Additional augmented data with joy, sadness, and anger labels are also used. BERT is then fed into a standard feed-forward-network with a softmax layer used for classification."
],
[
"BIBREF12 A support vector machine (SVM) was used for classification. Words are ranked using a per-emotion TF-IDF score. Experiments were performed to verify whether the previous utterance would improve classification performance. Input to the Linear SVM was done using one-hot-encoding of top ranking words."
],
[
"BIBREF13 The classifier uses a pre-trained BERT model followed by a feed-forward neural network with a softmax output. Due to the overwhelming presence of the neutral label, a classifying cascade is employed, where the majority classifier is first used to decide whether the utterance should be classified with “neutral” or not. A second classifier is used to focus on the other emotions (joy, sadness, and anger). Dealing with the imbalanced classes is done through the use of a weighted loss function."
],
[
"BIBREF14 BERT is first used to generate word and sentence embeddings for all utterances. The resulting calculated word embeddings are fed into a Convolutional Neural Network (CNN), and its output is then concatenated with the BERT-generated sentence embeddings. The concatenated vectors are then used to train a bi-directional GRU with a residual connection followed by a fully-connected layer, and finally a softmax layer produces predictions. Class imbalance is tackled using focal loss BIBREF15."
],
[
"BIBREF16 A word embedding layer followed by a bi-directional GRU-based RNN. Output from the RNN was fed into a single-node classifier. The augmented dataset was used for training the model, but “neutral”-labeled utterances were filtered to deal with class imbalance."
],
[
"The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively."
],
[
"An evaluation summary of the submissions is available in Tables and . We only present the teams that submitted technical reports. A full leaderboard that includes all the teams is available on the challenge website. This section highlights some observations related to the challenge. Identical utterances can convey different emotions in different contexts. A few of the models incorporated the dialogue context into the model, such as the models proposed by teams IDEA and KU."
],
[
"Most of the submissions used deep learning models. Five of the models were based on the BERT architecture, with some using pre-trained BERT. Some of the submissions enhanced the model by adding context and speaker related encoding to improve performance. We also received submissions using more traditional networks such as CNN, as well as machine learning classics such as SVM. The results demonstrate that domain knowledge, feature engineering, and careful application of existing methodologies is still paramount for building successful machine learning models."
],
[
"Emotion detection in text often suffers from a data imbalance problem, our datasets included. The teams used two approaches to deal with this issue. Some used a class-balanced loss functions while others under-sampled classes with majority label “neutral”. Classification performance of underrepresented emotions, especially sadness and anger, is low compared to the others. This is still a challenge, especially as some real-world applications are dependent on detection of specific emotions such as anger and sadness."
],
[
"The discrete 6-emotion model and similar models are often used in emotion detection tasks. However, such 1-out-of-n models are limited in a few ways: first, expressed emotions are often not discrete but mixed (for example, surprise and joy or surprise and anger are often manifested in the same utterance). This leads to more inter-annotator disagreement, as annotators can only select one emotion. Second, there are additional emotional states that are not covered by the basic six emotions but are often conveyed in speech and physical expressions, such as desire, embarrassment, relief, and sympathy. This is reflected in feedback we received from one of the AMT workers: “I am doing my best on your HITs. However, the emotions given (7 of them) are a lot of times not the emotion I'm reading (such as questioning, happy, excited, etc). Your emotions do not fit them all...”.",
"To further investigate, we calculated the per-emotion $\\kappa $-statistic for our datasets in Table . We see that for some emotions, such as disgust and fear (and anger for EmotionPush), the $\\kappa $-statistic is poor, indicating ambiguity in annotation and thus an opportunity for future improvement. We also note that there is an interplay between the emotion label distribution, per-emotion classification performance, and their corresponding $\\kappa $ scores, which calls for further investigation."
],
[
"One of the main requirements of successful training of deep learning models is the availability of high-quality labeled data. Using AMT to label data has proved to be useful. However, current data is limited in quantity. In addition, more work needs to be done in order to measure, evaluate, and guarantee annotation quality. In addition, the Friends data is based on an American TV series which emphasizes certain emotions, and it remains to be seen how to transfer learning of emotions to other domains."
],
[
"This research is partially supported by Ministry of Science and Technology, Taiwan, under Grant no. MOST108-2634-F-001-004- and MOST107-2218-E-002-009-."
]
],
"section_name": [
"Introduction",
"Datasets",
"Datasets ::: Augmentation",
"Challenge Details",
"Submissions",
"Submissions ::: IDEA",
"Submissions ::: KU",
"Submissions ::: HSU",
"Submissions ::: Podlab",
"Submissions ::: AlexU",
"Submissions ::: Antenna",
"Submissions ::: CYUT",
"Results",
"Evaluation & Discussion",
"Evaluation & Discussion ::: Deep Learning Models.",
"Evaluation & Discussion ::: Unbalanced Labels.",
"Evaluation & Discussion ::: Emotional Model and Annotation Challenges.",
"Evaluation & Discussion ::: Data Sources.",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"28e5993bee909e14b3ce914e076195ded918a615"
],
"answer": [
{
"evidence": [
"BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function."
],
"extractive_spans": [
"Two different BERT models were developed"
],
"free_form_answer": "",
"highlighted_evidence": [
"IDEA\nBIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4e1dbe0d9eb8959371cf5e703a7b15c53a6e97b8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7bc0385b8b5772cd1b55680e88b8ebded12ef4d0"
],
"answer": [
{
"evidence": [
"The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table ."
],
"extractive_spans": [],
"free_form_answer": "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation",
"highlighted_evidence": [
"For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"d569a4ae2ac27040dda6e42d7620d0f03cf9149e"
],
"answer": [
{
"evidence": [
"The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table ."
],
"extractive_spans": [],
"free_form_answer": "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation",
"highlighted_evidence": [
"For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"8d5a7678b89ebf7028f783e22c3e6156870e6a49"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: F-scores for Friends (%)",
"FLOAT SELECTED: Table 7: F-scores for EmotionPush (%)",
"The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively."
],
"extractive_spans": [],
"free_form_answer": "IDEA",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: F-scores for Friends (%)",
"FLOAT SELECTED: Table 7: F-scores for EmotionPush (%)",
" For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What model was used by the top team?",
"What was the baseline?",
"What is the size of the second dataset?",
"How large is the first dataset?",
"Who was the top-scoring team?"
],
"question_id": [
"544e29937e0c972abcdd27c953dc494b2376dd76",
"b8fdc600f9e930133bb3ec8fbcc9c600d60d24b0",
"bdc93ac1b8643617c966e91d09c01766f7503872",
"4ca0d52f655bb9b4bc25310f3a76c5d744830043",
"d2fbf34cf4b5b1fd82394124728b03003884409c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Reliability of Agreement (κ)",
"Table 2: Emotion Label Distribution",
"Table 3: Dialogue Length Distribution and Number of Utterances",
"Table 4: Example of Augmented Utterance",
"Table 5: Dialogue Excerpts from Friends (top) and EmotionPush (bottom)",
"Table 6: F-scores for Friends (%)",
"Table 7: F-scores for EmotionPush (%)",
"Table 8: Per-emotion Reliability of Agreement (κ)"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png",
"4-Table5-1.png",
"4-Table6-1.png",
"5-Table7-1.png",
"5-Table8-1.png"
]
} | [
"What is the size of the second dataset?",
"How large is the first dataset?",
"Who was the top-scoring team?"
] | [
[
"1909.07734-Datasets-0"
],
[
"1909.07734-Datasets-0"
],
[
"1909.07734-5-Table7-1.png",
"1909.07734-4-Table6-1.png",
"1909.07734-Results-0"
]
] | [
"1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation",
"1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation",
"IDEA"
] | 382 |